Retina and optic nerve are sites of extra-cerebral manifestations of Alzheimer's disease (AD). Amyloid-β (Aβ) plaques and neurofibrillary tangles of hyperphosphorylated tau protein are detected in eyes from AD patients and transgenic animals, in correlation with inflammation, reduction of synapses, visual deficits, and loss of retinal cells and nerve fibers. However, neither the pathological relevance of other post-translational tau modifications, such as truncation with generation of toxic fragments, nor the potential neuroprotective action induced by their in vivo clearance has been investigated in the context of AD retinal degeneration. We have recently developed a monoclonal tau antibody (12A12mAb) which selectively targets the neurotoxic 20-22 kDa NH₂-derived peptide generated by pathological truncation at the N-terminal domain of tau, without cross-reacting with the full-length normal protein. Previous studies have shown that 12A12mAb, when intravenously (i.v.) injected into 6-month-old Tg2576 animals, markedly improves their AD-like behavioural and neuropathological syndrome. By taking advantage of this well-established tau-directed immunization regimen, we found that 12A12mAb administration also exerts a beneficial action on biochemical, morphological and metabolic parameters (i.e. APP/Aβ processing, tau hyperphosphorylation, neuroinflammation, synaptic proteins, microtubule stability, mitochondria-based energy production, neuronal death) associated with ocular injury in the AD phenotype. These findings have translational implications for the AD field by: (1) showing for the first time that cleavage of tau takes part in several pathological changes occurring in vivo in affected retinas and vitreous bodies, and that its deleterious effects are successfully antagonized by administration of the specific 12A12mAb; (2) shedding further light on the tight connections between the neurosensory retina and the brain, in particular following tau-based immunotherapy. In our view, the parallel response we detected in this preclinical animal model, both in the eye and in the hippocampus, following i.v. 12A12mAb injection opens novel diagnostic and therapeutic avenues for the clinical management of cerebral and extracerebral AD signs in human beings.
In a small fraction of patients, intracranial meningiomas arise as multiple and spatially distinct masses, presenting a unique management challenge [ , , ]. A recently published Surveillance, Epidemiology, and End Results (SEER)-based study reported that patients with multiple meningiomas (MM) have substantially reduced overall survival compared with patients with single meningiomas [ ]. Patients may develop multiple meningiomas in sporadic or hereditary forms. Familial syndromes commonly associated with MM are neurofibromatosis type 2 (NF2) and familial meningiomatosis, in patients with germline NF2 and SMARCB1 mutations, respectively [ , ]. While the mutational landscape of single meningiomas has been extensively studied [ – , , ], understanding of the molecular pathogenesis of sporadic MM remains incomplete. Older studies and case reports describing molecular testing in patients with sporadic MM have principally focused on tumors with NF2 mutations [ , – ]. However, to our knowledge, no molecular profiling of a case series of spatially separated MM composed of different histological subtypes has been performed. The objective of this study is to elucidate the genetic features of sporadic MM, defined as the presence of ≥ 2 spatially separated synchronous or metachronous lesions. This series includes 17 resected sporadic meningiomas from eight patients (seven females and one male) identified by a record search for patients with MM. All patients presented with synchronous, spatially separated meningiomas without evidence of tumor bridging, as reviewed on MR imaging. The patients had no significant prior radiation exposure, and the tumors did not arise in patients who met the clinical criteria for the diagnosis of familial schwannomatosis or neurofibromatosis type 2 [ ]. In addition, upon review of cranial and spinal MR images, no patient had other intra- or extra-cranial tumors associated with hereditary meningioma syndromes, such as schwannomas or ependymomas. Fresh frozen tumor tissue was available from all 17 meningiomas and was retrieved from the archives of the Institute for Pathology at the University Hospital Dresden upon approval of the local ethics committee. Two board-certified pathologists confirmed the pathologic diagnosis of each case. All tumors were classified according to the 2016 WHO classification of tumors of the central nervous system [ ]. Tumor DNA was purified using the AllPrep DNA Universal Kit for fresh frozen tissue (Qiagen, Germantown, MD) following the manufacturer's instructions. The regions of interest were amplified using a custom-designed amplicon panel according to the protocol "QIAseq Targeted DNA V3 Panel, May 2017" (QIAGEN, Hilden, Germany). The panel was custom-designed by our group and manufactured by QIAGEN. The panel covers either mutation hotspots or, where loss of function is a known mechanism of action, whole genes. The following meningioma-relevant genes are included: AKT1, ATRX, CDKN2A, KLF4, NF1, NF2, PIK3CA, PIK3R1, POLR2A, PTEN, SMARCB1, SMO, STAG2, SUFU, TP53, TRAF7, and the TERT promoter. During library preparation, unique molecular barcodes and sample-specific indices were incorporated according to the protocol. Indexed libraries were then quantified using a Qubit dsDNA HS Assay Kit (Thermo Fisher Scientific, MA, USA) and paired-end sequenced (2 × 200 bp) on an Illumina MiSeq platform. HG19 was used as the reference genome for bioinformatic analyses.
The bioinformatics evaluation was performed using the CLC Biomedical Workbench (12.0.3) with a customized analysis algorithm and the following filters: coverage ≥ 100, allele frequency ≥ 5% (an illustrative sketch of this filtering step is given at the end of this section). Notably, we performed internal NGS identity checks and cross-contamination checks to ensure correct sample assignment. The average age at presentation was 60 years (range 43–75 years), which is comparable with the age of patients with single sporadic meningiomas [ ]. Six patients (75%) underwent two surgeries within 2 years for tumor resection, whereas in two patients (patients 1 and 7) the meningiomas were removed at the same time. Fourteen meningiomas were WHO grade 1 (82.3%) and the remaining three tumors were WHO grade 2. This is consistent with previous reports of the predominance of WHO grade 1 among MM [ , ]. Most importantly, the same mutation was not identified in separate tumors from the same patient, suggesting genomically distinct molecular drivers and an independent origin of these multiple lesions. All but two cases harbored TRAF7, AKT1, SMO or PIK3CA mutations (Fig.  ). The most frequent driver mutations in our series were TRAF7 (n = 5), PIK3CA H1047R and E545G (n = 3), AKT1 E17K (n = 3), NF2 (n = 2), SMO L412F (one case) and NF1 (one case). Only one meningioma lacked a detectable known driver mutation (MM #3, Site B; Table  ). Interestingly, with the exception of one patient (MM #5), all tumors from the same patient were of different histopathological subtypes (Table  ). (Figure: illustrative cases from three patients with seven meningiomas; no separate tumors within individual patients shared driver mutations. Table: patients' and tumor characteristics; R, right; L, left; M, midline.) The low frequency of NF2 mutations in our MM series stands in contrast to previous studies that included hereditary cases arising in the setting of NF2 [ , , , ]. Those studies identified a high prevalence of NF2 mutations (up to 83%) and supported a monoclonal origin for MM [ , ]. Our findings in a cohort of 17 MM arising in patients without NF2 support a model in which sporadic MM can arise independently from one another, while a subset of MM may result from somatic NF2 mosaicism [ ]. Each of the meningiomas in our study exhibited features that are commonly seen in solitary meningiomas, demonstrating strong associations between genetic alteration, histologic subtype and anatomic location [ , , ]. The high frequency of known and targetable drivers of meningioma in our cohort suggests that a large fraction of MM may be candidates for clinical trials evaluating targeted therapies, such as the ongoing multicenter phase II study (ClinicalTrials.gov NCT02523014) that investigates the efficacy of afuresertib in AKT1-mutant, vismodegib in SMO-mutant and the focal adhesion kinase (FAK) inhibitor GSK2256098 in NF2-mutant meningiomas. Given the inter-tumor and intra-patient heterogeneity that we observe in the setting of MM, target lesions should be genomically characterized and not assumed to share molecular alterations with separately resected lesions. Taken together, our molecular analysis supports the genomic divergence of sporadic MM and, presumably, their independent origin. Our findings have important clinical implications for this patient population and suggest molecular stratification of each meningioma lesion in patients with sporadic MM to improve the design of meningioma clinical trials and help improve patient management.
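As a minimal, purely illustrative sketch of the variant-filtering thresholds described above (coverage ≥ 100 and allele frequency ≥ 5%), the snippet below applies the same criteria to a hypothetical variant table; the column names and values are assumptions for illustration and do not reproduce the CLC Biomedical Workbench pipeline.

```python
import pandas as pd

# Hypothetical per-variant calls exported from an NGS pipeline (illustrative values only).
variants = pd.DataFrame({
    "gene":             ["TRAF7", "AKT1", "NF2", "PIK3CA"],
    "coverage":         [250,      95,     430,   180],
    "allele_frequency": [0.32,     0.40,   0.03,  0.21],  # fraction of reads supporting the variant
})

# Keep only calls meeting the reported thresholds: coverage >= 100 and allele frequency >= 5%.
filtered = variants[(variants["coverage"] >= 100) & (variants["allele_frequency"] >= 0.05)]
print(filtered)
```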
Nemaline myopathy (NM) is one of the most common non-dystrophic genetic muscle disorders. NM is often associated with mutations in the NEB gene. Even though the exact NEB-NM pathophysiological mechanisms remain unclear, histological analyses of patients' muscle biopsies often reveal unexplained accumulation of glycogen and abnormally shaped mitochondria. Hence, the aim of the present study was to define the molecular and cellular cascade of events that could lead to changes in muscle energetics in NEB-NM. For that, we applied a wide range of biophysical and cell biology assays to skeletal muscle fibres from NM patients, as well as untargeted proteomics analyses of isolated myofibres from a muscle-specific nebulin-deficient mouse model. Unexpectedly, we found that the myosin-stabilizing conformational state, known as the super-relaxed state, was significantly impaired, inducing an increase in the energy (ATP) consumption of resting muscle fibres from NEB-NM patients when compared with controls or with other genetic forms and a rare acquired form of NM. This destabilization of the myosin super-relaxed state had dynamic consequences, as we observed a remodeling of the metabolic proteome in muscle fibres from nebulin-deficient mice. Altogether, our findings explain some of the hitherto obscure hallmarks of NM, including the appearance of abnormal energy proteins, and suggest potential beneficial effects of drugs targeting myosin activity/conformations for NEB-NM.
## Supplementary Information
The online version contains supplementary material available at 10.1186/s40478-022-01491-9.
## Introduction
Nemaline myopathy (NM) is among the most common non-dystrophic genetic muscle disorders, with an estimated incidence of 1 in 20,000 live births [ , ]. Clinical symptoms of NM include hypotonia, muscle weakness and fatigue [ , ]. In the severe form, neonatal death may ensue, whilst milder forms range from delayed motor developmental milestones to requiring a wheelchair, or even late-onset mild muscle dysfunction in adulthood [ , ]. In all forms of NM, respiratory compromise is a risk throughout life [ , ]. NEB mutations account for more than 50% of all NM cases [ ]. These mutations result in shorter forms or haplo-insufficiency of the giant protein nebulin, an integral component of the thin filaments in skeletal muscle [ , ]. We and others have observed that, with shorter forms or reduced levels of nebulin, actin filament activation is incomplete. Subsequently, myosin motors cannot bind properly to actin monomers, which depresses the force-generating capacity of muscle fibres, thus causing muscle weakness in NEB-NM [ , , , ]. Hence, we have previously targeted the force production of myosin in NM using a recombinant adeno-associated viral vector-based gene therapy and showed its promise in an animal model with a mutation in the NM-causing gene ACTA1 [ ]. However, this approach failed to restore muscle function in NEB-NM mouse models (unpublished data). This suggests that, despite major advancements in our understanding of the disease, NM pathophysiology is complex and far from fully understood, and consequently the design and implementation of accurate therapeutic interventions remains challenging [ ].
Remarkably, ultrastructural and histological observations from NEB-NM patients and from relevant murine models not only shed light on the presence of nemaline rods (an important diagnostic feature of NM) but also on glycogen deposits and misshapen mitochondria with noticeable pleomorphism, concentric cristae and increased subsarcolemmal crescents [ , ]. In line with these observations, muscle glycolytic pathways have been found to be altered [ ]. These findings indicate an under-appreciated potential change in muscle energetics and metabolism in NEB-NM, as well as in other forms of NM caused by other gene mutations. Further support comes from clinical observations reporting that children and adolescents with NM are often lean despite their inability to engage in fast motor activities. Inefficient binding of the force-producing myosin molecules to actin filaments may contribute to altered energetics and metabolism in NM muscles by subtly increasing the energy (ATP) cost of contraction [ , ]. Nevertheless, other more prominent pathological ATP-consuming mechanisms are likely to occur in NEB-NM. In the present study, we initially set out to explore this hypothesis and study the involvement of resting myosin energetics as an underlying NM mechanism. Myosin has multiple chemo-mechanical states [ ]. In addition to several active states, two distinct relaxed states exist: myosin heads that are detached from actin filaments, and do not produce force, can be in either a 'super-relaxed' or a 'disordered-relaxed' state [ , ]. In the super-relaxed state, myosin heads interact with the thick filament backbone, restricting their interaction with actin. In the disordered-relaxed state, myosin molecules are not immobilized and can weakly bind actin, allowing a fast transition to the active state when actin filaments are switched on. The fraction of myosin heads in disordered-relaxed and super-relaxed conformations correlates with the rate of ATP usage, the ATPase activity of myosin heads in the disordered-relaxed configuration being ten times higher than that in the super-relaxed state [ , ]. Thus, in the present study, we further hypothesized that in NEB-NM the proportion of myosin molecules in the super-relaxed state is disrupted, impacting the basal ATP consumption of skeletal muscle, ultimately modifying the level of proteins involved in energy-producing pathways and contributing to the disease phenotype. To test this hypothesis, we used skeletal myofibres extracted from a wide spectrum of NM patients as well as from a muscle-specific nebulin conditional knockout mouse model (cNeb KO). We then performed a combination of biophysical assays, cell biology techniques and proteomics analyses. In line with our hypotheses, we found that in relaxed muscle fibres from NM patients the myosin-stabilizing structural state is altered, with a potential causal involvement of myosin-binding proteins such as the regulatory light chains and myosin-binding protein C, which are known to be involved in sequestering the super-relaxed state. We also observed that the increase in basal myosin ATP consumption may remodel muscle energy proteins, altogether paving the way to myosin-related therapies for NM.
## Materials and methods
### Human subjects
Muscle biopsy specimens were obtained from 26 NM patients with known mutations in NEB, ACTA1, TPM2 or TPM3, and from 12 age-matched controls with no history of neuromuscular disease.
Eleven additional NM patients had an extremely rare, late-onset acquired myopathy termed sporadic late-onset NM (SLONM), which is known to show histopathological abnormalities similar to genetic NM and to progress subacutely [ ]. All tissue was consented, stored, and used in accordance with the Human Tissue Act, UK, under local ethical approval (REC 13/NE/0373). Details of all 49 individuals are given in Additional file : Table S1. All samples were flash-frozen and stored at − 80 °C until analyzed.
### Nebulin knockout mouse model
The conditional muscle-specific nebulin knockout mouse model used in the present study has previously been published in detail [ ]. Briefly, mice were on a C57BL/6J background. Floxed mice were bred to an MCK-Cre strain that expresses Cre recombinase under the control of the muscle creatine kinase (MCK) promoter. Mice that were positive for MCK-Cre and homozygous for the floxed nebulin allele were nebulin deficient (cNeb KO). Mice with one nebulin wild-type allele, whether MCK-Cre positive or negative, served as controls. All experiments were approved by the University of Arizona Institutional Animal Care and Use Committee (09-056) and were in accordance with the United States Public Health Service's Policy on Humane Care and Use of Laboratory Animals. At 6 months of age, six cNeb KO and six control female mice were weighed, anesthetized with isoflurane, and sacrificed by cervical dislocation. Tibialis cranialis skeletal muscles were then dissected and flash-frozen in liquid nitrogen before being stored at − 80 °C for later analysis.
### Solutions
As previously published [ ], the relaxing solution contained 4 mM Mg-ATP, 1 mM free Mg²⁺, 10 mM free Ca²⁺, 20 mM imidazole, 7 mM EGTA, 14.5 mM creatine phosphate and KCl to adjust the ionic strength to 180 mM and pH to 7.0. Additionally, the rigor buffer for Mant-ATP chase experiments contained 120 mM K acetate, 5 mM Mg acetate, 2.5 mM K₂HPO₄, 50 mM MOPS and 2 mM DTT, with a pH of 6.8. The lambda phosphatase solution (New England Biolabs) was prepared by a 100-fold dilution into the relaxing solution to yield 4 U lambda phosphatase/µl [ ]. The solution for extracting myosin regulatory light chains (RLC) contained 20 mM EDTA, 50 mM KPr and 10 mM potassium phosphate buffer, with a pH of 7.1 [ ]. Finally, the solution for extracting myosin-binding protein C (MyBP-C) contained 10 mM EDTA, 31 mM Na₂HPO₄ and 124 mM NaH₂PO₄, with a pH of 5.9 [ , ].
### Muscle preparation and fibre permeabilisation
Cryopreserved human and mouse muscle samples were immersed in a membrane-permeabilising solution (relaxing solution containing glycerol; 50:50 v/v) for 24 h at − 20 °C, after which they were transferred to 4 °C and bundles of approximately 50–100 muscle fibres were dissected free. These bundles were kept in the membrane-permeabilising solution at 4 °C for an additional 24 h (to allow for a proper skinning/membrane permeabilisation process). After these steps, bundles were stored in the same buffer at − 20 °C for use for up to 1 week [ , ].
### Mant-ATP chase experiments
On the day of the experiments, bundles were transferred to relaxing solution and single myofibres were manually isolated. Their ends were individually clamped to half-split copper meshes designed for electron microscopy (SPI G100 2010C-XA, width 3 mm), which had been glued to glass slides (Academy, 26 × 76 mm, thickness 1.00–1.20 mm).
Cover slips were then attached to the top (using double-sided tape) to create flow chambers (Menzel-Glaser, 22 × 22 mm, thickness 0.13–0.16 mm) [ ]. Muscle fibres were mounted at a relaxed length (with their sarcomere length checked using the brightfield mode of a Zeiss Axio Scope A1 microscope, approximately 2.20 µm). As in previous studies [ ], all experiments were performed at 25 °C, and each fibre was first incubated for 5 min with a rigor buffer. A solution containing the rigor buffer with 250 μM Mant-ATP was then flushed in and kept in the chamber for 5 min. At the end of this step, another solution made of the rigor buffer with 4 mM unlabelled ATP was added, with simultaneous acquisition of the Mant-ATP chase. For fluorescence acquisition, a Zeiss Axio Scope A1 microscope was used with a Plan-Apochromat 20x/0.8 objective and a Zeiss AxioCam ICm 1 camera. Frames were acquired every 5 s for the first 90 s and every 10 s for the remaining time, with a 20 ms acquisition/exposure time using a DAPI filter set, and images were collected for 5 min. Three regions of each individual myofibre were sampled for fluorescence decay using the ROI manager in ImageJ, as previously published [ ]. The mean background fluorescence intensity was subtracted from the average fibre fluorescence intensity for each image taken. Each time point was then normalized to the fluorescence intensity of the final Mant-ATP image before washout (T = 0). These data were then fitted to an unconstrained double exponential decay using GraphPad Prism 9.0, where P1 is the amplitude of the initial rapid decay approximating the disordered-relaxed state, with T1 as the time constant for this decay, and P2 is the slower second decay approximating the proportion of myosin heads in the super-relaxed state, with its associated time constant T2 [ ] (an illustrative fit of this form is sketched at the end of the Materials and methods).
### Immunofluorescence staining and imaging
To avoid any potential misinterpretation due to the type of myosin heavy chain, for the human Mant-ATP chase experiments we assessed the fibre sub-type using immunofluorescence staining as previously described [ ]. Briefly, flow-chamber-mounted myofibres were stained with an anti-β-cardiac/skeletal slow myosin heavy chain antibody (IgG1, A4.951, sc-53090 from Santa Cruz Biotechnology, dilution 1:50) and an anti-slow myosin-binding protein C antibody (IgG, SAB3501005 from Sigma, dilution 1:200). Myofibres were then washed in PBS/0.025% Tween-20 and incubated with secondary antibodies: goat anti-mouse IgG1 Alexa 555 and goat anti-rabbit IgG Alexa 488 (from ThermoScientific, dilution 1:1000), respectively, in a blocking buffer. After washing, muscle fibres were mounted in Fluoromount. To identify the fibre types, images were acquired using a confocal microscope (Zeiss Axiovert 200, 63 × oil objective) equipped with a CARV II confocal imager (BD Biosciences) [ , ]. To obtain myosin filament length and myosin-binding protein C (MyBP-C) localisation measurements, mounted muscle fibres were imaged with a 100 × oil objective and an instant Structured Illumination Microscopy (iSIM) system. To improve contrast and resolution (by two-fold compared to confocal microscopy), distributed deconvolution (DDecon) was then applied to the acquired images with a dedicated plugin for ImageJ (National Institutes of Health, Bethesda, MD) [ ]. Note that DDecon is a super-resolution light microscopy technique that addresses light scattering, differences in refractive index, glare, and background noise.
It also allows the computation of filament lengths with a precision of 10–20 nm [ , ]. All line scans were background corrected. Distances and lengths were finally calculated by converting pixel sizes into µm using the scale for each image [ , ].
### Western blotting
Lysates of the flash-frozen human muscle biopsy specimens from three control subjects and three NEB-NM patients were prepared via hand-homogenization in a modified NP-40 lysis buffer (10 mM NaH₂PO₄, pH 7.2, 2 mM EDTA, 10 mM NaN₃, 120 mM NaCl, 0.5% deoxycholate, 1% NP-40) supplemented with complete protease inhibitor (Roche, Indianapolis, IN) and Halt phosphatase inhibitor (Thermo Scientific, Waltham, MA) cocktails. NuPage LDS sample buffer and reducing agent (Invitrogen, Waltham, MA) were added to 30 μg of protein lysate, which was boiled at 95 °C for 5 min and fractionated by 4–12% SDS-PAGE. Protein was transferred to a nitrocellulose membrane, blocked with 5% milk (RPI, Mt Prospect, IL) in TBST, and probed with the appropriate primary antibody: anti-slow myosin-binding protein C (sMyBP-C, SAB3501005, Sigma-Aldrich, St. Louis, MO), anti-GAPDH (G7895, Sigma-Aldrich, St. Louis, MO), and custom phospho-sMyBP-C-specific antibodies against mSer-59/hSer-59 and mThr-84/hSer-82, as described previously [ ]. Blots were then incubated with the appropriate horseradish peroxidase-conjugated secondary antibody (Cell Signaling Technology, Danvers, MA) and ECL substrate (Thermo Scientific, Waltham, MA). Densitometry was performed with ImageJ software. Total sMyBP-C blots produced a non-specific band of higher protein mass; only the bottom, specific band of the correct size was used for quantification. Relative sMyBP-C phosphorylation was calculated based on the sample's total level of sMyBP-C following normalization to the GAPDH loading control (Additional file : Fig. S1A–C).
### Enzymatic isolation and culture of intact mouse single muscle fibres
All animal procedures associated with enzymatic isolation of single muscle fibres were carried out at King's College London in accordance with the UK Home Office regulations and in compliance with European Community Directive 86/609/EEC. Two mature adult C57BL/6J mice were euthanized by cervical dislocation at 8 weeks of age. As previously published [ ], extensor digitorum longus (EDL) skeletal muscles were dissected, leaving tendons intact at both the proximal and distal ends. Subsequently, muscles were digested in 2 mg/mL collagenase I (Sigma Aldrich) in Dulbecco's modified Eagle's medium (DMEM; Invitrogen) for 105 min. Single fibres were released via trituration with a wide-bore glass pipette, and hypo-contracted fibres and debris were removed by serial washes. Freshly isolated fibres were moved into six-well plates; after 1 h they were dosed with either 100 µM piperine or dimethyl sulfoxide (DMSO), which served as control, and left for 3 days at 37 °C and 5% CO₂. These fibres were then subjected to LC–MS/MS.
### LC–MS/MS identification and quantitative analysis of protein abundance
As previously published [ , ], five samples for each experimental group were prepared. Each sample consisted of five single muscle fibres pooled in a single centrifuge tube containing 30 μL Tris-Triton lysis buffer (10 mM Tris, pH 7.4, 100 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, 10% glycerol, 0.1% SDS, 0.5% deoxycholate, protease inhibitor cocktail III (1:100), phosphatase inhibitor cocktail mix (1:100)) at an unknown protein concentration.
Sample volume was reduced by half in a SpeedVac (ThermoFisher Scientific) and subsequently mixed in a 1:1 ratio with Laemmli buffer (2 × concentrated), vortexed and boiled at 96 °C for 10 min. To stack the protein complement and remove chemical interference from the lysis buffer, samples were centrifuged at 14,000 rpm for 3 min prior to loading on 10% BisTris gels (Gel 1—ThermoFisher Scientific #19072670-1957; Gel 2—#19072670-1965; Gel 3—#19072670-1966; Gel 4—#19072670-1977). Gels were then stained overnight with Imperial protein stain (ThermoFisher Scientific #24615). In-gel reduction, alkylation and digestion with trypsin were performed prior to subsequent isobaric mass tag labelling. Each sample was treated individually, with labels (TMT10plex) added at a 1:1 ratio [ ]. For analysis by LC–MS/MS, TMT-labelled peptide samples were resuspended in 60 μL of resuspension buffer (2% ACN in 0.05% FA), with 10 μL of sample injected in triplicate (30 μL total volume). Chromatographic separation was performed using an Ultimate 3000 NanoLC system (ThermoFisher Scientific). Peptides were resolved by reversed-phase chromatography on a 75 μm × 50 cm C18 column using a three-step gradient of water in 0.1% formic acid and 80% acetonitrile in 0.1% formic acid. The gradient was delivered to elute the peptides at a flow rate of 250 nl/min over 250 min. The eluate was ionised by electrospray ionisation using an Orbitrap Fusion Lumos (ThermoFisher Scientific) operating under Xcalibur v4.1.
#### Database searching
Raw mass spectrometry data from the triplicate injections were processed into peak list files using Proteome Discoverer (Thermo Scientific, v2.2; PD 2.2). The data were processed and searched using the Mascot search algorithm (v2.6.0) and the Sequest search algorithm [ ] against the UniProt mouse taxonomy database (36,483 entries). Within the consensus processing module, the reporter ion intensity values (absolute area under the peak) for each peptide spectral match are grouped by peptide and summarised at the protein level as a grouped abundance. All grouped abundances at the protein level are normalised using the total peptide amount, which has previously been corrected based on the channel with the highest total peptide abundance, so that all channels have the same total abundance.
#### Bioinformatics and data visualisations
Following processing with Proteome Discoverer, the resultant file was exported into Perseus (v1.6.3) for qualitative and quantitative data analysis. Metascape [ ] was utilised for gene ontology (GO) analysis, which was subsequently visualised using Cytoscape [ ]. The DAVID bioinformatics database was used for ligand-binding analysis [ ]. Further data visualisation utilised Biovinci v3.0.9 and GraphPad Prism v9.
#### Statistical analysis
Multiple myofibres were studied for each subject. Hence, as previously published [ ], we used mixed linear models to statistically analyse the data. These models assumed that each subject had its own mean measurement (with a normal distribution between subjects) and that each measurement within a subject was also normally distributed around this mean. The p values tested the hypotheses that there were differences in these mean measurements between groups. Data are presented as means ± standard deviations. Graphs were prepared and analysed in GraphPad Prism v9. Statistical significance was set at p < 0.05. t-tests or one-way ANOVA with Tukey post hoc tests were run to compare groups [ ].
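As flagged in the Mant-ATP chase description above, the following is a minimal, illustrative Python sketch (using NumPy/SciPy rather than GraphPad Prism) of fitting a normalized fluorescence decay to an unconstrained two-exponential function. The exact functional form used in Prism, the synthetic data and the parameter values are assumptions for illustration only, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, p1, t1, p2, t2):
    """Two-exponential decay: amplitudes p1/p2 (fractions) with time constants t1/t2 (s)."""
    return p1 * np.exp(-t / t1) + p2 * np.exp(-t / t2)

# Time points mirroring the acquisition scheme described above
# (every 5 s for the first 90 s, then every 10 s up to 5 min).
time = np.concatenate([np.arange(0, 95, 5), np.arange(100, 310, 10)])

# Synthetic normalized fluorescence for illustration only (not real data):
# ~20% of the signal in a fast (disordered-relaxed-like) component, ~80% in a slow one.
rng = np.random.default_rng(0)
signal = double_exponential(time, 0.2, 20.0, 0.8, 200.0) + rng.normal(0, 0.01, time.size)

# Unconstrained fit, analogous to the GraphPad Prism analysis in the Methods.
popt, _ = curve_fit(double_exponential, time, signal,
                    p0=(0.3, 10.0, 0.7, 150.0), maxfev=10000)
p1, t1, p2, t2 = popt
print(f"P1 = {p1:.2f}, T1 = {t1:.0f} s (disordered-relaxed-like component)")
print(f"P2 = {p2:.2f}, T2 = {t2:.0f} s (super-relaxed-like component)")
```

In this convention, P1 and P2 estimate the fractions of the decay attributable to the disordered-relaxed and super-relaxed populations, and T1 and T2 their respective ATP turnover lifetimes.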
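The mixed linear model described under "Statistical analysis" can likewise be sketched as a random-intercept model with one intercept per subject. The data frame below is entirely hypothetical and only illustrates the model structure, not the actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-fibre data: 4 fibres per subject, 3 control and 3 NEB-NM subjects.
# "p2" stands for the fraction of myosin heads in the super-relaxed state (illustrative values).
example_values = {
    "CTL1": [0.62, 0.60, 0.64, 0.61], "CTL2": [0.63, 0.59, 0.61, 0.62], "CTL3": [0.60, 0.64, 0.63, 0.61],
    "NM1":  [0.48, 0.50, 0.46, 0.47], "NM2":  [0.49, 0.45, 0.47, 0.48], "NM3":  [0.46, 0.50, 0.49, 0.47],
}
records = []
for subject, values in example_values.items():
    group = "CTL" if subject.startswith("CTL") else "NEB-NM"
    records += [{"subject": subject, "group": group, "p2": v} for v in values]
df = pd.DataFrame(records)

# Random intercept per subject: each subject has its own mean, fibres vary around it.
model = smf.mixedlm("p2 ~ group", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```

The random intercept per subject captures the assumption that fibres from the same individual share a subject-specific mean, so group comparisons are not inflated by repeated measurements within subjects.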
## Results
### Lower fraction of myosin molecules in the super-relaxed state with aberrant ATP consumption in human NEB-NM
We first assessed the proportion of myosin heads in the super-relaxed state in human control and NM samples. Since this conformational state strongly correlates with the rate of ATP consumption in resting muscle fibres [ ], we used a Mant-ATP chase protocol. As the scientific literature indicates that myofibres from NM patients mainly express the cardiac/skeletal slow myosin heavy chain [ ], we restricted our analysis to this fibre type. A total of 427 muscle fibres were tested (8–10 myofibres for each of the 12 controls and for each of the 37 patients; list of subjects in Additional file : Table S1). NEB-NM patients overall exhibited faster ATP consumption, indicating significantly lower levels of myosin heads in the super-relaxed state when compared with controls (Fig.  A–C). Despite this alteration, the actual ATP turnover time of super-relaxed myosin molecules was not affected (Fig.  D, E). We repeated similar experiments in NM human tissue with mutations in the ACTA1, TPM2 or TPM3 genes (associated with defects in actin or actin-binding proteins, Additional file : Table S1). Interestingly, ACTA1-NM individuals displayed similar features to NEB-NM patients (Fig.  A–E), whilst TPM2-NM and TPM3-NM were indistinguishable from controls (Fig.  A–E). To gain insight into whether these alterations are specific to NEB (and ACTA1) gene mutations or related to NM-related histopathological changes, we studied myofibres from patients with an acquired form of the disease known as sporadic late-onset NM (SLONM). SLONM contrasts with typical genetic NM, since it occurs in the absence of any known mutations in NM-related genes. The onset of SLONM is also usually different, as it starts in adulthood and progresses rapidly in a limb-girdle and axial pattern [ ]. Nevertheless, histopathologically, nemaline rods, sarcomeric disarray and reduced cellular force-generating capacity have been observed [ ]. SLONM patients did not exhibit any significant difference in the number/ATP turnover rate of myosin heads in the super-relaxed state when compared with controls, TPM2-NM and TPM3-NM patients (Fig.  A–E). As the age of the 37 patients with genetic or acquired NM ranged between 1 and 70 years, we tested whether our results were age dependent. Plotting the myosin conformational state as a function of age did not reveal any long-term secondary adaptation (Fig.  F). Myosin structure and relaxed conformational states in humans. Typical Mant-ATP chase experimental data show exponential decays for muscle fibres isolated from all the different groups (A). The proportions of myosin molecules in the disordered-relaxed (P1, B) and super-relaxed states (P2, C), as well as their respective ATP turnover lifetimes (T1, D and T2, E), are presented. F shows the P1 data as a function of age. By staining and imaging myosin using super-resolution microscopy (G, scale bar = 10 µm), myosin filament length is calculated (H). Dots are individual subjects' average data. Means and standard deviations also appear on histograms. *Denotes a difference (p < 0.05) when compared with controls (CTL). To further evaluate whether the changes in the presence of NEB mutations are due to disarray of myosin filaments, we used super-resolution microscopy followed by DDecon analysis. Thirty-five myofibres were tested from four controls and three NEB-NM patients (5 fibres per subject).
Regular striated arrays of myosin filaments were observed for both patients and controls (Fig.  G). Additionally, the length of these filaments had ranges consistent with the inter-individual and inter-muscle heterogeneity reported previously (Fig.  H) [ ]. These results indicate that, in the presence of NEB mutations, myosin heads are not properly sequestered (onto the myosin filament backbone) in resting skeletal muscle and consume unusually large quantities of ATP. Our results also suggest that these alterations are not specific to NEB mutations but rather reflect myofilament-linked mechanisms shared with ACTA1 mutations. NM-associated histopathological disruptions, such as the presence of nemaline rods, may not be sufficient to drive the myosin maladaptations.
#### The decreased number of super-relaxed myosin molecules may be linked to alterations affecting myosin-binding proteins in human NEB-NM and in a nebulin conditional knockout mouse model
Myosin-binding protein C (MyBP-C) acts as a linker between actin and myosin filaments; its role in stabilizing the myosin super-relaxed state has recently been highlighted by mutations in its core leading to deleterious hypertrophic cardiomyopathy [ ]. Cardiac MyBP-C has therefore been extensively studied, whilst skeletal MyBP-C requires further attention [ ]. As a proof of concept, to explore MyBP-C's potential functional role in the increased disordered-relaxed state of myosin heads in NEB-NM, we first partially ablated the endogenous MyBP-C [ , ] using a published protocol consisting of soaking individual muscle fibres in an extracting buffer for 1 h at room temperature [ , ]. The amount of MyBP-C ablated is thought to be more than 70% [ , ] but was not directly evaluated in the present study, as we did not obtain reproducible western blot/antibody data from single muscle fibres. A total of 91 myofibres were tested from five controls and five NEB-NM patients (8–10 fibres per subject). With the Mant-ATP chase protocol, MyBP-C partial ablation significantly decreased the number of myosin heads in the super-relaxed state in controls but not in NEB-NM patients (Fig.  A, B). Hence, partial absence of MyBP-C may alleviate the differences between NEB-NM patients and controls. Besides MyBP-C, other myosin-binding proteins may be involved. Myosin regulatory light chains (RLC) bind to the lever arm region of myosin heads and play an important role in maintaining the integrity of the super-relaxed state [ ]. To explore whether RLC may also contribute to the changes seen in NEB-NM patients, we extracted the endogenous RLCs by incubating myofibres in a well-recognized extracting buffer for 30 min at 4 °C [ ]. As for MyBP-C, the level of RLC extracted is thought to be more than 90% [ ]; nevertheless, we did not assess it here, as we did not have reliable western blot/antibody results from individual myofibres. Fibres subjected to this process were then thoroughly washed with the rigor solution before running Mant-ATP chase experiments. A total of 72 myofibres were tested from five controls and five NEB-NM patients (7 to 8 fibres per subject). As for MyBP-C partial ablation, RLC partial extraction significantly lowered the proportion of myosin molecules in the super-relaxed state in controls but not in NEB-NM patients (Fig.  A, B), indicating that RLC partial ablation may reduce the differences between NEB-NM patients and controls.
As myofilament proteins are subject to multiple post-translational modifications impacting their function (especially MyBP-C and RLC), we explored whether modulating the phosphorylation status of the myofilament proteins would affect myosin head conformation in the patients. For that, we incubated myofibres in a lambda phosphatase solution for 1 h at room temperature [ ]. The extent to which the lambda phosphatase solution lowers phosphorylation of individual myosin/myosin-binding proteins remains unclear, as we did not run western blots confirming the dephosphorylation in the present study. In total, 86 muscle fibres were used from five controls and five NEB-NM patients (8–10 fibres per subject). Mant-ATP chase experiments then revealed that the phosphatase treatment increased the proportion of myosin molecules in the super-relaxed conformation in NEB-NM patients (Fig.  A, B). Importantly, the lambda phosphatase treatment dampened the differences observed between NEB-NM patients and controls. Modulation of myosin regulatory light chain (RLC) and myosin-binding protein C (MyBP-C) levels and phosphorylation in humans and mice. The proportions of myosin heads in the disordered-relaxed (P1, A) and super-relaxed states (P2, B) are presented for humans. Dots are individual subjects' average data. C is a typical super-resolution image (scale bar = 10 µm), with the resultant MyBP-C filament length in (D). Additionally, the proportions of myosin molecules in the disordered-relaxed (P1, E) and super-relaxed states (P2, F) for wild-type (WT) and transgenic (cNeb KO) mice are shown. Dots are individual mouse averages. Means and standard deviations also appear on all histograms. *Denotes a significant difference (p < 0.05) when compared with controls/WT with similar treatment. # refers to a significant difference (p < 0.05) when compared with before treatment for the same group (CTL/NEB-NM or WT/cNeb KO). As MyBP-C and RLC may have functional implications in the disrupted super-relaxed state of NEB-NM patients, we imaged MyBP-C localization/disarray by applying super-resolution microscopy and DDecon analysis. Thirty-five myofibres were tested from four controls and three NEB-NM patients (5 fibres per individual). Similar to myosin filaments, we observed regular MyBP-C striations in both patients and controls (Fig.  C). Nevertheless, strikingly, the length of each MyBP-C segment was found to be subtly increased in NEB-NM patients when compared with controls (Fig.  D). This suggests that MyBP-C localization extends beyond the C-zone in patients. To pursue the MyBP-C investigations with the remaining human tissue, we measured the global content and phosphorylation levels (S59 and T84) of slow MyBP-C using western blotting and antibodies known to work with human muscle samples. We found tendencies towards lower total abundance (Additional file : Fig. S1A) and higher phosphorylation (Additional file : Fig. S1B-C), even though the data appeared patient-specific and thus overall variable. To validate the above human results, we next assessed whether similar changes are recapitulated in a relevant mouse model of NM. Whilst a conventional NEB knockout model and a model in which exon 55 of the NEB gene is deleted exist, these mice die within days after birth due to complex developmental defects and abnormalities [ ].
As patients with NEB mutations often survive to adulthood with considerably milder myopathic phenotypes than the two mouse models described above, to investigate the consequences of NEB mutations and of the decreased myosin super-relaxed state we took advantage of a conditional nebulin KO mouse model (cNeb KO) in which muscle-specific deletions are present from birth [ ]. We used 94 muscle fibres from five cNeb KO and five control mice (8 to 10 fibres per animal). We verified the presence of a myosin super-relaxed state destabilization in cNeb KO mice using the Mant-ATP chase protocol (Fig.  E, F). An additional 154 mouse myofibres (5–6 fibres per mouse) were tested in which MyBP-C was partially ablated, RLC extracted, or phosphorylation down-regulated using the same experimental protocols as for humans. Interestingly, we observed the same significant differences as for humans, strengthening our findings (Fig.  E, F). Overall, our results indicate a potential role of MyBP-C deletion and/or RLC extraction and/or dephosphorylation in disrupting the myosin super-relaxed state and the related ATP consumption in resting muscle fibres from NEB-NM patients and from cNeb KO mice.
#### The lower proportion of myosin heads in the super-relaxed conformation is associated with a metabolic remodeling in the nebulin conditional knockout mouse model
To gain insight into the consequences of our findings for energy metabolism and usage, we pursued additional animal model experiments. We isolated individual limb (tibialis cranialis) muscle fibres from cNeb KO and control mice and ran a proteomics analysis by quantitative LC–MS/MS. To this end, we utilised manually dissected single fibres to reduce the influence of proteins from other tissue and cell types and to more closely correlate these findings with the above single-fibre observations. We were able to quantify 617 proteins. Further filtration by p-value (p < 0.05) revealed that 250 proteins were differentially expressed; of these, 111 and 139 were upregulated in the cNeb KO and control mice, respectively (Fig.  A, Additional file : Table S2). We generated a volcano plot to visualize differentially upregulated proteins, annotated with the top 10 most significant proteins in each experimental group. Importantly, where a protein is denoted as more highly expressed in controls, this can also be interpreted as a reduction in expression in cNeb KO mice. The most significantly upregulated proteins belonged to the cNeb KO group and consisted of ZASP (Z-band alternatively spliced PDZ-motif protein, also known as LIM domain-binding protein 3) and alpha-actinin-2, with the latter possessing the highest log2 fold change (Fig.  B). Moreover, we observed a significant change in myosin-binding proteins H and C (Fig.  B–D). Further, all proteins with a log2 fold change > 1.5 are visualised in the heatmap in Fig.  E, which highlighted a number of proteins involved in cytoskeletal structure and, importantly, in metabolic pathways. To more accurately determine functional associations, we carried out gene ontology (GO) analysis on all proteins that passed p-value (p < 0.05) filtration using the Metascape analysis resource (Fig.  A, B, Additional file : Table S3). Biological functions associated with metabolism appear distinct between cNeb KO and control muscles. A suppression of proteins involved in glucose catabolic processes was observed in cNeb KO mice when compared with controls.
This was accompanied by an increase in proteins associated with aerobic respiration, the aerobic electron transport chain and the tricarboxylic acid (TCA) cycle in cNeb KO mice when compared with controls. These findings, coupled with an increase in proteins associated with the transition between fast and slow fibre-type pathways, indicate an alteration in ATP production in cNeb KO muscle towards higher-yielding aerobic pathways. Fibres for these proteomic studies were all derived from the tibialis cranialis skeletal muscles, where the fibre type is predominantly fast twitch. However, to determine whether fibre-type sampling may be the underlying cause of these proteomic differences, fibre-type classification using myosin quantities was performed [ ]. All samples were categorised as fast glycolytic fibres (IIx/IIb) (Additional file : Table S4). Additionally, following Pearson correlation analysis (Additional file : Fig. S2), all samples were more similar based on genomic background than on fibre type. Comparative analysis of protein changes. A Venn diagram depicting detected proteins that possessed a significantly greater expression in WT (red) and cNEB KO (blue), respectively, as well as those unchanged between experimental groups. B Volcano plot displaying log2 fold change against log10 p-value. Dark blue dots indicate FDR (q value) < 0.05, whilst light blue dots indicate p < 0.05 and black dots indicate p > 0.05. The top 10 most significant proteins for each experimental group have been annotated in blue (cNEB KO) or red (WT) using protein names, except where, due to size constraints, gene names were used (MRP-L27, MRCK alpha, FOXRED2, PARP1, DLST, SR-Beta and VEGFR2: 39S ribosomal protein L27-mitochondrial, serine/threonine-protein kinase MRCK alpha, FAD-dependent oxidoreductase domain-containing protein 2, poly[ADP-ribose] polymerase 1, 2-oxoglutarate dehydrogenase complex component E2, signal recognition particle receptor subunit beta, and vascular endothelial growth factor receptor 2, respectively). Myosin-binding proteins C and H were also annotated, and their abundances are highlighted in the volcano plots in C, D. E A heat map was created to illustrate the proteins with the greatest fold change; all proteins included possessed a log2 fold change > 1.5. For readability, gene names rather than protein names were included in the heat map; the conversion between gene names and protein names can be found in Additional file : Table S2. ***Indicates p < 0.001. Ontological analysis of the proteins differentially expressed between WT and cNEB KO, and proteomic differences induced following piperine administration. A Ontological associations between 111 proteins upregulated in cNEB KO (blue) and 139 proteins upregulated in WT mice (red) were established using Metascape and visualised using Cytoscape. Grey lines indicate a direct interaction, circle size is determined by enrichment and circle colour is determined by p value. Proteins upregulated in the WT mice can also be considered as downregulated in the cNEB KO. WT and cNEB KO networks were created separately with identical enrichment and p value scaling parameters. For graphical representation, both WT and cNEB KO networks were scaled to match the enrichment key. The clusters with the highest enrichment are documented in the table shown in B, with the enrichment, p value and proteins present within each cluster. A full list of all clusters and proteins present is available in Additional file : Table S2 (B).
C Venn diagram depicting detected proteins that were significantly expressed in control (purple) and following piperine administration (green), as well as those unchanged between experimental groups. Bar graphs depict significant differences in the three proteins that showed significant differences between either WT and cNEB KO or between control and piperine administration; the two datasets were not subjected to comparative statistics. D Pie charts represent the UniProt binding terms associated with the significantly upregulated proteins in either control or piperine administration, obtained using the DAVID bioinformatics database. Next, we wanted to determine whether the changes seen above in cNeb KO mice were related to the decrease in the number of myosin heads in the super-relaxed state. To achieve this, we used mouse extensor digitorum longus control muscles (a fast-twitch limb skeletal muscle, like the tibialis cranialis) and incubated them with piperine, an alkaloid found in pepper which has previously been observed to reduce the number of myosin molecules in the super-relaxed state [ , ]. After three days of incubation in 100 µM piperine (the optimal concentration to destabilise the super-relaxed state, confirmed in Fig.  E, F), we again carried out LC–MS/MS. A total of 260 proteins were detected following low-TMT exclusion. However, only 42 significant protein changes were observed following p-value (p < 0.05) filtration. The Venn diagram revealed that 34 proteins had significantly increased expression in the control group, whilst only 8 increased significantly following piperine administration (Fig.  , Additional file : Table S5). Of these significant proteins, only three were also differentially expressed in the cNeb KO versus control dataset (myosin-4, DNA-dependent protein kinase catalytic subunit, and alpha-skeletal muscle actin). Due to the low number of significant differences, performing a GO analysis similar to that of the previous dataset was not possible; instead, we aimed to determine whether there was an upregulation of proteins associated with specific protein–ligand interactions that may ultimately be responsible for the differences associated with the previously observed, chronic decrease in the myosin super-relaxed state. Indeed, while control fibres possessed an array of different ligand-binding terms, piperine administration predominantly upregulated proteins with ATP/nucleotide binding (Fig.  D, Additional file : Table S6). These proteins may therefore be involved in detecting the early dysregulation of ATP utilization present following the disturbance of myosin conformational states.
## Discussion
Our study is one of the first to characterise the myosin super-relaxed state in human skeletal muscle, as most of the scientific literature thus far concerns cardiac tissue. We demonstrate that isolated muscle fibres from humans diagnosed with NEB-NM have a surprising destabilization of the myosin super-relaxed state and excessive energy consumption. Consistent with these observations, we show that such ATP overconsumption has potential consequences for the myofibre proteome of a mouse NM model.
### Myosin super-relaxed state destabilization as a pathological contributor and/or a compensatory process
The super-relaxed conformation is a highly conserved and regulated state [ ]. Its dysregulation in the context of skeletal muscle diseases is a novel finding.
A reduction in the super-relaxed state is involved in the aetiology of other genetic diseases of cardiac muscle, such as hypertrophic cardiomyopathy (HCM). HCM is estimated to affect at least 1 in 500 individuals and is primarily caused by mutations in genes encoding the human β-cardiac/skeletal slow myosin heavy chain (MYH7) or cardiac MyBP-C (MYBPC3) [ , ]. Subtle pathogenic amino acid substitutions in the β-cardiac/skeletal slow myosin heavy chain mesa region destabilize the inter-head motif area crucial for forming and preserving the super-relaxed state [ , , ], whilst variants in MYBPC3 cause truncations and haploinsufficiency of cardiac MyBP-C, releasing its restricting power on myosin heads and lowering the number of myosin molecules in the super-relaxed conformation [ ]. All of these are recognised as major components of the hyper-contractile HCM pathophysiology, accounting for impaired cellular relaxation and enhanced force-generating capability [ ]. Hence, it is reasonable to postulate that the decreased level of myosin heads in the super-relaxed state that we observed in NEB-NM (and ACTA1-NM) patients contributes to the aetiology of hypo-contractile NM through under-appreciated metabolic changes. More precisely, its involvement may be complex and may initially be a compensatory mechanism by which muscle fibres have more myosin heads available for actin binding, to account for the depressed actin filament activation and cellular force-producing capacity [ , , , ]. Indeed, disordering of myosin heads is proposed to facilitate the interaction of myosin with actin [ ]. These interactions would be in weakly bound states that do not generate force but would contribute to stiffness [ ]. In the long term, this contractile over-compensation may become detrimental. Myosin super-relaxed state destabilization in NEB-NM (and ACTA1-NM) patients may have major consequences on ATP consumption and muscle metabolism, straining energy resources. This would be in line with the glycogen deposition and misshapen mitochondria observed in some patients and in most of the mouse models [ , ].
### Consequences: metabolic reprogramming when the myosin super-relaxed state is downregulated
Although more comprehensive studies are warranted, shifting myosin heads away from their super-relaxed conformation means excessive energy consumption and most likely explains the profound abnormalities in energy usage seen in NEB-NM (and ACTA1-NM) patients' muscle biopsy specimens [ , ]. Skeletal muscle depends on a large number of pathways to produce ATP, with cellular respiration being the most efficient machinery, supplying more than 90% of the basal energy requirements [ ]. In the present study, we observed, in the presence of abnormal nebulin content, a metabolic reprogramming consisting of a shift away from glycolytic pathways towards mitochondrial oxidative phosphorylation to meet the increased energy demands. This may have potential whole-body consequences. On average, adult humans utilise 8 MJ day⁻¹. Most of this utilisation, known as the basal metabolic rate, is required for basic cellular functions. Even though the resting skeletal muscle metabolic rate per unit mass is low (0.5 W kg⁻¹), it accounts for approximately 25% of the obligatory whole-body thermogenesis [ ]. Here, the disordered myosin heads in patients may generate a greater overall thermogenesis [ ].
Shifting myosin heads away from their super-relaxed conformation by as little as 10% may increase thermogenesis and energy usage by 0.7 MJ day⁻¹ [ ]. Over a period of a year, this would lead to a weight loss of 7 kg of fat [ ] (0.7 MJ day⁻¹ × 365 days ≈ 255 MJ, which, assuming an energy density of roughly 37 MJ per kg of fat, corresponds to about 7 kg). Shifting heads towards a disordered conformation by 20%, as found in the present work, would double skeletal muscle thermogenesis and would increase the whole-body basal metabolic rate by 16% [ ]. This would explain clinical findings reporting that NM patients are lean or underweight.
### Causes: myosin-binding protein disruption as a potential contributor to the decreased number of super-relaxed myosin molecules
In contrast to HCM, in the present study the downregulation of the super-relaxed conformation in NEB-NM (and ACTA1-NM) cannot be attributed to the mutations themselves but rather to indirect processes that could interfere with the levels of myofilament proteins. Our proteomics analysis confirmed reductions of fast MyBP-C and fast RLC contents, together with an up-regulation of myosin essential light chains (ELC) 1/3, in the presence of nebulin deficiency. Thus, we explored the potential roles of myosin-binding proteins in NEB-NM. We observed significant functional differences when MyBP-C was partially ablated, RLC extracted or the myofilament dephosphorylated. According to the literature, modulating the levels of cardiac RLC or MyBP-C modifies the number of myosin molecules in the super-relaxed state by destabilizing the thick filaments and untethering myosin heads [ , , ]. Moreover, when comparing the phosphorylated state of cardiac MyBP-C to its dephosphorylated state, it has been shown that the phosphorylated state promotes a higher myosin order, whilst the phosphomimetic state favours disordered myosin, indicative of a decreased proportion of myosin heads in the super-relaxed state [ ]. Considering all these findings, it is tempting to suggest that RLC or MyBP-C are involved in the depression of the super-relaxed conformation in NEB-NM. The low number of patients tested in our MyBP-C- and RLC-related experiments, as well as the absence of precise characterisation of the MyBP-C and RLC depletions in our functional assays, are obvious limitations here. Hence, further studies specifically focusing on these aspects are required.
## Conclusion
Taken together, our data show that, in resting muscle fibres from NEB-NM patients, the myosin-stabilizing conformational state is disrupted. Our findings also suggest that the subsequent significant increase in basal ATP consumption leads to a modification of the myofibre proteome, more specifically of energy proteins. Our results thus give new, unexpected insights into unexplained NEB-NM pathological features, namely the abnormal appearance of energy proteins, and further highlight the potential benefits of drugs targeting myosin activity in NM patients.
This is the first description of slowly progressive Niemann-Pick disease type C (NPC) without the typical lysosomal storage in bone marrow and viscera in two descendants of a group of 17th century French-Canadians. The index patient was a married 43-year-old woman with onset of dementia in her thirties, later followed by the development of ataxia and athetoid movements. Her autopsy disclosed frontal lobe atrophy, neurolysosomal storage with oligolamellar inclusions, and tau-positive neurofibrillary tangles. Of the 119 family members screened, only a married 42-year-old sister displayed symptoms of dementia. Both women displayed vertical supranuclear ophthalmoplegia; expressive aphasia; concrete, stimulus-bound, perseverative behavior; and impaired conceptualization and planning. Cultured fibroblasts showed decreased cholesterol esterification and positive filipin staining, but no mutation was detected in the coding or promoter regions of the NPC1 gene using conformation-sensitive gel electrophoresis and sequencing. Sequencing instead showed a homozygous mutation predicted to result in an amino acid substitution, V39M, in the cholesterol-binding protein HE1 (NPC2). Adult-onset NPC2 with lysosomal storage virtually restricted to neurons represents a novel phenotypic and genotypic variant with diffuse cognitive impairment and focal frontal involvement, described here for the first time.
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disorder. Accumulating evidence has shown that the 43 kDa TAR-DNA-binding protein (TDP-43) is the disease protein in ALS and frontotemporal lobar degeneration. We previously reported a familial ALS with Bunina bodies and TDP-43-positive skein-like inclusions in the lower motor neurons; these findings are indistinguishable from those of sporadic ALS. In three affected individuals in two generations of one family, we found a single base-pair change from A to G at position 1028 in TDP-43, which resulted in a Gln-to-Arg substitution at position 343. Our findings provide new insight into the molecular pathogenesis of ALS.
The timing and yield of metabolic studies for patients with neurodevelopmental disorders are a matter of continuing debate. We determined the yield of additional or repeated metabolic studies in patients with neurodevelopmental disorders. Patients referred to a tertiary diagnostic center for unexplained neurodevelopmental disorders were included. Initial metabolic studies had been performed in most patients (87%) before referral. Additional/repeated metabolic studies were individually tailored. A metabolic disease was diagnosed in 12 of the 433 patients studied (2.8%), despite normal initial metabolic studies before referral. Specific metabolic investigations thus led to a greater diagnostic yield in patients with neurodevelopmental disorders.
## Objective We conducted a Mendelian randomization (MR) study to disentangle the comparative effects of lipids and apolipoproteins on ischemic stroke. ## Methods Single‐nucleotide polymorphisms associated with low‐ and high‐density lipoprotein (LDL and HDL) cholesterol, triglycerides, and apolipoprotein A‐I and B (apoA‐I and apoB) at the level of genomewide significance (p < 5 × 10⁻⁸) in the UK Biobank were used as instrumental variables. Summary‐level data for ischemic stroke and its subtypes were obtained from the MEGASTROKE consortium with 514,791 individuals (60,341 ischemic stroke cases and 454,450 non‐cases). ## Results Increased levels of apoB, LDL cholesterol, and triglycerides were associated with higher risk of any ischemic stroke, large artery stroke, and small vessel stroke in the main and sensitivity univariable MR analyses. In multivariable MR analysis including apoB, LDL cholesterol, and triglycerides in the same model, apoB retained a robust effect (p < 0.05), whereas the estimate for LDL cholesterol was reversed, and that for triglycerides largely attenuated. Decreased levels of apoA‐I and HDL cholesterol were robustly associated with increased risk of any ischemic stroke, large artery stroke, and small vessel stroke in all univariable MR analyses, but the association for apoA‐I was attenuated to the null after mutual adjustment. ## Interpretation The present MR study reveals that apoB is the predominant trait that accounts for the etiological basis of apoB, LDL cholesterol, and triglycerides in relation to ischemic stroke, in particular large artery and small vessel stroke. Whether HDL cholesterol exerts a protective effect on ischemic stroke independent of apoA‐I needs further investigation. ANN NEUROL 2020;88:1229–1236 Blood lipids are established causal factors in the development of stroke. , , It has been shown that high concentrations of low‐density lipoprotein (LDL) cholesterol increase the risk of ischemic stroke, , , whereas high concentrations of high‐density lipoprotein (HDL) cholesterol possibly decrease the risk of ischemic stroke, particularly small vessel stroke. , Furthermore, large‐scale randomized clinical trials have revealed that lowering cholesterol concentrations with statins reduces the risk of ischemic and overall stroke, , , despite an increase in hemorrhagic stroke. However, given the high phenotypic and genetic correlation across different lipids and apolipoproteins, it remains unclear whether one or more lipid‐related entities account for the observed associations between lipids and stroke. Disentangling the associations of atherogenic lipoprotein lipids with the risk of stroke is of great public health and clinical importance. First, a better understanding of the comparative role of lipoprotein lipids in stroke not only facilitates a clearer perception of the underlying pathophysiology of stroke, but also helps to identify the most effective biomarker and corresponding agent that lipid‐modifying therapeutics should target. Second, the findings can provide an evidence base for guiding the prevention and treatment of stroke among the more than one‐quarter of the general population who have discordant apolipoprotein B (apoB) and LDL cholesterol levels, in particular those with obesity or type 2 diabetes. , Third, such an investigation will help unify the guidelines concerning regular apoB measurement, which is supported by the European Society of Cardiology/European Atherosclerosis Society but not by the American College of Cardiology/American Heart Association. 
The amounts of cholesterol and triglycerides vary widely between lipoprotein particles, , , which results in an imprecise quantification of the number of atherogenic lipoproteins, although the levels of LDL cholesterol and triglycerides quantify the levels of these lipid substances carried in circulating lipoproteins. In contrast, one apoB molecule is included in each circulating atherogenic lipoprotein particle. , Thus, the level of apoB molecules is proportional to the number of circulating atherogenic particles in the blood. Available evidence indicates causal effects of increased LDL cholesterol, triglycerides, and apoB on increasing stroke risk , and a stronger effect of apoB compared to LDL cholesterol on cardiovascular disease. , It is plausible to assume that each lipid‐related entity plays an individual causal role or that one trait, such as apoB, predominates and accounts for the associations of the related lipoprotein particle entities. Constrained by potential methodological limitations, such as residual confounding and reverse causality, traditional observational study designs are unable to infer causality regarding the role of lipoprotein lipids in the development of stroke. Another approach is the Mendelian randomization (MR) design, which utilizes genetic variants as instrumental variables for an exposure to determine the causality of an exposure‐outcome association. Given the correlations across lipid‐related traits, the multivariable MR framework, an extension of the traditional MR method, is recommended to appraise the associations of multiple correlated risk factors with the outcome of interest simultaneously. By including the genetic associations for multiple exposures in the same model, multivariable MR can assess which traits retain causal associations with the outcome, while retaining the genetic protection against conventional biases, including unobserved confounding, reverse causality, and measurement error, and avoiding collider bias. Here, we employed traditional MR analysis to determine the associations of individual lipid‐related traits with ischemic stroke, and then multivariable MR analysis to elucidate which of the atherogenic lipid traits accounts for the etiological basis of lipoprotein lipids in relation to stroke. ## Materials and Methods ### Study Design An overview of the study design and the data sources used is displayed in Figure and Supplementary Table . Genetic instruments for LDL and HDL cholesterol, triglycerides, and apoA‐I and apoB were selected based on the UK Biobank study. Data for the associations of the lipid‐related trait-associated single‐nucleotide polymorphisms (SNPs) with ischemic stroke and subtypes were available from the MEGASTROKE consortium. The univariable MR analysis aimed to investigate the association of individual lipid‐related traits with ischemic stroke, and the multivariable MR analysis aimed to compare the independent effects of correlated lipid‐related traits on ischemic stroke. We first determined which one of apoB, LDL cholesterol, and triglycerides predominantly accounted for the causal associations with ischemic stroke. We then assessed which entity predominantly accounted for the inverse association of the HDL‐related phenotypes (HDL cholesterol and apoA‐I) with ischemic stroke. To exclude the possibility of reverse causality, we performed a reverse MR analysis to examine the influence of liability to stroke on the 5 lipid‐related traits. 
The UK Biobank study was approved by the North West Multicenter Research Ethics Committee. Original studies included in the MEGASTROKE consortium had been approved by a relevant review board. The present analyses were approved by the Swedish Ethical Review Authority. Overview of study design. There are three key assumptions for Mendelian randomization (MR). Assumption 1: the genetic variants selected as instrumental variables should be robustly associated with the lipid‐related traits. Assumption 2: the instrumental variables used should not be associated with any potential confounders. Assumption 3: the genetic variants of an exposure should affect the risk of the outcome merely through the risk factor, not via other alternative pathways. IVW = inverse variance weighted; SNP = single‐nucleotide polymorphism. ### Genetic Instrument Selection Genetic variants, in this case SNPs, associated with LDL and HDL cholesterol, triglycerides, and apoA‐I and apoB levels were extracted as instrumental variables for the corresponding lipid‐related traits at the genomewide significance level (p < 5 × 10⁻⁸) from the UK Biobank study including up to 343,992 individuals of European ancestry. The mean age of included participants was 56.9 years and approximately 54% were women. The mean (standard deviation [SD]) levels were 3.57 (0.87) mmol/L for LDL cholesterol and 1.45 (0.38) mmol/L for HDL cholesterol. The median level of triglycerides was 1.50 (interquartile range = 1.11) mmol/L. The mean values for apoB and apoA‐I were 1.03 (0.24) g/L and 1.54 (0.27) g/L, respectively. Association tests were adjusted for age, sex, and a binary variable denoting the genotyping chip individuals were allocated to in the UK Biobank. Linkage disequilibrium (LD) clumping was undertaken to select independent SNPs (r² < 0.001) based on a reference panel of 503 Europeans from phase III (version 5) of the 1000 Genomes Project, and the SNP with the smallest p value for the association with the trait of interest was retained in each locus. In univariable MR analysis, we used 220 SNPs as instrumental variables for LDL cholesterol, 534 SNPs for HDL cholesterol, 440 SNPs for triglycerides, 440 SNPs for apoA‐I, and 255 SNPs for apoB (Supplementary Table ). By combining SNPs from related lipid traits and selecting independent SNPs (r² < 0.01) with the clump function in the TwoSampleMR package, we used 548 SNPs in the multivariable MR analysis of LDL cholesterol, triglycerides and apoB, and 569 SNPs in the multivariable MR analysis of HDL cholesterol and apoA‐I. We used 32 SNPs associated with stroke at the genomewide significance level as the genetic instruments for stroke in the reverse MR analysis. ### Outcome Source Summary‐level data for ischemic stroke and subtypes were obtained from the MEGASTROKE consortium encompassing 29 genomewide association studies (GWAS) with a final sample of 514,791 individuals (60,341 ischemic stroke cases and 454,450 non‐cases) of multiple ancestries (European people as the majority, 86%). The stroke cases were defined as rapidly developing signs of focal (or global) disturbance of cerebral function, lasting more than 24 hours or leading to death, with no apparent cause other than that of vascular origin. Any ischemic stroke was defined by all stroke cases except for intracerebral hemorrhage. 
Any ischemic stroke included large artery ischemic stroke (LAS; 6,688 cases), cardioembolic ischemic stroke (CES; 9,006 cases), and small vessel ischemic stroke (SVS; 11,710 cases) according to the Trial of Org 10172 in Acute Stroke Treatment criteria, and also included ischemic stroke of undefined subtype. The association test was performed under an additive genetic model with a minimum of sex and age as covariates. Summary‐level data for stroke based only on the European population were used in a sensitivity analysis. Summary‐level genetic data for lipids and apolipoproteins were obtained from Neale Laboratories ( ). ### Statistical Analysis The inverse‐variance weighted method was used as the main analysis. This method provides the estimate with the highest power but relies on the assumption that all SNPs are valid instrumental variables. The I² (%) statistic was calculated to assess the heterogeneity among estimates across individual SNPs. The weighted median approach and MR‐Egger regression were used as secondary analyses to examine the robustness of the results and correct for pleiotropy. The weighted median analysis can generate consistent estimates if at least 50% of the weight in the analysis comes from valid instrumental variables. The MR‐Egger regression approach can detect and correct for directional pleiotropy, albeit with compromised power. Given the genetic and phenotypic correlations across lipid‐related traits (Pearson's R ranging from −0.49 to 0.96; Supplementary Table ), we further used the multivariable inverse‐variance weighted method to disentangle and compare the effects of correlated lipid traits on ischemic stroke and subtypes. Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) for outcomes were scaled to a one‐SD increase in levels of the lipid‐related traits. To account for multiple testing, we considered associations with p values below 0.003 (where p = 0.05/20 [5 lipid‐related traits and 4 stroke outcomes]) to represent strong evidence of causal associations, and associations with p values below 0.05 but above 0.003 as suggestive evidence of associations in the univariable MR analysis. Multiple-testing correction was not applied to the multivariable MR analysis owing to its mutual-adjustment nature. Statistical power was estimated using a webtool and the results are shown in Supplementary Table . All analyses were performed using the mrrobust package in Stata/SE 15.0 (Stata Statistical Software: Release 15; StataCorp LLC, College Station, TX, USA) and the TwoSampleMR and MendelianRandomization packages in R Software 3.6.0 (R Core Team, R Foundation for Statistical Computing, Vienna, Austria, 2019; ). ### Data Availability The datasets analyzed in this study are publicly available summary statistics. Data used can be obtained through the cited papers. ## Results ### Univariable MR Analysis Genetically predicted increased levels of apoB, LDL cholesterol, and triglycerides and decreased levels of apoA‐I and HDL cholesterol were significantly or suggestively associated with higher risk of acute ischemic stroke (AIS), LAS, and SVS, but not with CES. The effect sizes of the lipid‐related traits were larger for LAS and SVS than for AIS (Fig ). Results of sensitivity analyses are displayed in Supplementary Table . The observed associations persisted based on data from individuals of European descent (Supplementary Table 6). We did not detect any reverse associations of genetic liability to stroke with the levels of lipids and apolipoproteins (Supplementary Table ). 
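For readers who want the estimators in concrete form, the following is a minimal sketch, assuming two-sample summary statistics of the kind described above, of the fixed-effect inverse-variance weighted (IVW) estimate and its multivariable extension. The function names and toy numbers are illustrative only and are not taken from the study.

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Univariable fixed-effect IVW estimate from per-SNP summary statistics.
    beta_exp: SNP-exposure effects (e.g. in SD units of the lipid trait)
    beta_out: SNP-outcome effects (log-odds of ischemic stroke)
    se_out:   standard errors of the SNP-outcome effects
    Exposure standard errors are ignored (the usual NOME approximation)."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    w = 1.0 / se_out ** 2
    est = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp ** 2)
    se = np.sqrt(1.0 / np.sum(w * beta_exp ** 2))
    return est, se

def multivariable_ivw(B_exp, beta_out, se_out):
    """Multivariable IVW: weighted least squares of SNP-outcome effects on a
    matrix of SNP-exposure effects (columns = exposures, e.g. apoB, LDL-C, TG)."""
    B_exp = np.asarray(B_exp)
    w = 1.0 / np.asarray(se_out) ** 2
    XtWX = B_exp.T @ (B_exp * w[:, None])
    XtWy = B_exp.T @ (np.asarray(beta_out) * w)
    est = np.linalg.solve(XtWX, XtWy)
    se = np.sqrt(np.diag(np.linalg.inv(XtWX)))
    return est, se

# Toy example with made-up numbers (3 SNPs, one exposure), for illustration only.
est, se = ivw_estimate([0.10, 0.08, 0.12], [0.020, 0.018, 0.025], [0.005, 0.006, 0.005])
print(f"OR per SD: {np.exp(est):.2f} "
      f"(95% CI {np.exp(est - 1.96 * se):.2f}-{np.exp(est + 1.96 * se):.2f})")
```

The packages named in the text (mrrobust, TwoSampleMR, MendelianRandomization) implement these estimators together with the weighted-median and MR-Egger variants; the sketch above is only intended to make the inverse-variance weighting explicit.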
Associations of lipid‐related traits with stroke and subtypes in the inverse‐variance weighted model. AIS = acute ischemic stroke; CES = cardioembolic stroke; CI = confidence interval; HDL = high‐density lipoprotein; LAS = large artery stroke; LDL = low‐density lipoprotein; OR = odds ratio; SNPs = single‐nucleotide polymorphisms; SVS = small vessel stroke. ### Multivariable MR Analysis Results of the multivariable MR analysis are displayed in Figure and Figure 4. In the multivariable MR analysis with mutual adjustment for apoB, LDL cholesterol, and triglycerides, apoB retained a robust causal association with AIS, LAS, and SVS, whereas the estimate for LDL cholesterol was reversed and that for triglycerides largely attenuated. The ORs of AIS, LAS, and SVS were 1.31 (95% CI = 1.01, 1.69), 1.69 (95% CI = 0.99, 2.87), and 2.18 (95% CI = 1.14, 4.18), respectively, for a one‐SD increase of apoB (Fig ). The pattern for the comparative role of apoB, LDL cholesterol, and triglycerides in ischemic stroke persisted when the analysis was confined to individuals of European ancestry only (Supplementary Table ). In the analysis with mutual adjustment for apoA‐I and HDL cholesterol, the magnitude of the associations for HDL cholesterol persisted or became stronger, but the associations became nonsignificant. Associations for apoA‐I became weaker or attenuated substantially to the null in the analyses of LAS and SVS. The results were consistent based on data derived from individuals of European descent and of multiple ancestries (Fig and Supplementary Table ). We did not include CES in Figures and because no association was detected in the univariable MR analysis of lipids and apolipoproteins with CES. Associations of apolipoprotein B, LDL cholesterol, and triglycerides with stroke and subtypes in the multivariable inverse‐variance weighted model. AIS = acute ischemic stroke; CI = confidence interval; LAS = large artery stroke; LDL = low‐density lipoprotein; OR = odds ratio; SNPs = single‐nucleotide polymorphisms; SVS = small vessel stroke. Associations of apolipoprotein A‐I and HDL cholesterol with stroke and subtypes in the multivariable inverse‐variance weighted model. AIS = acute ischemic stroke; CI = confidence interval; HDL = high‐density lipoprotein; LAS = large artery stroke; OR = odds ratio; SNPs = single‐nucleotide polymorphisms; SVS = small vessel stroke. ## Discussion ### Principal Findings The present study confirmed the causal effects of apoB, apoA‐I, LDL and HDL cholesterol, and triglycerides on ischemic stroke. Results of the multivariable MR analyses showed that the effect of apoB on ischemic stroke remained robust, whereas the associations of the LDL cholesterol and triglyceride entities with ischemic stroke attenuated markedly to the null after adjustment. This suggests that apoB is the critical entity underlying the positive associations between lipid‐related factors and ischemic stroke, in particular large artery and small vessel stroke. The associations for HDL cholesterol and apoA‐I became nonsignificant after adjustment; however, the magnitude of the associations for HDL cholesterol remained or became stronger. Whether HDL cholesterol has predominant effects on ischemic stroke needs more study. Our findings from the univariable MR investigation are overall in line with previous studies on LDL and HDL cholesterol in relation to ischemic stroke. 
, , , , However, a previous MR study based on a smaller sample size revealed that only LDL cholesterol was statistically significantly associated with LAS and that HDL cholesterol was related to SVS. That study did not observe any associations of triglycerides with ischemic stroke and its subtypes, which is not consistent with the present study and a recently published MR study. The discrepancy might be caused by inadequate power due to the limited phenotypic variance explained by the genetic variants used for the lipid trait and/or the small sample size for stroke outcomes. Observational and genetic studies have reported an inverse association of apoA‐I and a positive association of apoB with ischemic stroke. , The present MR study confirmed those associations but extended the evidence by showing that only apoB had independent effects on stroke. Studies on the comparative effects of apoB, LDL cholesterol, and triglycerides on stroke are limited. Nevertheless, the finding of a predominant role of apoB in ischemic stroke observed in our study agrees with several studies on ischemic cardiovascular disease. , , In the Copenhagen City Heart Study, even though apoB was not found to predict ischemic stroke better than LDL cholesterol in a clear pattern, women with higher levels of apoB had similar risk estimates for ischemic cerebrovascular disease and ischemic stroke compared with those with lower levels. Notably, the observed dominant role of apoB does not discredit the causal roles of LDL cholesterol or triglycerides in ischemic stroke, as both LDL cholesterol and triglycerides are enveloped in atherogenic lipoproteins, each containing an apolipoprotein B molecule, and cannot occur in physiological isolation. Instead, our study provides genetic evidence that apoB is the necessary element for atherogenic lipoprotein lipids to exert their causal effect on ischemic stroke. In other words, changes in the amount of cholesterol and triglycerides in lipoproteins that are not accompanied by commensurate changes in the number of apoB-containing lipoprotein particles may not affect ischemic stroke risk. Mechanistically, this finding is supported by the "response to retention" hypothesis, in which apoB is the necessary entity for atherosclerosis to occur. , In detail, particles containing apoB trapped in the tunica intima of the arterial wall cause atherosclerosis. Studies on the comparative effects of HDL cholesterol and apoA‐I are limited. The present study found a stronger effect of HDL cholesterol than of apoA‐I on ischemic stroke. However, owing to the limited power inherent in multivariable MR analysis, whether HDL cholesterol plays a predominant protective role in the etiology of ischemic stroke needs more investigation. High HDL cholesterol levels prevent the oxidation of LDL cholesterol and increase the reverse transport of cholesterol from peripheral tissues to the liver, where degradation takes place. These functions of HDL cholesterol lower the risk of atherogenesis and may explain why high HDL cholesterol levels reduce the risk of ischemic stroke. ### Public Health and Clinical Implication Clinical trials have demonstrated that modifying LDL cholesterol and triglycerides through angiopoietin‐like protein 4 and proprotein convertase subtilisin/kexin type 9 inhibitors might be promising approaches to lower the risk of ischemic stroke. , The present study supports these current treatments. 
More importantly, our findings shed new light on the focus of lipid‐modifying therapies, which should be the reduction in the number of atherogenic lipoprotein particles rather than the reduction in the amount of cholesterol or triglycerides within the particles. In addition, from the preventive perspective, and especially among individuals with discordant apoB and LDL cholesterol levels, we promote apoB measurement as part of routine blood lipid examination. ### Strengths and Limitations The present study has several strengths. The major one was the multivariable MR method, which compared the roles of different correlated lipid‐related traits in ischemic stroke and protected the findings from residual confounding and reverse causality. We used updated genetic instruments for the lipid‐related traits, thereby ensuring adequate power in the analysis. The major limitation was that there were missing SNPs, which might compromise the power and accuracy of the analysis. However, the missing rates were acceptable for AIS, LAS, and CES (all under 15%), but not for SVS. Thus, the observed associations for SVS need to be verified. In addition, a small proportion of stroke cases were from individuals of non‐European descent (around 14%), which might introduce population stratification bias. Nevertheless, the consistent findings based on data from individuals of only European ancestry indicate that there was a negligible chance of population stratification bias distorting our findings. The CIs in the multivariable MR analysis were wide, which might reflect some loss of precision when the MR statistical model fits strongly correlated exposures. Finally, the associations of SNPs with the levels of lipid traits were derived from non‐fasting blood samples, which might cause inaccuracy in estimation. Nonetheless, the GWAS for the lipid‐related traits found that adjustment for fasting time led to negligible alterations in the effect estimates. ## Conclusions In summary, the present MR study provides evidence supporting apoB as the predominant trait that accounts for the etiological basis of apoB, LDL cholesterol, and triglycerides in relation to ischemic stroke, in particular large artery and small vessel stroke. Whether HDL cholesterol exerts protective effects on ischemic stroke independent of apoA‐I needs further investigation. ## Author Contributions S.Y. and S.C.L. contributed to the conception and design of the study. S.Y. contributed to the acquisition and analysis of data. S.Y., B.T., J.Z., and S.C.L. contributed to drafting the text and preparing the figures. ## Potential Conflicts of Interest The authors declared no conflicts of interest. ## Data Availability The datasets analyzed in this study are publicly available summary statistics. ## Supporting information
Serotonin is a neuromodulator that is extensively entangled in fundamental aspects of brain function and behavior. We present a computational view of its involvement in the control of appetitively and aversively motivated actions. We first describe a range of its effects in invertebrates, endowing specific structurally fixed networks with plasticity at multiple spatial and temporal scales. We then consider its rather widespread distribution in the mammalian brain. We argue that this is associated with a more unified representational and functional role in aversive processing that is amenable to computational analyses with the kinds of reinforcement learning techniques that have helped elucidate dopamine's role in appetitive behavior. Finally, we suggest that it is only a partial reflection of dopamine because of essential asymmetries between the natural statistics of rewards and punishments.
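As a purely illustrative aside (not part of the original abstract), the reinforcement learning machinery alluded to above is typically built around a temporal-difference prediction error. A minimal sketch follows; the toy environment, parameter values, and the rare-but-large punishments are chosen only to hint at the kind of reward/punishment asymmetry the authors discuss.

```python
import random

# Minimal TD(0) value learning. The prediction error `delta` is the quantity
# usually linked to dopamine for rewards; opponency accounts ascribe a mirrored
# aversive error to serotonin. Everything below is an illustrative toy.
gamma, alpha = 0.95, 0.1
V = {s: 0.0 for s in range(5)}  # values of 5 states in a short chain

def td_update(V, s, r, s_next):
    delta = r + gamma * V[s_next] - V[s]  # temporal-difference prediction error
    V[s] += alpha * delta
    return delta

random.seed(0)
for episode in range(2000):
    for s in range(4):
        r = 1.0 if random.random() < 0.5 else 0.0   # frequent, small rewards
        if random.random() < 0.05:
            r -= 5.0                                # rare, large punishment
        td_update(V, s, r, s + 1)

print({s: round(v, 2) for s, v in V.items()})       # learned state values
```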
Although often considered as a group, spinal motor neurons are highly diverse in terms of their morphology, connectivity, and functional properties and differ significantly in their response to disease. Recent studies of motor neuron diversity have clarified developmental mechanisms and provided novel insights into neurodegeneration in amyotrophic lateral sclerosis (ALS). Motor neurons of different classes and subtypes--fast/slow, alpha/gamma--are grouped together into motor pools, each of which innervates a single skeletal muscle. Distinct mechanisms regulate their development. For example, glial cell line-derived neurotrophic factor (GDNF) has effects that are pool-specific on motor neuron connectivity, column-specific on axonal growth, and subtype-specific on survival. In multiple degenerative contexts including ALS, spinal muscular atrophy (SMA), and aging, fast-fatigable (FF) motor units degenerate early, whereas motor neurons innervating slow muscles and those involved in eye movement and pelvic sphincter control are strikingly preserved. Extrinsic and intrinsic mechanisms that confer resistance represent promising therapeutic targets in these currently incurable diseases.
There is a pressing need for objective, quantifiable outcome measures in intervention trials for children with autism spectrum disorder (ASD). The current study investigated the use of eye tracking as a biomarker of treatment response in the context of a pilot randomized clinical trial of treatment for young children with ASD. Participants included 28 children with ASD, aged 18-48 months, who were randomized to one of two conditions: Pivotal Response Intervention for Social Motivation (PRISM) or community treatment as usual (TAU). Eye-tracking and behavioral assessment of developmental functioning were administered at Time 1 (prior to randomization) and at Time 2 (after 6 months of intervention). Two well-established eye-tracking paradigms were used to measure social attention: social preference and face scanning. As a context for understanding relationships between social attention and developmental ability, we first examined how scanning patterns at Time 1 were associated with concurrent developmental functioning and compared to those of 23 age-matched typically developing (TD) children. Changes in scanning patterns from Time 1 to Time 2 were then compared between PRISM and TAU groups and associated with behavioral change over time. Results showed that the social preference paradigm differentiated children with ASD from TD children. In addition, attention during face scanning was associated with language and adaptive communication skills at Time 1 and change in language skills from Time 1 to Time 2. These findings highlight the importance of examining targeted biomarkers that measure unique aspects of child functioning and that are well-matched to proposed mechanisms of change. Autism Research 2019, 12: 779-793. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Biomarkers have the potential to provide important information about how and why early interventions effect positive change for young children with ASD. The current study suggests that eye-tracking measures of social attention can be used to track change in specific areas of development, such as language, and points to the need for targeted eye-tracking paradigms designed to measure specific behavioral changes. Such biomarkers could inform the development of optimal, individualized, and adaptive interventions for young children with ASD.
Alterations in the gut microbiota may influence the gastrointestinal (GI) dysbiosis frequently reported in individuals with autism spectrum disorder (ASD). In this study, we sequenced the bacterial 16S rRNA gene to evaluate changes in fecal microbiota between 48 children with ASD and 48 healthy children in China. At the phylum level, the abundance of Firmicutes, Proteobacteria, and Verrucomicrobia decreased in children with ASD, while the Bacteroidetes/Firmicutes ratio was significantly higher in autistic children due to enrichment of Bacteroidetes. At the genus level, the abundance of Bacteroides, Prevotella, Lachnospiracea_incertae_sedis, and Megamonas increased, while Clostridium XlVa, Eisenbergiella, Clostridium IV, Flavonifractor, Escherichia/Shigella, Haemophilus, Akkermansia, and Dialister decreased in children with ASD relative to the controls. A significant increase was observed in the number of species synthesizing branched-chain amino acids (BCAAs), such as Bacteroides vulgatus and Prevotella copri, while the numbers of Bacteroides fragilis and Akkermansia muciniphila decreased in children with ASD compared to the controls. Most importantly, the highest levels of pathogenic bacteria were different for each child with ASD in this cohort. We found that only one functional module, cellular antigens, was enriched in children with ASD, and other pathways such as lysine degradation and tryptophan metabolism were significantly decreased in children with ASD. These findings provide further evidence of altered gut microbiota in Chinese ASD children and may contribute to the treatment of patients with ASD. LAY SUMMARY: This study characterized the gut bacterial composition of 48 children with ASD and 48 neurotypical children in China. The metabolic disruptions caused by altered gut microbiota may contribute significantly to the neurological pathophysiology of ASD, including significant increases in the number of species synthesizing BCAAs and decreases in the number of probiotic species. These findings suggest that a gut microbiome-associated therapeutic intervention may provide a novel strategy for treating GI symptoms frequently seen in individuals with ASD. Autism Res 2020, 13: 1614-1625. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
Autism and specific language impairment (SLI) are developmental disorders that, although distinct by definition, have in common some features of both language and social behavior. The goal of this study was to further explore the extent to which specific clinical features of autism are seen in SLI. Children with the two disorders, matched for non-verbal IQ, were compared on the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS). In the SLI group, 41% met autism or autism spectrum cut-offs for the social or communication domains on the ADI, the ADOS, or both. No relationship was found between the language deficits exhibited by the children with SLI and their scores on the ADI and ADOS. These findings contribute to evidence that there is some overlap in social and communicative deficits between autism and SLI, supporting the view that autism and SLI share etiologic factors. This continuum of pathology between SLI and autism appears to range from structural language abnormalities, as seen in individuals with SLI, through individuals with SLI who have both structural and social abnormalities, to individuals with autism with pragmatic impairment and language abnormalities.
Stress, especially chronic stress, is one of the most important factors responsible for precipitation of affective disorders in humans. The animal models commonly used in the investigation of stress effects are based mainly on powerful physical stressors. In the majority of cases, these models are not relevant to situations that human beings encounter in everyday life. In our study, an animal model for chronic social stress has been developed for rats using a resident-intruder paradigm. This paradigm is considered a model of social defeat or subordination, and therefore may mimic situations occurring in humans. Rats were subjected daily to subordination stress for a period of five weeks and, in parallel, tested with a battery of behavioural tests. Chronically stressed rats showed behavioural changes, including decreased motility and exploratory activity, increased immobility in a forced swim test, and reduced preference for sweet sucrose solution (anhedonia). Reduced locomotor and exploratory activity represents a loss of interest in new stimulating situations, implying a deficit in motivation. Increased immobility in the forced swim test indicates behavioural despair, a characteristic of depressive disorders. Decreased sucrose preference may indicate desensitisation of the brain reward mechanism. Since anhedonia is one of the core symptoms of depression in humans, our findings suggest that the rat chronic social stress model may be an appropriate model for depressive disorders.
In human parietal cortex, the retinal location of a just seen visual stimulus is updated from one hemisphere to the other, when a horizontal eye movement brings the representation of the stimulus into the opposite visual hemifield. The present study aimed to elucidate the time course of this process. Twelve subjects performed an updating task, in which a filled circle was shown before a horizontal saccade, requiring updating of stimulus location, and a control task without visual stimulation before the saccade. Electroencephalogram (EEG) and electrooculogram (EOG) were recorded while subjects performed the tasks and LORETA source analysis was performed on event-related potential (ERP) components. ERP amplitudes were more positive in the updating condition in comparison to the control condition in two latency windows. An early positive wave starting at about 50 ms after saccade offset and originating in the posterior parietal cortex contralateral to saccade direction probably reflects the integration of saccade-related and visual information and thus the updating process. A shift of the representation of the to-be-updated stimulus to the opposite hemisphere is reflected in a later component starting approximately 400 ms after saccade offset, which is related to memory and originates in the PPC ipsilateral to saccade direction and thus contralateral to the spatial location of the updated visual stimulus.
Chronic psychosocial stress has been suggested as a "second hit" in the etiology of neuropsychiatric disease, but experimental evidence is scarce. We employed repetitive social defeat stress in juvenile mice, housed individually or in groups, and measured sensorimotor gating by pre-pulse inhibition (PPI), a marker of neuronal network function. Using the resident-intruder paradigm, 28-day-old C57BL/6NCrl mice were subjected daily for 3 weeks to social defeat. PPI and basic behaviour were analyzed 10 weeks later. Whereas stress increased the level of anxiety in all animals, persistent PPI deficits were found only in individually housed mice. Thus, social support in situations of severe psychosocial stress may prevent lasting impairment in basic information processing.
Episodic memory and episodic future thinking activate a network of overlapping brain regions, but little is known about the mechanism with which the brain separates the two processes. It was recently suggested that differential activity for memory and future thinking may be linked to differences in the phenomenal properties (e.g., richness of detail). Using functional magnetic resonance imaging in healthy subjects and a novel experimental design, we investigated the networks involved in the imagery of future and the recall of past events for the same target occasion, i.e. the Christmas and New Year's holidays, thereby keeping temporal distance and content similar across conditions. Although ratings of phenomenal characteristics were comparable for future thoughts and memories, differential activation patterns emerged. The right posterior hippocampus exhibited stronger memory-related activity during early event recall, and stronger future thought-related activity during late event imagination. Other regions, e.g., the precuneus and lateral prefrontal cortex, showed the reverse activation pattern with early future-associated and late past-associated activation. Memories compared to future thoughts were further related to stronger activation in several visual processing regions, which accords with a reactivation of the original perceptual experience. In conclusion, the results showed for the first time unique neural signatures for both memory and future thinking even in the absence of differences in phenomenal properties and suggested different time courses of brain activation for episodic memory and future thinking.
The stress response is a multifaceted physiological reaction that engages a wide range of systems. Animal studies examining stress and the stress response employ diverse methods as stressors. While many of these stressors are capable of inducing a stress response in animals, a need exists for an ethologically relevant stressor for female rats. The purpose of the current study was to use an ethologically relevant social stressor to induce behavioral alterations in adult female rats. Adult (postnatal day 90) female Wistar rats were repeatedly exposed to lactating Long Evans female rats to simulate chronic stress. After six days of sessions, intruder females exposed to defeat were tested in the sucrose consumption test, the forced swim test, acoustic startle test, elevated plus maze, and open field test. At the conclusion of behavioral testing, animals were restrained for 30 min and trunk blood was collected for assessment of serum hormones. Female rats exposed to maternal aggression exhibited decreased sucrose consumption, and impaired coping behavior in the forced swim test. Additionally, female rats exposed to repeated maternal aggression exhibited an increased acoustic startle response. No changes were observed in female rats in the elevated plus maze or open field test. Serum hormones were unaltered due to repeated exposure to maternal aggression. These data indicate the importance of the social experience in the development of stress-related behaviors: an acerbic social experience in female rats precipitates the manifestation of depressive-like behaviors and an enhanced startle response.
Several studies have examined impulsive choice behavior in spontaneously hypertensive rats (SHRs) as a possible pre-clinical model for Attention-Deficit/Hyperactivity Disorder (ADHD). However, this strain was not specifically selected for the traits of ADHD, and as a result its appropriateness as a model has been questioned. The present study investigated whether SHRs would exhibit impulsive behavior in comparison to their control strain, Wistar Kyoto (WKY) rats. In addition, we evaluated a strain that has previously shown high levels of impulsive choice, the Lewis (LEW) rat, and compared it with its source strain, the Wistar (WIS) rat. In the first phase, rats could choose between a smaller-sooner (SS) reward of 1 pellet after 10 s and a larger-later (LL) reward of 2 pellets after 30 s. Subsequently, the rats were exposed to increases in the LL reward magnitude and the SS delay. These manipulations were designed to assess sensitivity to magnitude and delay within the choice task, to parse out possible differences in using the strains as models of specific deficits associated with ADHD. The SHR and WKY strains did not differ in their choice behavior under either the delay or magnitude manipulations. In comparison to WIS, LEW showed deficits in choice behavior in the delay manipulation, and to a lesser extent in the magnitude manipulation. An examination of individual differences indicated that the SHR strain may not be sufficiently homogeneous in its impulsive choice behavior to be considered a viable model for impulse control disorders such as ADHD. The LEW strain may be worthy of further consideration for its suitability as an animal model.
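As an illustration only (hyperbolic discounting is a standard way of summarising such smaller-sooner/larger-later choices, but it is not an analysis reported in this study), the two options used in the task imply a simple indifference point under the model V = A/(1 + kD):

```python
# Hypothetical worked example: hyperbolic discounting, V = A / (1 + k*D),
# applied to the task's options. k (per second) indexes impulsivity; this is
# a common modelling convention, not a result from the study itself.
def value(amount, delay_s, k):
    return amount / (1.0 + k * delay_s)

ss = (1, 10.0)   # smaller-sooner: 1 pellet after 10 s
ll = (2, 30.0)   # larger-later:   2 pellets after 30 s

# Indifference: 1/(1 + 10k) = 2/(1 + 30k)  =>  k = 0.1 per second.
for k in (0.02, 0.1, 0.5):
    v_ss, v_ll = value(*ss, k), value(*ll, k)
    if v_ll > v_ss + 1e-9:
        pref = "LL"
    elif v_ss > v_ll + 1e-9:
        pref = "SS"
    else:
        pref = "indifferent"
    print(f"k = {k:>4}: V(SS) = {v_ss:.2f}, V(LL) = {v_ll:.2f} -> {pref}")
```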
Postnatal maternal separation (PMS) has been shown to be associated with an increased vulnerability to psychiatric illnesses in adulthood. However, the underlying neurological mechanisms are not well understood. Here we evaluated its effects on neurogenesis and on tonic GABA currents of cortical layer 5 (L5) pyramidal neurons. PMS not only increased cell proliferation in the subventricular zone, cortical layer 1 and hippocampal dentate gyrus in the adult brain, but also promoted the newly generated cells to differentiate into GABAergic neurons, and the adult PMS brain maintained higher proportions of GABAergic neurons among the surviving newly generated cells within the 5 days immediately after PMS. Additionally, PMS increased the tonic currents in cortical L5 pyramidal cells at P7-10 and P30-35. Our results suggest that the newly generated GABAergic neurons and the tonic currents activated by low GABA concentrations may be involved in the development of psychiatric disorders after PMS.
Postnatal overfeeding is a well-known model of early-life induced obesity and glucose intolerance in rats. However, little is known about its impact on insulin signaling in specific brain regions such as the mesocorticolimbic system, or its putative effects on dopamine-related hedonic food intake in adulthood. For this study, rat litters were standardized to 4 (small litter - SL) or 8 pups (control - NL) at postnatal day 1. Weaning was at day 21, and all tests were conducted after day 60 of life in male rats. In Experiment 1, we demonstrated that the SL animals were heavier than the NL at all time points and had a decreased AKT/pAKT ratio in the Ventral Tegmental Area (VTA), without differences in skeletal muscle insulin signaling, in response to insulin injection. In Experiment 2, standard rat chow intake was assessed using an automated system (BioDAQ, Research Diets®), and showed no differences between the groups. On the other hand, the SL animals ingested more sweet food in response to the 1 min tail-pinch challenge and did not develop conditioned place preference to sweet food. In Experiment 3, we showed that the SL rats had increased VTA TH content but, unlike the NL rats, showed no change in this protein in response to a sweet food challenge. The SL rats also showed decreased levels of dopamine D2 receptors in the nucleus accumbens. Here we showed that early postnatal overfeeding was linked to altered functioning of the mesolimbic dopamine pathway, which was associated with altered insulin signaling in the VTA, suggesting increased sensitivity, and with altered expression of important proteins of the dopaminergic system.
The current study evaluated age differences in conditioned pain modulation using a test stimulus that provided the opportunity to evaluate changes in heat pain sensitivity, sensitization, and desensitization within the same paradigm. During this psychophysical test, pain intensity clamping uses REsponse Dependent STIMulation (REDSTIM) methodology to automatically adjust stimulus intensity to maintain a desired pain rating set-point. Specifically, stimulus intensity increases until a pre-defined pain rating (the setpoint) is exceeded, and then decreases until pain ratings fall below the setpoint, with continued increases and decreases dictated by ratings. The subjects are blinded in terms of the setpoint and stimulus intensities. Younger and older subjects completed two test sessions of two REDSTIM trials, with presentation of conditioning cold stimulation between the trials of one session but not the other. The results indicated that conditioning cold stimulation similarly decreased the overall sensitivity of younger and older subjects, as measured by the average temperature that maintained a setpoint rating of 20 (on a scale of 0-100). The conditioning stimulus also significantly enhanced sensitization following ascending stimulus progressions and desensitization following descending stimulus progressions in older subjects relative to younger subjects. Thus, older subjects experienced greater swings in sensitivity in response to varying levels of painful stimulation. These results are discussed in terms of control over pain intensity by descending central modulatory systems. These findings potentially shed new light on the central control over descending inhibition and facilitation of pain.
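To make the control logic of the pain-intensity-clamping procedure concrete, here is a minimal sketch of a response-dependent stimulation loop of the kind described above; the step size, temperature bounds, and trial length are placeholders and are not the parameters of the actual REDSTIM device or of this study.

```python
def redstim_trial(get_pain_rating, set_temperature, n_steps=60, setpoint=20,
                  start_temp=40.0, step=0.3, temp_min=37.0, temp_max=49.0):
    """Response-dependent stimulation loop (illustrative sketch).
    The thermode temperature rises while the 0-100 pain rating is at or below
    the setpoint and falls while the rating is above it; the subject is blind
    to both the setpoint and the delivered temperatures."""
    temp = start_temp
    history = []
    for _ in range(n_steps):
        set_temperature(temp)
        rating = get_pain_rating()                     # continuous 0-100 rating
        direction = -1.0 if rating > setpoint else 1.0
        temp = max(temp_min, min(temp_max, temp + direction * step))
        history.append((temp, rating))
    # The average maintaining temperature is the overall-sensitivity measure.
    mean_temp = sum(t for t, _ in history) / len(history)
    return mean_temp, history
```

In this sketch, the quantity the study reports as overall sensitivity (the average temperature that maintained a rating of 20) corresponds to `mean_temp`.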
Brain edema is a major contributor to poor outcome and reduced quality of life after surgical brain injury (SBI). Although SBI pathophysiology is well known, the correlation between cerebral edema and neurological deficits has not been thoroughly examined in the rat model of SBI. Thus, the purpose of this study was to determine the correlation between brain edema and deficits in standard sensorimotor neurobehavior tests for rats subjected to SBI. Sixty male Sprague-Dawley rats were subjected to either sham surgery or surgical brain injury via partial frontal lobectomy. All animals were tested for neurological deficits 24 h post-SBI and fourteen were also tested 72 h after surgery using seven common behavior tests: modified Garcia neuroscore (Neuroscore), beam walking, corner turn test, forelimb placement test, adhesive removal test, beam balance test, and foot fault test. After assessing the functional outcome, animals were euthanized for brain water content measurement. Surgical brain injury resulted in significantly elevated frontal lobe brain water content 24 and 72 h after surgery compared to that of sham animals. In all behavior tests, a significant difference was observed between sham and SBI animals. However, a correlation between brain water content and functional outcome was observed for all tests except the Neuroscore. The selection of behavior tests is critical to determine the effectiveness of therapeutics. Based on this study's results, we recommend using beam walking, the corner turn test, the beam balance test, and the foot fault test, since correlations with brain water content were observed at both 24 and 72 h post-SBI.
Development of disease-modifying therapeutics for Parkinson's disease (PD), the second most common neurodegenerative disorder, relies on the availability of animal models which recapitulate the disease hallmarks. Only a few transgenic mouse models, which mimic overexpression of alpha-synuclein, show dopamine loss, behavioral impairments and protein aggregation. Mice overexpressing human wildtype alpha-synuclein under the Thy-1 promoter (Thy1-aSyn) replicate these features. However, female mice do not exhibit a phenotype. This was attributed to a potentially lower expression of the transgene, which is located on the X chromosome. Here we confirm that female mice overexpress human wildtype alpha-synuclein only about 1.5-fold in the substantia nigra, compared to about 3-fold in male mice. Since female Thy1-aSyn mice were shown previously to exhibit differences in corticostriatal communication and synaptic plasticity similar to their male counterparts, we hypothesized that female mice use compensatory mechanisms and strategies and therefore do not show overt motor deficits despite an underlying endophenotype. In order to unmask these deficits, we translated recent findings in PD patients, namely that sensory abnormalities can enhance motor dysfunction, into a novel behavioral test, the adaptive rotating beam test. We found that under changing sensory input female Thy1-aSyn mice showed an overt phenotype. Our data support the view that the integration of sensorimotor information is likely a major contributor to symptoms of movement disorders and that even low levels of overexpression of human wildtype alpha-synuclein have the potential to disrupt the processing of this information. The adaptive rotating beam test described here represents a sensitive behavioral test to detect moderate sensorimotor alterations in mouse models.
Functional interactions between the cannabinoid and serotonin neuronal systems have been reported in different tasks related to memory assessment. The present study investigated the effects of serotonin 5-HT4 agents injected into the dorsal hippocampus (the CA1 region) on the spatial and object novelty detection deficits induced by activation of cannabinoid CB1 receptors (CB1Rs) using arachidonylcyclopropylamide (ACPA), in a non-associative behavioral task designed to assess the ability of rodents to encode spatial and non-spatial relationships between distinct stimuli. Post-training, intra-CA1 microinjection of the 5-HT4 receptor agonist RS67333 or the 5-HT4 receptor antagonist RS23597, both at the dose of 0.016 μg/mouse, impaired spatial memory, while the cannabinoid CB1R antagonist AM251 (0.1 μg/mouse) facilitated object novelty memory. Also, post-training intraperitoneal administration of the CB1R agonist ACPA (0.005-0.05 mg/kg) impaired both memories. However, a subthreshold dose of RS67333 restored the ACPA response on both memories. Moreover, a subthreshold dose of RS23597 potentiated the ACPA (0.01 mg/kg) response and reversed the ACPA (0.05 mg/kg) response on spatial memory, while it potentiated the ACPA response at the dose of 0.005 or 0.05 mg/kg on object novelty memory. Furthermore, an effective dose of AM251 restored the ACPA response at the higher dose. AM251 blocked the response induced by the combination of RS67333 or RS23597 and the higher dose of ACPA on both memories. Our results highlight that hippocampal 5-HT4 receptors differently affect cannabinoid signaling in spatial and object novelty memories. The inactivation of CB1 receptors blocks the effect of 5-HT4 agents injected into the CA1 region on memory deficits induced by activation of CB1Rs via ACPA.
Ghrelin is a peptide of 28 amino acids with homology between species, which acts on the central nervous system to regulate different actions, including the control of growth hormone secretion and metabolic regulation. It has been suggested that central ghrelin is a mediator of behavior linked to stress responses and induces anxiety in rodents and birds. Previously, we observed that the anxiogenic-like behavior induced by ghrelin injected into the intermediate medial mesopallium (IMM) of the forebrain was blocked by bicuculline (a GABA<sub>A</sub> receptor competitive antagonist) but not by diazepam (a GABA<sub>A</sub> receptor allosteric agonist) in neonatal meat-type chicks (Cobb). Numerous studies have indicated that hypothalamic-pituitary-adrenal (HPA) axis activation mediates the response to stress in mammals and birds. However, it is still unclear whether this effect of ghrelin is associated with HPA activation. Therefore, we investigated whether the anxiety behavior induced by intra-IMM ghrelin and mediated through GABA<sub>A</sub> receptors could be associated with HPA axis activation in the neonatal chick. In the present study, in an Open Field test, intraperitoneal bicuculline methiodide blocked the anxiogenic-like behavior as well as the increase in plasma ACTH and corticosterone levels induced by ghrelin (30 pmol) in neonatal chicks. Moreover, we showed for the first time that a competitive antagonist of the GABA<sub>A</sub> receptor suppressed the HPA axis activation induced by an anxiogenic dose of ghrelin. These results show that the anxiogenic action of ghrelin involves activation of the HPA axis, with a complex functional interaction with the GABA<sub>A</sub> receptor.
Studies using silver catfish (Rhamdia quelen) as an experimental model are often applied to screen essential oils (EO) with GABAergic-mediated effects. However, the expression of GABAa receptors in the silver catfish brain remains unknown. Thus, we assessed whether silver catfish express GABAa receptor subunits associated with the sedation/anesthetic process and/or neurological diseases. Additionally, we evaluated the brain expression of GABAa receptor subunits in fish sedated with Nectandra grandiflora EO and its isolated compounds, the fish anesthetic (+)-dehydrofukinone (DHF), dehydrofukinone epoxide (DFX), eremophil-11-en-10-ol (ERM) and selin-11-en-4-α-ol (SEL), which have GABAa-mediated anxiolytic-like effects in mice. The expression of the subunits gabra1, gabra2, gabra3, gabrb1, gabrd and gabrg2 in the silver catfish brain was assessed after a 24-h sedation bath by real-time PCR. Since qPCR data rarely describe mechanisms of action, which are usually found through interactions with receptors, we also performed an antagonist-driven experiment using flumazenil (FMZ). Real-time PCR detected the mRNA expression of all targeted genes in the R. quelen brain. The expression of gabra1 was decreased in fish sedated with ERM; EO increased gabra2, gabra3, gabrb1 and gabrg2 expression; SEL increased gabrb1, gabrd and gabrg2 expression. EO and the compounds DFX, SEL and ERM induced sustained sedation in fish, and an FMZ bath prompted recovery from ERM- and DFX-induced sedation. Our results suggest that the sedative effects of the EO, SEL, ERM and DFX involve interaction with the GABAergic system. Our findings support the use of silver catfish as a robust and reliable experimental model to evaluate the efficacy of drugs with putative GABAergic-mediated effects.
This paper offers a formal account of emotional inference and stress-related behaviour, using the notion of active inference. We formulate responses to stressful scenarios in terms of Bayesian belief-updating and subsequent policy selection; namely, planning as (active) inference. Using a minimal model of how creatures or subjects account for their sensations (and subsequent action), we deconstruct the sequences of belief updating and behaviour that underwrite stress-related responses - and simulate the aberrant responses of the sort seen in post-traumatic stress disorder (PTSD). Crucially, the model used for belief-updating generates predictions in multiple (exteroceptive, proprioceptive and interoceptive) modalities, to provide an integrated account of evidence accumulation and multimodal integration that has consequences for both motor and autonomic responses. The ensuing phenomenology speaks to many constructs in the ecological and clinical literature on stress, which we unpack with reference to simulated inference processes and accompanying neuronal responses. A key insight afforded by this formal approach rests on the trade-off between the epistemic affordance of certain cues (that resolve uncertainty about states of affairs in the environment) and the consequences of epistemic foraging (that may be in conflict with the instrumental or pragmatic value of 'fleeing' or 'freezing'). Starting from first principles, we show how this trade-off is nuanced by prior (subpersonal) beliefs about the outcomes of behaviour - beliefs that, when held with unduly high precision, can lead to (Bayes optimal) responses that closely resemble PTSD.
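To give a concrete (and deliberately minimal) flavour of the two ingredients the abstract refers to, Bayesian belief updating and policy selection by expected free energy, here is a toy discrete-state sketch. The generative model, preferences, and the "flee"/"freeze" transition matrices are invented for illustration and are far simpler than the simulations reported in the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy generative model: hidden states (safe, threat), outcomes (neutral cue, threat cue).
A = np.array([[0.9, 0.2],           # P(outcome | state); columns are states
              [0.1, 0.8]])
prior = np.array([0.5, 0.5])        # prior belief over hidden states
C = np.log(np.array([0.99, 0.01]))  # log preferences: threat cues are strongly dispreferred

def update_belief(prior, A, obs):
    """Bayesian belief update over hidden states after observing outcome `obs`."""
    post = A[obs, :] * prior
    return post / post.sum()

def expected_free_energy(q_s, A, C, B):
    """One-step expected free energy: risk (divergence from preferences) plus ambiguity."""
    q_s_next = B @ q_s                        # predicted states under this policy
    q_o = A @ q_s_next                        # predicted outcomes
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - C))
    ambiguity = -np.sum(q_s_next * np.sum(A * np.log(A + 1e-16), axis=0))
    return risk + ambiguity

# Two illustrative policies: "flee" tends to restore the safe state, "freeze" stays put.
B_flee = np.array([[0.9, 0.7],
                   [0.1, 0.3]])
B_freeze = np.eye(2)

belief = update_belief(prior, A, obs=1)       # a threat-like cue is observed
G = np.array([expected_free_energy(belief, A, C, B) for B in (B_flee, B_freeze)])
print("posterior over (safe, threat):", np.round(belief, 2))
print("P(flee), P(freeze):", np.round(softmax(-G), 2))  # unit precision assumed
```

In the paper's terms, raising or lowering the precision applied to the softmax over expected free energy is one of the levers that can turn an adaptive response into a maladaptive, PTSD-like one; the sketch fixes that precision at 1 for simplicity.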
Diabetes mellitus induces neuropsychiatric comorbidities at an early stage, which can be ameliorated by exercise. However, the neurobiological mechanisms underlying this ameliorative effect remain unclear. The present study was conducted in Otsuka Long-Evans Tokushima fatty (OLETF) rats, which develop diabetes with age, and aimed to investigate whether social and anxiety-like behaviors and neurobiological changes associated with these behavioral phenotypes were reversed by voluntary exercise and whether those were maintained in the later stage. We investigated the effects of exercise at different diabetic stages in OLETF rats by comparing with control rats. Three groups of OLETF rats were used: sedentary rats, rats exercising on a wheel for two weeks at 4-5 weeks of age (early voluntary exercise), and those exercising at 10-11 weeks of age (late voluntary exercise). In the elevated plus-maze test, both early and late voluntary exercises did not affect anxiety-like behavior. In the social interaction tests, both early and late voluntary exercises ameliorated impaired sociability, novel exploration deficits, and hypoactivity in OLETF rats. Both early and late voluntary exercises reversed the increases in cholecystokinin-positive neuron densities in the infralimbic cortex and hippocampal cornu ammonis area 3 in the OLETF rats, although they did not affect the area-reduction in the medial prefrontal cortex and the increase in cholecystokinin-positive neuron densities in the basolateral amygdala. These suggest that voluntary exercise has therapeutic effects on impaired sociability and novel exploration deficits associated with cholecystokinin-positive neurons in specific corticolimbic regions in OLETF rats, and those are maintained after early exercise.
Strong evidence has implicated ubiquitin signaling in the process of fear memory formation. While less abundant than ubiquitination, evidence suggests that protein SUMOylation may also be involved in fear memory formation in neurons. However, the importance of amygdala protein SUMOylation in fear memory formation has never been directly examined. Furthermore, while recent evidence indicates that males and females differ significantly in the requirement for ubiquitin signaling during fear memory formation, whether sex differences also exist in the importance of protein SUMOylation to this process remains unknown. Here we found that males and females differ in the requirement for protein SUMOylation in the amygdala during fear memory formation. Western blot analysis revealed that while females had higher resting levels of SUMOylation, both sexes showed global increases following fear conditioning. However, SUMOylation-specific proteomic analysis revealed that only females have increased targeting of individual proteins by SUMOylation following fear conditioning, some of which were heat shock proteins. This suggests that protein SUMOylation is more robustly engaged in the amygdala of females following fear conditioning. In vivo siRNA-mediated knockdown of Ube2i, which encodes the essential E2 conjugating enzyme for SUMOylation, in the amygdala impaired fear memory in males without any effect in females. Importantly, higher siRNA concentrations than were needed to impair memory in males reduced Ube2i levels in the amygdala of females but resulted in an increase in SUMOylation levels, suggesting a compensatory effect in females that was not observed in males. Collectively, these data reveal a novel, sex-specific role for protein SUMOylation in the amygdala during fear memory formation and expand our understanding of how ubiquitin-like signaling regulates memory formation.
Previous studies have reported that peer groups are one of the most important predictors of adolescent and young adult marijuana use, and yet the neural correlates of social processing in marijuana users have not yet been studied. In the current study, marijuana-using young adults (n = 20) and non-using controls (n = 22) participated in a neuroimaging social exclusion task called Cyberball, a computerized ball-tossing game in which the participant is excluded from the game after a pre-determined number of ball tosses. Controls, but not marijuana users, demonstrated significant activation in the insula, a region associated with negative emotion, when being excluded from the game. Both groups demonstrated activation of the ventral anterior cingulate cortex (vACC), a region associated with affective monitoring, during peer exclusion. Only the marijuana group showed a correlation between vACC activation and scores on a self-report measure of peer conformity. This study indicates that marijuana users show atypical neural processing of social exclusion, which may be either a precursor to, or a consequence of, regular marijuana use.
Neurodegenerative disorders of aging represent a growing public health concern. In the United States alone, there are now &gt;5 million patients with Alzheimer's disease (AD), the most common form of dementia. No therapeutic approaches are available that alter the relentless course of AD or other dementias of aging. A major hurdle to the development of effective therapeutics has been the lack of predictive model systems in which to develop and validate candidate therapies. Animal model studies based on the analysis of transgenic mice that overexpress rare familial AD-associated mutant genes have been informative about mechanisms of familial disease, but they have not proven predictive for drug development. New approaches to disease modeling are of particular interest. Methods such as epigenetic reprogramming of patient skin fibroblasts to human induced pluripotent stem cells, which can be differentiated into patient-derived neuron subtypes, have generated significant excitement because of their potential to more accurately model aspects of human neurodegeneration. Studies focused on the generation of human neuron models of AD and frontotemporal dementia have pointed to pathologic pathways and potential therapeutic avenues. This article discusses the promise and potential pitfalls of modeling dementia disorders based on somatic cell reprogramming.
Postural instability occurs in HIV infection, but quantitative balance tests in conjunction with neuroimaging are lacking. We examined whether infratentorial brain tissue volume would be deficient in nondemented HIV-infected individuals and whether selective tissue deficits would be related to postural stability and psychomotor speed performance. The 123 participants included 28 men and 12 women with HIV infection without dementia or alcohol use disorders, and 40 men and 43 women without medical or psychiatric conditions. Participants completed quantitative balance testing, Digit Symbol test, and a test of finger movement speed and dexterity. An infratentorial brain region, supratentorial ventricular system, and corpus callosum were quantified with MRI-derived atlas-based parcellation, and together with archival DTI-derived fiber tracking of pontocerebellar and internal and external capsule fiber systems, brain measures were correlated with test performance. The tissue ratio of the infratentorium was ~3% smaller in the HIV than control group. The HIV group exhibited performance deficits in balancing on one foot, walking toe-to-heel, Digit Symbol substitution task, and time to complete all Digit Symbol grid boxes. Total infratentorial tissue ratio was a significant predictor of balance and Digit Symbol scores. Balance scores did not correlate significantly with ventricular volumes, callosal size, or internal or external capsule fiber integrity but did so with indices of pontocerebellar tract integrity. HIV-infected individuals specifically recruited to be without complications from alcohol use disorders had pontocerebellar tissue volume deficits with functional ramifications. Postural stability and psychomotor speed were impaired and attributable, at least in part, to compromised infratentorial brain systems.
Stress is a risk factor for the onset of mental disorders. Although stress response varies across individuals, the mechanism of individual differences remains unclear. Here, we investigated the neural basis of individual differences in response to mental stress using magnetoencephalography (MEG). Twenty healthy male volunteers completed the Temperament and Character Inventory (TCI). The experiment included two types of tasks: a non-stress-inducing task and a stress-inducing task. During these tasks, participants passively viewed non-stress-inducing images and stress-inducing images, respectively, and MEG was recorded. Before and after each task, MEG and electrocardiography were recorded and subjective ratings were obtained. We grouped participants according to Novelty seeking (NS)&#xa0;- tendency to be exploratory, and Harm avoidance (HA)&#xa0;- tendency to be cautious. Participants with high NS and low HA (n&#xa0;=&#xa0;10) assessed by TCI had a different neural response to stress than those with low NS and high HA (n&#xa0;=&#xa0;10). Event-related desynchronization (ERD) in the beta frequency band was observed only in participants with high NS and low HA in the brain region extending from Brodmann's area 31 (including the posterior cingulate cortex and precuneus) from 200 to 350&#xa0;ms after the onset of picture presentation in the stress-inducing task. Individual variation in personality traits (NS and HA) was associated with the neural response to mental stress. These findings increase our understanding of the psychological and neural basis of individual differences in the stress response, and will contribute to development of the psychotherapeutic approaches to stress-related disorders.
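As a rough illustration of the event-related desynchronization (ERD) measure referred to above, the sketch below computes a Pfurtscheller-style percent power change in the beta band over a 200-350 ms window relative to a pre-stimulus baseline. It is a simplified, sensor-level toy (the study analysed MEG at the source level); all signals are random placeholders and the filter settings are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(trials, fs, band=(13.0, 30.0)):
    """trials: (n_trials, n_samples). Returns instantaneous beta-band power."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.abs(hilbert(filtered, axis=-1)) ** 2

def erd_percent(trials, fs, t, baseline=(-0.5, 0.0), window=(0.2, 0.35)):
    """Pfurtscheller-style ERD/ERS: percent power change relative to baseline.
    Negative values indicate desynchronization (ERD)."""
    power = band_power_envelope(trials, fs).mean(axis=0)      # average over trials
    ref = power[(t >= baseline[0]) & (t < baseline[1])].mean()
    act = power[(t >= window[0]) & (t < window[1])].mean()
    return 100.0 * (act - ref) / ref

# Hypothetical usage: one sensor, 60 trials, 1-s epochs sampled at 1000 Hz.
fs = 1000.0
t = np.arange(-0.5, 0.5, 1.0 / fs)
trials = np.random.randn(60, t.size)          # placeholder for stress-task epochs
print(erd_percent(trials, fs, t))             # near 0 for noise; negative => beta ERD
```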
Gray matter (GM)&#xa0;lobar atrophy and glucose hypometabolism are well-described hallmarks of frontotemporal lobar degeneration (FTLD), but the relationships between them are still poorly understood. In this study, we aimed to show the patterns of GM atrophy and hypometabolism in a sample of 15 patients with the behavioral variant of FTLD (bv-FTD), compared to 15 healthy controls, then to provide a direct comparison between GM atrophy and hypometabolism, using a voxel-based method specially designed to statistically compare the two imaging modalities. The participants underwent structural magnetic resonance imaging and <sup>18</sup>F-fluorodeoxyglucose (FDG) positron emission tomography examinations. First, between-group comparisons of GM volume and metabolism were performed. Then, in the patient group, correlations between regional alterations and direct between-modality voxelwise comparison were performed. Finally, we examined individual patterns of brain abnormalities for each imaging modality and each patient. The observed patterns of GM atrophy and hypometabolism were consistent with previous studies. We found significant voxelwise correlations between changes in GM and FDG uptake, mainly in the frontal cortex, corresponding to the typical profile of alterations in bv-FTD. The direct comparison revealed regional variability in the relationship between hypometabolism and atrophy. This analysis revealed greater atrophy than hypometabolism in the right putamen and amygdala, and left insula and superior temporal gyrus, whereas hypometabolism was more severe than GM atrophy in the left caudate nucleus and anterior cingulate cortex. Finally, GM atrophy affected the right amygdala/hippocampus and left insula in 95&#xa0;% of the patients. These findings provide evidence for regional variations in the hierarchy of hypometabolism and GM atrophy and the relationships between them, and enhance our understanding of the pathophysiology of bv-FTD.
Dopaminergic dysfunction and changes in white matter integrity are among the most replicated findings in schizophrenia. A modulating role of dopamine in myelin formation has been proposed in animal models and healthy human brain, but has not yet been systematically explored in schizophrenia. We used diffusion tensor imaging and <sup>18</sup>F-fallypride positron emission tomography in 19 healthy and 25 schizophrenia subjects to assess the relationship between gray matter dopamine D<sub>2</sub>/D<sub>3</sub> receptor density and white matter fractional anisotropy in each diagnostic group. AFNI regions of interest were acquired for 42 cortical Brodmann areas and subcortical gray matter structures as well as stereotaxically placed in representative white matter areas implicated in schizophrenia neuroimaging literature. Welch's t-test with permutation-based p value adjustment was used to compare means of z-transformed correlations between fractional anisotropy and <sup>18</sup>F-fallypride binding potentials in hypothesis-driven regions of interest in the diagnostic groups. Healthy subjects displayed an extensive pattern of predominantly negative correlations between <sup>18</sup>F-fallypride binding across a range of cortical and subcortical gray matter regions and fractional anisotropy in rostral white matter regions (internal capsule, frontal lobe, anterior corpus callosum). These patterns were disrupted in subjects with schizophrenia, who displayed significantly weaker overall correlations as well as comparatively scant numbers of significant correlations with the internal capsule and frontal (but not temporal) white matter, especially for dopamine receptor density in thalamic nuclei. Dopamine D<sub>2</sub>/D<sub>3</sub> receptor density and white matter integrity appear to be interrelated, and their decreases in schizophrenia may stem from hyperdopaminergia with dysregulation of dopaminergic impact on axonal myelination.
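The group comparison described above rests on two standard steps: Fisher r-to-z transformation of the correlations and Welch's t-test with a permutation-derived p value. The sketch below shows one way these steps fit together; the correlation values, their number, and the simple label-permutation scheme are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
from scipy import stats

def fisher_z(r):
    """Fisher r-to-z transform so correlations can be compared with t statistics."""
    return np.arctanh(r)

def welch_t_permutation(z_a, z_b, n_perm=10000, rng=None):
    """Welch's t on z-transformed correlations plus a label-permutation p value."""
    rng = np.random.default_rng(rng)
    t_obs, _ = stats.ttest_ind(z_a, z_b, equal_var=False)
    pooled = np.concatenate([z_a, z_b])
    n_a = len(z_a)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i], _ = stats.ttest_ind(perm[:n_a], perm[n_a:], equal_var=False)
    p = (np.abs(null) >= abs(t_obs)).mean()
    return t_obs, p

# Hypothetical z-transformed BP-FA correlations from a set of ROI pairs per group.
r_controls = np.array([-0.45, -0.30, -0.52, -0.38, -0.41, -0.25, -0.60, -0.35])
r_patients = np.array([-0.10, 0.05, -0.20, 0.12, -0.05, 0.02, -0.15, 0.08])
print(welch_t_permutation(fisher_z(r_controls), fisher_z(r_patients), n_perm=2000))
```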
Dyslipidemia is a risk factor for cognitive impairment. We studied the association between interindividual variability of plasma lipids and white matter (WM) microstructure, using diffusion tensor imaging (DTI) in 273 healthy adults. Special focus was placed on 7 regions of interest (ROI) which are structural components of cognitive neurocircuitry. We also investigated the effect of plasma lipids on cerebrospinal fluid (CSF) neurofilament light chain (NfL), an axonal degeneration marker. Low density lipoprotein (LDL) and triglyceride (TG) levels showed a negative association with axial diffusivity (AxD) in multiple regions. High density lipoproteins (HDL) showed a positive correlation. The association was independent of Apolipoprotein E (APOE) genotype, blood pressure or use of statins. LDL moderated the relation between NfL and AxD in the body of the corpus callosum (p&#x2009;=&#x2009;0.041), right cingulum gyrus (p&#x2009;=&#x2009;0.041), right fornix/stria terminalis (p&#x2009;=&#x2009;0.025) and right superior longitudinal fasciculus (p&#x2009;=&#x2009;0.020), and TG in the right inferior longitudinal fasciculus (p&#x2009;=&#x2009;0.004) and left fornix/stria terminalis (p&#x2009;=&#x2009;0.001). We conclude that plasma lipids are associated with WM microstructural changes and axonal degeneration and might represent a risk factor in the transition from healthy aging to disease.
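The moderation effects reported above (LDL or TG moderating the NfL-AxD relation) are conventionally tested as an interaction term in a linear model. A minimal sketch, assuming invented per-subject values and an arbitrary covariate set (age only), could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per participant. Variable names mirror the
# abstract (AxD, NfL, LDL) but the values and the covariate set are invented.
rng = np.random.default_rng(0)
n = 273
df = pd.DataFrame({
    "AxD": rng.normal(1.2, 0.1, n),        # axial diffusivity in one ROI
    "NfL": rng.normal(600, 150, n),        # CSF neurofilament light chain
    "LDL": rng.normal(120, 30, n),         # plasma LDL
    "age": rng.normal(65, 8, n),
})

# Moderation is tested as an NfL x LDL interaction on AxD, adjusting for age.
model = smf.ols("AxD ~ NfL * LDL + age", data=df).fit()
print(model.summary().tables[1])           # the NfL:LDL row carries the moderation test
```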
Duchenne muscular dystrophy (DMD) is an X-linked recessive neuromuscular disorder caused by absence of dystrophin protein. Dystrophin is expressed in muscle, but also in the brain. Difficulties with attention/inhibition, working memory and information processing are well described in DMD patients but their origin is poorly understood. The default mode network (DMN) is one of the networks involved in these processes. Therefore we aimed to assess DMN connectivity in DMD patients compared to matched controls, to better understand the cognitive profile in DMD. T1-weighted and resting state functional MRI scans were acquired from 33 DMD and 24 male age-matched controls at two clinical sites. Scans were analysed using FMRIB Software Library (FSL). Differences in the DMN were assessed using FSL RANDOMISE, with age as covariate and threshold-free cluster enhancement including multiple comparison correction. Post-hoc analyses were performed on the visual network, executive control network and fronto-parietal network with the same methods. In DMD patients, the level of connectivity was higher in areas within the control DMN (hyperconnectivity) and significant connectivity was found in areas outside the control DMN. No hypoconnectivity was found and no differences in the visual network, executive control network and fronto-parietal network. We showed differences both within and in areas outside the DMN in DMD. The specificity of our findings to the DMN can help provide a better understanding of the attention/inhibition, working memory and information processing difficulties in DMD.
Fluorescent protein technology has evolved to include genetically encoded biosensors that can monitor levels of ions, metabolites, and enzyme activities as well as protein conformation and even membrane voltage. They are well suited to live-cell microscopy and quantitative analysis, and they can be used in multiple imaging modes, including one- or two-photon fluorescence intensity or lifetime microscopy. Although not nearly complete, there now exists a substantial set of genetically encoded reporters that can be used to monitor many aspects of neuronal and glial biology, and these biosensors can be used to visualize synaptic transmission and activity-dependent signaling in vitro and in vivo. In this review, we present an overview of design strategies for engineering biosensors, including sensor designs using circularly permuted fluorescent proteins and using fluorescence resonance energy transfer between fluorescent proteins. We also provide examples of indicators that sense small ions (e.g., pH, chloride, zinc), metabolites (e.g., glutamate, glucose, ATP, cAMP, lipid metabolites), signaling pathways (e.g., G protein-coupled receptors, Rho GTPases), enzyme activities (e.g., protein kinase A, caspases), and reactive species. We focus on examples where these genetically encoded indicators have been applied to brain-related studies and used with live-cell fluorescence microscopy.
In the last decade, drastic changes in the understanding of the role of the olfactory bulb and piriform cortex in odor detection have taken place through awake behaving recording in rodents. It is clear that odor responses in mitral and granule cells are strikingly different in the olfactory bulb of anesthetized versus awake animals. In addition, sniff recording has evidenced that mitral cell responses to odors during the sniff can convey information on the odor identity and sniff phase. Moreover, we review studies that show that the mitral cell conveys information on not only odor identity but also whether the odor is rewarded or not (odor value). Finally, we discuss how the substantial increase in awake behaving recording raises questions for future studies.
Human research with psychedelics is making groundbreaking discoveries. Psychedelics modify enduring elements of personality and seemingly reduce anxiety, depression, and substance dependence in small but well-designed clinical studies. Psychedelics are advancing through pharmaceutical regulatory systems, and neuroimaging studies have related their extraordinary effects to select brain networks. This field is making significant basic science and translational discoveries, yet preclinical studies have lagged this renaissance in human psychedelic research. Preclinical studies have a lot to offer psychedelic research as they afford tight control of experimental parameters, subjects with documented drug histories, and the capacity to elucidate relevant signaling cascades as well as conduct invasive mechanistic studies of neurochemistry and neural circuits. Safety pharmacology, novel biomarkers, and pharmacokinetics can be assessed in disease state models to advance psychedelics toward clinical practice. This chapter documents the current status of psychedelic research, with the thematic argument that new preclinical studies would benefit this field.
This chapter discusses the premotor neural mechanisms that control horizontal saccadic eye movements. Oculomotoneurons carry a pulse-step signal that underlies the pulse-step force driving the overdamped plant. The pulse and step are both generated by a common signal, arising from medium-lead burst neurons in the pons. Their burst signal encodes saccadic eye velocity, while the number of spikes in the burst relates to the saccade amplitude. The step component, which encodes the eye position, is obtained by neural integration of the burst. Several oculomotor neural disorders can be explained by impairments in the binocular push-pull organization of this pulse-step mechanism. Plasticity of the pulse-step control, e.g., in response to muscle weakening, is mediated by cerebellar vermis and flocculus. Saccadic offset may be controlled, either by active braking, or by an exponential slide signal. The neurophysiology is summarized by a quantitative model, in which the firing rate of burst neurons is controlled by a dynamic negative feedback loop that carries the instantaneous eye position signal from the neural integrator. This signal is compared with a desired eye-position command in the head from higher centers, and the resulting dynamic motor error drives the high-gain burst cells. Instability of the system is prevented by the mutual inhibitory interaction between burst cells and omnipause neurons. The model explains many features of normal saccades, but also accounts for pathologies and abnormalities like dynamic overshoots and saccade oscillations.
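To make the local-feedback pulse-step scheme concrete, the toy simulation below implements a Robinson-style burst generator: a saturating burst nonlinearity driven by dynamic motor error, a common integrator supplying both the displacement feedback and the position step, and an overdamped first-order plant driven by the matched pulse-step. It is an illustrative sketch with arbitrary parameters, not the quantitative model summarized in the chapter.

```python
import numpy as np

dt = 0.0001                  # time step (s)
T_plant = 0.150              # overdamped plant time constant (s)
b_max, beta = 900.0, 2.5     # saturating burst nonlinearity (deg/s, deg)

def simulate_saccade(amplitude_deg, t_max=0.08):
    n = int(t_max / dt)
    eye = np.zeros(n)        # eye position (deg)
    displacement = 0.0       # resettable displacement integrator (feedback path)
    step = 0.0               # neural integrator output (eye position command)
    for i in range(1, n):
        motor_error = amplitude_deg - displacement
        burst = b_max * (1.0 - np.exp(-max(motor_error, 0.0) / beta))  # pulse (deg/s)
        displacement += burst * dt          # feedback integration of the burst
        step += burst * dt                  # common neural integrator -> step
        drive = step + T_plant * burst      # matched pulse-step innervation
        eye[i] = eye[i - 1] + dt * (drive - eye[i - 1]) / T_plant      # overdamped plant
    return eye

eye = simulate_saccade(10.0)
print(round(eye[-1], 2))     # settles near the 10-deg target without overshoot
```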
In the present study, we used a modification of the rabbit small clot embolic stroke model (RSCEM), a multiple-infarct ischemia model, to achieve reperfusion (REP) through the internal carotid artery (ICA) following small clot embolization. We determined whether increasing regional cortical blood flow (RCBF) following an embolic stroke is beneficial to neurological outcome. We compared this to cerebral reperfusion induced by the administration of the thrombolytic Tenecteplase (TNK, 1.5 mg/kg, IV bolus) in the presence or absence of REP. In this study, we also measured the incidence of ICH following REP and thrombolytic treatment. Following embolization, RCBF was reduced to 48-55% of baseline. When REP was induced by removal of a CCA ligature, RCBF initially increased to 185% of baseline. REP (P(50)=1.18+/-0.43 mg) had no effect on embolization-induced behavior measured 24 h following embolization compared to control (P(50)=1.01+/-0.48 mg). However, TNK treatment (2 h post-embolization) in the absence or presence of REP (initiated 2 h following embolization) significantly (p&lt;0.05) increased the group P(50) to 2.92+/-0.55 mg and 2.42+/-0.40 mg, respectively. In addition, ICH was increased in the REP (42%, p&lt;0.05) and REP-TNK (35%, p&gt;0.05) groups compared to either the control group (5.5%) or the TNK group (10%). This study shows that reperfusion through the ICA can increase RCBF following embolization, but this is not associated with improved neurological outcome measured using quantal analysis. However, TNK administration significantly improved behavioral outcome when given 2 h following embolization, an improvement that was not affected by combining TNK with REP.
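For readers unfamiliar with quantal analysis in this model, the group P(50) is the clot burden producing behavioral deficits in half the animals, estimated from a quantal dose-response fit; a larger P(50) therefore indicates behavioral protection. The sketch below shows one common way to obtain such an estimate with a logistic fit in log-dose; all data points are invented, and the exact curve form used in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, p50, slope):
    """Quantal dose-response curve: probability of abnormal behaviour vs clot dose."""
    return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(p50))))

# Hypothetical data: clot weight (mg) and fraction of rabbits scored abnormal.
dose = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
frac_abnormal = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 1.00])

(p50, slope), _ = curve_fit(logistic, dose, frac_abnormal, p0=[1.0, 2.0])
print(f"estimated P50 = {p50:.2f} mg")   # treatment-induced protection shifts P50 upward
```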
A significant number of studies that evaluated tactile-pain interactions employed heat to evoke nociceptive responses. However, relatively few studies have examined the effects of non-noxious thermal stimulation on tactile discriminative capacity. In this study, the impact that non-noxious heat had on three features of tactile information processing capacity was evaluated: vibrotactile threshold, amplitude discriminative capacity, and adaptation. It was found that warming the skin made a significant improvement on a subject's ability to detect a vibrotactile stimulus, and although the subjects' capacities for discriminating between two amplitudes of vibrotactile stimulation did not change with skin heating, the impact that adapting or conditioning stimulation normally had on amplitude discrimination capacity was significantly attenuated by the change in temperature. These results suggested that although the improvements in tactile sensitivity that were observed could have been a result of enhanced peripheral activity, the changes in measures that reflect a decrease in the sensitization to repetitive stimulation are most likely centrally mediated. The authors speculate that these centrally mediated changes could be a reflection of a change in the balance of cortical excitation and inhibition.
The localization of the axon growth inhibitory molecule Nogo and its receptor (NgR) was investigated in the mouse spinal cord during prenatal development of the commissural pathway. Using the antibody N18, an intense signal for Nogo was localized largely on radial glia processes that are immunoreactive to the RC2 antibody during the major period of commissural axon growth and was gradually reduced towards the end of gestation. The glial processes ramified extensively in the ventral funiculus and resided within the interfascicular space between the longitudinally projecting axons. Axonal localization of Nogo was observed on the premidline segment of commissural axons and on axons in the dorsal and ventral funiculi, but only at the earliest stage of pathway development. Nogo signals were initially weak on the glial processes during the period of axon crossing in the floor plate but were elevated once decussation was complete. NgR was expressed on the commissural axons; the expression pattern is spatially regulated, being low along the premidline and midline courses but upregulated when the axons leave the floor plate. These expression patterns raise the possibilities that the glial-specific form of Nogo may be involved in the guidance of commissural axons by (i) preventing recrossing of axons across the midline through an upregulation of axonal NgR and (ii) partitioning axons in the ventral funiculus into longitudinal fascicles.
The aim of this study was to quantitatively investigate the chronic ethanol-induced cerebral metabolic changes in various regions of the rat brain, using the proton high-resolution magic angle spinning spectroscopy technique. The rats were divided into two groups (control group: N=11, ethanol-treated group: N=11) and fed with the liquid diets for 10 weeks. In each week, the mean intake volumes of liquid diet were measured. The brain tissues, including cerebellum (Cere), frontal cortex (FC), hippocampus (Hip), occipital cortex (OC) and thalamus (Thal), were harvested immediately after the end of the experiments. The ex vivo proton spectra for the five brain regions were acquired with the Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence on a 500-MHz NMR spectrometer. All of the spectra were processed using the LCModel software, with a simulated basis-set file, and the metabolite levels were referenced to total creatine. In the ethanol liquid diet group, there were significant increases in the metabolite ratio levels, as compared to control (Cere: alanine, glutathione, and N-acetylaspartate; FC: phosphocholine and taurine; Hip: alanine, glutamine, and N-acetylaspartate; OC: glutamine; Thal: alanine, &#x3b3;-aminobutyric acid, glutamate, glycerophosphocholine, phosphocholine, taurine, and free choline). However, in the ethanol liquid diet group, the myo-inositol levels of the OC were significantly lower. The present study demonstrates how chronic ethanol consumption affects cerebral metabolites in the chronic ethanol-treated rat. Therefore, this result could be useful to pursue clinical applications for quantitative diagnosis in human alcoholism.
We have shown previously that intracerebroventricular (icv) injection of naloxone (a non-selective opioid receptor antagonist) or naloxonazine (a selective &#x3bc;1-opioid receptor antagonist) at the maintenance phase of hibernation arouses Syrian hamsters from hibernation. This study was designed to clarify the role of &#x3b2;-endorphin (an endogenous &#x3bc;-opioid receptor ligand) in the regulation of body temperature (T(b)) during the maintenance phase of hibernation. The number of c-Fos-positive cells and &#x3b2;-endorphin-like immunoreactivity increased in the arcuate nucleus (ARC) after hibernation onset. In contrast, endomorphin-1 (an endogenous &#x3bc;-opioid receptor ligand)-like immunoreactivity observed in the anterior hypothalamus decreased after hibernation onset. In addition, hibernation was interrupted by icv injection of anti-&#x3b2;-endorphin antiserum at the maintenance phase of hibernation. The mRNA expression level of proopiomelanocortin (a precursor of &#x3b2;-endorphin) in the ARC did not change throughout the hibernation phase. However, the mRNA expression level of prohormone convertase-1 increased after hibernation onset. [D-Ala2,N-MePhe4,Gly-ol5] enkephalin (DAMGO, a selective &#x3bc;-opioid receptor agonist) microinjection into the dorsomedial hypothalamus (DMH) elicited a more marked T(b) decrease than microinjection into other sites such as the preoptic area (PO), anterior hypothalamus (AH), lateral hypothalamus (LH), ventromedial hypothalamus (VMH) and posterior hypothalamus (PH). However, DAMGO microinjected into the medial septum produced negligible changes in T(b). These results suggest that &#x3b2;-endorphin synthesized in ARC neurons regulates T(b) during the maintenance phase of hibernation by activating &#x3bc;-opioid receptors in the PO, AH, VMH, DMH and PH.
Although anatomical data indicate that the caudal ventrolateral medulla (CVLM) projects directly to the subfornical organ (SFO), little is known about the afferent information relayed through the CVLM to the SFO. Experiments were done in the anesthetized rat to investigate whether CVLM neurons mediate baroreceptor afferent information to the SFO and whether this afferent information alters the response of SFO neurons to systemic injections of angiotensin II (ANG II). Extracellular single unit recordings were made from 78 spontaneously discharging single units in the SFO. Of these, 32 (41%) responded to microinjection of L-glutamate (L-Glu; 0.25 M; 10 nl) into the CVLM (27/32 were inhibited and 5/32 were excited). All 32 units were also excited by systemic injections of ANG II (250 ng/0.1 ml, ia). However, only those units inhibited by the CVLM (n=27) were found to be inhibited by the reflex activation of baroreceptors following systemic injections of phenylephrine (2 &#x3bc;g/kg, iv). Activation of the CVLM or arterial baroreceptors in conjunction with ANG II resulted in an attenuation of the SFO unit's response to ANG II. Finally, microinjections (100 nl) of the synaptic blocker CoCl(2) or the non-specific glutamate receptor antagonist kynurenic acid into the CVLM attenuated (10/13 units tested) the SFO neuron's response to activation of baroreceptors, but not the unit's response evoked by systemic ANG II. Taken together, these data suggest that baroreceptor afferent information relayed through the CVLM functions to modulate the activity of neurons within the SFO in response to extracellular signals of body fluid balance.
[11C]CUMI-101 is the first selective serotonin 1A receptor (5-HT1AR) partial agonist radiotracer for positron emission tomography (PET) tested in vivo in nonhuman primates and humans. We evaluated specific binding of [3H]CUMI-101 by quantitative autoradiography studies in postmortem baboon and human brain sections using the 5-HT1AR antagonist WAY-100635 as a displacer. The regional and laminar distributions of [3H]CUMI-101 binding in baboon and human brain sections matched the known distribution of [3H]8-OH-DPAT and [3H]WAY-100635. Prazosin did not measurably displace [3H]CUMI-101 binding in baboon or human brain sections, thereby ruling out [3H]CUMI-101 binding to &#x3b1;1-adrenergic receptors. This study demonstrates that [11C]CUMI-101 is a selective 5-HT1AR ligand for in vivo and in vitro studies in baboon and human brain.
Prosaposin (also known as SGP-1) is an intriguing multifunctional protein that plays roles both intracellularly, as a regulator of lysosomal enzyme function, and extracellularly, as a secreted factor with neuroprotective and glioprotective effects. Following secretion, prosaposin can undergo endocytosis via an interaction with the low-density lipoprotein-related receptor 1 (LRP1). The ability of secreted prosaposin to promote protective effects in the nervous system is known to involve activation of G proteins, and the orphan G protein-coupled receptors GPR37 and GPR37L1 have recently been shown to mediate signaling induced by both prosaposin and a fragment of prosaposin known as prosaptide. In this review, we describe recent advances in our understanding of prosaposin, its receptors and their importance in the nervous system.
The locus coeruleus (LC) nucleus is involved in noradrenergic descending pain modulation. The LC receives dense orexinergic projections from the lateral hypothalamus. Orexin-A and -B are hypothalamic peptides which modulate a variety of brain functions via orexin type-1 (OX1) and orexin type-2 (OX2) receptors. Previous studies have shown that activation of OX1 receptors induces endocannabinoid synthesis and alters synaptic neurotransmission by retrograde signaling via affecting cannabinoid type-1 (CB1) receptors. In the present study, the interaction of orexin-A and endocannabinoids was examined at the LC level in a rat model of inflammatory pain. Pain was induced by formalin (2%) injection into the hind paw. Intra-LC microinjection of orexin-A decreased the nociception score during both phases of the formalin test. Furthermore, intra-LC microinjection of either SB-334867 (an OX1 receptor antagonist) or AM251 (a CB1 receptor antagonist) increased flinches and also the nociception score during phases 1 and 2 and the inter-phase of the formalin test. The analgesic effect of orexin-A was diminished by prior intra-LC microinjection of either SB-334867 or AM251. These data show that activation of OX1 receptors in the LC can induce analgesia and that blockade of OX1 or CB1 receptors is associated with hyperalgesia during the formalin test. Our findings also suggest that CB1 receptors may modulate the analgesic effect of orexin-A. These results outline a new mechanism by which orexin-A modulates nociceptive processing in the LC nucleus.
Epidemiological studies indicate that light-moderate alcohol (ethanol) consumers tend to have reduced risks of cognitive impairment and progression to dementia during aging. Exploring possible mechanisms, we previously found that moderate ethanol preconditioning (MEP, 20-30 mM) of rat brain cultures for several days instigated neuroprotection against &#x3b2;-amyloid peptides. Our biochemical evidence implicated the NMDA receptor (NMDAR) as a potential neuroprotective "sensor", specifically via synaptic NMDAR signaling. It remains unclear how ethanol modulates the receptor and its downstream targets to engender neuroprotection. Here we confirm with deconvolution microscopy that MEP of rat mixed cerebellar cultures robustly increases synaptic NMDAR localization. Phospho-activation of the non-receptor tyrosine kinases Src and Pyk2, known to be linked to synaptic NMDAR, is also demonstrated. Additionally, the preconditioning enhances levels of an antioxidant protein, peroxiredoxin 2 (Prx2), reported to be downstream of synaptic NMDAR signaling, and NMDAR antagonism with memantine (earlier found to abrogate MEP neuroprotection) blocks the Prx2 elevations. To further link Prx2 with antioxidant-based neuroprotection, we circumvented the ethanol preconditioning-NMDAR pathway by pharmacologically increasing Prx2 with the naturally-occurring cruciferous compound, 3H-1,2-dithiole-3-thione (D3T). Thus, D3T pretreatment elevated Prx2 expression to a similar extent as MEP, while concomitantly preventing &#x3b2;-amyloid neurotoxicity; D3T also protected the cultures from hydrogen peroxide toxicity. The findings support a mechanism that couples synaptic NMDAR signaling, Prx2 expression and augmented antioxidant defenses in ethanol preconditioning-induced neuroprotection. That this mechanism can be emulated by a cruciferous vegetable constituent suggests that such naturally-occurring "nutraceuticals" may be useful in therapy for oxidative stress-related dementias.
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by loss of memory and cognitive abilities. In AD, amyloid &#x3b2; (A&#x3b2;) protein aggregates in the brain of patients, forming amyloid plaques. A&#x3b2; plaques are known to be surrounded by activated microglial cells. Serum amyloid A (SAA) is elevated from several hundred to 1000-fold as part of the immune response against various injuries, including trauma, infection, and inflammation. Additionally, continuous elevation of SAA is related to the development of amyloidosis. This study was designed to identify the relationship between SAA1 and AD using liver specific SAA1 overexpressing mice (TG), because SAA1 is expressed in the liver during the acute phase. We detected exogenous SAA1 expression in the brain of TG mice. This result implies that liver-derived SAA1 migrates to the brain tissues. Thus, we confirmed that the blood brain barrier (BBB) functioned normally using Evans-blue staining and CARS. Furthermore, our results show an increase in the accumulation of the 87kDa form of A&#x3b2; in TG mice compared to wild type mice (WT). Additionally, the number of microglial cells and levels of pro-inflammatory cytokines were increased. Next, we investigated the relationship between SAA1 and depression by performing social interaction tests. The results showed that TG mice have a tendency to avoid stranger mice and an impaired social recognition. In conclusion, the SAA1 TG mouse model is a valuable model to study depression.
Uridine is a potential endogenous neuromodulator that has been studied for several decades for its antiepileptic effect, but the results have been controversial. One remarkable feature of uridine is its regulatory action on dopaminergic pathways. In this study, the changes in uridine and dopamine (DA) release were examined in the mouse corpus striatum after intraperitoneal pilocarpine (PC) injection. Then, the effect of uridine pre-treatment on DA release and dopamine receptor (DR) expression was determined. The results revealed an initial increase in uridine release, followed by a downward trend, after injection of 400 mg/kg PC. In contrast, DA release increased continuously and significantly. The expression of dopamine receptor-1 (D1R) increased in a dose-dependent manner while that of dopamine receptor-2 (D2R) decreased significantly. Prophylactic administration of uridine significantly relieved the PC-induced high-frequency, high-amplitude activity and dose-dependently reversed the PC-induced changes in DA and DR levels. These findings suggest that uridine produces an antiepileptic effect, which might be mediated in part by interference with the dopaminergic system.
Medial frontal activity in the EEG is enhanced following negative feedback and varies in relation to dimensions of impulsivity. In 22 undergraduate students (M<sub>age</sub>&#x202f;=&#x202f;18.92&#x202f;years, range 18-22&#x202f;years), we employed a probabilistic negative reinforcement learning paradigm in which choices to avoid were followed by cues indicating successful or unsuccessful avoidance of an impending aversive noise. Our results showed that medial frontal theta power was enhanced following a cue that signaled avoidance was unsuccessful. In addition, self-reported lack of perseverance, a dimension of impulsivity characterized by an inability to maintain focus and determination during a challenging task, was negatively correlated with medial frontal theta elicited to an unsuccessful avoidance cue. We also observed robust differences in alpha attenuation and beta modulation following unsuccessful avoidance cue presentation. To our knowledge, this is the first study in humans to show a functional relation between medial frontal theta modulation and avoidance success. We discuss our findings in the context of frontal theta and self-regulation, negative reinforcement, and anxiety.
We emulated instances of open traumatic brain injury (TBI) in a maritime disaster. New Zealand rabbit models were used to evaluate the pathophysiological changes in open TBI with and without the influence of artificial seawater. New Zealand rabbits were randomly divided into 3 groups. The Control group consisted of normal animals only. Animals in the TBI and TBI&#xa0;+&#xa0;Seawater groups underwent craniotomy, with the dura mater incised and the brain tissue exposed to free-fall impact. Afterward, only the TBI&#xa0;+&#xa0;Seawater group received on-site artificial seawater infusion. Brain water content (BWC) and blood-brain barrier (BBB) permeability were assessed. Reactive oxygen species (ROS) levels were measured. Western blotting and immunofluorescence were employed to detect: the apoptosis-related factors Caspase-3, Bax and Bcl-2; the angiogenesis-related factors CD31 and CD34; the astrogliosis-related factor glial fibrillary acidic protein (GFAP); and the potential neuron injury indicator neuron-specific enolase (NSE). Hematoxylin &amp; eosin, Masson-trichrome and Nissl stainings were performed for pathological observations. Compared to the Control group, the TBI group manifested abnormal neuronal morphology; increased BWC; compromised BBB integrity; increased ROS, Bax, CD31, CD34, Caspase-3 and GFAP expression; and decreased Bcl-2 and NSE expression. Seawater immersion caused all of these changes, except BWC, to become more pronounced. Seawater immersion worsens the damage inflicted on brain tissue by open TBI. It aggravates hypoxia in brain tissue, upregulates ROS expression, increases neuron sensitivity to apoptosis-inducing factors, and promotes angiogenesis as well as astrogliosis.
Intracerebral hemorrhage (ICH) is a subtype of stroke that causes major motor impairments. Brain-derived neurotrophic factor (BDNF) is known to have important roles in neuroplasticity and beneficially contributes to stroke recovery. This study aimed to characterize BDNF expression in the motor cortex after ICH and investigate the relationship between cortical BDNF expression and behavioral outcomes using an ICH rat model. Wistar rats were divided into two groups: a SHAM group (n&#xa0;=&#xa0;7) and an ICH group (n&#xa0;=&#xa0;8). ICH was induced by the injection of collagenase into the left striatum near the internal capsule. For behavioral assessments, the cylinder test and open field test were performed before surgery and 3&#xa0;days, 1&#xa0;week, 2&#xa0;weeks, and 4&#xa0;weeks after surgery. Following the behavioral assessments at 4&#xa0;weeks, BDNF expression in the ipsilateral and contralateral motor cortex was assayed using RT-PCR and ELISA methods. There was no significant difference in either cortical BDNF mRNA or protein expression levels between the SHAM and ICH groups. However, the asymmetry index of BDNF mRNA expression between the ipsilateral and contralateral hemispheres shifted to the ipsilateral hemisphere after ICH. Furthermore, the ipsilateral cortical BDNF mRNA expression level positively correlated with motor function in the affected forelimb after ICH. This study describes for the first time that cortical BDNF mRNA expression is related to post-ICH motor impairment. These results highlight the importance of assessing the interhemispheric laterality of BDNF expression and could help develop novel treatment strategies for BDNF-dependent recovery after ICH.
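The abstract above refers to an asymmetry index of BDNF mRNA expression without giving its formula; a commonly used laterality index is (ipsi - contra)/(ipsi + contra). The sketch below computes such an index and the kind of expression-behavior correlation described, using invented per-animal values rather than the study's data.

```python
import numpy as np
from scipy import stats

def asymmetry_index(ipsi, contra):
    """Common laterality/asymmetry index: positive values indicate an ipsilateral shift.
    The abstract does not give the exact formula; this is one standard definition."""
    ipsi, contra = np.asarray(ipsi, float), np.asarray(contra, float)
    return (ipsi - contra) / (ipsi + contra)

# Hypothetical per-animal BDNF mRNA levels (arbitrary units) and forelimb scores.
ipsi_bdnf    = np.array([1.10, 1.25, 0.95, 1.40, 1.05, 1.30, 1.20, 1.15])
contra_bdnf  = np.array([1.00, 1.05, 1.00, 1.10, 1.00, 1.05, 1.02, 1.04])
cylinder_use = np.array([0.35, 0.42, 0.28, 0.50, 0.33, 0.45, 0.40, 0.38])  # affected-limb use

ai = asymmetry_index(ipsi_bdnf, contra_bdnf)
r, p = stats.pearsonr(ipsi_bdnf, cylinder_use)   # ipsilateral expression vs motor function
print(ai.round(3), f"r = {r:.2f}, p = {p:.3f}")
```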
Among auditory stimuli, one's own name is one of the most powerful: it automatically captures attention and elicits a robust electrophysiological response. The subject’s own name (SON) is preferentially processed in the right hemisphere, mainly because of its self-relevance and emotional content, together with other personally relevant information such as the voice of a familiar person. Whether emotional and self-relevant information can reliably attract attention, and whether such stimuli can eventually be introduced into clinical studies, remains unclear. In the present study we used EEG and asked participants to count a target name (active condition) or to simply listen to the SON or other unfamiliar names uttered by a familiar or unfamiliar voice (passive condition). The data reveal that the target name elicits a strong alpha event-related desynchronization with respect to non-target names and additionally triggers left-lateralized theta synchronization as well as delta synchronization. In the passive condition, alpha desynchronization was observed for familiar-voice and SON stimuli in the right hemisphere. Altogether, we speculate that participants engage additional attentional resources, indexed by alpha desynchronization, when counting a target name or when listening to personally relevant stimuli, whereas left-lateralized theta synchronization may be related to verbal working memory load. After validating the present protocol in healthy volunteers, we suggest moving one step further and applying it to patients with disorders of consciousness, in whom the degree of residual cognitive processing and self-awareness is still insufficiently understood. Highlights: EEG during an active–passive task based on first names was time-frequency analyzed. The presented names were uttered either by an unfamiliar or a familiar voice. Counted names elicited alpha desynchronization and left theta synchronization. Own name and familiar voices enhanced strong right alpha desynchronization. Alpha desynchronization reflects attentional engagement and emotional processing. ## Introduction Many studies have investigated auditory processing of the subject’s own name (SON). Not least because of its countless repetitions over a lifetime, the SON is intrinsically meaningful to individuals. In fact, among auditory stimuli, one's own name is considered the most powerful stimulus, capturing attention without any voluntary effort, as demonstrated, for example, in the classical “cocktail party” phenomenon ( , , ) or by its residual processing during non-conscious states such as sleep ( , ). EEG studies have shown that the presentation of the SON evokes larger “P300” ( ) or “P3” responses ( ) than other first names, which is to be expected, as the P3 is the most significant event-related potential known to be related to the processing of relevant or “target” stimuli ( ). In the frequency domain, responses to the SON have been studied only recently. It has been reported that alpha (8–12 Hz) and theta (4–7 Hz) activity reflect attentional and/or memory processes ( , , ). The evaluation of on-going oscillatory activity in response to SON stimuli can therefore shed light on the cognitive functions involved. With respect to event-related responses, stronger theta event-related synchronization (ERS) to the SON has been reported and interpreted as reflecting attentional engagement.
Other recent studies found a decrease in alpha power in response to SON presentation which the authors likewise interpreted in terms of enhanced alertness or increased active processing due to release of inhibition ( , ). Interestingly, also in patients suffering from a disorder of consciousness (DOC) or locked in syndrome (LIS) it is known that the salient SON can still evoke a significant brain response. Surprisingly not only minimally conscious state (MCS) but even supposedly unaware vegetative state/unresponsive wakefulness syndrome (VS/UWS) patients ( ) seem to be able to differentiate their own name from other names. A similar study by Fischer in line with these findings reports that some DOC patients, irrespective of their diagnosis, are able to process SON stimuli when they are presented as deviant stimuli in a stream of tones. The authors suggest that the processing of stimulus novelty might prove preservation of some cognitive function independent of conscious awareness ( ). Because of its self-relevance and its emotional content, the SON is preferentially processed in the right hemisphere together with other personally relevant information ( , ; ; ). More interestingly, the activation of the right temporal-parietal junction in response to SON has been related to self-recognition processes ( ). Interestingly, the processing of familiar voices or identifying the individual identity of voices likewise elicits right hemispheric dominant brain responses ( , ). However, it has been discussed that the passive own name paradigm, in which subjects only passively listen to the presented stimuli might reflect mere automatic stimulus identification and does not allow for an inference about the level of preserved awareness ( , ). Addressing this criticism, several EEG studies instructed participants and patients to focus their attention on an auditory target stimulus while ignoring other irrelevant stimuli ( , ). Specifically, a greater P3 component for attended stimuli was observed in controls as well as in MCS patients ( ). In a more recent study using time–frequency analysis, greater alpha event related desynchronization (ERD) was evident when participants were asked to count the SON, probably reflecting enhanced attentional engagement ( ). In addition, stronger theta event related synchronization (ERS) reflecting working memory involvement was found when subjects were counting as compared to listening to the SON. This task related theta-synchronization was only evident for the SON, but not for unfamiliar name (UN) stimuli, indicating that top-down processes might be easier to engage when the stimulus is emotionally salient and already strongly bottom-up processed. In line with this view, it has been demonstrated earlier that familiar objects, because of their biographical and emotional relevance, are able to increase the number of responses as well as their goal-directedness in DOC patients ( ). Furthermore, meaningful stimuli with high emotional valence, such as infant cries or the voice of a family member, can induce more widespread “higher-order” cortical responses ( , , , ) and facilitate applying top-down attention to relevant input ( , , ). Given those findings, we believe that it is important to further elaborate on study protocols which focus on emotionally relevant stimuli on an individual level. In the current study we used a modified version of the classical own name paradigm including an active “counting” as well as a familiar voice condition. 
The active condition, in which subjects were asked to (silently) count a specific unfamiliar name, should give important insight into the amount of top-down control and attentional resources engaged by target names and could, therefore, in future studies allow for identifying “awareness” in patients suffering from DOC, in whom behavioural assessment is often challenging and leads to high rates of misdiagnosis ( , ). The introduction of familiar voices aims at increasing the bottom-up stimulus strength by adding emotional valence, which should make it easier to attend to the presented stimuli and will provide us with important information regarding the processing of emotional and self-relevant information in the absence of an explicit cognitive demand. We will focus on on-going oscillatory activity that, unlike event-related potentials, is not necessarily exactly time-locked to the presentation of the stimulus. In fact, time–frequency analysis, quantifying evoked as well as induced brain activity, has been shown to be more sensitive than mere evoked responses, which are more prone to temporal dispersion ( ). Furthermore, concerning the intended clinical application in DOC patients in the future, it is important to consider that many DOC patients have prevailing background activity in the delta range that can interfere substantially with event-related potentials ( , , ). Consequently, we believe that time-frequency analysis together with a modified own-name paradigm using emotionally and personally salient stimuli will provide a more sensitive measure for identifying cognitive processing and, in future clinical applications, conscious processing. ## Results ### Alpha ERD in the active counting condition The main findings of the ANOVA CONDITION (target vs. non-target; both spoken in a familiar voice)×ELECTRODES (Fz vs. Cz vs. Pz)×TIME ( t 1 vs. t 2 vs. t 3 vs. t 4; t 1=0–200 ms, t 2=200–400 ms, t 3=400–600 ms and t 4=600–800 ms post-stimulus) showed that alpha desynchronization was higher for the target than for non-targets ( F =5.98, p <.05) (cf. , ). Additionally, main effects for ELECTRODES ( F =5.46, p <.05) and TIME ( F =8.05, p <.001) were revealed. Post hoc tests revealed that t 3 and t 4 significantly differed from t 1 ( t (13)=−3.88, p <.05; t (13)=−3.18, p <.05) while t 3 differed from t 2 ( t (13)=−3.55, p <.05). Furthermore, alpha ERD was higher on the electrode Pz compared to Cz ( t (13)=2.86, p <.05), indicating generally larger desynchronization in the posterior part of the scalp and in particular in the last two time windows. The difference between the two conditions is also embedded in the interactions CONDITION×ELECTRODES ( F =5.27, p <.05) and CONDITION×TIME ( F =11.44, p <.001). Post-hoc tests on the first interaction revealed that target stimuli evoke stronger alpha ERD compared to non-targets mainly over Pz ( t (13)=2.51, p =.013), while post-hoc testing of the latter indicated that alpha ERD was stronger in response to targets as compared to non-targets only in the later time windows ( t 3: t (13)=−2.47, p <.05; t 4: t (13)=−4.32, p <.001). On the single-subject level, we conducted one-sample t tests against zero for trials across the different conditions (for details see ) and found target-related alpha ERD to be evident in 81% of the subjects. For an overview of event-related potentials in the active condition please also refer to supplementary material and .
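As a concrete illustration of how window-wise ERD values of this kind can be derived and compared, the sketch below computes percent power change per post-stimulus window relative to a pre-stimulus baseline and runs paired t-tests between conditions. The sampling rate, the subject count and the random "power" time courses are placeholders, not the study's data or pipeline.

```python
import numpy as np
from scipy import stats

def windowed_erd(power, t, baseline=(-0.5, 0.0),
                 windows=((0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8))):
    """Percent power change per post-stimulus window relative to a pre-stimulus
    baseline (negative = ERD). `power` is a (n_subjects, n_samples) band-power array."""
    ref = power[:, (t >= baseline[0]) & (t < baseline[1])].mean(axis=1)
    cols = []
    for lo, hi in windows:
        act = power[:, (t >= lo) & (t < hi)].mean(axis=1)
        cols.append(100.0 * (act - ref) / ref)
    return np.column_stack(cols)

# Hypothetical alpha-power time courses at Pz for 14 subjects, target vs non-target.
fs = 250.0
t = np.arange(-0.5, 0.8, 1.0 / fs)
target = np.random.randn(14, t.size) + 10.0       # placeholder band power
nontarget = np.random.randn(14, t.size) + 10.0    # placeholder band power

erd_t, erd_nt = windowed_erd(target, t), windowed_erd(nontarget, t)
for i, win in enumerate(["0-200", "200-400", "400-600", "600-800"]):
    tval, p = stats.ttest_rel(erd_t[:, i], erd_nt[:, i])   # paired test per window
    print(f"{win} ms: t(13) = {tval:.2f}, p = {p:.3f}")
```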
### Theta ERS in the active counting condition Theta ERS analysis revealed main effects for ELECTRODES ( F =32.43, p <.001) and TIME ( F =6.13, p <.05) as well as an interaction between ELECTRODES and TIME ( F =3.68, p <.05). According to post-hoc analyses, electrodes Fz and Cz exhibited higher theta ERS as compared to the electrode Pz ( t (13)=5.29, p <.001; t (13)=10.49, p <.001, respectively), indicating that theta ERS was most pronounced over fronto-central sites. Theta ERS was strongest 200–400 ms after stimulus onset, followed by a steady decrease over time ( t 2> t 3: t (13)=3.50, p <.05; t 2> t 4: t (13)=3.36, p <.05). In addition, the interaction ELECTRODES×TIME indicated that theta ERS was systematically higher on Fz ( t 1: t (13)=9.45, p <.001; t 2: t (13)=9.44, p <.01; t 3: t (13)=8.39, p <.001; t 4: t (13)=5.65, p <.001) and Cz in all time windows as compared to Pz ( t 1: t (13)=4.76, p <.001; t 2: t (13)=6.07, p <.001; t 3: t (13)=5.84, p <.001; t 4: t (13)=3.43, p <.05). Results are also depicted using topography maps. Since lateralization effects were evident for theta in the active counting condition, we decided to also focus on potential hemispheric differences. An ANOVA including the factors CONDITION (target vs. non-target), HEMISPHERE (C3 vs. C4) and TIME for the theta frequency revealed a nearly significant main effect for HEMISPHERE ( F =4.52, p =.055), indicating generally higher theta ERS in the left hemisphere (21.99% theta ERS on C3 vs. 18.52% at C4; t (12)=2.12). The interaction CONDITION×HEMISPHERE×TIME ( F =3.72, p <.05) indicated that theta ERS was greater for targets as compared to non-targets on the left side of the scalp and in the time window from 200 to 400 ms ( t (12)=2.186, p <.05). On a single-subject level, theta ERS was evident in more than 90% of the subjects (100% for the target condition and 92% for the non-target condition), as revealed by one-sample t tests against zero for trials across the different conditions (for details refer to ). Results are also depicted in time–frequency plots and across the scalp using topography maps (cf. ). ### Delta enhancement in the active condition Since visual inspection of other frequency bands indicated a possible involvement of the delta band in the active condition, we also tested whether there was a stimulus-specific modulation in this frequency range. Surprisingly, we found a significant effect in the active condition also in the delta range. As illustrated by the main effect of CONDITION ( F =12.16, p <.05), delta activity was significantly higher for target names as compared to non-targets ( t (13)=3.48, p <.005) over all electrodes (Fz, Cz, Pz). Additionally, the main effect of TIME ( F =31.22, p <.001) indicated that delta was modulated over time, with higher ERS from 200 to 600 ms after stimulus onset ( t 2> t 1: t (13)=8.98, p <.001; t 3> t 1: t (13)=5.65, p <.001; t 2> t 4: t (13)=6.01, p <.001; t 3> t 4: t (13)=10.17, p <.001) (cf. ). ### Alpha ERD in the passive listening condition Concerning the ANOVA NAME (SON vs. UN)×VOICE (FV vs. UV)×ELECTRODES (Fz vs. Cz vs. Pz)×TIME ( t 1 vs. t 2 vs. t 3; t 1=0–200 ms, t 2=200–400 ms, t 3=400–600 ms post-stimulus) for alpha ERD during passive listening, only a main effect for TIME ( F =5.71, p <.05) was significant. Post hoc tests revealed higher desynchronization in the alpha band around 400–600 ms ( t 3) as compared to 0–200 ms ( t 1) after stimulus onset ( t (13)=−2.82, p <.05). To again test for hemispheric differences, an additional ANOVA including the factors NAME (SON vs. 
UN), VOICE (familiar voice vs. unfamiliar voice), HEMISPHERE (P3 vs. P4) and TIME ( t 1, t 2, t 3) was calculated. A significant interaction VOICE×HEMISPHERE ( F =5.81, p <.05) indicated that the right parietal electrode (P4) showed higher alpha ERD for stimuli spoken in a familiar voice than for stimuli spoken in an unfamiliar voice ( t (13)=−3.58, p <.05). In addition, the SON, as compared to the UN, also elicited enhanced alpha ERD (NAME×HEMISPHERE×TIME: F =3.80, p <.05) over the right parietal region in the last two time windows (from 200 to 400 and from 400 to 600 ms), irrespective of VOICE ( t (13)=−2.25, p <.05; t (13)=−2.59, p <.05, respectively) (cf. for time–frequency plot and scalp distribution). For the respective comparisons using event-related potentials please refer to .

### Theta ERS in the passive listening condition

The ANOVA NAME×VOICE×ELECTRODES×TIME (for the factor levels please refer to 2.4) for the theta frequency yielded main effects of ELECTRODE ( F =22.52, p <.001) and TIME ( F =5.27, p <.05). Post hoc tests revealed that the electrode Pz showed less theta ERS than both Cz and Fz ( t (13)=−5.87, p <.001; t (13)=−4.74, p <.001, respectively) and that theta synchronization was strongest 200–400 ms post-stimulus ( t 2) ( t 2> t 1: t (13)=3.16, p <.05; t 2> t 3: t (13)=3.60, p <.05). The topographical distribution of theta ERS for the passive condition is also depicted in . For an overview of event-related potentials in the passive condition please refer to the supplementary material ( ).

## Discussion

The present study focused on oscillatory brain responses to auditory name stimuli uttered by a familiar or an unfamiliar voice. In the active condition, in which subjects had to count a specific target name, higher desynchronization in the alpha band (8–12 Hz) was found for target as compared to non-target stimuli. The response was localized around central and posterior sites and reached its maximum about 400–600 ms post-stimulus. This is consistent with previous findings showing that alpha desynchronization reflects general task demands, including attentional processes ( ). Considering that in our active condition subjects had to match the memorized target name to the heard name item by item, the result could also indicate a release of inhibition after successful matching ( ). Left-lateralized theta (4–7 Hz) ERS in the active condition also appeared to be higher for target than for non-target stimuli. Since we controlled for names of relatives and friends, only stimuli of comparable familiarity were involved in the active condition, and hence familiarity cannot account for the differences between targets and non-targets. The presentation of strictly unfamiliar names in the active condition of the current study allowed for a better differentiation of top-down attention (i.e., instruction following and counting) from automatic attention, which may be captured by the presentation of one's own name ( ). The increased theta ERS for targets on the left side is, therefore, most likely related to top-down attention and the active counting of the target name. Attending to a target name and inhibiting irrelevant name stimuli engages selective attention mechanisms and challenges working memory resources. Higher theta ERS in the left hemisphere probably reflects attention to the processing of new information or enhanced verbal working memory engagement ( , , ).
In the active condition we also found a significant effect in the delta range (1–4 Hz), with delta showing higher synchronization for target than for non-target stimuli. Previous studies reported that delta increases in tasks where internal concentration is required in order to focus attention on a specific stimulus ( , ). In addition, a reciprocal relationship between alpha and delta activity has been shown, in the sense that both frequencies together may contribute to inhibitory control ( ). Therefore, in our study, the delta increase during counting, together with alpha desynchronization, might reflect inhibition of irrelevant information (other names) and disinhibition of relevant information in order to focus attention exclusively on the target name. The active condition, as proposed in the present study, might be a promising method to assess patients with disorders of consciousness (DOC) and allow refinement of their diagnosis. However, it has to be mentioned that active paradigms of this kind will only be able to distinguish DOC patients at the higher end of the DOC spectrum, as they require the integrity of several sensory and cognitive processes at the same time. For a future application in DOC, it would be important, however, to further examine slow oscillatory (delta–theta) band involvement, since the EEG of DOC patients is usually characterized by a predominance of slow frequencies (mainly in the delta range). With the passive condition, we investigated differences between the processing of the subject's own name and unfamiliar names and, additionally, we were interested in the differential activation in response to familiar and unfamiliar voices. In fact, in the right hemisphere, parietal alpha desynchronization was higher in response to the SON as well as in response to familiar voices. Personally relevant information is known to be more powerful in attracting involuntary attention, as demonstrated by the increase in brain activity after self-referential sound presentation ( , ). As has been shown in a similar paradigm, alpha ERD can be triggered by the retrieval of information stored in long-term memory (LTM) (with LTM retrieval being a prerequisite for the identification of personal relevance) and has been interpreted as reflecting access to LTM traces that are reactivated during the ongoing task ( ). In addition, speech perception is facilitated when a highly familiar voice is presented, suggesting that familiarity may even help listeners to compensate for sensory or cognitive decline ( ). Concerning the observed lateralization effect, the right-hemispheric dominance for the SON is again possibly related to its emotional and personal relevance ( , ), which is in line with the idea that top-down involvement is more strongly reflected in the right hemisphere when listening to relevant familiar sounds ( ). The right lateralization of alpha ERD in response to familiar voices is also consistent with previous studies showing that the right entorhinal cortex and the anterior part of the right temporal lobe are more active during discrimination of familiar voices than during a control discrimination task ( ). Converging evidence from fMRI studies also revealed that the right anterior superior temporal sulcus and part of the right precuneus ( , , , ) are specifically involved in familiar voice recognition.
Additional support for a right-hemisphere dominance in the processing of familiar voices comes from lesion studies suggesting that an impairment in recognizing familiar voices (phonagnosia) is only evident in cases of damage to the right hemisphere or, more specifically, the right temporal lobe ( , , ). Thus, there is clear converging evidence for an important role of the right hemisphere in processing voice identity. According to the cognitive model of voice perception ( ), following a low-level analysis in the primary auditory cortex, vocal information is processed along three interacting but partially dissociable pathways: (i) analysis of speech information, preferentially in the left hemisphere, (ii) analysis of vocal affective information, predominantly in the right hemisphere, and (iii) analysis of vocal identity, involving voice recognition and person-related semantic knowledge, also predominantly in the right hemisphere. In this view, different levels of cognition and awareness might be required to move from low-level to higher-level analysis. The pronounced alpha ERD for familiar voices could, therefore, indicate processing at least at the vocal affective level and might thus serve as a marker in cases where verbal report cannot be obtained. For a potential application in DOC, understanding whether and to what extent patients are able to process vocal information would help to better comprehend their residual capabilities. Since SON and FV stimuli in our study were simply presented to participants without further instruction to elaborate on them, we cannot be sure whether the right-hemisphere enhancement for these "emotional" stimuli (i.e., FV and SON) is purely automatic or rather reflects higher levels of processing and emotional self-awareness. At the individual-subject level, the data for the active counting condition revealed that 81% of participants showed alpha ERD (and 100% showed theta ERS), but only 64% showed a stronger response to the target than to the non-target (62% for theta ERS) (cf. ). It therefore appears that salient information of the chosen kind reliably evokes event-related brain responses. Introducing emotional or self-relevant information might, therefore, be a way to effectively enhance arousal and increase bottom-up stimulus processing (as demonstrated by higher theta ERS and right alpha ERD in the passive condition), which in turn might allow for the engagement of top-down processes in the first place. Whether the reliability of these effects is sufficient for the sensitive detection of residual capabilities in DOC patients, however, has to be assessed in future studies. Experiments in healthy individuals introducing distracting material and systematically varying working memory demands could reveal whether emotional or self-relevant stimuli might still be reliably (top-down) processed in situations where limited attentional capacity usually precludes instruction following. Furthermore, while the predominant role of the right hemisphere in the processing of self-relevant and emotional information (FV and SON) is already well established ( , , ), the link to self-awareness remains elusive. In both conditions the differential contribution of alpha and theta was mirrored in the differential topographical distribution of these two frequency bands. In fact, in the active condition alpha ERD is more pronounced over the parietal area around the midline, while theta ERS is higher over left central regions. In the passive condition, alpha ERD is right-lateralized.
These differences in scalp distribution might, therefore, underline the involvement of different cerebral structures, which source localization studies should further elucidate. In summary, our results demonstrate that time–frequency analysis allows for studying the correlates of an active task demand in combination with voice familiarity. Alpha ERD seems to reflect the release of inhibition after successful memory matching. In addition, theta ERS is pronounced when selective attention is attracted by personally relevant information and when incoming information matches long-term memory representations, such as a familiar voice or a subject's own name. Ultimately, we hope that this paper will stimulate new perspectives for accessing and assessing (self-)awareness also in clinical populations such as DOC patients.

## Experimental procedures

### Subjects

A sample of 14 subjects (9 females, 5 males) with ages ranging from 21 to 53 years ( M =25.79; SD=8.17) was recorded. All volunteers were right-handed German native speakers without any recorded history of neurological disease. Participants gave written informed consent approved by the local ethics committee and received monetary compensation for their participation.

### Experimental design and procedure

The experiment expands the SON task as introduced by and subsequently adapted in Fellinger et al. (2011). Stimuli were spoken either by a familiar voice (FV; a close friend or family member of the subject) or by an unfamiliar voice (UFV; generated by a text-to-speech algorithm, CereProc, CereProc Ltd.: "Alex", "Gudrun"). Stimuli included the subject's own name and five commonly used Austrian names (according to Statistics Austria), matched for number of syllables and the gender of the participant. Stimuli were presented via headphones at a sound pressure level of 80 dB. The task consisted of two experimental conditions: an active condition to investigate the ability to consciously follow commands and a passive listening condition, with the passive condition always preceding the active condition. Each condition consisted of 3 blocks, with each block including 13 presentations of each name (i.e., 39 presentations of each single name). In the passive condition, 6 stimuli were presented with 234 repetitions in total (about 12 min): the SON uttered by a familiar or an unfamiliar voice and two different unfamiliar names, each spoken by a familiar or an unfamiliar voice. In the active condition, only 3 different stimuli were presented (117 repetitions) for about 6 min, all of them unfamiliar to participants and all uttered by a familiar voice (cf. ). During the passive condition participants were simply asked to listen to all the names presented, while in the active condition they were asked to focus on and silently count the appearances of the target name. To ensure that participants attended to the presented stimuli, the experimenters checked at the end of the experiment whether the number of targets counted by participants matched the total number of stimuli presented, and monitored arousal fluctuations online. The inter-stimulus interval (ISI) lasted 2000 ms, and for stimulus presentation and synchronization the software Presentation (Version 0.71; Neurobehavioral Systems Inc., CA) was used.

Own name task design using familiar voice manipulation. (A) The active condition consisted of only three different unfamiliar names (UN , UN ), with one name being the attended target name ( UN ).
(B) The passive condition consisted of six different stimuli to which participants attended: own names and unfamiliar names, both uttered in a familiar or an unfamiliar voice. Abbreviations: familiar voice [FV]; unfamiliar voice [UFV]; subject's own name [SON]; unfamiliar name [UN].

ERS/ERD during the active counting condition. (A) The graph depicts the mean (0–800 ms) alpha ERD for targets and non-targets over electrode Pz. (B) Theta ERS for targets and non-targets over the left-central site C3. Error bars represent ±1 standard error of the mean; asterisks denote the respective significance level for post hoc comparisons: * p <.05, ** p <.01. (C) Time–frequency plots depict stronger delta ERS and alpha ERD in the target than in the non-target condition at parietal electrodes (Pz, upper panel) and stronger theta ERS at left-central (C3, lower panel) electrode sites. Zero marks the presentation of the stimuli, with solid rectangles (black for alpha and blue for delta) in the plot highlighting significant differences between targets and non-targets and dashed lines indicating trends for theta ERS. Time windows: t 1=0–200 ms, t 2=200–400 ms, t 3=400–600 ms and t 4=600–800 ms post-stimulus.

(A) Topographic maps depict the topographic distribution of alpha ERD (400–600 ms) and (B) theta ERS (200–400 ms) in the active condition. (C) Panel C depicts the topographic distribution of the difference between targets and non-targets for alpha ERD. (D) Panel D shows the topographical distribution of the difference between targets and non-targets for theta ERS. Note that alpha ERD is more pronounced for targets than for non-targets in the central and posterior part of the scalp (left panel), while theta ERS is higher for targets than for non-targets in the frontal and central portion of the scalp. Squares indicate electrodes where hemispheric asymmetry was modulated by target and non-target stimuli.

Alpha ERD during the passive listening condition. Upper panel: (A) The graph depicts alpha ERD for the own name and unfamiliar names over the right parietal electrode (P4), which is higher from 200 to 600 ms post-stimulus. Error bars represent ±1 standard error of the mean, and asterisks (*) denote the respective significance level for post hoc comparisons (* p <.05). Time windows: t 1=0–200 ms, t 2=200–400 ms and t 3=400–600 ms post-stimulus. (B) Time–frequency plots depict the difference between the own name and unfamiliar names at right parietal (P4) electrode sites, indicating stronger alpha ERD for own names compared to unfamiliar names. Zero marks the presentation of the stimuli, with solid rectangles in the plot highlighting significant differences between conditions. Lower panel: Topographic maps of alpha ERD in the passive condition (400–600 ms). (C) Note that own vs. unfamiliar name presentation leads to stronger alpha ERD over the right posterior portion of the scalp. (D) Likewise, stimuli uttered by a familiar vs. an unfamiliar voice evoke stronger alpha ERD over the right parietal region.

### Data acquisition

EEG was recorded with 32 Ag/AgCl sintered electrodes and head-circumference-matched Easycaps (EASYCAP GmbH, Herrsching, Germany) placed according to the international 10–20 system. The following scalp EEG channels were used: FP1, FP2, F7, F3, FC5, FC1, Fz, F4, F8, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP6, P7, P3, Pz, P4, P8, O1, Oz, O2, AFz and FCz. Additional electrodes were placed on the left and right mastoids (M1 and M2).
For electrooculography (EOG), two horizontal electrodes (placed at the outer canthus of each eye) and two vertical electrodes (placed above and below the right eye) were used for later correction of blinks and saccadic eye movements. Electrodes were placed on the scalp by applying abrasive electrolyte gel, preceded by gentle skin abrasion (Nuprep™, Weaver and Company), and were secured on the face with plasters. EEG was recorded with a 32-channel BrainAmp EEG amplifier (Brain Products GmbH, Gilching, Germany) and Brain Vision Recorder (Brain Products). The EEG sampling rate was set to 500 Hz. Impedances were kept below 5 kΩ. The AFz electrode served as ground, while FCz was the recording reference; the mastoid electrodes, M1 and M2, were used for later re-referencing. Acoustic stimuli were delivered binaurally over headphones, and surrounding noise was reduced to a minimum.

### Data analysis

In a first step, data were re-referenced to the mastoids and bandpass-filtered between 0.5 and 70 Hz; a notch filter was set to 50 Hz. Ocular correction was conducted using the regression-based approach ( ) implemented in Brain Vision Analyzer 2.0 (Brain Products, Gilching, Germany). Afterwards, data were visually checked for further artefacts and only artefact-free trials were used for analysis. Data were then segmented into epochs ranging from −800 to +1200 ms relative to stimulus onset. For time–frequency spectral analyses, complex Morlet wavelet transformations as implemented in Brain Vision Analyzer 2.0 (Brain Products, Gilching, Germany) were applied. We calculated wavelet coefficients for frequencies between 1 and 30 Hz (Morlet parameter c =8) in 30 linear frequency steps. Subsequently, the wavelets were averaged across each stimulus type. After wavelet transformation, all epochs were averaged for each participant, each condition and each stimulus type separately. In order to have comparable numbers of segments, non-target stimuli ( UN2/ UN3) in the active condition and unfamiliar names ( UN4/ UN5) in the passive condition were averaged together, and only 50% of artefact-free segments were randomly selected for further analysis.

### ERD/ERS

For statistical analysis we selected two frequency bands of interest, theta and alpha, in order to estimate whether the presented stimuli were able to trigger attention and memory processes. For the above-mentioned frequencies we chose well-established frequency ranges ( ) (4–7 Hz for theta and 8–12 Hz for alpha; frequency borders: from 3.58 to 7.73 Hz for theta and from 7.17 to 13.25 Hz for alpha) and concentrated on the midline electrodes (Fz, Cz, Pz). For the delta frequency we selected the range from 1 to 4 Hz (filter borders: 0.90–4.42 Hz) ( ). With the obtained wavelet coefficients we calculated event-related de-/synchronization, reflecting the percentage change in test power with respect to a reference interval ( ), according to the formula: ERD% = [(test power − reference power) / reference power] × 100. Note that, contrary to the original formula, we express ERS with positive and ERD with negative values. As reference period, the time window between −700 and −200 ms relative to stimulus onset was used.
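For illustration, the following minimal sketch (not the authors' Brain Vision Analyzer pipeline; the array names, placeholder data and wavelet implementation are assumptions) shows how a comparable Morlet time–frequency transform and the ERD/ERS percentage measure could be computed with MNE-Python and NumPy:

```python
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 500.0                                   # sampling rate (Hz), as reported
times = np.arange(-0.8, 1.2, 1.0 / sfreq)       # epoch: -800 to +1200 ms around stimulus onset
freqs = np.linspace(1.0, 30.0, 30)              # 30 linear frequency steps between 1 and 30 Hz
n_cycles = 8.0                                  # wavelet width, approximating Morlet parameter c = 8

# epochs: (n_epochs, n_channels, n_times) array of artefact-free, re-referenced data;
# random numbers only stand in for real segmented EEG here
epochs = np.random.randn(40, 28, times.size)

power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=n_cycles, output='power')
power = power.mean(axis=0)                      # average over trials -> (channels, freqs, times)

def erd_percent(tfr, freqs, times, band, baseline=(-0.7, -0.2)):
    """Percentage power change relative to the pre-stimulus reference interval
    (ERS positive, ERD negative, following the convention used in the text)."""
    f_mask = (freqs >= band[0]) & (freqs <= band[1])
    b_mask = (times >= baseline[0]) & (times <= baseline[1])
    band_power = tfr[:, f_mask, :].mean(axis=1)             # (channels, times)
    ref = band_power[:, b_mask].mean(axis=1, keepdims=True)
    return (band_power - ref) / ref * 100.0

theta_ers = erd_percent(power, freqs, times, band=(4.0, 7.0))
alpha_erd = erd_percent(power, freqs, times, band=(8.0, 12.0))
```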
### Statistical analysis

Five different repeated-measures ANOVAs were calculated, four with theta and alpha ERS/ERD as dependent measures and one with delta ERS. Three ANOVAs tested for effects in the active condition and used alpha, delta and theta ERS/ERD as dependent variables, respectively, with the factors CONDITION (target, non-target), TIME ( t 1, t 2, t 3, t 4; t 1=0–200 ms, t 2=200–400 ms, t 3=400–600 ms and t 4=600–800 ms post-stimulus) and ELECTRODES (Fz, Cz, Pz). To control for multiple-comparison errors, the false discovery rate (FDR) correction according to was used. Two ANOVAs were performed in order to test the effect of familiar and unfamiliar voices on stimulus processing in the passive condition, with the factors NAME (SON vs. UN), VOICE (FV vs. UV), ELECTRODES (Fz, Cz and Pz) and TIME ( t 1, t 2, t 3; t 1=0–200 ms, t 2=200–400 ms, t 3=400–600 ms post-stimulus). Additional ANOVAs were performed post hoc in order to specify hemispheric asymmetries apparent in the passive listening and active counting conditions. For post-hoc tests we focused only on effects of interest, that is, interactions with the factor TARGET for the active condition and with the factors VOICE and NAME for the passive condition. ERP results for all conditions are also reported in the supplementary materials, as well as individual ERS/ERD values, tested against zero, for the active condition. All the mentioned analyses were conducted on the sample of 14 healthy volunteers, except the ANOVA testing specific hemispheric asymmetry in the processing of targets, which was calculated with 13 subjects due to an outlier (power exceeding M±2 SD on C3 and C4).
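As an illustration only (the statistics software used by the authors is not specified here; the long-format data layout, file name and column names are assumptions), a repeated-measures ANOVA of this kind followed by FDR-corrected post-hoc paired t-tests could be set up in Python as follows:

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# One row per subject x condition x time window x electrode with the mean theta ERS%
df = pd.read_csv("theta_ers_long.csv")          # hypothetical file

# CONDITION (target, non-target) x TIME (t1..t4) x ELECTRODES (Fz, Cz, Pz)
anova = AnovaRM(data=df, depvar="theta_ers", subject="subject",
                within=["condition", "time", "electrode"]).fit()
print(anova.summary())

# Post-hoc paired t-tests on the ELECTRODES effect within each time window,
# collapsed over condition, corrected with the Benjamini-Hochberg FDR procedure
sub = df.groupby(["subject", "time", "electrode"], as_index=False)["theta_ers"].mean()
pvals, labels = [], []
for t in sorted(sub["time"].unique()):
    for e1, e2 in [("Fz", "Pz"), ("Cz", "Pz"), ("Fz", "Cz")]:
        a = sub.query("time == @t and electrode == @e1").sort_values("subject")["theta_ers"]
        b = sub.query("time == @t and electrode == @e2").sort_values("subject")["theta_ers"]
        stat, p = ttest_rel(a.to_numpy(), b.to_numpy())
        pvals.append(p)
        labels.append(f"{t}: {e1} vs. {e2} (t={stat:.2f})")

reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for label, p, sig in zip(labels, p_fdr, reject):
    print(f"{label}  p_FDR={p:.4f}  significant={sig}")
```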
The advent of Human Immunodeficiency Virus (HIV) antiretrovirals has reduced the severity of HIV-related neurological comorbidities, but these nevertheless remain prevalent. The manifestation of a range of cognitive and behavioral deficits has been attributed to synaptic degeneration caused by the action of several viral factors released from infected brain myeloid and glial cells and by inflammatory cytokines. The contributions of specific pro-inflammatory factors and their interplay with viral factors in the setting of treatment and persistence are incompletely understood. Exposure of neurons to chemokine receptor 4 (CXCR4)-tropic HIV-1 envelope glycoprotein (Env) can lead to post-synaptic degradation of dendritic spines. The contribution of members of the extracellular matrix (ECM), and specifically of perineuronal nets (PNN), toward synaptic degeneration is not fully known, even though these structures are found to be disrupted in post-mortem HIV-infected brains. Osteopontin (Opn, gene name SPP1), a cytokine-like protein, is found in abundance in the HIV-infected brain. In this study, we investigated the role of Opn and its ECM integrin receptors, β1- and β3-integrin, in modifying neuronal synaptic sculpting. We found that in hippocampal neurons incubated with HIV-1 Env protein and recombinant Opn, post-synaptic density-95 (PSD-95) puncta were significantly increased and distributed to dendritic spines when compared to Env-only treated neurons. This effect was mediated through β3-integrin, as silencing of this receptor abrogated the increase in post-synaptic spines. Silencing of β1-integrin, however, did not block the increase in post-synaptic spines in hippocampal cultures treated with Opn; instead, a decrease in the PNN to βIII-tubulin ratio was found, indicating an increased capacity to support spine growth. From these results, we conclude that one of the mechanisms by which Opn counters the damaging impact of the HIV Env protein on hippocampal post-synaptic plasticity is through complex interactions between Opn and components of the ECM, which activate downstream protective signaling pathways that help maintain the potential for effective post-synaptic plasticity.
Leber's hereditary optic neuropathy (LHON) is a mitochondrial disease that causes loss of central vision, progressive impairment and subsequent degeneration of retinal ganglion cells (RGCs). In recent years, diffusion tensor imaging (DTI) studies have revealed structural abnormalities in visual white matter tracts, such as the optic tract and optic radiation. However, it is still unclear whether the disease alters only some parts of the white matter architecture or whether the changes also affect other subcortical areas of the brain. This study aimed to improve our understanding of morphometric changes in subcortical brain areas and their associations with the clinical picture in LHON by applying a submillimeter surface-based analysis approach to ultra-high-field 7T magnetic resonance imaging data. To meet these goals, fifteen LHON patients and fifteen age-matched healthy subjects were examined. For all individuals, quantitative analysis of the morphometric results was performed. Furthermore, morphometric characteristics that differentiated the groups were correlated with variables covering selected aspects of the LHON clinical picture. Compared to healthy controls (HC), LHON carriers showed significantly lower volumes of both pallidi (left p = 0.023; right p = 0.018), the right accumbens area (p = 0.007) and the optic chiasm (p = 0.014). Additionally, LHON patients had significantly higher volumes of both lateral ventricles (left p = 0.034; right p = 0.02), both temporal horns of the lateral ventricles (left p = 0.016; right p = 0.034), the 3rd ventricle (p = 0.012) and the 4th ventricle (p = 0.002). Correlation between volumetric results and clinical data showed that the volumes of both the right and left lateral ventricles correlated significantly and positively with the duration of the illness (left R = 0.841, p = 0.002; right R = 0.755, p = 0.001) and the age of the LHON participants (left R = 0.656, p = 0.007; right R = 0.691, p = 0.004). The abnormalities in the volume of the LHON patients' subcortical structures indicate that the disease can cause changes not only in the white matter areas constituting the visual tracts, but also in other subcortical brain structures. Furthermore, the correlation between these results and illness duration suggests that the disease might have a neurodegenerative nature; however, to fully confirm this observation, longitudinal studies should be conducted.
Chronic disorders of consciousness cause a total or partial and fluctuating unawareness of the surrounding environment. Virtual reality (VR) can be useful as a diagnostic and/or a neurorehabilitation tool, and its effects can be monitored by means of both clinical data and electroencephalography (EEG) recording of brain activity. We report on the case of a 17-year-old patient with a disorder of consciousness (DoC) who was provided with VR training to improve her cognitive-behavioral outcomes, which were assessed using clinical scales (the Coma Recovery Scale-Revised, the Disability Rating Scale, and the Rancho Los Amigos Levels of Cognitive Functioning), as well as EEG recording during VR training sessions. At the end of the training, significant improvements in both clinical and neurophysiological outcomes were achieved. We then carried out a systematic review of the literature to investigate the role of EEG and VR in the management of patients with DoC. A search of the PubMed, Web of Science, Scopus, and Google Scholar databases was performed, using the keywords "disorders of consciousness" and "virtual reality" or "EEG". The results of the literature review suggest that neurophysiological data in combination with VR could be useful in evaluating the reactions induced by different paradigms in DoC patients, helping in the differential diagnosis. In conclusion, the EEG plus VR approach used with our patient seems promising for defining the most appropriate stimulation protocol, so as to promote better personalization of the rehabilitation program. However, further clinical trials, as well as meta-analyses of the literature, are needed to confirm the role of VR in patients with DoC.
Healthy aging is associated with a decline in the ability to maintain visual information in working memory (WM). We examined whether this decline can be explained by decreases in the ability to filter distraction during encoding or to ignore distraction during memory maintenance. Distraction consisted of irrelevant objects (Exp. 1) or irrelevant features of an object (Exp. 2). In Experiment 1, participants completed a spatial WM task requiring them to remember locations on a grid. During encoding or during maintenance, irrelevant distractor positions were presented. In Experiment 2, participants encoded either single-feature (colors or orientations) or multifeature objects (colored triangles) and later reproduced one of these features using a continuous scale. In multifeature blocks, a precue appeared before encoding or a retrocue appeared during memory maintenance, indicating with 100% certainty the to-be-tested feature, thereby enabling filtering and ignoring of the irrelevant (not-cued) feature, respectively. There were no age-related deficits in the efficiency of filtering and ignoring distractor objects (Exp. 1) or of filtering irrelevant features (Exp. 2). Neither younger nor older adults could ignore irrelevant features when cued with a retrocue. Overall, our results provide no evidence for an aging deficit in using attention to manage visual WM.
Transcutaneous vagus nerve stimulation (tVNS) is an alternative non-invasive method for the electrical stimulation of the vagus nerve with the goal of treating several neuropsychiatric disorders. The objective of this study is to assess the effects of tVNS on cerebral cortex activity in healthy volunteers using resting-state microstates and power spectrum electroencephalography (EEG) analysis. Eight male subjects aged 25-45 years were recruited in this randomized, sham-controlled, double-blind study with a cross-over design. Real tVNS was administered at the left external acoustic meatus, while sham stimulation was performed at the left ear lobe, both for 60 min. The EEG recording lasted 5 min and was performed before and 60 min following the tVNS experimental session. We observed that real tVNS induced an increase in the mean duration of microstate A (p = 0.039) and an increase in EEG power spectrum activity in the delta frequency band (p < 0.01). This study confirms that tVNS is an effective way to stimulate the vagus nerve, and the mechanisms of action of this activation can be successfully studied using scalp EEG quantitative metrics. Future studies are warranted to explore the clinical implications of these findings and to focus research on prognostic biomarkers of tVNS therapy for neuropsychiatric diseases.
Significant differences exist in human brain functions affected by time of day and by people's diurnal preferences (chronotypes) that are rarely considered in brain studies. In the current study, using network neuroscience and resting-state functional MRI (rs-fMRI) data, we examined the effect of both time of day and the individual's chronotype on whole-brain network organization. In this regard, 62 participants (39 women; mean age: 23.97 ± 3.26 years; half morning- versus half evening-type) were scanned about 1 and 10 h after wake-up time for morning and evening sessions, respectively. We found evidence for a time-of-day effect on connectivity profiles but not for the effect of chronotype. Compared with the morning session, we found relatively higher small-worldness (an index that represents more efficient network organization) in the evening session, which suggests the dominance of sleep inertia over the circadian and homeostatic processes in the first hours after waking. Furthermore, local graph measures were changed, predominantly across the left hemisphere, in areas such as the precentral gyrus, putamen, inferior frontal gyrus (orbital part), inferior temporal gyrus, as well as the bilateral cerebellum. These findings show the variability of the functional neural network architecture during the day and improve our understanding of the role of time of day in resting-state functional networks.
Some studies have observed a benefit of safinamide treatment on some non-motor symptoms (NMSs) in Parkinson's disease (PD) patients. The aim of this study was to analyze the effectiveness of safinamide on NMS burden in PD. SAFINONMOTOR (an open-label study of the effectiveness of safinamide on non-motor symptoms in Parkinson's disease patients) is a prospective open-label single-arm study conducted in five centers in Spain. The primary efficacy outcome was the change from baseline (V1) to the end of the observational period (6 months) (V4) in the non-motor symptoms scale (NMSS) total score. Between May 2019 and February 2020, 50 patients were included (age 68.5 ± 9.12 years; 58% females; 6.4 ± 5.1 years from diagnosis). At 6 months, 44 patients completed the follow-up (88%). The NMSS total score was reduced by 38.5% (from 97.5 ± 43.7 at V1 to 59.9 ± 35.5 at V4; p < 0.0001). By domains, improvement was observed in sleep/fatigue (-35.8%; p = 0.002), mood/apathy (-57.9%; p < 0.0001), attention/memory (-23.9%; p = 0.026), gastrointestinal symptoms (-33%; p = 0.010), urinary symptoms (-28.3%; p = 0.003), and pain/miscellaneous (-43%; p < 0.0001). Quality of life (QoL) also improved, with a 29.4% reduction in the PDQ-39SI (from 30.1 ± 17.6 at V1 to 21.2 ± 13.5 at V4; p < 0.0001). A total of 21 adverse events in 16 patients (32%) were reported, 5 of which were severe (not related to safinamide). Dyskinesias and nausea were the most frequent (6%). Safinamide is well tolerated and improves NMS burden and QoL in PD patients with severe or very severe NMS burden at 6 months.
The COVID-19 pandemic represents an unprecedented public health emergency, with consequences at the political, social, and economic levels. Mental health services have been called to play a key role in facing the impact of the pandemic on the mental health of the general population. In the period March-May 2020, an online survey was implemented as part of the Covid Mental Health Trial (COMET), a multicentric collaborative study carried out in Italy, one of the Western countries most severely hit by the pandemic. The present study aims to investigate the use of mental health resources during the first wave of the pandemic. The final sample consisted of 20,712 participants, mainly females (N = 14,712, 71%) with a mean age of 40.4 ± 14.3 years. Access to mental health services was reported in 7.7% of cases. Among those referred to mental health services, in 93.9% of cases (N = 1503 subjects) a psychological assessment was requested and in 15.7% of cases (N = 252) a psychiatric consultation. People reporting higher levels of perceived loneliness (OR 1.079, 95% CI 1.056-1.101, p < 0.001), practicing smart-working (OR 1.122, 95% CI 0.980-1.285, p = 0.095), using avoidant (OR 1.586, 95% CI 1.458-1.725, p < 0.001) and approach (OR 1.215, 95% CI 1.138-1.299, p < 0.001) coping strategies more frequently accessed mental health services. On the other hand, having higher levels of perceived social support (OR 0.833, 95% CI 0.795-0.873, p < 0.001) was associated with a reduced probability to access mental health services. The COVID-19 pandemic represents a new threat to the mental health and well-being of the general population, therefore specific strategies should be implemented to promote access to mental healthcare during the pandemic and afterwards.
Persons with autism spectrum disorder (ASD) have impaired mentalizing skills. In this study, a group of persons with ASD traits (high-AQ scores) initially received sham tDCS before completing a pre-test on two mentalizing tasks: false belief and self-other judgments. Over the next week, on four consecutive days, they received sessions of anodal electrical stimulation (a-tDCS) over the right temporo-parietal junction (rTPJ), a region frequently associated with theory of mind. On the last day, after the stimulation session, they completed a new set of mentalizing tasks. A control group (with low-AQ scores) matched in age, education and intelligence received only sham stimulation and completed the same pre-test and post-test. The results showed that, after a-tDCS, the high-AQ group improved their performance (faster responses) on the false belief task and on self-other judgments of mental features, whereas their performance did not change on the false photograph task or on self-other judgments of physical features. These selective improvements cannot be attributed to increased familiarity with the tasks, because the performance of the low-AQ control group remained stable about one week later. Therefore, our study provides initial evidence that tDCS could be used to improve mentalizing skills in persons with ASD traits.
Acute aerobic high-intensity interval exercise (HIIE) has demonstrated positive effects on inhibitory control and the P3 event-related potential (ERP) in young adults. However, the evidence is not well established regarding the effects of different HIIE modalities that incorporate aerobic-resistance training on these cognitive and neurocognitive outcomes. The purpose of this investigation was to examine the transient effects of HIIE-aerobic and HIIE-aerobic/resistance on P3 and Flanker task performance. Participants (n = 24; 18-25 years old) completed the Flanker task at two time points (30 min and 85 min) following 9 min of HIIE-aerobic (intermittent bouts of walking and running at 90% of maximal heart rate), HIIE-aerobic/resistance (intermittent bouts of walking and high-intensity calisthenics), and seated rest on three separate counterbalanced days. Results revealed no changes in Flanker performance (i.e., reaction time and response accuracy) or P3 (latency and mean amplitude) following either HIIE condition compared to seated rest. Together, these data suggest that inhibitory control and its neuroelectric underpinnings are not affected by different modalities of HIIE at 30 min and 85 min post-exercise. Such findings reveal that engaging in short bouts of different HIIE modalities for overall health neither improves nor diminishes inhibitory control and brain function for an extended period throughout the day.
Tics can be associated with neurological disorders and are thought to be the result of dysfunctional basal ganglia pathways. In Tourette Syndrome (TS), excess dopamine in the striatum is thought to excite the thalamo-cortical circuits, producing tics. When external stressors activate the hypothalamic-pituitary-adrenal (HPA) axis, more dopamine is produced, furthering the excitation of tic-producing pathways. Emotional processing structures in the limbic system are also activated during tics, providing further evidence of a possible emotional component in motor ticking behaviors. The purpose of this review is to better understand the relationship between emotional states and ticking behavior. We found support for the notion that premonitory sensory phenomena (PSP), sensory stimulation, and other environmental stressors that impact the HPA axis can influence tics through dopaminergic neurotransmission. Dopamine plays a vital role in cognition and motor control and is an important neurotransmitter in the pathophysiology of other disorders such as obsessive-compulsive disorder (OCD) and attention deficit hyperactivity disorder (ADHD), which tend to be comorbid with ticking disorders and are thought to involve similar pathways. It is concluded that there is an emotional component to ticking behaviors. Emotions primarily involving anxiety, tension, stress, and frustration have been associated with exacerbated tics, with PSP contributing to these feelings.
Grounded cognition theory postulates that cognitive processes related to motor or sensory content are processed by brain networks involved in motor execution and perception, respectively. Processing words with auditory features has been shown to activate the auditory cortex. Our study aimed at determining whether onomatopoetic verbs (e.g., "tröpfeln", to drip), whose articulation reproduces the sound of the respective actions, engage the auditory cortex more than non-onomatopoetic verbs. Alpha and beta brain frequencies as well as event-related fields (ERFs) were targeted as potential neurophysiological correlates of this linguistic auditory quality. Twenty participants were measured with magnetoencephalography (MEG) while semantically processing visually presented onomatopoetic and non-onomatopoetic German verbs. While a descriptively stronger left temporal alpha desynchronization for onomatopoetic verbs did not reach statistical significance, a larger ERF for onomatopoetic verbs emerged at about 240 ms in the centro-parietal area. Findings suggest increased cortical activation related to onomatopoeias in linguistically relevant areas.
Brain-Derived Neurotrophic Factor (BDNF) expression is decreased in conditions associated with cognitive decline as well as in metabolic diseases. One potential strategy to improve metabolic health and elevate BDNF is to increase circulating ketones. Beta-Hydroxybutyrate (BHB) stimulates BDNF expression, but the association of circulating BHB with plasma BDNF in humans has not been widely studied. Here, we present results from three studies that evaluated how various methods of inducing ketosis influenced plasma BDNF in humans. Study 1 determined BDNF responses to a single bout of high-intensity cycling after ingestion of a dose of ketone salts in a group of healthy adults who were habitually consuming either a mixed diet or a ketogenic diet. Study 2 compared how a ketogenic diet versus a mixed diet impacts BDNF levels during a 12-week resistance training program in healthy adults. Study 3 examined the effects of a controlled hypocaloric ketogenic diet, with and without daily use of a ketone salt, on BDNF levels in overweight/obese adults. We found that (1) fasting plasma BDNF concentrations were lower in keto-adapted versus non-keto-adapted individuals, (2) intense cycling exercise was a strong stimulus to rapidly increase plasma BDNF independent of ketosis, and (3) clinically significant weight loss was a strong stimulus to decrease fasting plasma BDNF independent of diet composition or level of ketosis. These results highlight the plasticity of plasma BDNF in response to lifestyle factors but do not support a strong association with temporally matched BHB concentrations.
Alcohol abuse dramatically affects individuals' lives nationwide. The 2020 National Survey on Drug Use and Health (NSDUH) estimated that 10.2% of Americans suffer from alcohol use disorder. Although social support has been shown to aid in general addiction prevention and rehabilitation, the benefits of social support are not entirely understood. The present study sought to examine the effects of social interaction on conditioned ethanol approach behavior in rats through a conditioned place preference (CPP) paradigm, in which a drug is paired with one of two distinct contexts. In experiment 1A, rats were single-housed and received conditioning trials in which ethanol was paired with the less preferred context. In experiment 1B, rats underwent procedures identical to experiment 1A but were pair-housed throughout the paradigm. In experiment 1C, rats were single-housed but concurrently conditioned to a socially paired context and an ethanol-paired context. By comparing the time spent in the ethanol-paired environment with the time spent in the saline-paired or socially paired environment, we assessed the extent of ethanol approach behavior in the pair-housed, single-housed, and concurrently conditioned rats. Our results revealed that social interaction, both in pair-housed animals and in concurrently socially conditioned animals, diminished ethanol approach behavior, which highlights the importance of social support in addiction prevention, treatment, and recovery programs.
Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity.
This meta-analysis evaluated the effects of methylphenidate (MPH) on cognitive outcomes and adverse events in adults with traumatic brain injuries (TBI). We searched PubMed, EMBASE, and PsycINFO for randomized controlled trials (RCTs) published before July 2019. Studies that compared the effects of MPH and placebo in adults with TBI were included. The primary outcome was cognitive function, while the secondary outcome was adverse events. Meta-regression and sensitivity analysis were conducted to evaluate heterogeneity. Seventeen RCTs were included for qualitative analysis, and ten RCTs were included for quantitative analysis. MPH significantly improved processing speed, measured by Choice Reaction Time (standardized mean difference (SMD): -0.806; 95% confidence interval (CI): -1.429 to -0.182, p = 0.011) and the Digit Symbol Coding Test (SMD: -0.653; 95% CI: -1.016 to -0.289, p < 0.001). Meta-regression showed that reaction time was inversely associated with the duration of MPH use. MPH administration significantly increased heart rate (SMD: 0.553; 95% CI: 0.337 to 0.769, p < 0.001), while systolic and diastolic blood pressure did not exhibit significant differences. Therefore, MPH improved processing speed in adults with TBI. However, MPH use could significantly increase heart rate. A larger study is required to evaluate the effects of dosage, age, and optimal timing of treatment in adults with TBI.
Gait is often considered an automatic movement, but cortical control seems necessary to adapt the gait pattern to environmental constraints. In order to study cortical activity during real locomotion, electroencephalography (EEG) appears to be particularly appropriate. It is now possible to record changes in cortical neural synchronization/desynchronization during gait. Studying gait initiation is also of particular interest because it implies motor and cognitive cortical control to adequately perform a step. Time-frequency analysis makes it possible to study induced changes in EEG activity in different frequency bands. Such analysis reflects cortical activity involved in stabilized gait control but also in more challenging tasks (obstacle crossing, changes in speed, dual tasks…). These spectral patterns are directly influenced by the walking context but, when gait is analyzed together with a more demanding attentional task, cortical areas other than the sensorimotor cortex (prefrontal, posterior parietal cortex, etc.) seem specifically involved. While the muscular activity of the legs and cortical activity are coupled, the precise role of the motor cortex in controlling the level of muscular contraction according to the gait task remains debated. The decoding of this brain activity is a necessary step to build valid brain–computer interfaces able to generate gait artificially.

## 1. Introduction

Gait control in natural environments (e.g., passing through a doorway, stepping over an obstacle, initiating gait,…) can only be achieved on the basis of proprioceptive, visual and vestibular signals, and implies cognitive control [ ]. These processes occur mainly at cortical and cerebellar levels and constitute the voluntary aspect of walking. Measuring brain activity during gait with sufficient temporal resolution can help to determine which brain areas are involved in motor behavior control. Electroencephalography (EEG) offers better temporal resolution than other brain imaging methods for recording cortical activation during gait. Indeed, gait is often considered an automatic activity which can be modified by cortical activations in certain circumstances that necessitate adaptation of the gait pattern [ ]. Gait is actually composed of repetitive, stereotyped gait cycles. Each cycle consists of two phases: following foot contact, the leg is on the ground, supporting the gravitational load of the body and propelling the body forwards. This first phase is called the stance phase, with two periods of double contact. Then, during the swing phase, the leg is lifted from the ground by the muscles and is moved against its inertial load. The changeover of relatively similar right and left cycles occurs rhythmically. Animal studies in decerebrate cats suggest that the central pattern generators responsible for rhythmic movements are located in the spinal cord. Indeed, the spinal cord itself contains neural circuits that, when activated, can coordinate the different muscles to produce locomotor movements [ ]. Adaptations of this motor program are necessary during gait initiation, stops, turns and obstacle crossing, or following changes in the environment. A central network generates the essential features of the motor pattern, and sensory feedback signals control the system. In mammals, locomotor regions situated in the brainstem can directly activate the central pattern generators located in the spinal cord and are under the control of the basal ganglia, cerebellum and cortex [ ]. There is now substantial evidence for the role of the cerebral cortex in gait control [ ].
In fact, numerous interactions exist between the motor control of gait and cognition. Some of these interactions, often defined as dual-task interactions, are common in daily life and involve situations such as walking while simultaneously talking, texting on a cell phone or thinking about one's shopping list [ , ]. Dual-task walking abilities in humans are at least in part under the control of cortical prefrontal areas [ ], but other cortical and subcortical regions are also involved [ ]. In order to study cortical activity during real locomotion, EEG appears to be particularly appropriate. Indeed, EEG devices are generally compact, relatively low-cost and easily available. EEG is a non-invasive brain imaging modality and allows a direct assessment of neural activation with a high temporal resolution (on the order of one millisecond). One of the strengths of EEG is thus the possibility of assessing brain functioning during ongoing walking and of correlating cortical activation with gait measures provided by other devices (video-based motion analysis systems, force insoles, inertial sensors, electromyogram,…) ( ). By comparison, in the context of positron emission tomography (PET) or single photon emission computed tomography (SPECT), metabolic changes occur several minutes after the injection, depending on the marker used. The spatial resolution of EEG is also better than that of functional near-infrared spectroscopy. However, PET and SPECT scans present a higher spatial resolution than EEG, while functional magnetic resonance imaging (fMRI) is associated with still better spatial accuracy, despite the low temporal resolution of this neuroimaging modality, which is due to the slowness of changes in blood flow following fast changes in electrical neuronal activity. The main limitation of the use of fMRI, PET, and SPECT scans for analyzing gait-related neural activation remains the required head immobility of the subjects. In this narrative review, after some methodological considerations, we will mainly focus on recent literature regarding EEG spectral changes during gait initiation and stable gait in healthy subjects, restricting this paper to cortical activity. Most recent studies have investigated cortical oscillations, while some of them have also analyzed event-related potentials (ERPs); the latter, however, were not within the scope of this paper. Spectral analyses promote a better understanding of the neurophysiology of gait and of the role of oscillations in the adaptation of gait to the environment.

## 2. Methodological Considerations

Until recently, several concerns limited the use of EEG during real locomotion. Firstly, the spatial resolution of EEG is low compared to PET, SPECT or MRI. Source localization from scalp EEG signals can be used to improve spatial resolution at the cortical level. Sources of EEG activity can be estimated by solving the inverse problem (i.e., identifying the location and calculating the amplitude and orientation of the neural sources that are responsible for the measured EEG data, based on these scalp EEG recordings). The characteristics of a source are adjusted in order to obtain the best fit between the recorded EEG signal and the calculated potentials produced by the source [ ]. However, the main limitation of this approach is that signals recorded by scalp electrodes mostly result from post-synaptic potentials of cortical neurons or from the synchronized activity of cortical neurons with deep sources (subcortical nuclei).
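As an illustration of this source-estimation step, here is a minimal sketch (not code from any study cited in this review; the data file names are hypothetical, and the fsaverage template head model and dSPM method are illustrative choices) of how the inverse problem can be approached with MNE-Python:

```python
import os.path as op
import mne
from mne.datasets import fetch_fsaverage   # template anatomy when no individual MRI is available

fs_dir = fetch_fsaverage(verbose=False)
src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")               # template source space
bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")  # template BEM solution

evoked = mne.read_evokeds("gait_initiation-ave.fif", condition=0)     # hypothetical averaged data
evoked.set_eeg_reference("average", projection=True)                  # average reference needed for EEG inverse
noise_cov = mne.read_cov("gait_initiation-cov.fif")                   # hypothetical noise covariance

# Forward model: how each candidate cortical source projects onto the scalp electrodes
# (electrode positions are assumed to be stored in the data file)
fwd = mne.make_forward_solution(evoked.info, trans="fsaverage", src=src,
                                bem=bem, eeg=True, meg=False)

# Inverse solution: source amplitudes and orientations are adjusted for the best fit
# between the recorded potentials and those predicted by the forward model
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")
```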
There is still some disagreement among scientists about the possibility for EEG to record deep sources that are not synchronized with cortical activity [ ]. Indeed, source analysis localized the irritative zone in patients suffering from epilepsy with high sensitivity and specificity if EEG signals were recorded with a large number of electrodes (128–256 channels) and if an individual MRI was used as head model [ ]. Other studies confirmed that a large number of electrodes is necessary to adequately solve the inverse problem (32 electrodes are not enough [ ]), for example for a cognitive task (picture naming) [ ]. However, the sources in these examples are cortical and, for some authors, subcortical signals are much weaker than cortical activity, and deeper sources can be associated with distributed cortical activity [ ]. Moreover, despite the obvious distinction between dipolar models, with their a priori assumed fixed number of dipoles, and distributed source imaging techniques, the diversity of methods for solving the inverse problem, as well as the difficulty of obtaining evidence about the true location of the sources, makes it difficult to give any guidelines for choosing the best method. An example of different results of source localization methods during gait initiation is provided in . Intracerebral recordings of local field potentials with deep brain stimulation electrodes can also be used to explore deep sources [ , , ]. Unfortunately, they can only be performed in patients who require deep brain stimulation to reduce their symptoms. Secondly, artefacts caused by movement and muscle activity contaminate EEG signals. Of course, adequate filtering, bad-channel repair by interpolation (flat electrodes or electrodes with high-amplitude noise), and the selection of epochs uncontaminated by obvious artefacts must be performed [ ]. Another possible pre-processing step aims at removing transient, non-biological, large-amplitude noise/artefacts (e.g., abrupt impedance changes due to headset motion) using a non-stationary method based on sliding-window principal component analysis: the Artifact Subspace Reconstruction (ASR) method [ ]. Despite these essential first steps, two studies [ , ] investigated movement-related artefacts in EEG recordings and found contamination of the EEG data at frequencies from 1 to 150 Hz. As the EEG frequencies investigated during walking include the theta (4–7 Hz), alpha (8–12 Hz), beta 1 (13–20 Hz), beta 2 (20–30 Hz) and gamma (> 30 Hz) bands, these motion artefacts must be taken into account in the analysis of cortical activity during gait and should be removed before considering motor-related changes in the power of a frequency band. Currently, the most frequently used method to remove movement-related artefacts is independent component analysis (ICA) [ ]. In this context, the EEG is assumed to be a linear mixture of non-Gaussian and statistically independent source components that can be separated via ICA, visually examined, and classified as artefacts or EEG signal components [ ]. Therefore, ICA makes it possible to identify EEG sources, regardless of their localization. Once the artefact components have been identified, they can be removed, and the remaining EEG signal components can be projected back to the original electrode space. This procedure yields the reconstruction of an artefact-free EEG signal.
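A minimal sketch of this ICA-based cleaning step (illustrative only, not the pipeline of any cited study; the file name and the excluded component indices are assumptions) could look as follows in MNE-Python:

```python
import mne

raw = mne.io.read_raw_fif("gait_eeg_raw.fif", preload=True)    # hypothetical walking recording
raw.filter(l_freq=1.0, h_freq=None)                            # high-pass filtering improves the ICA fit

# Fit ICA: the EEG is modelled as a linear mixture of statistically independent components
ica = mne.preprocessing.ICA(n_components=20, method="fastica", random_state=42)
ica.fit(raw)

# Inspect components (scalp maps, time courses, spectra) and mark artefactual ones;
# in practice this is done visually or with automated classifiers such as ICLabel
ica.plot_components()
ica.exclude = [0, 3, 7]            # e.g., blink, neck-muscle and motion components (assumed indices)

# Remove the marked components and project the remaining ones back to electrode space,
# reconstructing an artefact-reduced EEG signal
raw_clean = ica.apply(raw.copy())
```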
The number of independent components is equal to the rank of the matrix storing the original EEG signals (i.e., the total number of channels minus the number of interpolated electrodes). We should note that, in reality, the effective number of statistically independent signals contributing to the scalp EEG is generally unknown. Although some sources correspond to obvious artefacts (e.g., eye blinks, horizontal and vertical eye movements), it is often difficult to determine with certainty whether a component represents cortical signal or not. Localization of the source on a scalp map, the component time course, the component activity power spectrum and an image of collected single-trial data epochs are crucial for identifying the nature of the considered independent component. It has also been proposed to use image processing algorithms on independent components in order to automatically reject EEG artefacts [ ]. More generally, researchers can now use (semi-)automated EEG independent component classifiers, including ICLabel, which showed the best classification accuracy and computational efficiency [ ]. These classifiers, trained on components labelled by experts in the field, help to eliminate EEG artefacts. Other algorithms that model independent components as equivalent current dipoles can be used to localize neural sources (DIPFIT) [ ]. A recent study tested the ability of ICA combined with DIPFIT as a source localization algorithm to remove EEG artefacts during treadmill walking (with the cortical signal blocked by a silicone cap) [ ]. ICA and dipole fitting accurately localized 99% of the independent components in non-neural locations. Some authors also propose removing specific muscular activity, such as neck muscle activity, which can affect the EEG signal during walking [ , ], using either ICA or another blind source separation approach called canonical correlation analysis (CCA) [ ]. A more general problem remains with analysis in the gamma band, since gamma rhythms are generated by small volumes and are thus difficult to record with scalp EEG [ ]. Moreover, the gamma band is largely contaminated by muscle activity. Once the EEG signal is pre-processed, two methods are mainly used for the subsequent analysis step: either ERPs, if the EEG signal changes are phase-locked to an event, or time-frequency analysis, in order to study induced changes in EEG activity (i.e., time-locked but not phase-locked to an event) in different frequency bands. More recently, some groups have used brain connectivity methods to study the directed and undirected functional links between different cortical areas [ ], mainly during tasks involving upper limb movement [ ] or cognitive tasks such as the Stroop task [ ] or picture naming [ ]. Some studies have also been performed during gait, but in pathological conditions, mainly in patients with Parkinson’s disease [ ].

## 3. Brain Oscillations: Principles of Time-Frequency Analysis

Non-phase-locked (induced) changes can be studied with time-frequency analysis, which highlights the cortical oscillations related to an external or internal event [ ]. The EEG signal mainly represents the temporal–spatial summation of post-synaptic potentials from the local neuronal population. Oscillations in a given frequency band are the result of synchronization across neurons [ ].
Indeed, motor-related cortical oscillations are generally assessed by quantifying increases (also called event-related synchronizations or ERS) or decreases (event-related desynchronizations or ERD) in spectral power in a given frequency band. Studying these oscillations across time, also called time-frequency analysis, consists of calculating the relative values of the signal power in different physiological frequency bands. The event-related spectrum (averaged over trials or computed for a single trial) at each time-frequency point is either divided by the average spectral power in the pre-stimulus baseline period (during which the subject does not move) at the same frequency, or the average baseline power is subtracted and the result divided by the standard deviation of the baseline power at the same frequency. These two models for the pre-stimulus baseline correction of event-related spectral perturbations (ERSPs) are called the gain model and the additive model, respectively, and both are used in EEG studies. The units of ERSP are thus a z-score or a percentage of the average baseline power, but spectral perturbations can also be expressed as the log value of this percentage [ ]. For example, increases in amplitude of the cortical oscillations in delta and gamma bands are observed during both the planning and execution of movement [ ]. The initiation of voluntary movements has been linked to desynchronization of cortical activity in alpha and beta bands in electrocorticography and scalp EEG recordings over the motor and premotor cortices [ , ]. We should keep in mind that these oscillations, whatever the considered frequency band, are not specific to movement and have been attributed to numerous cognitive processes such as memory or attention [ , ]. Contrary to ERPs, EEG power changes do not need to be phase-locked (nor precisely time-locked) to a particular event at each trial (e.g., foot strike, or the start of the anticipatory postural adjustments for gait initiation). In summary, time-frequency analysis of EEG activity contributes to a better understanding of the neuronal oscillations that underlie information processing in the brain or the programming of a movement. The most frequent pattern before and during movement, whatever its nature, is a decrease of alpha- and beta-band power starting over the sensorimotor cortex. The mu rhythm is of particular interest [ , ]. The latter is defined by activity in the alpha band recorded by scalp electrodes over the sensorimotor cortex during movement. Although it falls within the alpha band, the mu rhythm is distinct from the alpha rhythm, since the latter is recorded occipitally, reacts to eye opening and is not specifically related to movement [ , ]. The mu rhythm was first described as reflecting synchronized activity in large groups of pyramidal neurons in the brain’s motor cortex [ ]. A role of the mu rhythm in the mirror neuron system [ ] has subsequently been proposed, since the mu rhythm is also attenuated during observed movements [ ]. Despite this specificity, in most publications it is not distinguished from the alpha rhythm. In the present review, we will use interchangeably either alpha power decrease/increase (i.e., alpha ERD/ERS) or mu rhythm decrease/increase (mu ERD/ERS), according to the methodology used in the different articles.
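As an illustration of the two baseline models described above, the following is a minimal sketch of an ERSP computation, assuming MNE-Python. The simulated epochs, frequency range, baseline window and decimation factor are assumptions made for the example only.

```python
import numpy as np
import mne

# Hypothetical epochs time-locked to movement onset (3 s, 500 Hz, 40 trials).
info = mne.create_info(ch_names=["Cz", "C1", "C2"], sfreq=500.0, ch_types="eeg")
sim = np.random.default_rng(0).standard_normal((40, 3, 1500)) * 1e-6
epochs = mne.EpochsArray(sim, info, tmin=-1.5)

freqs = np.arange(4.0, 31.0, 1.0)      # theta up to beta 2
n_cycles = freqs / 2.0                 # wavelet length grows with frequency

power, itc = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=n_cycles, return_itc=True, decim=2
)

# Gain model: divide each time-frequency point by the mean pre-stimulus baseline
# power at the same frequency ("percent" and "logratio" are common variants).
ersp_gain = power.copy().apply_baseline(baseline=(-1.0, -0.5), mode="ratio")

# Additive model: subtract the mean baseline power and divide by its standard
# deviation, yielding a z-score per frequency.
ersp_z = power.copy().apply_baseline(baseline=(-1.0, -0.5), mode="zscore")

# `itc` quantifies inter-trial phase locking: values near 1 indicate evoked,
# phase-locked activity rather than induced oscillations.
```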
## 4. Cortical Oscillations during Gait Initiation in Healthy Subjects

Cortical areas involved in gait initiation include the sensorimotor and premotor cortices, in connection with the basal ganglia and brainstem structures. It was initially suggested that the motor programs underlying gait initiation were stored in subcortical structures, and could be elicited by a startling stimulus or a decision for action [ , , , ]. However, studies in patients with focal lesions of the supplementary motor area (SMA) and studies in patients with Parkinson’s disease have shown that the motor program can also be modulated at the supraspinal level, with implication of the SMA, the basal ganglia and the pontomedullary reticular formation [ , ]. Moreover, inhibitory repetitive transcranial magnetic stimulation over the SMA shortens the duration of anticipatory postural adjustments for a brief period, i.e., for the first stepping trial after stimulation [ ]. As a consequence, cortical activation seems to modulate the timing of the motor program directly (or via cortico-subcortical loops). The output of this pathway is located in the midbrain locomotor region (which may correspond in part to the cuneiform nucleus and the dorsal part of the pedunculopontine nucleus), which is connected to limbic structures and the basal ganglia [ ]. Attentional control can also modulate gait initiation: either directly, by involving brainstem structures (e.g., the alerting process induced by a loud stimulus can produce a StartReact effect), or indirectly, via a cortical loop that includes more complex attentional networks [ , , ]. Indeed, gait initiation requires more attentional resources than gait [ ] and may cause more dual-task interference with an attentional task than steady-state walking does [ ]. For instance, errors in motor programming have been exhibited in tasks requiring executive control [ ], and particularly in older subjects [ ]. It has been demonstrated that gait initiation is associated with the desynchronization of sensorimotor rhythms (alpha and beta bands) related to sensorimotor cortex activation [ ]. Alpha and beta ERD are sensitive to the attentional demand, being larger when selective attention is required during the preparation of movement. Different patterns of alpha/beta ERD during preparation of step initiation according to the attentional demand have also been noticed: earlier alpha/beta ERD in the case of an alerting stimulus, and more prolonged beta ERD in the case of conflicting information [ ]. This implies that alpha and beta ERD during gait initiation are directly modulated by the attentional abilities of the subject. This modification of sensorimotor cortex activation has direct consequences for motor commands and could lead, for example, to errors in the motor program (i.e., errors in anticipatory postural adjustments that increase with ageing). During gait, oscillations in lower bands (delta, theta) are more difficult to interpret since they largely overlap with the ERPs locked to the stimulus, leading to high inter-trial coherence (ITC; a quantification of event-related phase modulations locked to an event) [ ]. We should also point out that activations over non-motor areas (prefrontal, temporal, etc.) are not specific to gait initiation but are also recorded when the attentional task is coupled, for instance, with button pressing and not with step initiation. They may reflect the attentional processes more than the movement preparation itself. We can give the example of EEG scalp recordings during gait initiation in 30 healthy subjects using a flanker task in (signal locked to the onset of the anticipatory postural adjustments). Some of these data have been previously published in [ ].
Subjects had to initiate gait with the leg indicated by the direction of a target arrow that was surrounded by either congruent or incongruent flankers. In the latter case, executive control is necessary to inhibit the incongruent flankers indicating the wrong side. We observed an earlier and larger alpha/beta ERD in the case of incongruent flankers, reflecting the interaction between executive attentional control and motor preparation.

## 5. Cortical Oscillations during Gait in Healthy Subjects

The first proper analyses using methods to avoid artefacts, such as ICA as described in the methodological considerations above, were conducted on eight subjects during treadmill walking [ ]. In this analysis, each gait cycle was expressed as a function of the average gait cycle. Significant alpha- and beta-band power increases over the sensorimotor cortex and dorsal anterior cingulate cortex occurred during the end of stance, as the leading foot was contacting the ground and the trailing foot was pushing off. This demonstrates that, even under steady-speed walking conditions, the cortex shows moment-to-moment adjustments in its level of activity. In another study, including six subjects walking slowly on a treadmill, the analysis consisted of expressing changes in EEG oscillations during gait relative to the EEG activity during standing while fixating a cross on a computer screen [ ]. When looking at changes throughout the gait cycle, a desynchronization occurred in the mu and beta bands during the swing phase. Just before heel strike and during the double support phase, increases in mu and beta power were observed. These results were later confirmed ([ ], robotic gait with a Lokomat versus gait on a treadmill), and source analysis [ ] revealed that beta ERD is located in the central sensorimotor area, which is consistent with the somatotopic representation of leg movements [ , ]. This location found with scalp recordings was confirmed during electrocorticographic recordings of leg movements [ ]. An example of a scalp EEG recording in one healthy subject by our group is given in . Indeed, the mu rhythm and beta-band power decreases observed over the central sensorimotor and parietal areas during active walking relative to standing are similar to the desynchronization in mu- and beta-band power observed in the motor system during the preparation and voluntary execution of movements [ ]. Beta-band power increases in scalp EEG data are related to movement suppression or, more probably, to sensorimotor integration of the movement, since this synchronization disappears when sensory information is disrupted [ ]. Recently, electrocorticographic recordings in two subjects pointed out the precise role of the primary motor cortex (M1) in gait pattern generation [ ]. Mu, beta, and gamma oscillations were recorded during steady gait and gait across multiple walking speeds. Only gamma oscillations were consistent in both subjects during the tasks and were directly related to gait speed or gait initiation. Beta modulation was recorded in only one subject. More generally, gamma oscillations in M1 encode high-level motor control and probably interact with subcortical/spinal networks, which are responsible for low-level motor control. The main limitation of this study was the small number of subjects and the lack of recording of key structures such as the SMA.
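Two normalizations recur in the studies above: expressing power as a function of the gait cycle, and referencing walking power to a standing baseline. The sketch below illustrates both with synthetic placeholder arrays; the sampling rate, stride duration and band-power envelopes are assumptions for illustration, not data from the cited studies.

```python
import numpy as np

fs = 500                                          # sampling rate in Hz (assumption)
rng = np.random.default_rng(2)
beta_power = rng.random(60 * fs)                  # beta-band power envelope while walking
heel_strikes = np.arange(0, 60 * fs, int(1.2 * fs))  # one heel strike every ~1.2 s

# (1) Resample each stride onto a common 0–100% gait-cycle axis and average.
cycle_axis = np.linspace(0.0, 100.0, 101)
strides = []
for start, stop in zip(heel_strikes[:-1], heel_strikes[1:]):
    stride = beta_power[start:stop]
    x = np.linspace(0.0, 100.0, stride.size)
    strides.append(np.interp(cycle_axis, x, stride))
mean_cycle = np.mean(strides, axis=0)

# (2) Express walking power relative to a quiet-standing baseline (ERD/ERS in %).
standing_power = rng.random(20 * fs)
erd_ers_percent = 100.0 * (mean_cycle - standing_power.mean()) / standing_power.mean()
```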
Indeed, these results have to be compared with those of a study using fMRI and PET [ ]. The latter study compared imagined locomotion (multiple initiations, stops, changes in speed), which mainly recruits an indirect pathway of modulatory locomotion (via the SMA, basal ganglia and mesencephalic locomotor region), with real locomotion, which recruits a direct pathway of steady-state locomotion (in this model, M1 hypothetically drives the central pattern generators directly). Coupling between EEG and electromyography (EMG) activity during gait has also been studied. Significant coupling between EEG recordings over the leg motor area and EMG from the tibialis anterior muscle (an ankle dorsiflexor) was found in the gamma frequency band prior to heel strike and during the swing phase of walking, but not during the support phase [ ]. In another recent study, cortical power, corticomuscular coherence, and ITC were evaluated. In contrast to the previous study, corticomuscular coupling at theta, alpha, beta, and gamma frequencies increased during the double support phase of the gait cycle [ ]. The authors concluded that the coherent activity between M1 and muscle would reflect, given its high ITC, an evoked response. Therefore, an additive response would be evoked during the double support phase. Recently, EEG recorded from the leg area of M1 and EMG recorded from ankle plantar flexor muscles have shown coupled gamma oscillations in the stance phase during treadmill walking [ ]. Our group has also demonstrated that neither excitatory nor inhibitory repetitive transcranial magnetic stimulation of M1 was able to change the level of activity of the leg muscles during gait, contrary to what is observed for simple upper limb movements, suggesting a more complex role of M1 than simply controlling muscle tone during gait [ ].
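A minimal sketch of how such EEG–EMG (corticomuscular) coherence can be computed is given below, assuming two 1-D signals already segmented by gait phase. The sampling rate, window length, gamma-band limits and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

fs = 500.0                                       # sampling rate in Hz (assumption)
rng = np.random.default_rng(0)
eeg_swing = rng.standard_normal(20 * int(fs))    # placeholder: concatenated swing-phase
emg_swing = rng.standard_normal(20 * int(fs))    # EEG and tibialis anterior EMG samples

# Welch-based magnitude-squared coherence; 1-s windows give ~1 Hz resolution.
f, cxy = coherence(eeg_swing, emg_swing, fs=fs, nperseg=int(fs))

# Average coherence in an assumed gamma band (30–45 Hz), where EEG–EMG coupling
# has been reported before heel strike and during the swing phase.
gamma = (f >= 30) & (f <= 45)
print(f"mean gamma-band coherence: {cxy[gamma].mean():.3f}")
```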
## 6. Cortical Oscillations during More Challenging Tasks

In daily life, subjects do not walk slowly on a treadmill but have to adapt their speed, change their pace, modify their stride length according to an obstacle, or navigate according to different cues (visual, auditory, etc.). When comparing walking on an incline with walking on a level surface, theta power was greater in the anterior cingulate, sensorimotor and posterior parietal clusters during incline walking. Incline walking also induced differences in the gamma band, suggesting that these areas are involved in the control of gait in these conditions [ ]. Moreover, as stated earlier, cortical activations are directly linked to muscle tone, although this is not the main mechanism of control [ ], and corticomuscular coherence also differs according to the type of walk, predominating in the swing phase during overground walking and in the stance phase during ramp walking [ ]. Most of the previous studies did not include gait in real-world conditions that involve cognitive processing. For example, when subjects had to adapt their speed (e.g., increasing their gait speed), beta and gamma synchronizations in prefrontal and parietal areas were enhanced, suggesting that executive control of sensorimotor areas was intensified in order to improve speed tracking performance [ ]. Furthermore, according to the walking conditions (level ground, ramp ascent, and stair ascent), differences in activation in the posterior parietal cortex or M1 occurred [ ]. Alpha and beta ERD were more pronounced at the beginning of the gait cycle for more challenging gait conditions. Beta ERD was also larger over the posterior parietal cortex. When walking in synchrony with a series of cue tones, requiring the subject to adapt step rate and length to sudden shifts, beta-band power increases were observed in the medial prefrontal and dorsolateral prefrontal cortex [ ]. Once again, the role of such cortical regions in cognitive control of gait was proposed as an explanation of this pattern, and Wagner et al. [ ] attributed this specific pattern of beta synchronization to cognitive top-down control. Dual-tasking situations are common in daily life, especially those involving the concurrent performance of a cognitive task and gait [ ]. For example, people often send or read a text message while walking. McIsaac et al. [ ] have proposed to define dual-tasking as “the concurrent performance of two tasks that can be performed independently, measured separately and have distinct goals”. Dual-tasking can lead to changes in gait performance, and these changes are considered as the costs of carrying out a second task concurrently. EEG oscillations have been evaluated during dual-tasking. In [ ], four tasks were performed: normal walking on a treadmill, two dual tasks involving gait together with either additions and subtractions or video watching, and a visual-cueing task asking the subjects to adapt their gait pattern. No clearly different patterns of neural oscillations in classical frequency bands were observed, although support vector machine procedures were able to classify attention tasks by differences in gamma-band activity. In another protocol replicating scenarios in which humans were required to evaluate the environment for accurate stepping (i.e., by using different color marks that forced adaptation of step length and width), Oliveira et al. [ ] showed changes during mid-stance in the frontal lobe and motor/sensorimotor regions, a phase of the gait cycle in which participants defined the correct foot placement for the next step. These changes consisted mainly of increases in electrocortical activity over the prefrontal cortex in the beta and gamma bands when precision stepping was required. This higher neuronal synchronization in the gamma band has been related to increased attention but remains, in our opinion, difficult to isolate in scalp EEG, particularly during movement, even with the use of pre-processing methods such as ICA.

## 7. Conclusions

As gait requires cortical resources, measuring cortical activity in real time is a necessary step to understand motor control during gait. Time-frequency analysis of EEG is well suited to capture this activity. The main pattern of cortical activation during gait is an activation of the sensorimotor areas that is reflected by mu and beta desynchronizations, predominantly during the swing phase of the gait cycle and during preparation of movement for gait initiation. Beta synchronization occurs at foot strike. The distinct roles of mu/beta oscillations are less clear than in simple movements, since desynchronizations and synchronizations during gait frequently overlap both bands. Gamma oscillations are also crucial for encoding gait speed or initiation, but methodological concerns still exist with scalp recordings. These spectral patterns are directly influenced by the walking context (changing the speed, overground versus ramp walking,…) and, when analysing gait with a more demanding attentional task, other areas (prefrontal, posterior parietal cortex) seem specifically involved, with the occurrence of beta/gamma oscillations.
The decoding of this brain activity is a necessary step towards building valid brain–computer interfaces (BCIs) able to generate gait artificially [ ]. As a perspective, a real-time closed-loop BCI that decodes lower limb joint angles from scalp EEG during treadmill walking in order to control the walking movements of a virtual avatar has already been built [ ]. This kind of approach could be useful in rehabilitation programs, since healthy subjects are able to adapt an avatar’s gait pattern controlled via a closed-loop EEG-based BCI within eight days of training. It could be beneficial for patients suffering from deficits in either the upper or lower structures of gait control, such as patients with incomplete spinal cord injury [ ], patients with stroke [ ], or patients with Parkinson’s disease [ , ].
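To illustrate the decoding principle behind such BCIs, the following is a toy sketch of a linear decoder mapping EEG band-power features to a lower limb joint angle; the feature dimensions and data are synthetic assumptions and do not reproduce the pipelines of the cited studies.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, n_features = 2000, 32                 # e.g., 8 electrodes x 4 frequency bands
X = rng.standard_normal((n_windows, n_features))        # band-power features per window
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.5 * rng.standard_normal(n_windows)   # simulated knee-angle trace

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)         # linear ridge-regression decoder
print(f"held-out R^2: {decoder.score(X_test, y_test):.2f}")
```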
Degeneration of neurons such as the inner ear spiral ganglion neurons (SGN) may be decelerated or even stopped by treatment with neurotrophic factors, such as brain-derived neurotrophic factor (BDNF), as well as by electrical stimulation (ES). In a clinical setting, drug treatment of the SGN could start directly during implantation of a cochlear implant, whereas electrical stimulation begins days to weeks later. The present study was conducted to determine the effects of consecutive BDNF and ES treatments on SGN density and electrical responsiveness. An electrode drug delivery device was implanted in guinea pigs 3 weeks after deafening and five experimental groups were established: two groups received intracochlear infusion of artificial perilymph (AP) or BDNF; two groups were treated with AP or BDNF, respectively, in addition to ES (AP + ES, BDNF + ES); and one group received BDNF from the day of implantation until day 34, followed by ES (BDNF ⇨ ES). Electrically evoked auditory brainstem responses were recorded. After one month of treatment, the tissue was harvested and the SGN density was assessed. The results show that consecutive treatment with BDNF and ES was as successful as the simultaneous combined treatment in terms of enhanced SGN density compared to the untreated contralateral side, but not in regard to the number of protected cells.

## 1. Introduction

Cochlear implant technology is the state-of-the-art therapy for patients suffering from severe to profound sensorineural hearing loss. During surgery, a silicone-based electrode device is inserted into the inner ear. Up to 22 electrode contacts electrically stimulate residual peripheral neurons of the hearing nerve and allow a hearing sensation for deaf patients. Particularly at the electrode–nerve interface, pathophysiological processes occur that can be manipulated by optimizing the biological aspects and combining this approach with the implant. One of the most important problems that hinder further improvement of cochlear implant outcomes is the secondary degeneration of the target cells of electrical stimulation, the spiral ganglion neurons (SGN) [ ]. Sensorineural hearing loss is primarily caused by the loss of hair cells in the organ of Corti of the inner ear. Following hair cell loss, the auditory nerve degenerates. Initially, the afferent fibers of the SGN degenerate, and subsequently the SGN and their central projections die [ , , ]. Electrical stimulation (ES) delivered by a cochlear implant may provide trophic input to the neural structures of the inner ear, and studies suggest that ES can preserve SGN following deafness [ , , ]. However, results concerning the protective effects evoked by ES alone are inconsistent across research groups [ , , ]. An additional strategy to enhance SGN survival following deafness is to replace the lost endogenous neurotrophic factor (NTF) support through local drug delivery [ , , ] and thus prevent degeneration. The combination of electrical stimulation with growth factor treatment presents a scenario closer to the normal clinical situation, in which the auditory nerve is actively depolarized by the cochlear implant and trophic factors can support neuronal survival. An additive effect on enhancing survival of SGN was found with the combined application of electrical stimulation and NTF such as BDNF [ , ] and glial cell line-derived neurotrophic factor (GDNF) [ , ].
Since candidates for cochlear implantation typically seek therapy following some period of deafness, an animal model with a delay of a few weeks between deafening and the start of treatment is closer to the clinical situation than treatment initiation immediately post deafening. In a guinea pig model, a reduction in SGN numbers to 60% of normal can be observed after 4 weeks, whereas after 2 weeks no significant reduction was detected [ ]. Other authors reported reduced SGN survival after 2 weeks [ ]. Therefore, several studies used a delay of 3 weeks for their investigations [ , ]. Previous studies assessed the survival of spiral ganglion cells after a period of up to six weeks following deafness before treatment with BDNF plus fibroblast growth factor 1 (FGF1) [ ] or before combined treatment with GDNF and ES [ ] or BDNF and ES [ ]. The results of these studies confirmed that NTF treatment benefits can be significantly increased by the additional application of electrical stimulation, even with a treatment delay after the onset of deafness. Furthermore, when ES and BDNF were applied simultaneously but ES was delivered 2 to 6 weeks longer than BDNF, the positive effects of the combined treatment were maintained in a guinea pig study [ ]. Additionally, in a study with deafened kittens, ES alone was able to preserve the positive effects of a previous combined treatment [ ]. As the electrical stimulation of cochlear implant patients starts days or weeks after implant surgery, while treatment with NTF could potentially start directly with implantation, a consecutive treatment of patients with NTF and ES might be closest to clinical practice. The aim of the current study was to investigate the effect of a delayed consecutive treatment on SGN density and electrical responsiveness and to compare it with the effects of BDNF or ES treatment alone or in the combined condition.

## 2. Materials and Methods

### 2.1. Experimental Subjects

Thirty-five healthy pigmented guinea pigs of both sexes (Charles River WIGA GmbH, Sulzfeld, Germany), weighing between 280 and 550 g (age at implantation between 4 and 8 weeks), were used in the study in accordance with the German “Law on Protecting Animals” and the European Communities Council Directive 86/609/EEC for the protection of animals used for experimental purposes. All experiments were approved by the Institutional Animal Care and Research Advisory Committee and permitted by the local government (LAVES, registration no. 02/558 and 04/913). All procedures were performed under general anesthesia with xylazine (10 mg/kg, i.m.; Rompun, Bayer, Leverkusen, Germany) and ketamine (40 mg/kg, i.m.; Ketamin Gräub, Albrecht GmbH, Aulendorf, Germany). The animals were divided into five treatment groups: artificial perilymph (AP; n = 7; control group), AP with chronic electrical stimulation (AP + ES; n = 7), BDNF alone (BDNF; n = 9), BDNF with chronic electrical stimulation (BDNF + ES; n = 5) and, as the fifth group, BDNF treatment from experimental day 21 until day 34 followed by ES from day 34 until day 48 (BDNF ⇨ ES; n = 7). An overview of the different treatment groups is provided in .

### 2.2. Electrode Drug Delivery Device

The device (Cochlear Ltd., Sydney, Australia) consisted of six platinum contacts, 0.3 mm each, with 0.4 mm spacing ( ). Beginning at the electrode tip, only the first and fourth contacts were linked to the connector via parylene-insulated platinum–iridium wires. The wires were embedded in a silicone matrix of about 450 µm outer diameter.
The silicone matrix included a drug delivery channel (diameter: ~200 µm) with a single opening at the electrode tip. The wires and the drug channel separated at a distance of 22 mm from the tip ( A). During surgery, the silicone tube for drug delivery was connected to the flow moderator of a mini-osmotic pump with an infusion rate of 0.5 µL/h, suitable for 14 days of delivery (Alzet model 2002; Durect Corp., Cupertino, CA, USA). The day before surgery, the pumps were filled with either AP with addition of 0.1% guinea pig serum albumin (Sigma-Aldrich, Steinheim, Germany) [ ] or 50 ng BDNF (R&D Systems, Wiesbaden, Germany) diluted in 1 mL of serum albumin-containing AP, and primed in saline at 37 °C overnight.

### 2.3. Acoustically Evoked Auditory Brainstem Response (AABR)

Acoustically evoked auditory brainstem response (AABR) measurements verified normal hearing bilaterally in all tested animals prior to inclusion in this study. The hearing threshold of each subject was evaluated as previously described [ ]. In brief, on experimental days 0 and 21, click stimuli were delivered to anesthetized animals and the neural responses were recorded, filtered (between 0.1 and 3 kHz) and averaged. The hearing threshold, defined as the lowest stimulus level that generated a visually replicable waveform in normal hearing guinea pigs, was determined. A threshold shift of at least 50 dB SPL on day 21 was required for deafened animals to be included in this study.

### 2.4. Deafening Procedure

After the day 0 AABR measurements, all animals were deafened under general anesthesia by application of kanamycin and ethacrynic acid. Kanamycin was applied subcutaneously (400 mg/kg) and, two hours later, 40 mg/kg ethacrynic acid was injected into the jugular vein of the animal. The method was adopted from West [ ].

### 2.5. Surgical Implantation Procedure of the Electrode and the Pump

On experimental day 21, after confirmation of deafness by AABR measurement, all animals were implanted with the electrode drug delivery device ( ). The electrode was implanted unilaterally in the scala tympani of the left inner ear via the round window; the drug delivery channel was filled with the solution to be delivered and connected to the mini-osmotic pump; and the pump was positioned between the scapulae as previously described [ ]. Epidural recording electrodes and restraint bolt holding screws were implanted as described by [ ]. Prior to the sealing of the bulla defect with carboxylate cement (Durelon, ESPE Dental AG, Seefeld, Germany), electrically evoked auditory brainstem responses (EABRs) were recorded to confirm the functionality of the electrode array.

### 2.6. Electrically Evoked Auditory Brainstem Response (EABR)

Electrically evoked ABRs were recorded on experimental day 21 directly after electrode implantation and subsequently on days 34, 41 and 48 of the experiment in all groups with ES (AP + ES, BDNF + ES, BDNF ⇨ ES), and on day 28 in the AP + ES and BDNF + ES groups. Monophasic current pulses (duration: 50 µs) were presented at 50 Hz through a 10 MHz pulse generator (TGP 110, Thurlby Thandar Instruments, Huntingdon, UK). With a custom-made converter, every second stimulus was changed to negative phase, thus creating alternating pulses. Responses were recorded according to Mitchell et al. (1997) with electrodes placed at the vertex (1 cm posterior to bregma), at the midline (2 cm anterior to bregma), and 1 cm lateral to bregma, ipsilateral to the implant (ground electrode).
The averaged response to 500 presentations of a given stimulus was recorded using a Viking IV device (Nicolet Biomedical Corp., Madison, WI, USA). The stimulus level was adjusted in steps of 10 µA. The threshold was defined as the lowest stimulus level that evoked a 1 µV or larger replicable wave III.

### 2.7. Impedance Measurement

Impedances were measured after each EABR measurement using a current of 100 µA. After a series of measurements with known resistances between 2.19 kΩ and 14.93 kΩ, a linear calibration curve was established that allowed the transformation of voltage values provided by an oscilloscope into impedance values.

### 2.8. Chronic Electrical Stimulation

Animals of the BDNF + ES and AP + ES groups received continuous pulsatile electrical stimulation for 24 h a day for 24 days, beginning on day 24 (3 days after implantation), via a portable electrical stimulator mounted on the head of the animal (provided by the University of Michigan, Ann Arbor, MI, USA). Electrical stimulation presented biphasic charge-balanced pulses (100 µs per phase, 250 Hz at 40% duty cycle) at 8 dB above the electrical response threshold. The design of the electrical stimulator has been described in more detail in Mitchell et al. (1998). The BDNF ⇨ ES group was electrically stimulated accordingly from day 34 to day 48, after stopping the delivery of BDNF on day 34.

### 2.9. Histological Procedures

On experimental day 48, 200 mL of phosphate buffered saline (PBS) followed by 200 mL of 4% glutardialdehyde in PBS were perfused transcardially under general anesthesia. The temporal bones were isolated from the skull base and the bulla was opened and examined for tissue reactions and infections. The electrode was extracted and the cochlea was prepared for histology following the previously described procedures for fixation and decalcification of guinea pig cochleae [ ]. The tissue was embedded in paraffin, serially sectioned at 5 µm and mounted on glass slides in order to quantitatively assess the number of SGN. Midmodiolar sections were chosen for evaluation as these provide six to seven cross sections of Rosenthal’s canal. Images were taken, SGN were counted and the profiles of each Rosenthal’s canal were assessed. Only neurons with a minimum perikaryal diameter of 12 µm and a discernible nucleus were chosen and included for analysis. This approach led to a parameter called the SGN density, referring to the number of SGN per 10,000 µm² [ ]. Since SGN counting and area measurements were not always reliable at the most apical sites, these measurements were combined with those of the fourth middle turn. Measurements and quantification were performed microscopically at a magnification of 200× (Olympus CKX41, Hamburg, Germany). Images were taken with a charge-coupled device (CCD) camera (colorview XS, SIS, Muenster, Germany) and processed using an image analysis program (analySIS Version 3.2, Olympus, Hamburg, Germany).

### 2.10. Data Analysis

All data on SGN density as well as EABR thresholds were tested for normality and are reported as mean ± SD for descriptive statistics. The paired t-test was applied to analyze the SGN density differences between the treated left side and the untreated right side within the animals of one group, as well as for the analysis of SGN density differences between the basal and apical cochlear turns within one group. Group comparisons were conducted, depending on the result of the normality test, either by using ANOVA followed by Tukey’s multiple comparison test or by using the Kruskal–Wallis test followed by Dunn’s post-test. For comparison of EABR thresholds at different time points within one group, repeated measures ANOVA followed by Dunnett’s multiple comparison test or, in the case of non-Gaussian distribution, the Friedman test followed by Dunn’s post-test were applied. All analyses were performed using GraphPad Prism software (La Jolla, CA, USA).
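As a minimal sketch of this group-comparison workflow (not the authors' actual analysis, which was performed in GraphPad Prism), the decision between parametric and non-parametric omnibus tests could look as follows in Python; all values below are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder SGN densities (SGN per 10,000 um^2) per treatment group.
groups = {
    "AP":      np.array([2.1, 3.0, 1.5, 2.8, 2.2, 3.4, 1.9]),
    "AP+ES":   np.array([2.5, 3.8, 2.9, 3.1, 2.4, 4.0, 3.3]),
    "BDNF+ES": np.array([4.1, 4.9, 3.8, 5.2, 4.6]),
}

# Paired comparison: treated vs. untreated ear of the same animals.
treated   = np.array([4.1, 4.9, 3.8, 5.2, 4.6])
untreated = np.array([2.8, 3.1, 2.6, 3.5, 3.0])
t_stat, p_paired = stats.ttest_rel(treated, untreated)

# Choose the omnibus test according to the normality of each group.
normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())
if normal:
    stat, p_group = stats.f_oneway(*groups.values())  # follow up with Tukey's test
else:
    stat, p_group = stats.kruskal(*groups.values())   # follow up with Dunn's test

print(f"paired p = {p_paired:.3f}, group p = {p_group:.3f}")
```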
## 3. Results

### 3.1. AABR and Deafening

The animals had an initial hearing threshold of 34 ± 5 dB SPL (mean ± SD). The kanamycin and ethacrynic acid treatment resulted in all cases in an AABR threshold shift of at least 50 dB (average: 75 ± 9 dB).

### 3.2. Functional Results Based on EABR Measurements

To monitor treatment-related threshold changes, electrically evoked auditory brainstem responses were recorded on day 21 and weekly throughout the experiment in all electrically stimulated animals, except for animals in the BDNF ⇨ ES group, in which no measurements were performed on day 28. The development of the average EABR threshold of all electrically stimulated groups is plotted in A. The EABR thresholds (mean ± SD) of the BDNF ⇨ ES and AP + ES groups decreased significantly during the treatment period, from 260 ± 32 µA and 305 ± 93 µA right after implantation to 177 ± 39 µA ( p < 0.05) and 202 ± 57 µA ( p < 0.01) on day 48, respectively. In animals with BDNF and simultaneous chronic electrical stimulation (BDNF + ES), the highest thresholds were also measured on day 21 (318 ± 54 µA) but the lowest on day 34 (246 ± 48 µA), whereupon the average threshold increased again by day 41, resulting in a mean threshold of 264 ± 133 µA at day 48. In this group, no significant differences between experimental days were detected. Comparing the mean threshold shifts between days 21 and 48 of the three stimulated groups, no significant differences between the groups were observed ( B). A more detailed overview of the hearing threshold changes for all electrically stimulated animals between days 21 and 48 is provided in . In only one case, in the BDNF + ES group, did the hearing threshold increase during the experimental period ( B). In all other cases the threshold decreased.

### 3.3. Impedance Measurements

At the time of the EABR measurements, electrode impedances were also measured. Impedances increased between implantation on day 21 and day 48 by 3.8 ± 3.0 kΩ in the AP + ES group, by 2.7 ± 1.6 kΩ in the BDNF + ES group, and by 3.8 ± 2.7 kΩ in the BDNF group with consecutive ES. No differences between groups were detected (data not shown).

### 3.4. Histological Results

#### 3.4.1. Spiral Ganglion Cell Survival within Each Group

All subjects used in this study were implanted in the left cochlea, and the contralateral ear served as an internal control. shows the comparison between the mean SGN density of the untreated right ears and the treated left ears within each group. The implanted cochleae of all groups except the AP group showed higher SGN densities than the untreated contralateral ears, even though this difference was significant only in the BDNF-treated groups with simultaneous or consecutive electrical stimulation ( p < 0.05 in both cases). Comparing the treated cochleae of all experimental groups, as well as comparing all BDNF- and/or ES-treated ears with the AP-treated control ears, no differences in the mean SGN density were observed.
The mean density of SGN was lowest in the AP-treated animals (2.4 ± 1.1 SGN/10,000 µm²). In the other groups, the SGN density on day 48 was between 3.8 ± 3.3 SGN/10,000 µm² (BDNF) and 4.6 ± 1.2 SGN/10,000 µm² (BDNF + ES), with relatively large variations between animals.

#### 3.4.2. Protected Spiral Ganglion Cells

To evaluate the density of protected SGN per animal, the differences between the densities of surviving SGN in the implanted and treated cochleae and the non-implanted contralateral cochleae were calculated. The mean densities of protected SGN of all four treatment groups ( ) were compared to the AP group (0.5 ± 0.6 SGN/10,000 µm²); neither chronic electrical stimulation nor BDNF treatment alone had a protective effect on SGN survival (0.6 ± 0.9 and 0.5 ± 1.9 SGN/10,000 µm², respectively). The protection of SGN achieved by the combined treatment with BDNF and ES (1.7 ± 0.8 SGN/10,000 µm²) was statistically significant ( p < 0.01) when compared to the AP group. Delivery of BDNF with consecutive ES resulted in 0.5 ± 0.5 protected SGN/10,000 µm² and was not different from the AP group. No differences in the density of protected SGN were observed between the AP + ES, BDNF, BDNF + ES and BDNF ⇨ ES groups. Variability in the results was lowest in both groups receiving BDNF and ES.

#### 3.4.3. Comparison between Basal and Apical Turns

Comparing the numbers of SGN in the basal (lower and upper basal) and apical (fourth middle and apical) turns, no differences were detected for the AP, AP + ES and BDNF ⇨ ES groups. In the BDNF ( p < 0.05) and BDNF + ES ( p < 0.001) groups, SGN density was higher in the basal turns. In the basal turns, SGN density was also significantly higher ( p < 0.05) in the BDNF + ES group (6.4 ± 2.0 SGN/10,000 µm²) than in the AP group (2.5 ± 1.1 SGN/10,000 µm²) ( ).

## 4. Discussion

This study was conducted to examine the functional and neuroanatomical effects of delayed BDNF treatment with consecutive chronic ES on the deafened guinea pig cochlea. The results demonstrate that chronic ES starting after cessation of the BDNF treatment is able to preserve an increased SGN density compared to the untreated contralateral side ( ). This is in line with results from earlier guinea pig [ ] and cat [ ] studies, but it also goes beyond the known findings, as both earlier studies applied ES and BDNF simultaneously before cessation of the BDNF treatment and then continued with ES alone. Previous in vitro and in vivo work indicates that BDNF can promote SGN survival [ , , , , ]. In contrast, we did not detect a BDNF-induced increase in SGN survival in the present study. This may be due to the relatively low BDNF concentration of 50 ng/mL used in the present study, which was chosen based on the bioefficacy study performed by Wefstaedt [ ]. It cannot be excluded that this low concentration leads to a different activation of the BDNF receptors (tropomyosin-related kinase B (trkB) and p75NTR) and their downstream signaling cascades, even though beneficial effects of this BDNF concentration have been reported in vitro [ ] and in vivo [ ]. In contrast to the aforementioned studies, where 50 ng/mL BDNF resulted in neuroprotection but was applied directly to cultured cells or 7 days after deafening in an animal model, cochlear implantation in the present study was delayed until three weeks after deafening to induce SGN degeneration. It is therefore possible that, with treatment delayed until 3 weeks after deafening, the BDNF concentration of 50 ng/mL used in this study was too low for significant SGN protection in vivo.
ES alone did not significantly protect SGN from degeneration compared to the control group. Previous studies observed an SGN-protective effect in vivo evoked by chronic electrical stimulation [ , , , ]. However, this effect remains controversial because other studies were not able to reproduce it [ , , ]. One reason for the conflicting observations is that the beneficial effect of ES on neuronal survival depends on various parameters, such as pulse polarity, frequency, pulse width, the health of the target cells, and the duration of stimulation. The underlying mechanism of the neurotrophic effect of depolarization is a sustained rise in cytosolic Ca²⁺, entering through L-type Ca²⁺ channels, and the subsequent intracellular biological cascade [ , , ]. This effect of intracellular Ca²⁺ is in contrast to the critical role of cytosolic Ca²⁺ in mediating neuronal degeneration [ , ]. A hypothesis unifying these observations proposes that intracellular Ca²⁺ must rise to a particular “setpoint” to promote survival in the absence of neurotrophic factors, whereas degeneration is the result of very high cytosolic Ca²⁺ [ , , ]. Thus, survival occurs within a range of elevated Ca²⁺, with the lower end of the range determined by the Ca²⁺ setpoint for that neuron and the upper end by that neuron’s sensitivity to Ca²⁺-mediated neurotoxicity [ ]. On the other hand, using exactly the same stimulus parameters, a protective effect of AP + ES was found earlier [ ]. The only difference between the two studies was that in the earlier study a monopolar stimulation paradigm was used, whereas in the current study bipolar stimulation was used. As monopolar stimulation requires less current than the bipolar mode to reach threshold [ ], this might explain the different results observed in our studies. The combination of BDNF and ES resulted in a significant SGN protection. This confirms the results of other studies showing a synergistic effect of ES and BDNF regarding preservation of SGN (Shepherd et al., 2005). ES alone, when following a period of combined application with BDNF, was able to maintain the effect of the combined application for two to six weeks in a guinea pig model [ ]. In our setting, ES alone started only after cessation of BDNF delivery but still resulted in an increased neuronal density compared to the untreated contralateral sides. This appears slightly surprising, as neither treatment alone evoked an effect. We can only speculate about possible reasons, but we offer four possible explanations. The first is that, when ES was started, BDNF should still have been available in the scala tympani from the previous delivery. This period might be short, but there could still be a synergistic effect. The second explanation may be that the subsequent ES, by upregulating the transcription of the BDNF gene, increases the secretion of endogenous BDNF [ , , ] and therefore raises the perhaps insufficient initial BDNF concentration of 50 ng/mL to a concentration that causes a biological effect. A third explanation is that high-frequency neuronal activity, induced by ES, can upregulate the number of BDNF-specific TrkB receptors on the surface of central nervous system neurons [ ]. Due to the increased number of receptors, the ES could lead to an enhanced mode of action of the BDNF (remaining from the previous delivery or endogenously produced due to the ES) in the SGN. However, it should be noted that this effect has so far not been proven for peripheral neurons such as the SGN.
The fourth explanation may be a statistical effect because, for some unknown reason, the standard deviation in the group with consecutive treatment is low compared to all other groups. In all other histological measures (number of protected cells and the difference between basal and apical parts), the simultaneous BDNF and ES application, but not the consecutive treatment, resulted in an improvement, especially in the basal part. When evaluating the final EABR thresholds on day 48, no differences between the AP + ES, BDNF + ES and BDNF ⇨ ES groups were detected. It has to be mentioned, however, that for AP + ES and BDNF ⇨ ES a significant reduction in the threshold from implantation to day 48 was observed, whereas in the BDNF + ES group there was only a tendency toward reduced thresholds. The most likely reason for not having found a similar reduction in the BDNF + ES group is the one animal with increasing thresholds from day 21 to 48. The impedance values of this animal were 8.68 kΩ on day 48 and therefore only slightly higher than the mean value in this group on this day (6.4 kΩ). A reduction in EABR thresholds over time was already reported by Kanzaki et al. (2002). They speculated that ES alone improves the functional state of the SGN [ ]. In contrast, Shepherd et al. (2005) measured a significant reduction in EABR thresholds only when BDNF was administered, either alone or in combination with ES [ ]. Since we detected decreased thresholds in the AP + ES group, our data support the findings of Kanzaki et al. that ES reduces the EABR threshold over the treatment period. There is a tendency toward reduced SGN density in AP-treated ears compared to the respective contralateral ears. This effect may be caused by increased neuronal degeneration resulting from the implantation, the delivery procedure, or the AP itself, and has already been reported in other studies that found lower SGN densities in deafened AP-treated cochleae when compared with the contralateral untreated deafened ears of the same animal [ , ]. Due to the relatively small sample sizes, the robustness of the statistical results could be limited, but altogether consecutive BDNF treatment and ES appears to be less effective in preventing neuronal degeneration than a combined application. As, in a clinical setting, ES typically starts a couple of days or weeks after implantation, a combined treatment right after implantation appears not to be realistic. A setup as used in a cat study [ ], with an early start of BDNF delivery followed by simultaneous application of BDNF and ES for some time and prolonged ES after cessation of the BDNF treatment, might also be promising. It would be worthwhile to compare both approaches under controlled conditions.

## 5. Conclusions

Consecutive treatment with BDNF and ES was similarly effective as simultaneous treatment with BDNF and ES in terms of the density of surviving neurons when compared to the untreated contralateral side, but in all other measures the simultaneous treatment was more promising. Therefore, we conclude that under clinical conditions both treatment strategies, BDNF application and electrical stimulation, should be combined. BDNF could be applied intraoperatively by use of a catheter or a drug depot with a longer release time, followed by an early start of ES, as already implemented in some clinics worldwide, to ensure a sufficient period of combined treatment.
Mind–body exercise has been proposed to confer both physical and mental health benefits. However, there is no clear consensus on the neural mechanisms underlying these improvements in health. Herein, we conducted a systematic review to reveal which brain regions or networks are regulated by mind–body exercise. PubMed, Web of Science, PsycINFO, SPORTDiscus, and China National Knowledge Infrastructure databases were systematically searched to identify cross-sectional and intervention studies using magnetic resonance imaging (MRI) to explore the effect of mind–body exercise on brain structure and function, from their inception to June 2020. The risk of bias for cross-sectional studies was assessed using the Joanna Briggs Institute (JBI) checklist, whereas that of interventional studies was analyzed using the Physiotherapy Evidence Database (PEDro) scale. A total of 15 studies met the inclusion criteria. Our analysis revealed that mind–body exercise modulated brain structure, brain neural activity, and functional connectivity, mainly in the prefrontal cortex, hippocampus/medial temporal lobe, lateral temporal lobe, insula, and the cingulate cortex, as well as the cognitive control and default mode networks, which might underlie the beneficial effects of such exercises on health. However, due to the heterogeneity of the included studies, more randomized controlled trials with rigorous designs, similar measured outcomes, and whole-brain analyses are warranted.

## 1. Introduction

Mind–body exercise is a form of multicomponent exercise that combines movement sequences, breathing control, and attention regulation, which distinguishes it from traditional physical exercise [ ]. It is also referred to as movement-based contemplative practice [ ] or mindful movement [ ], which emphasizes moving mindfully, and commonly includes Tai Chi Chuan (TCC), Qigong, and yoga. TCC is a form of mind–body exercise incorporating physical, cognitive, social, and meditative components [ ]. Qigong involves a set of relatively slow exercises that combine coordinated physical movements, breathing, and a meditative state to cultivate one’s internal energy, called “Qi”, in order to achieve body healing; Baduanjin (BDJ) is one of the most common forms of Qigong [ ]. Yoga is an ancient mind–body exercise which focuses on the present moment, consisting of physical postures (asanas), control of breath (pranayama), and the use of meditation (dhyana); the most common form is Hatha yoga [ ]. Compared with aerobic or resistance exercise, mind–body exercises are relatively low in intensity and slow in pace, making them particularly suitable for the elderly and groups with chronic diseases [ ]. In recent years, increasing research evidence has shown that mind–body exercise can improve and promote physical health [ , , ] as well as benefit mental health, including improving general cognition, executive function, learning, memory, and verbal fluency [ , , ]. Moreover, it aids in relieving stress [ , ], anxiety, depression, and other negative emotions [ , ], as well as enhancing the subjective well-being of an individual [ , ]. Numerous studies have reported promising results that support the effects of mind–body exercise on health. However, the mechanisms underlying these improvements remain largely unknown. Improvements in an individual’s physical and mental health at the behavioral level are often accompanied by changes in the structure or function of specific brain regions or networks [ ].
Therefore, understanding the effects of mind–body exercise on brain plasticity will significantly help to formulate more evidence-based interventions and, importantly, to improve the behavioral and brain health of healthy and clinical populations. The effects of mind–body exercise on brain plasticity have often been examined by magnetic resonance imaging (MRI). MRI is a frequently used non-invasive neuroimaging technique with remarkable spatial resolution that allows for the investigation of changes in cortical as well as subcortical brain regions, and it includes two major modalities: structural MRI (sMRI) and functional MRI (fMRI) [ ]. sMRI provides measures of cerebral anatomy in vivo [ ], and fMRI detects brain activity and network connectivity based on blood oxygenation level-dependent (BOLD) signals [ ]. A few studies have assessed structural or functional brain changes regulated by mind–body exercise using MRI. However, the available studies have several limitations, such as small sample sizes and diverse research designs and outcome measures, resulting in low statistical power and challenges in identifying consistent changes in brain regions or networks. To address these limitations, evidence-based medicine suggests that systematic reviews or meta-analyses should be used to combine the findings of multiple related primary studies to make them more persuasive [ ]. To the best of our knowledge, there is no systematic review or meta-analysis that has integrated findings across different types of mind–body exercise as a whole. Previous systematic reviews on TCC [ ] and yoga [ , ] were based on a small number of studies and did not focus on the specific brain regions or networks affected by mind–body exercise. In the present study, we conducted a systematic review of MRI-based studies investigating the relationship between mind–body exercise and brain structure and function to elucidate the possible neural mechanisms underlying the health benefits of mind–body exercises. Our findings can provide a theoretical basis for mind–body exercises in promoting the healthy development of the body, mind, and brain.

## 2. Methods

### 2.1. Literature Search and Study Selection

We performed a systematic electronic literature search in the PubMed, Web of Science, PsycINFO, SPORTDiscus, and China National Knowledge Infrastructure (CNKI) databases from their inception to June 2020 to identify relevant studies. The databases were searched for articles published in either English or Chinese using the following terms: “Tai Chi Chuan”, or “Taiji”, or “Qigong”, or “Baduanjin”, or “Wuqinxi”, or “Yoga”, or “mind-body exercise”, in combination with “neuroimaging”, or “fMRI”, or “MRI”. Corresponding Chinese words were used in the Chinese databases. In addition, we explored several other sources, including the bibliographies and citation indices of the pre-selected papers and direct searches of the names of frequently cited authors. All searched records were imported into EndNote X9 (Thomson Reuters), which facilitated the removal of duplicates. Two reviewers (ZX and ZW) independently selected and checked the eligible articles according to the inclusion criteria. Any disagreements were resolved through a discussion with a third reviewer (ZB).

### 2.2. Inclusion and Exclusion Criteria

The inclusion and exclusion criteria of the eligible studies were as follows. Participants: study populations consisted of healthy adults or older adults, regardless of sex, racial, and ethnic groups.
We excluded studies with subjects who had cognitive impairment or who suffered from organic diseases such as diabetes, fibromyalgia, knee osteoarthritis, or tinnitus, to avoid interference of the results by these factors. Type of exercise: the experimental group engaged in mind–body exercises, including Tai Chi Chuan (TCC), Qigong, and yoga. Studies with multimodal interventions comprising mind–body exercises were also included to increase the number of enrolled articles. However, we excluded studies that examined the sole effects of mindfulness or meditation, since the aim of this study was to examine the effect on brain health of holistic mind–body exercises, which involve structured movements, breath control, and attention modulation. Study design: to gain an overall understanding of mind–body exercise-related changes in the brain plasticity of healthy adults, there were no restrictions on the study design. Therefore, randomized and non-randomized controlled interventions and within-subjects intervention studies, as well as cross-sectional studies comparing experts to novices, were all included. With reference to previous studies [ , ], for the intervention studies the experimental group must have exercised for at least 4 weeks with more than one session per week, and for the cross-sectional studies the regular exercise duration must have been no less than 3 years, to provide sufficient time for changes in brain structure and function to occur. Outcomes: the imaging technique was restricted to MRI, including sMRI, task fMRI, and resting-state fMRI. The outcome measures included changes in structure (i.e., gray matter volume, density, and cortical thickness) and task or resting-state (de)activation and functional connectivity for pre- to post-mind–body intervention, or for mind–body expert–novice comparison. Literature type: peer-reviewed articles published in either English or Chinese.

### 2.3. Data Extraction

Two independent researchers (ZX and ZW) performed data extraction from the eligible studies. Any disagreements were resolved via discussions with a third researcher (ZB). For the cross-sectional studies, we extracted data on the sample size, group age, group description, main outcome measures, primary MRI results, and the association with behavioral results. For the intervention studies, the extracted data included participants and study design, sample size, group age, intervention frequency and duration, main outcome measures, primary MRI results, and the association with behavioral results.

### 2.4. Quality Assessment

We assessed the methodological quality of the included cross-sectional studies using the Joanna Briggs Institute (JBI) checklist for analytical cross-sectional studies [ , ]. The checklist comprises 8 items, and the possible answers were “yes”, “no”, “unclear”, or “not applicable”. According to relevant studies [ , ], studies were characterized as follows: (i) low risk of bias if more than 70% of items were scored “yes”; (ii) moderate risk of bias if “yes” scores were between 50% and 69%; and (iii) high risk of bias if “yes” scores were 49% or below. For the intervention studies, we used the Physiotherapy Evidence Database (PEDro) scale developed from the Delphi list [ ]. The scale consists of 11 items, which were scored as either 1 (the answer was “yes”) or 0 (the answers were “no”, “unclear”, or “not applicable”).
The studies were classified, using the total rating score (item 1 not scored), as having excellent (9–10), good (6–8), fair (4–5), or poor (<4) quality [ ]. In addition, to reduce the risk of bias in the assessment, two researchers (ZX and ZW) scored the quality of the included articles independently. Any conflicting scores between the two researchers were resolved via a discussion with a third researcher (ZB). ## 3. Results ### 3.1. Study Search and Characteristics shows a flow chart summarizing the study selection process recommended by the PRISMA (preferred reporting items for systematic reviews and meta-analyses) guidelines. Overall, 29 studies focused on evaluating mind–body exercise and brain plasticity using MRI met the inclusion criteria. It is worth noting that in 14 cases, authors published multiple articles on one actually performed study; therefore, we merged the related articles into one study, resulting in 15 actual studies. The characteristics of the included cross-sectional and intervention studies are shown in and , respectively, and studies using the same dataset were merged into one column and we reported the overall results. All the included studies involved healthy adult participants, with 10 studies involving elderly participants. There were nine cross-sectional studies which compared mind–body experts to controls. Of these, five studies focused on yoga experts and four focused on TCC experts who had regularly practiced for at least three or more years. The remaining six were intervention studies with different durations of the interventions (range 6–24 weeks) and frequency (range 1–5 sessions per week). Of these, four were randomized controlled trials (RCTs), one was a controlled trial, and one was a “before-and-after” study with no control group. In addition, three studies applied TCC, two studies applied yoga, one applied BDJ, and one applied a multimodal intervention comprising cognitive training, TCC exercise, and group counseling. These intervention studies examined the brain health outcomes at baseline and the end of the intervention. ### 3.2. Quality Assessment The quality assessment results for all the cross-sectional and intervention studies are shown in and , respectively, and studies using the same dataset were merged into one column and the overall results are reported. Most of the included cross-sectional studies were characterized as having low risk of bias ( n = 8), with only one being categorized as having moderate risk of bias. Among these studies, the lack of a clear description of the measurement approach of the exercise conditions was a common problem ( n = 7). The quality of the intervention studies most often ranged between good and excellent ( n = 5), with only one being of fair quality. The most frequent missing point was the description and application of instructor or assessor blinding ( n = 6). ### 3.3. Changes in Brain Regions #### 3.3.1. Prefrontal Cortex Structural and functional differences in sub-regions in the prefrontal cortex (PFC) were reported by various studies. Most prominently, the dorsolateral PFC (dlPFC) and the medial PFC (mPFC) were affected by mind–body exercises in the included studies. Structural changes in the PFC were reported in four cross-sectional studies. 
Compared with novices, TCC experts [ ] had greater cortical thickness (CT) in the right middle frontal sulcus (part of the dlPFC), and yoga experts [ ] showed greater CT in a left prefrontal lobe cluster that included part of the middle and superior frontal gyri (MFG and SFG). Greater gray matter volume (GMV) was reported in the orbitofrontal cortex (OFC), right MFG, and the mPFC in yoga experts compared with controls, and the greater volumes were associated with fewer cognitive failures [ , ]. Changes in task-induced brain activation were reported in three cross-sectional studies and one RCT study. For the cross-sectional studies, less activation was reported in the dlPFC during the Sternberg working memory task [ ] and in the left SFG during the Affective Stroop task [ ] in yoga experts relative to controls. In addition, less dlPFC activation during the N-back task was reported in TCC experts compared with water aerobics practitioners [ ]. However, in the RCT study, there was a trend toward increased left SFG activation following the TCC intervention during a modified task-switching fMRI paradigm. Intriguingly, the TCC group with greater PFC activation in the switch condition reported better cognitive function [ ]. Two cross-sectional and two RCT studies reported changes in spontaneous brain neural activity. In the cross-sectional studies, compared with novices, the TCC experts had less regional homogeneity (ReHo) in the dlPFC and lower voxel-mirrored homotopic connectivity (VMHC) as well as greater fractional amplitude of low-frequency fluctuations (ALFF or fALFF) in the MFG [ , , ]. Moreover, two RCT studies reported increased ALFF in the dlPFC after a 6-week multimodal intervention including TCC and after a 12-week TCC intervention [ , ]. Tao et al. also reported increased fALFF in the mPFC after a 12-week BDJ intervention [ ]. #### 3.3.2. Hippocampus/Medial Temporal Lobe Two cross-sectional and three intervention studies reported structural changes in the hippocampus and medial temporal lobe (MTL) following mind–body exercise. Compared with novices, greater hippocampal GMV was observed in experienced yoga experts [ , ]. Both the 12-week TCC and BDJ interventions increased GMV in the hippocampus/MTL, and the increased GMV was positively associated with improved memory abilities [ ]. Increased hippocampal GMV (compared with controls) and GMD (compared with active sport and passive groups) were reported in two yoga intervention groups, respectively [ , ]. #### 3.3.3. Lateral Temporal Lobe In addition to the hippocampus/MTL, structural changes in the lateral temporal lobe, mainly involving the superior temporal gyrus (STG) and the middle temporal gyrus (MTG), were reported in three cross-sectional and two RCT studies. Here, TCC experts displayed greater CT in the left STG [ ] and yoga experts exhibited greater GMV in the left STG [ ]. In addition, an 8-week TCC intervention increased GMV in the left STG and the right MTG relative to controls and the aerobic exercise group, respectively [ ]. Moreover, significantly changed ReHo in the left STG and MTG was reported following a 6-week multimodal intervention including TCC, and both changes correlated with better cognitive function [ ]. #### 3.3.4. Insula One RCT and three cross-sectional studies reported structural changes of the insula, including greater CT and GMV. Increased CT in the insula was observed in TCC experts [ ] and yoga experts [ ] compared with controls.
Increased GMV in the insula was also reported in yoga experts [ ] and in the TCC and BDJ intervention groups [ ]. In addition, the reported structural changes in the insula uniquely correlated with pain tolerance in yoga experts [ ]. #### 3.3.5. Cingulate Cortex Only one RCT and two cross-sectional studies reported structural or functional changes in the cingulate cortex. Compared with controls, yoga experts [ ] exhibited both greater CT and GMV in the cingulate cortex, and TCC experts [ ] had less ReHo in the left anterior cingulate cortex (ACC). Meanwhile, an 8-week TCC intervention increased GMV in the left precuneus/posterior cingulate cortex (PCC) [ ]. #### 3.3.6. Other Regions Changes in other brain regions were reported in the included studies, including the occipital cortex [ , , ], precentral and postcentral gyri [ , , ], cerebellum [ , , ], putamen/caudate [ , ], and the amygdala [ ], though fewer results were reported. Several studies also found that changes in these brain regions were positively correlated with better cognitive function [ , , ]. ### 3.4. Changes in Brain Functional Connectivity and Network Two cross-sectional and three RCT studies reported differences or changes in brain functional connectivity. Compared to controls, higher resting-state functional connectivity (rsFC) between the mPFC and right angular gyrus was reported in yoga experts [ ], whereas weaker rsFC between the dlPFC and MFG was observed in TCC experts [ ]. For the RCT studies, significantly greater rsFC was observed between the left MFG and superior parietal lobule (SPL) after an 8-week TCC intervention [ ], as well as between the mPFC and MTL/hippocampus after a 6-week multimodal intervention including TCC [ ]. Another RCT, which used a seed-based rsFC analysis with different a priori regions of interest (ROIs: dlPFC, hippocampus, and mPFC/PCC), reported decreased rsFC between the dlPFC and the left SFG and ACC after a 12-week TCC intervention, as well as decreased rsFC between the dlPFC and the left putamen and insula, compared with control groups [ ]. At the same time, both the TCC and BDJ interventions increased the rsFC between the hippocampus and mPFC, with no significant difference between the two groups [ ]. Furthermore, the BDJ intervention decreased the rsFC between the mPFC and OFC/putamen. In contrast, the TCC intervention increased the rsFC between the PCC and right putamen/caudate, and the baseline rsFC between the mPFC and OFC was negatively correlated with memory function [ ]. Beyond these functional connectivity results, changes in several brain networks were reported. TCC experts had less fALFF in the bilateral frontoparietal network (the main part of the cognitive control network, CCN) and the default mode network (DMN) [ ]. Moreover, one cross-sectional study used a group independent component analysis (gICA) to examine intrinsic connectivity networks and found significant differences in rsFC within the DMN and the sensory-motor network (SMN) between the TCC and walking groups [ ]. Froeliger et al. also reported that yoga experts exhibited greater rsFC within the dorsal attention network (DAN) [ ]. ## 4. Discussion Herein, we systematically reviewed evidence of the effect of mind–body exercise on the structure and function of the brain to further understand the possible neural mechanisms underlying its health benefits.
We found that both long-term and relatively short-term mind–body exercises induced structural or functional changes, mainly in the PFC, hippocampus/MTL, lateral temporal lobe, insula, and the cingulate cortex and within the CCN and DMN. ### 4.1. Brain Regions #### 4.1.1. Prefrontal Cortex The PFC is one of the later-mature brain regions and plays a critical role in a series of advanced cognitive activities [ ]. There were inconsistent results regarding the structure of and task-related activation in the PFC between the included cross-sectional and intervention studies. First, four included cross-sectional studies reported structural changes in the PFC, including greater GMV and CT [ , , , ]. This implies that long-term mind–body exercise (lasting for at least 3 years of regular practice) can cause positive plasticity in the PFC structure, hence improving the cognitive performance of the individual. However, there were no changes in the PFC structure in the included intervention studies. We speculated that this could be because the relatively short mind–body intervention duration was insufficient for significant changes in the PFC region. Besides, three included cross-sectional studies reported less task-related activation in the PFC, whereas one RCT study reported more [ , , , ]. This might be attributed to two different theoretical frameworks that explain the neural plasticity induced by exercise, i.e., reduced neural activity could reflect improved neural processing efficiency, and increased neural activity could indicate more specialized and enhanced neural processing ability [ ]. Based on this, we speculate that the relatively short-term mind–body intervention (e.g., the 12-week TCC) could have enhanced the neural processing ability of the PFC, resulting in the greater task-related activation. However, with the continuous increase in regular exercise (e.g., more than 3 years), the neural processing efficiency of the PFC could become more and more efficient, resulting in a subsequent decreased task-related activation in the PFC. Taken together, RCT studies of further prolonged duration should be conducted to clarify the effect of intervention duration on the structure and functional activation of the PFC region. The findings of the spontaneous neural activity studies indicated that mind–body exercise could decrease the regional homogeneity and increase the functional specialization and intrinsic activity intensity of the PFC, improving the cognitive function [ , , , , ]. Nevertheless, the sub-regions in the PFC affected by different types of mind–body exercises might not be the same. Particularly, TCC affected the spontaneous neural activity of the dlPFC, whereas BDJ affected the mPFC. This difference could be attributed to the different characteristics related to TCC and BDJ. Compared with TCC, which is much more complex and requires moving the trunk and four limbs by spatial navigation toward oneself [ ], BDJ is much simpler and only involves eight simple fixed movements of arms with almost no movement of legs [ ]. #### 4.1.2. Hippocampus/Medial Temporal Lobe Both cross-sectional and intervention studies reported the effect of mind–body exercise on structural changes in the hippocampus/MTL [ , , , , ]. Notably, the positive changes in the hippocampus/MTL structure were related to the improvement in the cognitive function, particularly the memory function [ , ]. 
The hippocampus plays a crucial role in learning and memory processing [ ], and the MTL, consisting of the hippocampus and the adjacent parahippocampal and entorhinal cortices, is also essential for memory processing [ ]. These findings suggest that different types of mind–body exercise can improve an individual's memory by changing the structure of the hippocampus/MTL. The effects of mind–body exercises on the hippocampus are similar to the findings for physical activity [ ] and mindfulness meditation [ ], suggesting that physical activity alone or meditation alone, as well as the combination of both, improves the brain structures related to memory. In addition, the intervention study [ ] conducted by Garner et al. indicated that the yoga group had significantly increased GMD in the hippocampus compared with a sport group (stretching and strengthening training), implying that mind–body exercises are more effective in improving hippocampal structure than physical exercises alone. However, the study had methodological shortcomings: participants were not randomly allocated, and the baseline hippocampal GMD in the yoga group was significantly lower than in the sport group. Therefore, the efficacy of mind–body exercise on hippocampal structure should be further explored using methodologically sound RCT studies. #### 4.1.3. Lateral Temporal Lobe Several included studies reported structural and functional changes in the STG and MTG in the mind–body exercise groups compared to controls or individuals performing aerobic exercises [ , , , ]. The STG is thought to be sensitive to emotional information and plays an important role in goal-directed behavior [ ]. The MTG has extensive connectivity with frontoparietal regions and is associated with semantic and memory processing [ ]. A recent meta-analysis of RCT studies showed that physical activity induces significant structural increases in the lateral temporal lobe [ ]. Our results suggest that mind–body exercises induce plasticity in the lateral temporal lobe as effectively as, or in some cases more effectively than, other forms of exercise such as aerobics, leading to greater functional improvements. #### 4.1.4. Insula and Cingulate Cortex Mind–body exercises mainly induced structural changes in the insula and cingulate cortex [ , , , , ]. The insula is believed to be related to interoceptive awareness [ ] and the integration of sensory information to produce emotional experiences [ ], as well as pain processing [ ]. Meanwhile, the extensive connection between the cingulate cortex (particularly the ACC) and the PFC and insula is linked to emotion regulation and attention control [ , ]. Young et al. showed that changes in ACC activity reflect the mental processing of non-judgmental acceptance, one of the core components of meditation training [ ]. The ACC has also been implicated in pain processing [ ]. Villemure et al. indicated that insula GMV is associated with pain tolerance, with yogis using mental strategies based on relaxation, focusing on the pain, and non-judgmental acceptance to tolerate pain [ ]. Combined with the authors' interpretations, we believe that these changes could be attributed to the meditative components included in mind–body exercise, such as present-moment awareness and non-judgmental acceptance. Moreover, the repeated and prolonged use of these strategies causes positive plasticity in related brain regions such as the insula and ACC.
This further enhances the level of meditation of the individual and corresponding tolerance to noxious stimuli such as pain. Even so, relatively few included studies reported changes in these two regions; hence, more research is required to elucidate this issue. ### 4.2. Brain Networks Mind–body exercise not only affects a specific brain region, but also the interconnection among different brain regions and various brain networks. This allows more meticulous access to the underlying neural mechanisms. The rsFC results of the included studies mainly reported changes between the PFC (two major sub-regions: dlPFC and mPFC) and other related regions. There were four a priori ROIs in the included rsFC analysis studies: the dlPFC, mPFC, PCC/precuneus, and the hippocampus/MTL. The dlPFC is a vital region of the CCN which plays an essential role in cognitive control processes [ ]. The mPFC, PCC/precuneus, and MTL/hippocampus constitute the main nodes of the DMN which are functionally relevant to internal mental explorations and memory function [ , ]. #### 4.2.1. Cognitive Control Network We found weaker rsFC within the CCN of older TCC experts [ ], as well as older TCC and BDJ intervention groups [ ]. Similarly, a weaker fALFF in the CCN was reported in older TCC experts [ ], and the decreased rsFC and fALFF within the CCN were correlated with better cognitive control and emotion regulation [ , , ]. In contrast, there was an increased FC within the CCN after TCC intervention in young college students [ ]. The conflicting results may be attributed to the different ages of the participants, to some extent. In the first three studies, all of the participants were elderly. Research shows that old people frequently over-recruit frontal neural resources to compensate for the disruption of the CCN and to overcome cognitive control deficits [ ]. However, this hyperactivation usually represents a dysfunctional condition and cannot prevent a decline in cognitive function [ ]. The decreased rsFC or fALFF in the CCN of the old mind–body exercise experts might suggest an increased efficiency of the cognitive control system and eliminate the need for compensatory hyperactivation of the network affected by mind–body exercise. However, Cui et al. assessed young college students whose executive function is still undergoing development, and the increased FC within the CCN after TCC exercise reflects the specialized and enhanced cognitive control. Furthermore, some studies showed that the beneficial effect of physical exercise on the CCN and cognitive function has been widely reported [ ]. Similarly, mindfulness meditation enhances self-regulation through three components, i.e., attention control, emotion regulation, and self-awareness [ ]. It is reasonable to imply that mind–body exercise, with both the physical activity and meditation components, induces more effective functional changes in the CCN to improve self-regulation in both young and older adults. #### 4.2.2. Default Mode Network Consistently increased rsFC within the DMN was reported in yoga experts [ ], a 6-week multimodal intervention including TCC [ ], and 12-week TCC and BDJ interventions [ ]. Besides, the increased rsFC within the DMN was associated with better general cognition and memory function [ , ]. These findings suggest that mind–body exercises, including TCC and BDJ, enhance cognitive function through the same neural mechanisms by increasing intrinsic connectivity between the mPFC and hippocampus/MTL within the DMN. 
On the contrary, the study of Liu et al. [ ] selected different ROIs (mPFC and PCC) and different methods for multiple comparison correction based on the same dataset, and reported different results, including changes in the direction and related regions. This implies that both TCC and BDJ could improve memory function through other different neural circuits, e.g., decreased rsFC between the mPFC and OFC. With the different results taken into account, we speculate that the DMN is a complex brain network encompassing different vital brain regions (e.g., mPFC and PCC), and mind–body exercise could modulate various features of the DMN, through its multifaceted nature that combines movement, breathing, and attention. Besides, the characteristics among different types of mind–body exercise, such as TCC and BDJ, are not exactly the same, which could affect the DMN through different neural mechanisms. Furthermore, as mentioned above, the two articles [ , ] used different methods for multiple comparison correction based on the same dataset. The former applied the family-wise error (FWE) correction method, whereas the latter utilized the false discovery rate (FDR), which might have different influences on the results. ### 4.3. Limitations Our study faced several limitations. Firstly, because of the wide variety of outcome measures (structure, neural activity, and functional brain connectivity), different study designs, and some studies reporting ROI analyses, there was no quantitative meta-analysis such as activation likelihood estimation (ALE) applied. Secondly, given the relatively few related studies, our systematic review included cross-sectional studies. Nevertheless, due to the inherent defects, the cross-sectional studies could not certainly attribute the group differences in brain structure and function to mind–body exercise. Although the age, gender, and years of education between groups were matched in the included studies, other confounders could still influence the results. For instance, people with a specific brain activation pattern may have the tendency to engage in mind–body exercises. Besides, there is no long-term follow-up study yet, which would be very meaningful to clarify the long-term health effects affected by mind–body exercise. Thirdly, there was a selection and reporting bias due to the focus on a priori defined ROIs rather than whole-brain analyses, particularly in the rsFC analysis studies. Therefore, it is impossible to fully uncover the impact of mind–body exercises on brain plasticity. Fourthly, we still cannot ascertain the role of the physical and mental components of mind–body exercise on brain structural and functional changes due to the lack of direct comparisons between these components. Finally, because of the few relevant studies, we did not include studies of clinical populations with cognitive issues, mood disorders, etc. As an important non-pharmacological therapy, mind–body exercises have shown promising effects on older adults with cognitive impairments [ ] and persons with depression [ ]. Therefore, to clarify the influence of mind–body exercise on the brain plasticity of clinical populations is an important direction for future studies. ## 5. Conclusions In the present study, 15 studies which employed MRI to investigate the effects of mind–body exercise on brain plasticity were included. 
Our synthesis of results revealed that mind–body exercises induced changes in the structure, neural activity, or functional connectivity in various regions of the brain, primarily the PFC, hippocampus/MTL, lateral temporal lobe, insula, and the cingulate cortex, as well as brain networks, including the CCN and the DMN. These changes were associated with health benefits for healthy adults. However, due to the heterogeneity in the study designs, varied age of participants, exercise types, outcome measures, and a priori ROIs, there were some inconsistent results among the included studies. Therefore, findings of this study should be interpreted with caution. RCTs with rigorous designs and similar measured outcomes, as well as whole-brain analyses, should be conducted to unravel the precise underlying neural mechanisms of mind–body exercise.
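As general background to the connectivity and spontaneous-activity measures used throughout this review (rsFC, ALFF, fALFF), the following minimal sketch shows how such metrics are commonly derived from ROI-averaged BOLD time series. The variable names, the assumed TR, and the 0.01–0.08 Hz band are illustrative conventions and do not reproduce any included study's pipeline.

```python
import numpy as np
from scipy.signal import welch

# Illustrative ROI-averaged BOLD time series (200 volumes; TR = 2 s assumed)
rng = np.random.default_rng(0)
tr = 2.0
seed_ts = rng.standard_normal(200)     # e.g., a dlPFC seed region
target_ts = rng.standard_normal(200)   # e.g., a hippocampal target region

# Seed-based resting-state functional connectivity: Pearson correlation between
# the seed and target time series, often Fisher z-transformed before group stats.
rsfc_r = np.corrcoef(seed_ts, target_ts)[0, 1]
rsfc_z = np.arctanh(rsfc_r)

# ALFF-style measure: mean spectral amplitude in the low-frequency band;
# fALFF: low-frequency amplitude relative to the amplitude over the full spectrum.
freqs, psd = welch(seed_ts, fs=1.0 / tr, nperseg=128)
amplitude = np.sqrt(psd)
low_band = (freqs >= 0.01) & (freqs <= 0.08)
alff = amplitude[low_band].mean()
falff = amplitude[low_band].sum() / amplitude[freqs > 0].sum()

print(f"rsFC r = {rsfc_r:.2f} (z = {rsfc_z:.2f}), ALFF = {alff:.3f}, fALFF = {falff:.3f}")
```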
Several recent studies confirmed that Attention Deficit Hyperactivity Disorder (ADHD) has a negative influence on peer relationships and quality of life in children. The aim of the current study is to investigate the association between prosocial behaviour, peer relationships and quality of life in treatment naïve ADHD samples. The samples included 79 children with ADHD (64 boys and 15 girls, mean age = 10.24 years, SD = 2.51) and 54 healthy control children (31 boys and 23 girls, mean age = 9.66 years, SD = 1.73). Measurements included the “Mini International Neuropsychiatric Interview Kid”, the “Strengths and Difficulties Questionnaire”, and the “Inventar zur Erfassung der Lebensqualität bei Kindern und Jugendlichen”. The ADHD group showed significantly lower levels of prosocial behaviour and more problems with peer relationships than the control group. Prosocial behaviour had a weak positive correlation with the parents' rating of the child's quality of life, both in the ADHD group and in the control group. The parents' rating of quality of life and peer relationship problems also showed a significant, moderate negative association in both groups. The child's rating of quality of life showed a significant, weak negative relationship with peer relationships in the ADHD group, but no significant relationship was found in the control group. Children with ADHD and comorbid externalizing disorders showed more problems in peer relationships than children with ADHD without comorbid externalizing disorders. Based on these results, we conclude that therapy for ADHD focused on the improvement of prosocial behaviour and peer relationships as well as comorbid externalizing disorders could have a favourable effect on the quality of life of these children. ## 1. Introduction Prosocial behaviour does not have a generally accepted, unified definition, but researchers tend to agree on using it as an umbrella term for several behaviours, including helping, supportive, sharing, cooperative, and polite behaviour, shown without the expectation of a possible reward [ , ]. These behaviours first appear at around 2 years of age; emotional attunement and empathy play an important role in their development [ ]. Previous studies found that with age and the development of selfhood, prosocial behaviour also develops through the experience of social interactions [ ]. Prosocial behaviour contributes to harmonious relationships in the family, positive social relationships, and friendships [ , , ]. Primary school children who score high on measures of prosocial behaviour perceive acceptance and positive social relationships from their peers [ ]. Attention-Deficit/Hyperactivity Disorder (ADHD) is one of the most common neurodevelopmental disorders, affecting 4–6% of the primary school population [ , ]. Its occurrence is more common among boys: the gender ratio is approximately 3:1 [ ]. The core symptoms of ADHD are poor attentional performance, impulsivity and hyperactivity [ , ]. According to the latest, fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), at least six (five above the age of 17) of the nine symptoms of attention deficit and/or at least six (five above the age of 17) of the nine symptoms of hyperactivity must be present to fulfil the diagnosis of ADHD. Additional criteria include the onset of symptoms before the age of 12 years, persistence for at least 6 months, and impairment of function in at least two settings [ ].
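To make the symptom-count rule just summarized concrete, here is a schematic sketch of the counting logic only (not a diagnostic instrument). The function name, field names, and the handling of the age cut-off follow the wording above and are our own illustrative assumptions.

```python
def meets_adhd_symptom_criteria(age, inattention_symptoms, hyperactivity_symptoms,
                                onset_before_12, duration_months, impaired_settings):
    """Schematic check of the DSM-5 ADHD criteria as summarized in the text.

    inattention_symptoms / hyperactivity_symptoms: number of symptoms present (0-9 each).
    impaired_settings: number of settings with functional impairment.
    Illustration of the counting rule only; not a clinical tool.
    """
    required = 5 if age > 17 else 6  # fewer symptoms required above age 17, per the wording above
    symptom_criterion = (inattention_symptoms >= required
                         or hyperactivity_symptoms >= required)
    return (symptom_criterion
            and onset_before_12
            and duration_months >= 6
            and impaired_settings >= 2)


# Example: a 10-year-old with 7 inattention symptoms, onset before age 12,
# symptoms for 12 months, and impairment at school and at home
print(meets_adhd_symptom_criteria(10, 7, 3, True, 12, 2))  # True
```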
Based on previous studies, two-thirds of children diagnosed with ADHD are diagnosed with at least one comorbid psychiatric disorder [ , , ]. Conduct disorder (CD) is the most common comorbid disorder, appearing in 20% of cases of ADHD [ , ]; moreover, oppositional defiant disorder (ODD) was also found to be one of the most common comorbid disorders [ , ]. According to the DSM-5, conduct disorder (CD) is a recurrent and persistent pattern of behavior in which a child or adolescent violates the fundamental rights of others or major age-appropriate social norms and rules, while oppositional defiant disorder (ODD) manifests itself as rebellion against authority [ ]. ADHD is associated with cognitive, social and emotional impairments [ , ] and negatively affects the child's relationship with family members [ , , ]. Paap et al. [ ] found a relationship between peer problems and prosocial behaviour in typically developing 7–9-year-old children, but ADHD and ODD act as moderator variables, weakening this relationship. Tengsijaritkul and colleagues [ ] examined functional impairment in treated children with ADHD and found that they had lower prosocial scores; moreover, comorbid medical disorders were associated with higher problem scores. Furthermore, in a clinical study, peer problems and prosocial behaviour of methylphenidate-treated children with ADHD and children without ADHD were compared [ ]. The results indicated that children with ADHD show more problems with peers than children without ADHD and that teachers rated them as less prosocial [ ]. Comorbidity, specifically externalizing comorbid disorders, contributes to increased social difficulties among children with ADHD [ ]. Measuring quality of life could be important for the investigation of functional impairment and therapeutic effectiveness in childhood psychiatric disorders, including ADHD [ , , , ]. Functional impairment is a criterion for all psychiatric disorders according to the classification systems. Furthermore, in the case of ADHD, functional impairment needs to be present in at least two areas, i.e., school/work and social life [ , ]. The concept of quality of life is a multidimensional measure that is broader than functional impairment, as it encompasses overall health, impairments, and effectiveness in several areas of daily life, including academic settings, leisure activities, and social life with family and friends [ ]. All these areas can be affected by the presence of a mental disorder such as ADHD; thus, the assessment of quality of life could add valuable information about the patient's current status with regard to the focus of treatment as well as for measuring its efficacy. In light of the above-mentioned findings, several researchers have investigated the effect of ADHD on quality of life over the last decade [ , ]. Previous studies confirmed that ADHD has a negative influence on the child's quality of life; these children have a lower level of quality of life than their healthy peers [ , ]. Effective multimodal treatment exists for the management of ADHD, including parental education, cognitive behavioural therapy and medication [ , , , ]. Clinical studies have indicated that pharmacotherapy/multimodal treatment has a positive effect on quality of life and on the remission of ADHD symptoms [ , ]. Peer relationships are important for the social development of children [ ].
Children diagnosed with ADHD often have difficulties developing peer relationships due to their ADHD symptoms, such as impulsivity and poor attention [ ]. Therefore, it is important to examine possible factors connected to their social functioning, such as their prosocial behaviour. Furthermore, children with ADHD have significantly lower quality of life compared to healthy children in many areas, including peer relations [ ]. Previous studies have indicated that quality of life can be an important tool for measuring the impact of a mental disorder and for assessing the effectiveness of a treatment [ ]. Based on these findings, we examined whether a better understanding of peer relationships and prosocial behaviour can help improve the quality of life of children with ADHD. Additionally, the assessment of prosocial skills and prosocial behaviour in treatment naïve children with ADHD can serve as a baseline measurement for monitoring the efficacy of therapies. Although several previous studies have examined functional impairment in children with ADHD, including such aspects as social functioning, peer functioning and prosocial behaviour, the effect of prior treatment was not filtered out, which is important when evaluating functional impairment [ , , ]. To our knowledge, no research has been conducted that explored prosocial behaviour and peer relationships among treatment naïve children with ADHD, nor are we aware of any research that explored the relationship between prosocial behaviour, peer relationships and quality of life. The aim of the current study was to investigate the levels of prosocial behaviour and peer relationship problems in carefully selected samples: (1) a treatment-naïve ADHD group of children who were diagnosed both by a child psychiatrist and by a structured diagnostic interview, and (2) a control group of children with no previously recognized psychiatric disorders or any psychiatric disorders currently diagnosed by a structured diagnostic interview. Moreover, we wanted to investigate the relationship between prosocial behaviour, peer relationships and quality of life (both the parents' and the child's ratings) in both the control sample and the treatment naïve ADHD sample. Finally, our goal was to explore the differences, in terms of prosocial behaviour and peer relationships, between those in the treatment naïve ADHD sample who had comorbid externalising disorders (i.e., CD and/or ODD) and those who did not have comorbid externalising disorders. We formulated the following hypotheses: (H1) treatment naïve children with ADHD show a lower level of prosocial behaviour and a higher level of peer relationship problems than healthy children; (H2) a higher level of prosocial behaviour is associated with a higher level of quality of life in treatment naïve children with ADHD and healthy children, based both on parental proxy reports and children's self-reports; (H3) a higher level of peer relationship problems is associated with a lower level of quality of life in treatment naïve children with ADHD and healthy children, based both on parental proxy reports and children's self-reports; (H4) a lower level of prosocial behaviour is associated with a higher level of peer relationship problems in treatment naïve children with ADHD and healthy children; (H5) treatment naïve children diagnosed with ADHD and comorbid externalizing disorders show a lower level of prosocial behaviour and a higher level of peer relationship problems than treatment naïve children with ADHD without comorbid externalizing disorders.
## 2. Materials and Methods ### 2.1. Recruitment and Research Participants We admitted into our study a treatment-naïve ADHD group and a healthy control group of children aged 6 to 18 years. The ADHD group of children was recruited from the Vadaskert Child and Adolescent Psychiatric Hospital and Outpatient Clinic, Budapest, Hungary. We used the following inclusion criteria for the treatment-naïve ADHD group: a diagnosis of ADHD according to a structured diagnostic interview (see below) and no previous psychological and/or psychiatric treatment (including psychotherapy and pharmacotherapy) in the medical history. We enrolled these children into our study after their psychiatrists diagnosed them with ADHD in the hospital/outpatient clinic, but before their treatment started. In the clinical group, a child was not included in the study if the child's psychiatrist indicated that intellectual disability had been suspected or confirmed previously or during the current hospital examination. From the clinical group, two different subgroups were created: one with comorbid externalizing problems and one without comorbid externalizing problems. In the ADHD group with externalizing problems, CD and/or ODD had to be present in addition to the ADHD diagnosis. In the group without externalizing problems, neither of these two diagnoses was present. To create the control group, twelve schools were randomly selected from a list of public primary schools of Budapest. Furthermore, two schools from the countryside, selected by the researchers, were included. Only primary schools educating children with average intelligence were included. Special schools that educate children with intellectual disability were excluded. Children with any ongoing or previous psychological or psychiatric treatment were also excluded. The absence of any psychiatric disorders was confirmed by a structured psychiatric interview (see below). ### 2.2. Characteristics of Sample The treatment naïve ADHD group consisted of 79 children: 64 (81%) boys and 15 (19%) girls. The mean age of children with ADHD was 10.24 years (SD = 2.51, range: 6–15). Among the ADHD group, 49 (62.8%) children were diagnosed with an externalizing comorbid disorder (CD and/or ODD), while 29 (37.2%) children were not diagnosed with any comorbid externalizing disorders. The gender distribution in the ADHD group without externalizing disorders was 25 (86.2%) boys and 4 (13.8%) girls, and in the ADHD group with externalizing disorders it was 38 (77.6%) boys and 11 (22.4%) girls. Gender and group (ADHD with or without externalizing disorders) showed no significant relationship (χ²(1) = 0.879, p = 0.349). The control group consisted of 54 children: 31 (57.4%) boys and 23 (42.6%) girls. The mean age of children in the control group was 9.66 years (SD = 1.73, range: 6–14). There was no significant difference in age between the ADHD and the control group (U = 2267, p = 0.344). Gender and group (ADHD vs. control) showed a significant relationship (χ²(1) = 8.758, p = 0.003). The percentage of boys in the clinical group was 81% (64 boys); in the control group, it was 57.4% (31 boys). There were no gender or age differences in any of the variables examined (see and ). ### 2.3. Procedure This study was approved by the Ethical Committee of the Medical Research Council, Hungary (ETT-TUKEB), project identification code: 26182/2011-EKU. The parents of each child and adolescent provided written informed consent after being informed of the nature of the study.
Children/adolescents participated in a diagnostic interview recorded by a psychologist and then completed the questionnaires related to the study. No compensation was provided to the participants. ### 2.4. Measures #### 2.4.1. Psychiatric Symptoms and Diagnoses To measure psychopathology and diagnoses in both the clinical and healthy control groups, the modified version of the Hungarian Mini International Neuropsychiatric Interview for Children and Adolescents (MINI Kid) 2.0 [ , , , ] was applied. The MINI Kid is a structured psychiatric interview which assesses 25 DSM-IV child and adolescent psychiatric disorders. The modified version of the MINI Kid evaluates not only psychiatric disorders but all psychiatric symptoms, enabling subthreshold disorders to be identified. The interview is suitable for children aged between 6 and 18 years; it was administered to children under 13 years of age in the presence of their parents, while those aged 13 years and above participated in the interview on their own. #### 2.4.2. Prosocial Behaviour, Peer Problems We used the Hungarian version of the Strengths and Difficulties Questionnaire (SDQ) [ , ], which serves to screen for childhood behavioural problems and mental disorders. The questionnaire consists of 25 items; each item is scored from 0 to 2 (0 = not true, 1 = somewhat true, 2 = certainly true), and each subscale score ranges from 0 to 10. The items of the questionnaire are classified into 5 subscales: emotional symptoms, behavioural problems, hyperactivity, peer relationship problems, and prosocial behaviour. In the present study, we focused on the answers given to the prosocial behaviour and peer relationship problems subscales. #### 2.4.3. Measuring Quality of Life The Quality of Life Questionnaire [ ], or “Inventar zur Erfassung der Lebensqualität bei Kindern und Jugendlichen” (ILK) [ ], is a subjective measure of quality of life; the Hungarian version was applied in this study. The original questionnaire consists of 15 items, which pertain to school, family relationships, time spent with peers, time spent alone, and physical and mental health [ ]. The measurement is suitable for children from 7 to 18 years of age. It has self-rated versions for children and adolescents as well as a parent-rated proxy version. Items are rated on a 5-point Likert scale. Descriptive statistics and internal consistency of the measurements can be found in . ### 2.5. Statistical Analysis After data recording, a 10% inspection followed by data cleaning was performed to create a valid database. Invalid cases, i.e., cases in which a response fell outside the minimum or maximum score of the questionnaire, were excluded. Relative frequency distributions and descriptive statistics (means and standard deviations) were calculated to describe the sample characteristics and the measurements used; Cronbach's alpha was used to assess the internal consistency of the measurements. The Shapiro–Wilk test was applied to test the normality of peer relationship problems, prosocial behaviour, and quality of life (ILK). As none of the variables were normally distributed, non-parametric tests were applied. In order to examine the differences in prosocial behaviour and peer relationship problems between the clinical group and the control group, and between the ADHD group with externalizing problems and the ADHD group without externalizing problems, Mann–Whitney U tests were performed.
Spearman rank correlation coefficients were calculated to evaluate monotonic associations between prosocial behaviour, peer relationship problems, and self- and parent-rated quality of life. Post hoc power analyses were calculated using G*Power software to determine the power of the significant effects in the analysed results [ ]. Statistical procedures were performed using IBM SPSS 25 statistical software [ ]. ## 3. Results ### 3.1. H1. Prosocial Behaviour and Peer Relationship Problems in the Treatment Naïve ADHD and Control Groups The treatment naïve ADHD group showed a significantly lower value in prosocial behaviour than the control group (power = 0.77). The treatment naïve ADHD group's value in peer relationship problems was significantly higher than that of the control group (power = 1.00) (see ). ### 3.2. H2–H4. Prosocial Behaviour's Association with Peer Relationships and Quality of Life presents the relationship between prosocial behaviour, peer relationship problems and quality of life. Prosocial behaviour had a weak positive relationship with the parents' evaluation of the child's quality of life both in the ADHD and in the control group. The parents' rating of quality of life and peer relationship problems also showed a significant, moderate negative association in both the ADHD and the control group. Between prosocial behaviour and peer relationship problems, a weak negative association was detected in the ADHD group and a moderate negative association in the control group. Furthermore, in the ADHD group, the child's evaluation of quality of life showed a significant, weak negative relationship with peer relationship problems, and there was no significant relationship with prosocial behaviour. In the control group, the child's view of quality of life did not show a significant relationship with the other items. ### 3.3. H5. Comorbid Externalizing Problems, Prosocial Behaviour and Peer Relationship Problems in the Treatment Naïve ADHD Group The two treatment naïve ADHD groups, i.e., (1) children diagnosed with externalizing comorbid disorders (CD and/or ODD) and (2) children not diagnosed with externalizing comorbid disorders, showed no significant difference in prosocial behaviour (power = 0.37). The two groups showed a significant difference in peer relationship problems: the ADHD group with externalizing comorbid disorders showed higher values in peer relationship problems than the ADHD group without externalizing disorders (power = 0.81) (see ). ## 4. Discussion To our knowledge, the current study was the first to investigate the association between prosocial behaviour, peer relationship problems and quality of life among treatment naïve children diagnosed with ADHD. Furthermore, the present study compared a carefully selected, homogeneous treatment-naïve ADHD group with a control group of children with no previously recognized psychiatric disorders or any psychiatric disorders currently diagnosed by a structured diagnostic interview. Based on our results, we can establish that treatment-naïve children with ADHD show lower levels of prosocial behaviour than healthy children; moreover, they have more problems with their peers. These results are consistent with the findings of Paap et al. [ ], which showed that high levels of ADHD symptoms and behavioural problems (as perceived by teachers and parents) were associated with low levels of prosocial behaviour and high levels of peer relationship problems.
Because of their attentional difficulties, children diagnosed with ADHD are handicapped in those social skills which are learned through observation. In addition, hyperactive and impulsive behaviours contribute to generally unrestrained and overbearing social behaviour that makes children with ADHD highly aversive to peers [ ]. Hay et al. [ ] also suggest that there may be specific neurobiological deficits making it difficult for children with ADHD symptoms to regulate their attention and activity sufficiently to deploy prosocial behaviour. The present study explored the association between prosocial behaviour, peer relationship problems and parents’ and child’s evaluation of life quality in a treatment naïve ADHD group and a control group. A notable finding of our research was that a weak positive relationship exists between prosocial behaviour and the parental evaluation of quality of life both in the treatment naïve ADHD group and the control group, while the self-reported quality of life did not reveal an association with prosocial behaviour in either group. This finding suggests that higher prosocial values have a positive correlation with the child’s quality of life, as evaluated by their parents, in both groups, while, in contrast, they do not have this positive correlation with quality of life evaluated by the child itself. Previous studies have also highlighted that there may be a difference in perception of quality of life between parents and their children [ , ]. When examining the quality of life of children, it has become increasingly clear that in addition to the child’s own subjective judgment, an external observer is also important. The use of proxy reports is recommended in order to obtain a more extensive and reliable picture of the situation of children and adolescents [ ]. According to Mattejat et al. [ ], the evaluation by a parent or caregiver is both subjective and objective because, although they appear as external observers, they are themselves affected by the child’s condition. According to Thaulow and Jozefiak [ ], the parent–child difference in quality of life perceptions is due to children with ADHD being more likely to focus on the present aspects, while their parents are more likely to focus on the child’s future, which is concerned with school and social problems. Presumably, parents may have a greater insight into the child’s difficulties, so it is important when assessing the quality of life of children with ADHD to take both the parent’s and the child’s perspective into account. For instance, the perceived quality of life of a child that is prosocial and shares its toys may not always improve, as peers may not always reciprocate this kindness. In exploring the association between peer problems and quality of life, we found that the evaluation by the parent has a negative but moderate relationship with peer problems, in both the treatment naïve ADHD group and the control group. In addition, when parents report fewer peer relationship problems, they also rate their child’s quality of life more highly. The association between these variables is also detectable in the self-reports of treatment-naïve ADHD children, so we can see that as children with ADHD perceive more problems in their peer relationships, they rate their quality of life lower; however, this association is not detectable in the self-reports of the control group. 
Previous studies have confirmed that children with ADHD perceive more rejection from their peers than healthy children [ ], which is likely to negatively affect their evaluation of their quality of life. Furthermore, there was a weak negative relationship between prosocial behaviour and peer relationship problems in the treatment naïve ADHD group, and a moderate negative relationship in the control group. This result reveals the importance of prosocial behaviour for peer relationships. The findings highlight that, in treatment naïve children with ADHD, self-rated quality of life is associated with peer relationship problems but not with prosocial behaviour, whereas both associations are present in the parents' ratings. It is important to note that effective therapy for ADHD should not only relieve ADHD symptoms but also improve the child's quality of life [ ]. As effective treatments geared at other aspects of dysfunction associated with ADHD do not eradicate ADHD children's peer problems, peer problems need to be targeted directly [ ]. Lowering the number of peer relationship problems could have a favourable effect on quality of life. Preventive and interventional programmes focusing on the easement of peer relationship problems could help reduce experiences of exclusion and enhance quality of life among children with ADHD. Since most children with ADHD can be diagnosed with a comorbid psychiatric disorder [ , , ], the present study also explored whether externalizing comorbid disorders affect prosocial behaviour and peer relationship problems. As stated earlier, the results of the present study show that children with ADHD have lower levels of prosocial behaviour than healthy controls; however, this difference was not detectable when comparing children with ADHD with and without externalizing comorbid diagnoses. Hay et al. [ ] found that aggressive behavioural symptoms were not associated with prosocial behaviour when they took ADHD symptoms into account. We must mention that the statistical power was low in testing this hypothesis due to a relatively small sample size. While there was no detectable difference in prosocial behaviour between the two groups, children with comorbid externalizing problems were characterised by more peer relationship problems. CD or ODD can contribute to the child's difficulties with peer relationships. Considering that there is an association between peer relationship problems and quality of life, therapy that focuses on comorbid externalizing disorders could contribute to reducing peer relationship problems and thus enhance the quality of life of children with ADHD. The findings of the current study must be interpreted in light of certain limitations. First, our study was cross-sectional, which does not allow for any causal conclusions. Based on our results, we can, however, state that we found significant differences between healthy children and treatment naïve children with ADHD in the studied variables, which highlights the importance of learning more about prosociality and peer relationships through further research. Second, there was a difference in gender distribution between the clinical and control groups. This difference, however, reflects the general difference in gender ratio between children with ADHD, especially in clinical practice [ ], and healthy children [ ]. Additionally, it is important to note that the results of the current study were not affected by gender.
Third, to reduce the study load for the included children, we did not use any structured intelligence test to assess their mental ability. Instead, children were encouraged to indicate whether they could understand the questions at the start of the study. Furthermore, each child was accompanied by a study mentor (i.e., parent or researcher) during the completion of the self-rating questionnaire, which made it possible for children to ask for information if needed. Fourth, it is a limitation of the study that the MINI Kid diagnostic interview applied in the study was still based on DSM-IV rather than DSM-5 criteria. The reason for this was that the DSM-5-based version of the MINI Kid was not yet available at the start of the present study. However, we believe that the differences between the two versions are not essential in the case of the diagnosis of ADHD in children [ ]. While the number of ADHD symptoms required above age 17 has changed from six to five and the required age of onset of symptoms and impairments has changed from 7 to 12 years, according to recent studies these changes have not affected childhood prevalence [ ]. Fifth, the comparison of prosocial behaviour between children with ADHD with and without comorbid externalizing disorders would need to be tested on a larger sample since, in this case, the power value was 0.38. For the other hypotheses, the statistical power was adequate (H1/1: power = 0.77; H1/2: power = 1.00; H5/2: power = 0.81). The results of the present study indicate that peer relationship problems, prosocial behaviour, and their relationship to the quality of life of children with ADHD are important areas for future research, preferably in a longitudinal design. Understanding which factors play a role in prosociality and peer relationships in children diagnosed with ADHD could provide valuable insights into the development of ADHD symptoms as well as their quality of life. ## 5. Conclusions In summary, our research points out that treatment-naïve children diagnosed with ADHD have a lower level of prosocial behaviour and more peer relationship problems than children not diagnosed with ADHD. Focusing on prosocial behaviour in ADHD therapy could have a favourable effect both on peer relationship problems and on quality of life, since prosocial behaviour has a positive relationship with quality of life, while peer relationship problems have a negative relationship with quality of life. Moreover, based on our study, therapy focusing on comorbid externalizing diagnoses could contribute to reducing peer relationship problems and enhance quality of life. Social support from parents, teachers, as well as peers is important for all patients with psychiatric disorders, including ADHD. Psychoeducation can be an important factor for parents and teachers to improve acceptance of and support towards children with ADHD. However, peers cannot be expected to become more accepting of and tolerant towards children with ADHD as a result of psychoeducation, because they themselves are children as well. Therefore, children diagnosed with ADHD may need extra support to improve their relationships with their peers. Reducing social exclusion and improving peer relationships, in addition to effective medication and non-medication therapies, can help to protect children diagnosed with ADHD from future loneliness or deviations.
Evidence indicates an association between executive functioning and increased weight, with different patterns ascribed to individual differences (sex, age, lifestyles). This study reports on the relationship between high-level executive functions and body weight. Sixty-five young adults participated in the study: 29 participants (14 males, 15 females) in the normal weight range; 36 participants (18 males, 18 females) in the overweight range. The Iowa Gambling Task (IGT) and the Tower of London task were administered to assess decision making and planning. Planning did not differ between individuals in the normal-weight and overweight groups, and no difference emerged between females and males. However, normal-weight and overweight males and females showed different patterns in decision making. On the long-term consequences index of the IGT, females reported lower scores than males. Males in the overweight range had a lower long-term consequences index on the IGT than normal-weight males, while this pattern did not emerge in females. These findings suggest that decision-making responses may differ in the overweight relative to the healthy weight condition, with a different expression in males and females. This pattern should be considered in weight loss and prevention strategies, possibly adopting different approaches in males and females. ## 1. Introduction Excessive weight is a risk factor for many chronic conditions (e.g., hypertension [ ], diabetes [ ], cardiovascular disorders [ ]) and is related to psychological disorders (including anxiety and depression [ , ]), cognitive dysfunctions [ ], and a general impairment of well-being and quality of life [ ]. Over the last few decades, the prevalence of overweight conditions and obesity has substantially increased worldwide [ , ]. Maladaptive eating behavior is one of the main causes of body weight increase, and it appears to be influenced by many psychological (e.g., mood, impulsivity [ ]; emotion regulation [ ]; attentional bias [ ]) and environmental (e.g., food availability, social pressure [ ]) factors. Moreover, evidence has highlighted an association between impairments in executive functions and weight increase across the life span (for a review: [ , ]), especially in individuals affected by obesity (i.e., body mass index (BMI) above 30 kg/m²). Cross-sectional studies have shown that poorer performance in executive functioning tasks is more likely to be associated with obesity than with normal-weight status [ , ]. Longitudinal studies have observed an association between cognitive impairment and weight gain and between poorer performance on executive tasks and weight loss failure [ , , ]. Most studies on the executive problems associated with obesity have focused on less complex executive functions such as working memory, inhibition, and set-shifting [ ]. However, some studies have investigated the relationship between obesity and more complex executive functions, such as decision making and planning, showing impairment in these functions in association with obesity (BMI > 30 kg/m²; e.g., [ , , ]). Moreover, there are also conflicting findings in the literature examining the relationship between executive functioning and overweight conditions (i.e., BMI between 25 and 30 kg/m²). Studies on this topic are scarce and report discrepant findings, although such investigations could provide insight into the genesis of the association between these variables. For this reason, studying the association between overweight status and high-level executive functions could be relevant.
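For reference, the BMI ranges used throughout this article (normal weight, overweight, obesity, severe obesity) can be expressed as a simple classification. The sketch below is illustrative only; the function name and the handling of exact boundary values are our own assumptions.

```python
def bmi_category(weight_kg, height_m):
    """Classify BMI using the ranges referred to in this article:
    normal weight < 25 kg/m^2, overweight 25-30 kg/m^2, obesity > 30 kg/m^2,
    with severe obesity (> 35 kg/m^2) used as an exclusion criterion in this study.
    Boundary handling at exactly 25/30/35 is an assumption made here.
    """
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return bmi, "normal weight"
    if bmi < 30:
        return bmi, "overweight"
    if bmi < 35:
        return bmi, "obesity"
    return bmi, "severe obesity"


print(bmi_category(70, 1.75))  # ~22.9 -> 'normal weight'
print(bmi_category(85, 1.70))  # ~29.4 -> 'overweight'
```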
Another important aspect of the relationship between executive functioning and weight is the relationship between executive functioning and biological sex. Previous studies have shown how females and males show different eating behavior patterns [ ]; furthermore, the prevalence and incidence of overweight and obesity differ between sexes [ ]. Females, compared to males, show a higher sensitivity to environmental food cues, which may account for the higher prevalence of obesity in the female population [ ]. Moreover, some authors have highlighted a certain functional difference in executive functioning when gender differences are considered, especially in higher-level executive functions (e.g., decision making), which are more influenced by secondary factors such as metabolic, hormonal, and autonomic factors [ ]. A recent meta-analysis by Rotdge and colleagues [ ] analyzed the association between obesity and decision making as assessed by the Iowa Gambling Task (IGT; [ ]), a gold standard task for assessing decision-making abilities under ambiguous conditions where individuals lack complete knowledge of the different options available. This meta-analysis reported an association between obesity and decision making under uncertain conditions [ ]. Furthermore, the meta-analysis showed that poorer decision making was associated with the failure of a weight-loss program. Poorer performance on decision-making tasks in individuals with obesity [ , , ] is associated with difficulty in making adaptive decisions in daily life, which is related to the overeating that leads to weight gain. However, the few studies evaluating the association between IGT performance and overweight status (BMI between 25 and 30 kg/m²) did not confirm the finding observed in obesity [ , ], indicating a possibly different pattern in less severe body weight conditions. Moreover, these studies did not consider possible sex differences. This study aims to provide new evidence about the relationship between executive functions and body weight by analyzing more complex and less investigated executive functions (i.e., decision making and planning) in overweight conditions, differentiating female and male executive patterns. Specifically, the present study investigated the association between body weight and decision making and planning in a sample of healthy individuals included in the normal weight to overweight continuum, without eating disorders, medical or psychopathological conditions, or severe obesity (BMI > 35 kg/m²). Given previous findings in adults with obesity [ , , ], poorer performance in decision-making tasks under risk (IGT; [ ]) is expected for overweight subjects compared to those with normal weight. According to the hypothesis of a general executive impairment related to overweight status [ ], we expected lower planning performance in the overweight subjects of our study [ ]. Moreover, considering the possible sex differences [ , ], we expected different patterns in decision making and planning between females and males in the normal weight and overweight ranges. ## 2. Materials and Methods ### 2.1. Participants Sixty-five participants (32 males and 33 females; mean age: 24 years, SD = 3) voluntarily took part in the study. Specifically, 29 participants (14 males; 15 females) reported a BMI under the threshold of overweight (25 kg/m²) and were classified as normal weight; 36 participants reported a BMI beyond the threshold of overweight (18 males; 18 females).
details the characteristics of the sample, and , the group scores on the executive functioning tasks. The study included the participants if they did not present an eating disorder diagnosis, food allergies, severe obesity, chronic medical diseases, or any psychological conditions (e.g., anxiety, depression). ### 2.2. Outcomes #### 2.2.1. Demographic and Clinical Information A semi-structured interview was adopted to collect the main demographic information of each participant (gender, age, years of education) and medical and clinical history. #### 2.2.2. Executive Functions Decision Making Decision making was assessed using a computerized version of the Iowa Gambling Task (IGT; [ ]), completely superimposable on the original version [ ]. Apparatus: the task was administered via E-Prime 2.1 software (Psychology Software Tools Inc., Pittsburgh, PA, USA) on a personal computer equipped with a 15-inch monitor. Responses were given using four keys of the computer keyboard. Stimuli: four decks of cards (“A”, “B”, “C”, and “D”) with a red cover on the back and a Joker on the front constituted the stimuli, presented on a green background [ ]. Procedure: each card in the decks was associated with a win or a loss. The decks differed in the frequency and number of wins and losses. Decks A and B were considered disadvantageous, with large short-term wins ($100) but long-term losses. Deck A was associated with more frequent but smaller losses than deck B. Overall, decks A and B led to a loss of $250 for every 10 cards drawn. Decks C and D were more advantageous, although characterized by a small short-term payout ($50 each). The two decks differed in the frequency and magnitude of the loss. Deck C had more frequent but lower losses than deck D. Every 10 cards drawn from these decks resulted in a win of $500 with a loss of $250. The amount of money won (written in green) and lost (written in red) was shown for each trial, and the total budget was indicated during the overall task duration. Each participant started with a $2000 credit and was informed that some decks were more advantageous than others. The participant had to press one of the four keys on the keyboard, corresponding to the deck they intended to choose. The test ended automatically after the hundredth selection (100 trials). The locations of the losses in this experiment were adopted from Bechara et al. [ ]. Two indices were calculated: the learning of long-term consequences (LTC) and the bias toward infrequent loss (IFL). The LTC was calculated by subtracting the number of disadvantageous choices from the number of advantageous choices ((C + D) − (A + B)). The IFL was calculated by subtracting the frequent-loss deck choices from the infrequent-loss deck choices ((B + D) − (A + C)). Higher scores in both indices indicated a better decision-making function. An example of the IGT procedure is shown in . Planning Planning abilities were assessed by a computerized version of the Tower of London task [ , ]. Apparatus: the task was administered via Pebl 2.1 software [ ] (retrieved from ; GNU General Public License; accessed on 6 January 2021) on a personal computer equipped with a 15-inch monitor. Participants responded using a computer mouse. Stimuli: on the top of the screen, three colored discs (blue, green, red) were located on a structure with three vertical sticks in a predefined order. The same frame was presented at the bottom of the screen but with movable discs.
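Before turning to the planning task procedure, a minimal sketch of how the two IGT indices defined above can be computed from a sequence of deck choices is given below. This is an illustrative example only, not the authors' analysis code; the deck labels, the helper names, and the split into five 20-trial blocks are assumptions made for the sketch.

```python
from collections import Counter

def igt_indices(choices):
    """Return (LTC, IFL) for a sequence of deck labels "A"-"D".

    LTC = (C + D) - (A + B): advantageous minus disadvantageous choices.
    IFL = (B + D) - (A + C): infrequent-loss minus frequent-loss choices.
    """
    n = Counter(choices)
    ltc = (n["C"] + n["D"]) - (n["A"] + n["B"])
    ifl = (n["B"] + n["D"]) - (n["A"] + n["C"])
    return ltc, ifl

def blockwise_indices(choices, block_size=20):
    """Indices for each block of trials (five blocks of 20 over 100 trials)."""
    return [igt_indices(choices[i:i + block_size])
            for i in range(0, len(choices), block_size)]

# Hypothetical participant: 10 early picks from deck B, then mostly C and D.
example = ["B"] * 10 + ["C", "D"] * 45
print(igt_indices(example))  # -> (80, 10)
```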
Procedure: the participant must order the discs one by one to recreate the configuration shown at the top of the screen, over a maximum of 12 trials. The whole sequence must be planned mentally before being performed. For each trial, only a predetermined number of movements can be made, and the number of available movements is shown on a vertical bar at the side of the screen. A total score was calculated by the Pebl program, considering the number of trials correctly completed in the minimum possible moves. A lower total score indicated poorer planning performance. An example of the TOL procedure is shown in . ### 2.3. Apparatus A digital balance was used to assess the weight of each participant (kg), and a wall-mounted anthropometer was adopted to measure height (m). BMI was calculated by dividing weight (kg) by squared height (m²). The WHO criteria were adopted to classify BMI (WHO, 2020). Waist and hip circumferences were measured with a tape measure. The waist-to-height ratio (W/Hr; [ ]) and the body adiposity index (BAI = hip circumference (cm)/height (m)^1.5 − 18; [ ]) were calculated as alternative indices of body weight. A digital sphygmomanometer was used to measure the participants’ systolic and diastolic blood pressure, considered as confounding variables in the analysis. ### 2.4. General Procedure Written informed consent was obtained from each participant before the evaluation. The research was conducted according to the Helsinki Declaration, and it was approved by the Local Ethics Committee (Department of Dynamic and Clinical Psychology and Health Studies, “Sapienza” University of Rome; cod. 0000450, 15 April 2019). Each participant was tested in a silent, dimly illuminated room with a comfortable temperature. Before the experimental session, in which the IGT and TOL were administered in random order, the aims of the study were explained to the participant, and the semi-structured interview was administered. ### 2.5. Data Analysis Descriptive analyses were calculated considering sex (males and females) and weight condition (normal weight, overweight). Univariate analyses of variance (ANOVAs) were carried out to control for participants’ differences in age, years of education, and physiological measures (see ). Mixed ANOVAs were carried out to assess the differences between the groups, considering sex and body weight condition, in the LTC and IFL indices across the five blocks of the IGT and in the mean score. To assess planning performance in the groups, an ANOVA on the total score of the TOL was carried out. ## 3. Results Considering the LTC index of the IGT, the ANOVA showed a significant effect of Sex (F = 4.99; p = 0.03; η² = 0.07), with females reporting lower LTC scores than males. The significant Sex × Weight Condition interaction (F = 8.97; p = 0.004; η² = 0.13) highlighted that males with overweight showed lower LTC scores than normal-weight males (mean difference = −22.50; t = −2.89; p = 0.03). Moreover, normal-weight females reported lower LTC than normal-weight males (mean difference = −27.54; t = 3.69; p = 0.003). No other differences emerged (p > 0.08) (see ). Considering the IFL index of the IGT, no main effects of Sex (F < 1.00; p = 0.99) or Weight Condition (F < 1; p = 0.94) were present, nor was the Sex × Weight Condition interaction significant (F < 1; p = 0.74).
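For reference, the anthropometric indices described in Section 2.3 reduce to simple formulas; the following minimal sketch illustrates them with hypothetical values (this is not the authors' code, and the example measurements are invented for illustration):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by squared height (m²)."""
    return weight_kg / height_m ** 2

def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """W/Hr: waist circumference divided by height (same units)."""
    return waist_cm / height_cm

def body_adiposity_index(hip_cm: float, height_m: float) -> float:
    """BAI: hip circumference (cm) / height (m)^1.5 minus 18."""
    return hip_cm / height_m ** 1.5 - 18

def weight_class(bmi_value: float) -> str:
    """Grouping used in the study: BMI < 25 normal weight, >= 25 overweight."""
    return "normal weight" if bmi_value < 25 else "overweight"

b = bmi(82.0, 1.75)                                  # ~26.8 kg/m²
print(round(b, 1), weight_class(b))                  # 26.8 overweight
print(round(waist_to_height_ratio(88.0, 175.0), 2))  # 0.5
print(round(body_adiposity_index(100.0, 1.75), 1))   # ~25.2
```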
The ANOVA on the Global score of the TOL did not show significant differences between groups for the main effects of Sex (F < 1; p = 0.67), the Weight condition (F = 2.10; p = 0.15), or the Sex x Weight condition (F < 1; p = 0.94) (see ). ## 4. Discussion Previous studies have confirmed an association between executive functions and both maladaptive eating behavior [ ] and excessive body weight [ , ]. However, a large portion of these studies focused on basic executive functions (i.e., inhibition, working memory, shifting), while the association between more complex executive functions (i.e., problem solving, decision making, planning) and weight status has been poorly analyzed. Moreover, the research on this topic has focused on obesity and not on the earliest stages of weight gain, i.e., individuals in the overweight range. This study is one of the first to analyze the relationship between weight status in healthy individuals of normal weight to overweight ranges and higher executive functioning, focusing on planning and decision making. The present study assumed that planning and decision making, which involve different cognitive mechanisms and neural substrates aimed at controlling goal-directed behaviors, could affect (or be affected by) weight gain. We also assessed the role of self-reported biological sex. Biological sex differences in reward-based decision making have been demonstrated [ , ], suggesting that females tend to focus on short-term reward outcomes, whereas males focus on long-term decision outcomes. Differences in dopaminergic and serotoninergic activity may influence the different risk-taking decision-making performances between males and females [ ]. Sex has also been reported to have played a role in studies on planning, with better performance in males than females in tasks involving planning abilities [ ]. Moreover, different patterns in eating behavior were reported by females and males, influencing the differences in maladaptive eating behaviors that allow people to overeat. The interrelation between overweight status and sex on tasks assessing high-level executive functions could explain the risk of overweight conditions. According to Damasio’s somatic marker hypothesis [ ], some authors [ ] have hypothesized a possible association between decision-making differences and eating behaviors associated with obesity. Decision making overlaps with some aspects of reward sensitivity [ ], and it is characterized by the tendency to assign values and probabilities to behavioral patterns aimed at a specific outcome (e.g., select the more convenient option among several ones). Some specific endogenous (psychological characteristics, hormonal balance) and exogenous factors (social influences, relationships, environmental stimuli), which influence reward sensitivity to food stimuli, could generate overeating behavior [ ]. Hypersensitivity to immediate reward and the failure to generate appropriate responses to visceral signals (e.g., gut activity) [ ] or the presence of impulse-control problems [ , ] may account for individual differences in reward sensitivity leading to differences in decision making. Although studies on obesity demonstrate an association between severe body adiposity and impairment in decision making (for a review, see [ , ]), studies that have analyzed the relationship between decision making and less severe overweight status in healthy populations have not confirmed this association [ , ]. 
This suggests that obesity models in which overeating is theorized to be associated with poorer decision making [ ] may not hold for individuals who are merely overweight. Generally, the results of our study agree with previous literature, which has indicated that males are focused on long-term goals, reflecting an adaptive choice of long-term advantageous decks, while females are characterized by an exploratory approach ranging between short- and long-term consequences and by a more frequent selection of disadvantageous decks [ , ]. When the interaction between sex and weight condition was analyzed, females with overweight and normal weight did not differ in the long-term consequence index on the IGT, while males with overweight showed worse performance than males with normal weight. Several explanations may account for these different patterns. One possible explanation can be ascribed to the role of the central autonomic network (CAN; [ ]), which involves the insula, ventromedial prefrontal cortex (vmPFC), and other cortical areas (e.g., cingulate cortex, sensorimotor cortices) in influencing the performance on executive tasks, including decision-making tasks [ , ]. The CAN controls and modulates autonomic activation (both sympathetic and parasympathetic branches) and central brain activation, influencing cognitive activities, especially executive functioning performance. The CAN differs between males and females due to the metabolic and hormonal differences that characterize them [ ]. Taken together with Damasio’s somatic marker hypothesis, which suggests a central role of the vmPFC in modulating the ability to make decisions [ ], this aspect could suggest that overweight status in males is associated with an imbalance of CAN activation, which impairs the ability to adaptively evaluate the long-term consequences of a choice. However, no studies have specifically focused on the role of the CAN in overweight status and obesity, and further studies are needed to highlight the direction of this association. Considering that an alteration in decision making can represent a possible marker of weight gain, the different patterns of males and females could indicate that complex executive functions influence eating-related behavioral risk factors for obesity in males. In females, other aspects appear to influence the occurrence of obesity, such as social expectations and stereotypes of body image [ ]. Another explanation for the findings of this study may lie in the sex differences in the activation of the reward system associated with overeating and responses to food cues [ ]. fMRI studies have demonstrated that females are characterized by hyperactivation of striato-limbic and frontal-cortical regions in response to food cues, independent of their weight condition, while males show decreased activation in the middle frontal gyrus (associated with decision-making performance), insula, and cerebellum in response to food intake [ ]. These differences, manifested in decision-making performance, could justify the behavioral differences in approach to food and the sex differences in the prevalence of overweight status and obesity. However, how these neural differences are manifested behaviorally in obese and overweight populations remains unclear.
When studies have analyzed planning, a higher executive function useful for organizing and controlling complex behaviors (e.g., eating habits; [ ]), no differences have emerged, whether considering weight conditions or the sex or the interaction between these variables. The association between planning and excessive body weight [ , ] is little examined and has yielded inconsistent results. Quavam and colleagues [ ], analyzing a group of adolescents, found worse performance in the Tower of London (TOL) task, a measure of planning and problem solving, in adolescents with overweight status and obesity compared to normal weight. However, the authors did not compare overweight and obese adolescents. In contrast, Sweat et al. [ ] did not find a difference between young adults with obesity and those with normal weight in planning abilities assessed by the TOL. To our knowledge, other studies have not analyzed planning performances in individuals with overweight compared to normal weight. In agreement with Sweat and colleagues [ ], our study did not observe significant differences in TOL performance due to weight status. Generally, the results of this study should be interpreted by considering different aspects. Unlike simple executive functions, which could represent a marker of the risk of weight gain, the more complex and integrated executive functions may come into play in a more complex way in overweight conditions, and they can be characterized by a bidirectional relationship with overweight status [ ]. We can hypothesize that some executive functions (e.g., shifting, inhibition [ ]) can represent risk factors for establishing maladaptive behaviors that lead to increased weight independently from other variables such as sex. In contrast, complex cognitive dimensions, such as decision making, are associated with weight conditions differently in males and females. The absence of a general effect of the overweight condition would indicate that executive functions characterized by greater integration of neural networks (e.g., planning and decision making) can be associated with excessive body weight in a complex way involving bidirectional interactions [ , ]. Obesity appears to be related to many brain changes that potentially impact cognitive and executive functions [ ], and the worsening of these abilities will exacerbate inappropriate behaviors causing obesity [ ]. Although the preliminary results of this study allow for interesting considerations, some limitations should be highlighted. First, the sample size was relatively small, which may have prevented highlighting significant differences, limiting the generalizability of results. Another limitation is the cross-sectional design. A longitudinal study would highlight a possible trend in the relationship over time. The poor theoretical background of the study represents another limitation, specifically considering planning, that could have precluded the possibility of developing new inferences about the construct associated with weight condition. A further suggestion could be to consider cognitive tasks involving food cues in future studies in order to identify a possible involvement of high executive functions in response to food cues, rather than a general impairment, in people with moderate overweight conditions. 
Moreover, further studies should deepen exploration into the role of metabolic, hormonal, and neurochemical differences between males and females in influencing executive functioning and consequent goal-directed behaviors associated with overweight status and its exacerbation in obesity. Finally, although the selection of healthy participants prevented the risk of including possible confounding variables associated with health issues in overweight (e.g., hypertension, metabolic syndrome, eating disorders), further studies should consider including other psychological and physiological variables when comparing the groups. ## 5. Conclusions Investigating the individual aspects that could influence eating behavior and body weight changes appears relevant, considering the role of obesity as a current public health concern. Knowing the role of some specific executive functions in driving complex behaviors, such as eating behavior, can encourage the consideration of body weight changes from a new perspective that allows the inclusion of cognitive variables in weight gain prevention programs. Potentially, these variables could influence people’s approach to food, thus influencing body condition. Although there have been few investigations on this topic, studies on weight loss interventions have emphasized the potential influences of executive functions on the success of these programs [ ]. Understanding which executive functions are involved in overweight conditions and how males and females express them differently will allow new treatment approaches that integrate weight loss programs and executive functions training [ ]. Moreover, such understanding can help us develop an integrated and more suitable theoretical model of the relationship between executive functions and excessive body weight.
The pathophysiology of stroke involves many complex pathways and risk factors. Though there are several ongoing studies on stroke, treatment options are limited, and the prevalence of stroke is continuing to increase. Understanding the genomic variants and biological pathways associated with stroke could offer novel therapeutic alternatives in terms of drug targets and receptor modulations for newer treatment methods. It is challenging to identify individual causative mutations in a single gene because many alleles are responsible for minor effects. Therefore, multiple factorial analyses using single nucleotide polymorphisms (SNPs) could be used to gain new insight by identifying potential genetic risk factors. There are many studies, such as Genome-Wide Association Studies (GWAS) and Phenome-Wide Association Studies (PheWAS), which have identified numerous independent loci associated with stroke, which could be instrumental in developing newer drug targets and novel therapies. Additionally, analytical techniques such as meta-analysis and Mendelian randomization could help in evaluating stroke risk factors and determining treatment priorities. Combining SNPs into polygenic risk scores, together with lifestyle risk factors, could detect stroke risk at a very young age and help in administering preventive interventions. ## 1. Introduction Several risk factors and complex pathways are involved in the pathophysiology of stroke. Stroke is the second leading cause of death worldwide after heart attack [ ]. The human genome project has helped in understanding many genetic factors that are associated with stroke [ , , ]. Several studies have reported genetic predisposition to stroke in both human beings and animal models. However, the definition of genetic risk factors for stroke is not well established. Since no single gene is responsible for stroke, it has been hypothesized to be a multifactorial polygenic disorder [ ]. In this study, we used a narrative review method to understand the current advances, clinical applications, and future possibilities of the associations between genetic factors and stroke. ## 2. Genetic Factors Associated with Stroke (Non-Modifiable Factors in Stroke) Several studies, such as the classical twin study consisting of 15,924 twin pairs, have been designed to assess the genetic factors associated with stroke [ ]. Likewise, another twin study provided evidence for genetic factors that may increase the risk of stroke-related events, such as death and hospitalization [ ]. This study found greater concordance rates for these associations among monozygotic twins, compared to dizygotic twins [ ]. These two studies were designed long before the human genome project. Different environmental exposures could have affected the results of these studies, which was a major limitation [ ]. Previous studies have reported that first-degree relatives are at an increased risk for stroke [ ]. The preponderance of large and small vessel strokes, compared to cardioembolic strokes, is higher among subjects with a family history of stroke [ ]. Sex is an important factor influencing stroke outcome, indicating a possible role of the sex chromosomes and associated genes; however, a recent review reported no association between sex and stroke [ ]. Similarly, though ethnicity is not widely considered an important factor affecting acute stroke outcome, it may influence the long-term outcome [ , ].
A recent study identified that levels of lipoprotein(a) were significantly associated with adverse stroke outcomes and were substantially higher in the Black, compared to the White, population [ ]. In addition, hematological disorders are responsible for nearly 1.3% of acute strokes. Some of the common hematological disorders associated with stroke include polycythemia vera, sickle-cell disease, Waldenström macroglobulinemia, multiple myeloma, essential thrombocythemia, thrombotic thrombocytopenic purpura, protein C deficiency, protein S deficiency, antithrombin deficiency, and factor V Leiden. A substantial number of these disorders have a genetic predisposition. For example, a large proportion of polycythemia vera patients have a mutation in exon 14 of the JAK2 gene (JAK2 V617F), whereas a smaller proportion has mutations in JAK2 exon 12 [ ]. ### 2.1. Heritability Genes in Stroke (Monogenic and Polygenic Inheritance in Stroke Etiology) Several animal model studies were conducted to identify potential candidate genes associated with stroke outcome. These studies analyzed the association of single nucleotide polymorphisms (SNPs) in targeted genes. The COX-2 SNP rs20417 was associated with early neurological deterioration [ , ]. However, these findings have not been supported by replication studies and hence warrant further in-depth research. A study reported that several single-gene disorders might influence stroke, such as sickle cell disease, Fabry’s disease, homocystinuria, and mitochondrial myopathy and encephalopathy [ ]. A rare form of stroke caused by mutations in the Notch 3 gene (OMIM *600276) shows a heritable pattern [ ] and has also been reported as a single-gene disorder. Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL), caused by different types of mutations of the Notch 3 gene, is associated with extensive cerebral small vessel damage, marked by the accumulation of granular osmiophilic material (GOM) [ ]. Molecular evaluation of the vascular smooth muscle in CADASIL patients showed increased oxidation of soluble guanylyl cyclase associated with decreased cyclic GMP levels, which impaired vasorelaxation of the cerebral vasculature [ ]. A number of molecular pathways associated with cell adhesion, extracellular matrix components, misfolding control, autophagy, angiogenesis, and transforming growth factor β (TGFβ) signaling are altered in CADASIL. Metabolic impairment, such as diabetes mellitus, further expedites the pathological damage to the cerebral small blood vessels in Notch 3 mutation carriers, resulting in endothelial mitochondrial dysfunction and vascular basement membrane injuries [ ]. This suggests that the heritability of Notch 3 mutations increases the risk for ischemic stroke from small vessel diseases, such as CADASIL ( ). Heterozygous mutations in the 3′ untranslated region (UTR) of the collagen 4A1 encoding gene may also influence ischemic stroke [ ]. A glycine substitution mutation in the triple-helical domains of COL4A1 and COL4A2 may produce neurological and non-neurological manifestations, including hemorrhagic stroke [ ]. Genomic data enable accurate analysis of heterozygous mutations. Another study identified heterozygous mutations in the High-Temperature Requirement Serine protease A1 (HTRA1)-encoding gene that manifest as stroke and cognitive decline in people aged over 45 years [ ].
Other mutations were also identified in the HTRA1 gene that may cause cerebral autosomal recessive arteriopathy in younger people who are between 10 to 30 years of age [ ]. Similarly, mutations in adenosine deaminase 2 (ADA2), cathepsin A (CTSA) and forkhead-box C1 (FOXC1) genes were also found to be associated with autosomal dominant small vessel disease [ , , ]. In addition, there are several other candidate genes under investigation for a possible association with stroke. ### 2.2. Multifactorial Stroke and SNPs It is challenging to identify individual causative mutations in a single gene because many alleles are responsible for minor effects. Therefore, multiple factorial analyses using SNPs were used to gain newer insight by identifying potential genetic risk factors. For example, a study by Mola-Caminal et al. identified a locus located within a candidate gene [ ], which can help in understanding the genetic mechanisms involved in stroke. Newer variants in the gene pals1-associated tight junction (PATJ) were linked to poor functional outcomes at 3-month post-stroke [ ]. rs76221407 was the major SNP variant of the PATJ gene, which was associated with poor outcomes in stroke subjects after 3 months. The locus STRK1 was mapped to identify a susceptible gene for stroke for the first time [ ]. Another study identified a strong association between the phosphodiesterase 4D gene (PDE4D; OMIM 600129*) and two major subtypes of stroke, cardiogenic and carotid stroke. Among 260 PDE4D gene SNPs, six were found to be significantly associated with stroke. Some of the SNPs were from UTR; therefore, these SNPs may affect the transcription of PDE4D [ ]. The 5-lipoxygenase activating protein gene (ALOX5AP; OMIM 603700*) was also associated with an increased risk of stroke [ ]. ALOX5AP SNP haplotypes increase the production of leukotriene B4 in stimulated neutrophils, thereby contributing to vascular inflammation in myocardial infarction and stroke [ ]. The main limitation of studying candidate genes for SNPs and their association with stroke is that they are time consuming and require significant resources [ , ], and could be associated with false positive results. ## 3. Genomic Evaluation in Stroke Several studies were designed during the 1990s to observe the effect of Mendelian genetics and candidate genes on stroke [ ]. Subsequently, the human genome project enabled accurate SNP analysis by using the Genome-Wide Association Study (GWAS) [ ]. ### 3.1. Genome-Wide Association Study (GWAS) in Stroke The first GWAS in stroke, Ischemic Stroke Genetics Study (ISGS) which included 250 patients and controls, was published in 2007 [ ]. This study failed to identify any genetic locus, which was explicitly associated with stroke. Subsequently, studies focused on a specific region of chromosome 9 (9p21.3) and found an association with stroke [ ]. This region was associated with coronary heart disease [ ], and hence it was suggested that heart disease and ischemic stroke share similar polymorphisms. Another research group also studied chromosome 9 and found modest associations between ischemic stroke and variants (rs2383207 and rs10757274) of the 9p21 region [ ]. Finally, six SNPs were identified, including rs2383207 in the 9p21 region, which were independently associated with the ischemic stroke (large artery atherosclerotic subtype) [ ]. This suggests that chromosome 9p21 is an important risk locus that shares SNP variants that are common for both ischemic stroke and coronary artery disease. 
A case-control study found a significant association between the 4q25 region and the cardioembolic subtype of ischemic stroke [ ]. This region was also associated with all types of ischemic stroke, though to a lesser degree [ ]. This study found that markers of atrial fibrillation, such as rs2200733 and rs10033464, have a strong association with ischemic stroke by increasing the risk for cardioembolic events. Another locus, 16q22, was also found to be associated with cardioembolic stroke [ ]. GWAS also found robust associations between intracranial aneurysms and loci in the 2q, 8q, and 9p21 regions [ , ]. The first prospective GWAS on stroke was conducted within the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium and included 19,600 participants with 1544 incident strokes [ ]. This study identified two SNPs (rs11833579 and rs12425791) in the 12p13 region of chromosome 12 and within 11 kb upstream of the gene NINJ2 (Ninjurin 2), both of which were significantly associated with stroke. The GWAS projects for ischemic stroke have identified many SNPs that are associated with stroke [ , , , , , , , , , , ]. Among them, one study identified variants associated with different subtypes of stroke. This study showed that variants close to PITX2 (paired-like homeodomain 2) and ZFHX3 (zinc finger homeobox 3) were linked to cardioembolic stroke. Variants at the chromosome 9p21 locus and a novel variant on chromosome 7p21.1 within the histone deacetylase 9 (HDAC9) gene were associated with large vessel stroke [ ]. This study suggested that genetic heterogeneity is associated with different stroke subtypes and that subtype-specific studies are needed to understand genetic alterations in ischemic stroke. Several GWAS consortia have been using and analyzing extensive datasets from major national and international projects. For example, the SiGN project contains 14,549 cases from 24 genetic research centers located in the United States (n = 13) and Europe (n = 11) [ ]. The MEGASTROKE consortium analyzed multi-ancestry GWAS data from more than 67,000 stroke cases and 454,000 controls and identified 32 loci significantly associated with stroke [ ]. Among them, two loci were independently associated with large artery stroke, and one with cardioembolic stroke. However, GWAS data have provided different associations between genes and stroke among different population and ethnic groups. For example, variants of the apelin receptor gene (APLNR, rs9943582) were associated with increased risk of ischemic stroke among the Japanese population, while these variants had no association with stroke among the Chinese Han population [ ]. Similarly, GWAS identified that the rs2107595 SNP in the HDAC9 gene was associated with large-vessel ischemic stroke among the European population, but not among the Chinese Han population [ ]. Another GWAS identified an SNP locus on region 10q25.3 of chromosome 10 (rs11196288) to be associated with the risk of early-onset ischemic stroke among the European population [ ]. However, this SNP locus showed different susceptibility levels among the Chinese Han population. Similarly, in another study, there were differing associations between Caucasian and Chinese Han populations with respect to the relationship between the SNPs rs2200733 and rs6843082 on chromosome 4q25 and stroke [ ]. These SNPs were associated with ischemic stroke among Caucasians, but not among the Chinese Han population.
These varying results suggested that genetic factors are also modulated by other racial and ethnic factors and could provide unclear results. Therefore, it is recommended that GWAS data should be analyzed after population based sub-grouping. ### 3.2. GWAS and Comorbidities of Stroke GWAS not only identified the genetic basis for stroke but also associations between genetic factors and comorbidities for stroke. The most important comorbidity associated with stroke is hypertension and is responsible for 30–40% of population-attributable risk for stroke [ ]. Other comorbidities associated with stroke are smoking, diabetes, atrial fibrillation, and coronary heart disease. Several loci have been identified which have an association with stroke and its comorbidities. For example, 4q25 region of chromosome 4 [ ] and a variant of the ZFHX3 gene in the 16q22 region of chromosome 16 [ ] were associated with ischemic stroke and atrial fibrillation, 9p21 region was associated with stroke and diabetes [ ], and serine/threonine kinase gene (STK39) variants were associated with stroke and hypertension [ ]. The MEGASTROKE consortium identified a total of 32 loci that were significantly associated with stroke. Among them, five were associated with blood pressure, five with coronary heart disease, two with low-density lipoprotein (LDL) cholesterol, two with atrial fibrillation, two with venous thromboembolism, one with white matter hyperintensities, and one with carotid plaque [ ]. This study identified a strong association between coronary heart disease and large artery stroke as well as blood pressure and all stroke subtypes. This study also found that cardioembolic stroke and large artery stroke, though not small vessel stroke, were associated with venous thromboembolism. However, an interesting finding from this study was that high-density lipoprotein (HDL) cholesterol was inversely associated with small vessel stroke [ ]. Another study reported that the rs4376531 variant among diabetes predicted the risk for atherothrombotic stroke [ ]. Another GWAS study found that C-reactive protein gene polymorphisms increased its synthesis level, which in turn increased the risk for stroke [ ]. ### 3.3. Genomic Determinants of Stroke Outcomes Researchers started using data from GWAS for identifying genetic determinants of stroke outcomes only recently [ ]. A genome wide meta-analysis (GWMA) of 12 stroke cohorts identified that the Pals1-associated tight junction (PATJ) variant was significantly associated with adverse functional outcomes after three months of stroke [ ]. However, the molecular mechanism of how the variants of the PATJ gene led to these outcomes is still unclear. Mola-Caminal et al. reported that the major variant rs76221407 in the PATJ gene was a key genotypic trait associated with poor functional outcomes after three months of stroke onset [ ]. Another GWMA study identified the SNP rs184681 to be significantly associated with functional outcomes of neural plasticity between 60 and 190 days after stroke onset [ ]. In another study, the genetic imbalance was associated with unfavorable outcomes after 2–6 months of stroke, after adjusting for age, sex, race, and stroke subtypes [ ]. ## 4. Preclinical Studies Supporting Genomic Analysis in Stroke It is challenging to study preclinical stroke models, although several animal studies have been designed to overcome the effects of host genetic variations. 
However, even after genetic background restriction, studies using animal models have been questioned for assessing complex polygenic disorders, such as stroke [ ]. Nevertheless, animal models are still valuable to study basic mechanisms and factors, such as environmental and dietary factors related to stroke. In addition, animal studies have been used for identifying potential targets involved in the inflammatory signaling of stroke outcomes among humans [ ]. Studies have shown that Metastasis-Associated Lung Adenocarcinoma Transcript 1 (MALAT1) expression could be induced in vitro in endothelial cells undergoing oxygen-glucose deprivation (OGD) [ , ]. Transcriptional downregulation of MALAT1 in OGD-induced primary mouse brain microvascular endothelial cells led to overexpression of the pro-apoptotic factor Bim and increased pro-inflammatory cytokines, such as MCP-1, IL-6, and E-selectin [ ]. Moreover, in vivo, MALAT1 knockout mice showed severe neurological deficits compared to wild-type controls in response to transient focal ischemia [ ]. Additionally, other studies have demonstrated that MALAT1 promotes endothelial cell survival, angiogenesis, and vascular integrity in stroke [ , , , , ]. MALAT1 plays a crucial role in regulating post-stroke pathophysiology; however, further studies are required to understand the contexts and conditions under which MALAT1 mediates beneficial versus deleterious outcomes. Upregulation of maternally expressed gene 3 (MEG3) in the mouse brain and primary neurons was linked to increased cell death in cerebral ischemia [ , , ]. The long non-coding RNA (lncRNA) MEG3 functions as a competing endogenous RNA (ceRNA) that binds miR-21 and downregulates the miR-21/PDCD4 pathway, leading to neuronal death in ischemic neurons [ ]. miR-21 overexpression reverses OGD/reperfusion-induced neuronal apoptosis in vitro. Another investigation showed that downregulation of MEG3 was associated with increased microvessel density in rat neurons [ ]. Therefore, MEG3 exhibits differential expression after stroke among different species and cell types, while downregulation of MEG3 is strongly associated with post-stroke neuroprotection. Expression of the Small Nucleolar RNA Host Gene 12 (SNHG12) was increased after ischemic injury both in vitro and in vivo [ , , ]. Studies in the N2a cell line and mouse primary hippocampal neurons have shown higher expression of lncRNA SNHG12 in neuronal cells undergoing ischemia [ ], while downregulation of miR-199a by SNHG12 decreases cell death and inflammation [ ]. lncRNA SNHG12 improves neuronal survival following OGD/reperfusion-induced ischemia through miR-199a downregulation, sirtuin-1 upregulation, and activation of the adenosine 5′ monophosphate-activated protein kinase (AMPK) pathway [ ]. This suggests that increased expression of lncRNA SNHG12 salvages injured ischemic neurons. During transient ischemia, H19 levels become higher in the blood and brain of stroke patients. Experimental mouse model studies have shown that knockdown of H19 could decrease edema, infarct volume, and neurological deficits after stroke [ , ]. Though several studies have examined the role of lncRNAs in modulating post-stroke pathophysiology, only a few have explored lncRNA genetic variation and altered expression among stroke patients [ , , , , , , ]. Overall, these studies introduce the possibility that the evaluation of lncRNA expression or lncRNA gene loci could be a useful clinical tool for assessing the risk for developing stroke. ## 5.
Extending Genome-Based Evaluation into the Clinical Scenario The global prevalence of stroke is consistently increasing, and there are limited therapeutic interventions. Therefore, developing advanced treatment strategies to manage stroke and post-stroke brain damage is important. Several studies have already completed preliminary research to integrate genetic data into routine clinical practice and precision medicine. Extending genome-based studies could help in developing therapeutic and predictability capabilities in managing the early stages of stroke. ### 5.1. Stroke Risk Prediction in Childhood Several studies were designed for the identification of stroke by gene expression profiling. Studies have predicted ischemic stroke with as high as 80% accuracy through analysis of a panel of 22 genes from peripheral blood mononuclear cells (PBMC) [ , ]. Using the latest technology and developments from genetic data, high-risk individuals could be identified by applying polygenic risk scores for common genetic variants, even during childhood [ , ]. These methods can enable the opportunities for early prevention of stroke. Recently, a study developed a polygenic risk score derived from a panel of 90 SNPs to identify individuals with 35% increased risk for stroke [ ]. Risk scores for stroke based on lifestyle factors, such as smoking, diet, body mass index (BMI), and physical activity have shown that lifestyle risks were similar across all polygenic risk score strata. Recently, studies have applied Mendelian randomization to identify risk for stroke [ ]. For example, a mendelian randomization study identified factors, such as BMI and waist-to-hip ratio, in order to identify individuals with greater risk for ischemic stroke [ ]. The differential effects of LDL and HDL on cardioembolic stroke, small vessel stroke, and large artery stroke observed in the MEGASTROKE consortium study were also confirmed by a Mendelian randomization study [ ]. Similarly, differential effects of type 2 diabetes were observed for different etiological stroke subtypes by the Mendelian randomization studies [ , ]. Mendelian randomization studies could be further applied for identifying novel risk factors for stroke. With the increasing availability of genomic data, Mendelian randomization studies will become more relevant and applicable in clinical practice. Although additional research is required to evaluate and improve genetic risk prediction of stroke, these studies highlight the potential for early risk stratification and prevention of stroke via genetic evaluation. ### 5.2. Exploration of Potential Therapeutics Currently, the pharmacological treatment of stroke is mainly based on recombinant tissue plasminogen activator (rtPA). rtPA was developed based on genetic data. Nevertheless, pharmacological treatment strategies for stroke have not significantly progressed over the years. The FDA first approved RNA-targeting antisense oligos therapy for spinal muscular atrophy in 2017. After that, in 2018, FDA approved the first RNAi therapy as a treatment option for peripheral nerve disease. Increasing transthyretin in tissues is the main reason for this disease and is caused by hereditary transthyretin-mediated amyloidosis [ ]. The exploration of this genetic target could significantly expand the pharmacological applications of stroke treatment in the future. ### 5.3. 
Exploiting Genetics for Potential Drug Discovery Genomic data offer great potential for stroke drug development by identifying causal pathways and drug targets and could help determine the safety and efficacy of pharmacological interventions [ , , ]. These approaches were developed to personalize dosing and minimize side effects. Mendelian randomization studies [ ] and other studies have shown the use of protective variants [ , , ] and have demonstrated naturally occurring human knockouts with phenotypic effects on stroke outcomes [ ]. Currently, phenome-wide association studies (PheWASs) show promise and have analyzed large datasets with detailed genotyping and phenotyping data across multiple traits [ , ]. Therefore, using genetic and phenotypic data for potential drug discovery and precision medicine is now an emerging major research focus. ## 6. Future Directives Several genetic and genomic factors have already been identified for stroke, and some overlap with comorbidities, as previously described ( ). Many of these studies, such as those associated with vascular risk, monogenic vasculopathies, the leukotriene pathway, and other GWAS, require further detailed investigation. Although the vasculopathy of CADASIL has not been associated with heart disease, higher rates of myocardial infarction [ ] and unexplained sudden death [ ] among these patients require additional investigation. Currently, there are several ongoing GWAS and PheWAS with large sample sizes for identifying newer and undiscovered loci associated with stroke [ ]. Biobanks and databases will enormously expand the opportunity for gene discovery [ ] and thereby accelerate progress in this field. However, data from individuals of non-European ancestry are inadequate. Therefore, for studies to be effective across all populations, ancestry-specific genetic data should be developed. The development of prospective drugs has now become much easier after GWAS and PheWAS for several common diseases [ ]. However, further improvement is required to develop novel cell and tissue models to study the functional genomics and multilevel omics of stroke. Nevertheless, genetic studies focusing on treatment and recovery after stroke are in their infancy and require much more detail for clinical application [ , ]. Strokes could lead to significant cognitive decline and vascular dementia because of cerebral small vessel diseases. However, there are very few data from studies estimating the heritability of cerebral small vessel diseases. A growing body of evidence from epidemiological and genetic studies suggests that early cerebral small vessel diseases are heritable. It is, therefore, imperative that future studies address the genetic factors associated with cerebral small vessel disease, as well as the potential clinical outcomes, to assess the genomics of vascular cognitive decline. ## 7. Limitations Though we have reviewed several genetic factors associated with stroke, several others that have not been covered in this review require additional exploration. We have primarily focused on genetic factors that are adversely associated with stroke. There are also some protective genetic factors which need further exploration. Though we have explored genetic factors associated with stroke, there are epigenetic factors that need additional evaluation.
In addition, we could not explore in detail how genetic factors could be incorporated into precision medicine and how genetic data could be integrated with other omics data, such as proteomic, metabolomic, and transcriptomic data, since these topics are beyond the scope of this review. ## 8. Conclusions The prevalence and global burden of stroke remain high. Therefore, the discovery of genetic variants and biological pathways has revived hope for novel therapeutics, drug targets, and effective interventions. Genetic information can be used to improve stroke diagnosis and prognosis. Several GWAS and PheWAS have identified many independent loci associated with stroke, which could be instrumental in developing newer drug targets and novel therapies. The application of analytical techniques such as meta-analysis and Mendelian randomization could also facilitate the evaluation of risk factors and stroke outcomes and the prioritization of potential therapeutic targets. Accumulating SNPs into polygenic risk scores and combining them with lifestyle risk factor scores could enable the identification of individuals who are at greater risk for stroke even at a younger age.
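The conclusions above refer to combining SNPs into a polygenic risk score (PRS) together with lifestyle factors. As a minimal illustration of the additive weighted-sum form such a score typically takes, the sketch below uses entirely hypothetical SNP identifiers, effect weights, and lifestyle weighting; it is not derived from any cited study or panel.

```python
# Hypothetical polygenic risk score (PRS): weighted sum of effect-allele
# dosages (0, 1, or 2 copies) over a SNP panel. All IDs and weights are
# placeholders for illustration only.

snp_weights = {
    "rs0000001": 0.08,   # assumed per-allele log-odds weight
    "rs0000002": 0.05,
    "rs0000003": -0.03,  # a protective allele would carry a negative weight
}

def polygenic_risk_score(dosages: dict) -> float:
    """Sum of weight x dosage over the panel; missing SNPs count as 0."""
    return sum(w * dosages.get(snp, 0) for snp, w in snp_weights.items())

def combined_score(prs: float, lifestyle_points: int,
                   lifestyle_weight: float = 0.1) -> float:
    """Toy combination of genetic and lifestyle components (weighting assumed)."""
    return prs + lifestyle_weight * lifestyle_points

individual = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
prs = polygenic_risk_score(individual)        # 2*0.08 + 1*0.05 + 0 = 0.21
print(round(prs, 2), round(combined_score(prs, lifestyle_points=3), 2))
```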
Purpose: Retinal pigment epithelial (RPE) cells are highly specialized neural cells with several functions essential for vision. Progressive deterioration of RPE cells in elderly individuals can result in visual impairment and, ultimately, blinding disease. While human embryonic stem cell-derived RPE cell (hESC-RPE) growth conditions are generally harsher than those of cell lines, the subretinal transplantation of hESC-RPE is being clinically explored as a strategy to recover the damaged retina and improve vision. The cell-adhesion ability of the support is required for RPE transplantation, where pre-polarized cells can maintain specific functions on the scaffold. This work examined four typical biodegradable hydrogels as supports for hESC-RPE growth. Methods: Four biodegradable hydrogels were examined: gelatin methacryloyl (GelMA), hyaluronic acid methacryloyl (HAMA), alginate, and fibrin hydrogels. ARPE-19 and hESC-RPE cells were seeded onto the hydrogels separately, and the ability of these supports to facilitate adherence, proliferation, and homogeneous distribution of differentiated hESC-RPE cells was investigated. Furthermore, the hydrogel’s subretinal bio-compatibility was assessed in vivo. Results: We showed that ARPE-19 and hESC-RPE cells adhered and proliferated only on the fibrin support. The monolayer formed when cells reached confluency, demonstrating the polygonal semblance, and revealing actin filaments that moved along the cytoplasm. The expression of tight junction proteins at cell interfaces on the 14th day of seeding demonstrated the barrier function of epithelial cells on polymeric surfaces and the interaction between cells. Moreover, the expression of proteins crucial for retinal functions and matrix production was positively affected by fibrin, with an increment of PEDF. Our in vivo investigation with fibrin hydrogels revealed high short-term subretinal biocompatibility. Conclusions: The research of stem cell-based cell therapy for retinal degenerative diseases is more complicated than that of cell lines. Our results showed that fibrin is a suitable scaffold for hESC-RPE transplantation, which could be a new grafting material for tissue engineering RPE cells. ## 1. Introduction The retinal pigment epithelium (RPE) is a layer of hexagonal cells located between the choriocapillaris and the retina’s photoreceptors. The RPE cells are critical for the growth, maintenance, and survival of adjacent photoreceptors and visual function [ ]. Damage to the RPE may lead to photoreceptor degeneration and a range of retinal diseases, such as age-related macular degeneration (AMD) and retinitis pigmentosa (RP) [ ]. People over 50 have an increased risk of retinal degenerative diseases. Visual loss in dry AMD is related to RPE metabolic disorder and photoreceptor geographic atrophy. So far, there is no effective treatment for dry AMD. Choroidal neovascularization (CNV) in wet-AMD can penetrate the Bruch’s membrane (BM) and damage the retinal structure. It is often treated with angiogenesis inhibitors such as anti-VEGF (anti-vascular endothelial growth factor). However, regular intravitreal injections of anti-VEGF could increase the risk of infection, retinal detachment, and even lens damage. Long-term administration and susceptibility to drug resistance or recurrence also burden patients’ with medical expenses. Therefore, there is an urgent need to explore novel therapies for retinal degenerative diseases. 
Various new technology evaluations have been carried out, including cell transplantation [ , , ]. RPE cells provide nutrition for photoreceptors, form the blood-retinal barrier (BRB), and clear metabolites from the subretinal space by phagocytosing the outer segments of photoreceptors, thereby maintaining visual function. In contrast, retinal degenerative diseases are frequently associated with RPE dysfunction. Therefore, RPE cell transplantation may not only restore normal function by replacing damaged cells but could also regulate the subretinal microenvironment, alleviate subsequent photoreceptor degeneration and visual loss, and maintain retinal homeostasis. Over the past few decades, numerous studies of RPE cell suspension injection into the subretinal space have demonstrated the promise of cell therapy. However, while the short-term results can be effective, the long-term effects are often poor. This is probably related to several unresolved problems: (1) the suspension injection is prone to in situ leakage, forming a proliferative membrane; (2) the percentage of attachment and survival of the injected RPE suspension is low; maldistribution forms non-functional cell clusters, and the abundant apoptotic cells induce microglial migration and stimulate a local inflammatory response, ultimately reducing graft efficacy; (3) under aging or pathological conditions, RPE cells without adequate attachment may dedifferentiate into macrophages or fibroblasts through SMAD3, losing mature RPE function. Effective therapy must simultaneously improve the defective RPE cells and the extracellular microenvironment. Therefore, the selection of suitable biomaterials and the construction of the corresponding carrier scaffolds are crucial for retinal tissue regeneration. With the interdisciplinary application of clinical medicine and bioengineering, and increasingly close medical-industrial collaboration, many studies have considered providing a cell-adhesive matrix for the transplanted cells so that they can maintain a stable microenvironment after transplantation. In tissue-engineered RPE transplantation, cells are seeded and expanded in vitro on the surface of a single-layer degradable biomaterial scaffold, and the whole graft is then implanted into the subretinal space. The advantages of such RPE transplantation include organized polarized cell delivery and intact grafts with lower immunogenicity than dispersed cells. Research and optimization of supporting RPE graft materials hold great promise. During development and subsequent implantation, synthetic polymer matrices serve as fundamental carriers for RPE cells. Currently, two polymers, polypropylene and polyester, are under clinical testing. These materials can be modified to form micropores and increase cell adhesion. However, laminin or vitronectin coating might be required to promote cell adhesion. Months after implantation, the polymer slowly degrades and becomes lodged between the RPE and the choroid, causing fibrin deposition and local inflammation in animal studies. Additionally, previous in vivo studies suggest that the rigidity of the material could damage the choroid. Furthermore, the decreased choroidal permeability and low survival rate of the RPE cells are still concerns. In contrast, hydrogel-based materials display good biocompatibility, increased flexibility to fit the tissue, a fast degradation rate, and excellent mechanical properties.
With the close integration of tissue engineering, materials science, and clinical medicine, we aim to screen common hydrogels for human embryonic stem cell-derived RPE cell transplantation, with the goal of retinal repair and regeneration in future clinical applications. As naturally derived biomaterials, gelatin methacryloyl (GelMA) and hyaluronic acid methacryloyl (HAMA) hydrogels are commonly used in regenerative medicine and tissue engineering [ ]. Gelatin supports cell propagation, diffusion, migration, adhesion, spreading, and activation [ ]. Cartilage and vitreous tissue contain large quantities of hyaluronic acid (HA), which is pervasive in the extracellular matrix. Furthermore, HA exhibits good superelasticity, strength, and biocompatibility in biomaterials. Alginate, an anionic polysaccharide derived from brown algae, is widely used in tissue engineering and cell encapsulation. Purified alginic acid membranes [ , ] and hydrogels [ , , ] have been shown to enhance RPE cell growth and maintain specific functions. Moreover, a cross-linked fibrin network forms naturally upon fibrinogen activation. Fibrin hydrogel is currently used clinically as a sutureless closure option for surgical incisions and is commercially produced under cGMP conditions [ ]. In vitro, several types of stem cells can differentiate into RPE cells, including adult stem cells, embryonic stem cells (ESC), and induced pluripotent stem cells (iPSC) [ ]. Recently, human embryonic stem cell-derived RPE cells have been used clinically for the treatment of AMD and Stargardt disease [ , ]. Even though the long-term efficacy has not been determined, their safety in AMD treatment has been established. Previous RPE scaffold transplantation studies have mainly focused on cell lines, whereas stem cell-based transplantation has its own unique characteristics. Here, we evaluated four biodegradable hydrogels (GelMA, HAMA, alginate, and fibrin) seeded with ARPE-19 and hESC-RPE cells as supports for the adhesion, proliferation, and uniform distribution of differentiated RPE cells in vitro. In addition, the subretinal biocompatibility of the hydrogels was evaluated in vivo. ## 2. Materials and Methods ### 2.1. Culture of ARPE-19 ARPE-19 cells are an established but non-immortalized human RPE cell line obtained from the American Type Culture Collection (ATCC) (Manassas, VA, USA). The cells were grown in Dulbecco's modified Eagle's medium/nutrient mixture F-12 (DMEM/F12; Gibco, Carlsbad, CA, USA) containing 3 mM L-glutamine (Gibco, Carlsbad, CA, USA), 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin, and incubated at 37 °C in a humidified atmosphere of 5% CO2 and 95% air. ### 2.2. Culture of hESC-RPEs To generate hESC-RPEs, the Q-CTS-hESC-2 cell line was maintained in xeno-free Essential 8 medium (Gibco, Carlsbad, CA, USA), as previously described [ ]. In brief, the differentiation of hESCs to RPE proceeded through superconfluent culture, the appearance of pigmented foci, and excision of these foci. The medium for RPE cells spreading from excised pigment foci contained 77% KO-DMEM-CT (Invitrogen, Carlsbad, CA, USA), 20% xeno-free knockout serum replacement CTS (Invitrogen, Carlsbad, CA, USA), 1% CTS-glutamine-1 supplement (Invitrogen, Carlsbad, CA, USA), 1% MEM-NEAA (Invitrogen, Carlsbad, CA, USA), and 1% 2-mercaptoethanol (Procell, Wuhan, China). Human embryonic stem cell-derived RPE cells were cultured in cell culture dishes at 37 °C in an incubator with 5% CO2/95% air, with the medium changed every 2 days.
Proliferating cultures were digested by TrypLE Express (Gibco, Carlsbad, CA, USA) and then passaged 1:4. Additionally, hESC-RPE was evaluated according to morphological characteristics, presence of pigments in cells, bestrophin-1, CRALBP, MITF, and PAX6 immunostainings (BD Biosciences, San Jose, CA, USA). ### 2.3. Synthesis of GelMA GelMA was prepared as previously reported [ ]. Briefly, a beaker with a magnetic stirrer was filled with 100 mL of PBS, and 20 g of gelatin (Sigma, St. Louis, MO, USA) was added and completely dissolved in the solution at 60 °C. Methacrylic anhydride (2 mL, Sigma, St. Louis, MO, USA) was added slowly and stirred vigorously, and the emulsion was rotated at 60 °C for 3 h. Unreacted methacrylic anhydride was removed using dialysis membranes (12–14 kDa, Solarbio, Beijing, China). Subsequently, the reaction products were freeze-dried. ### 2.4. Synthesis of HAMA To prepare HAMA, 1 g of sodium hyaluronate (Bloomage Biotechnology, Hong Kong, China) was dissolved in 100 mL distilled water, then 1 mL methacrylic anhydride was added to reach a final concentration of 1% ( v / v ) and reacted for 24 h (4 °C). Then, 5M sodium hydroxide was added to maintain the reaction solution (pH 8–10). After the reaction, the solution was dialyzed at 4 °C for 3 days. The dialysis bag solution was frozen at −80 °C for 3 h and then freeze-dried for 2 days to obtain HAMA. ### 2.5. Synthesis of Modified Alginate Sodium alginate (Sigma, St. Louis, MO, USA) was transformed with the G RGDY peptide sequence (Holder, Wuhan, China) containing the RGD amino acid sequence using carbodiimide chemistry [ ]. To activate the sodium alginate polymer chain’s carboxylic acid, 1-ethyl-(dimethylaminopropyl) carbodiimide (EDC; Aladdin, Shanghai, China) was used. Then N-hydroxy sulfosuccinimide (Aladdin, Shanghai, China) and peptides were added. Hydroxylamine hydrochloride was added 20 h later to quench the process. Subsequently, the sodium alginate solution was dialyzed in decreasing salt solution for 3 days (MWCO 3500, Solarbio, Beijing, China) and then freeze-dried. ### 2.6. Compressive Measurements The compressive modulus of several hydrogels was determined using stress measurements. Following ISO 7743/ISO 527-2, cylinder-shaped plastic molds were created, and UV-cured hydrogel specimens were inserted into the molds. Hydrogels were examined with the MTS Exceed model E43 mechanical tester at a 1 mm/min rate. The compressive modulus was calculated based on the slope of the linear region (0–5% strain). ### 2.7. Formation of Different Thin-Layer Hydrogels There were seven groups of different concentrations of hydrogels as cell growth substrates: 2% ( w / v ) GelMA, 5% ( w / v ) GelMA, 1% ( w / v ) HAMA, 2% ( w / v ) HAMA, 1% ( w / v ) sodium alginate, 2% ( w / v ) GelMA + 1% ( w / v ) HAMA and 0.5% ( w / v ) fibrin. GelMA and HAMA films were prepared with DPBS and 0.2% ( w / v ) lithium phenyl-2,4,6-trimethylbenzoylphosphonate (LAP, Sigma, St. Louis, MO, USA) and then cross-linked with UV for 15 s (360 nm, Run LED, Shanghai, China). Sodium alginate hydrogels were prepared with 1% w / v RGD sodium alginate, 50 mM Ca-EDTA and 0.1% ( v / v ) acetic acid, and the fibrinogen gel was made by mixing fibrinogen (Haikon, Shanghai, USA) and thrombin (Sigma, St. Louis, MO, USA, 10 U/mL). ### 2.8. 
Cell Morphology, Proliferation, and Viability in Different Hydrogels ARPE-19 cells were plated on each hydrogel membrane and on tissue culture polystyrene (TCP; Costar, Tehama, CA, USA) as a control at a density of 1 × 10 cells/cm. Meanwhile, hESC-RPE cells were seeded onto each membrane at a density of 1 × 10 cells/cm, with vitronectin (VTN; Gibco, Carlsbad, CA, USA) used as a control. The culture medium was changed every two days. After seven days, the viability of ARPE-19 and hESC-RPE cells on the hydrogels was assessed by live/dead assay using the LIVE/DEAD kit (Invitrogen, Carlsbad, CA, USA) to determine the influence of the six groups of hydrogels with varying concentrations prepared as substrates for RPE growth. Images were captured with an inverted fluorescence microscope (Olympus, Tokyo, Japan). Furthermore, to assess the growth of RPE cells on the different hydrogels, the DNA content was measured on days 1, 3, 5, 7, 9, 12, and 14 using the Qubit double-stranded DNA HS assay kit (Yeasen, Shanghai, China). ### 2.9. hESC-RPEs Characterization by Immunostaining hESC-RPEs were seeded at a density of 1 × 10 cells/cm onto the selected hydrogels and VTN for 14 ± 3 days. Cells were washed twice with PBS and fixed with 4% paraformaldehyde (PFA) for 20 min. hESC-RPEs were then incubated with antibodies against RPE-65 (MA1-16578, Invitrogen, Carlsbad, CA, USA) and ZO-1 (33-9100, Invitrogen, Carlsbad, CA, USA) overnight at 4 °C, followed by incubation with Alexa Fluor 488-labeled goat anti-rabbit antibody (A-11008, Invitrogen, Carlsbad, CA, USA) in the dark at room temperature for 1 h. DAPI counterstaining was applied for 10 min after the 1 h incubation. A confocal laser scanning microscope (FV3000, Olympus, Tokyo, Japan) was used to examine the stained cells. ### 2.10. Secretion Ability of PEDF and VEGF by ELISA hESC-RPEs were seeded at a density of 1 × 10 cells/cm on the selected hydrogel, with VTN as the control, for 14 ± 3 days. Pigment epithelium-derived factor (PEDF) levels in supernatants were measured using a PEDF ELISA kit (BioVendor Research, Czech Republic). Vascular endothelial growth factor (VEGF) levels were measured with a VEGF ELISA kit according to the manufacturer's instructions (Abcam, Cambridge, UK) [ ]. ### 2.11. Subretinal Transplantation of Selected Hydrogel Animal experiments were conducted according to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and approved by the Animal Care and Use Committee of Dalian Medical University (No. 20190302-42). This study employed 20 mature C57BL/6J mice (males and females, 25 ± 5 g). All mice were obtained from the Dalian Medical University Animal Care Center, individually housed on a 12 h light/dark cycle, and fed regular mouse chow and water at room temperature (25 ± 5 °C). The subretinal space of the right and left eyes received the selected hydrogel or the PBS control, as previously described [ ]. Mice were anaesthetized by intraperitoneal injection of 3 mL/kg of 10% chloral hydrate. The pupils of both eyes were dilated using tropicamide eye drops. After incising the dorsolateral portion of the bulbar conjunctiva, a 29 G needle was inserted 3 mm posterior to the corneal limbus to pierce the sclera. The hydrogel was kept chilled during the operation. Only 1 μL of hydrogel was slowly injected into the subretinal space using a microsyringe. The needle was left in place for 30 s and then withdrawn very carefully to minimize hydrogel leakage.
The retina was examined through a glass coverslip contact lens. A spherical protrusion on the retina indicated a successful injection. Postoperatively, all animals were given drinking water containing 210 mg/L cyclosporine and tobramycin/dexamethasone eye ointment. ### 2.12. In Vivo Degradation of the Selected Hydrogel The degradation of the selected hydrogel was followed in vivo by measuring the thickness of the hydrogel over time. Animals received lethal intraperitoneal injections of sodium pentobarbital at 1, 7, and 14 days. Eyes were removed and placed in 4% PFA. The anterior segment was excised, and eyecups were post-fixed in 4% PFA for 2 h and in 30% sucrose/PBS overnight. The tissue was dehydrated, cleared, and embedded in paraffin. Paraffin-embedded tissue blocks were cut into 10 μm sections. Sections were cleared of paraffin in xylene, rehydrated through ethanol, and then washed in distilled water. Hematoxylin and eosin (HE)-stained sections were imaged under the microscope (IX71, Olympus, Tokyo, Japan). ### 2.13. Statistical Analysis All statistical analyses were conducted with SPSS 13.0 (SPSS, Inc., Chicago, IL, USA). All data are presented as mean ± standard deviation. One-way analysis of variance (ANOVA) was used to compare differences between groups. A p-value below 0.05 was considered statistically significant. ## 3. Results ### 3.1. Identification of hESC-Derived RPE Cells We used the human embryonic stem cell line (Q-CTS-hESC-2) as a cell source to induce RPE cell differentiation [ ] ( A). The hESC colonies became superconfluent after 7 days of spontaneous differentiation and formed pigmented foci after approximately 25 days ( B). After 35 ± 5 days, the pigmented foci were large enough to be excised and seeded into 6-well plates, enabling RPE cells to adhere and proliferate. hESC-RPE differentiation was timed from the day the pigmented foci were isolated [ ] ( C). After day 20, RPE cells exhibited pigment accumulation and a typical cobblestone morphology ( D). Immunostaining for bestrophin-1, CRALBP, MITF, and PAX6 in hESC-RPE cells demonstrated high purity and good differentiation ( G). ### 3.2. Evaluation of the Mechanical Properties of the Low-Concentration Hydrogels The hydrogel concentration should be kept as low as possible while still guaranteeing gelation. Rheology and compression tests (modulus E) were used to evaluate the mechanical properties of the seven groups of hydrogels. Among them, the strength of 2% GelMA and 1% HAMA was too low for specimens to be mounted in the testing machine. The compression tests showed similar stress–strain curves for the different hydrogels, with an initial linear increase in stress at small strains and an exponential-like increase in stress at large deformations ( A). The hydrogels of the five remaining groups presented significantly different stiffnesses, with compressive moduli of 1.46 ± 0.31 kPa for 5% (w/v) GelMA gels, 0.96 ± 0.13 kPa for 2% (w/v) HAMA gels, 21.46 ± 2.4 kPa for 1% (w/v) Alg-RGD gels, 2.02 ± 0.04 kPa for 2% (w/v) GelMA + 1% (w/v) HAMA gels, and 1.2 ± 0.06 kPa for 0.5% (w/v) fibrin gels ( B). Since 1% (w/v) Alg-RGD hydrogels had a significantly higher percentage yield strain than the other hydrogels, these findings allowed us to prepare soft hydrogels at the selected low concentrations of 5% (w/v) GelMA, 2% (w/v) HAMA, and 0.5% (w/v) fibrin. ### 3.3.
The Normal Cell Morphology, Proliferation, and Viability of hESC-RPE on Fibrin We next examined the cell morphology, proliferation, and viability of ARPE-19 and hESC-RPE cells on different hydrogels in vitro. After the cells were attached to the surface, their distribution was observed using light microscopy. Cells spreading on fibrin showed a uniform distribution similar to the controls. In contrast, other hydrogels formed cell aggregates without adhesion, which were easily washed out after two days ( A,B). Only fibrin (0.5%) resulted in a similar cell adhesion, morphology, and proliferation of hESC-RPE and ARPE-19 cells compared to the commercial VTN and TCP controls. In contrast, hESC-RPEs on other hydrogels formed additional vesicles, which were thought to be characteristics of apoptosis ( B, red bars). The ARPE-19 and hESC-RPE cells cultured on fibrin maintained tight junctions, which were pigmented and formed a cobblestone monolayer of cells. ( A,B; 14 days cell culture). As seen above, diluting the hydrogels caused the viscosity to drop dramatically ( ) and could explain the inability of the majority of low-concentration solutions to facilitate uniform cell distribution and adhesion. Live/dead cell labeling also demonstrated that ARPE-19 and hESC-RPE cells survived on fibrin hydrogel ( C). Moreover, the DNA content measurements showed a dramatic increase in ARPE-19 proliferation on fibrin, especially during the first 5 days and then stabilized after day 7. The DNA content on day 5 was eight times higher than on day 1 ( D). Meanwhile, the DNA content of hESC-RPE cells continued to increase slowly for nearly 2 weeks; the DNA content on day 14 was 6 times higher than that of day 1 ( E). These results suggest that fibrin hydrogels provide a physiologically relevant microenvironment to support hESC-RPE survival and proliferation. ### 3.4. Functional Protein Secretion of hESC-RPE on Fibrin The biological functions of hESC-RPE on fibrin hydrogels are essential for practical applications and studying cell behavior. ZO-1 is a tight junction protein that plays a critical role in the formation of epithelial cell polarity and is an important marker of epithelial cell differentiation. At the same time, RPE-65 is positively related to the maturity of RPE cells. Immunofluorescence staining showed a positive expression of ZO-1 in RPE cells cultured with fibrin (the intercellular junction was green), with a hexagonal pattern in most cells. RPE-65 expression was also positive ( A). We then evaluated the secretion capability of PEDF using ELISA. The level of PEDF secretion in hESC-RPE cells on fibrin (6764 ± 2448 ng/cm , at 48 h) was significantly higher than on VTN (1936 ± 1861 ng/cm , at 48 h). hESC-RPE cells on fibrin and VTN did not differ in VEGF secretion ( C). These data indicate that hESC-RPEs on fibrin partially increase the secretory function of hESC-RPE cells to produce the PEDF growth factor, enhancing RPE survival and facilitating retinal cell differentiation and maturation. ### 3.5. In Vivo Immunogenicity and Degradation of Fibrin in the Subretinal Space A schematic diagram of the subretinal injection via the external route is shown in A. We used a microsyringe at 2 mm behind the corneal limbus to pierce the bulbar wall slightly parallel to the sclera obliquely and injected about 1 μL solution into the right eye. If the local retina of the fundus showed a circular bulge under the operating microscope, the injection was considered successful ( B). 
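For readers who wish to reproduce the kind of quantitative comparisons reported in Sections 3.3 and 3.4 above, the short Python sketch below illustrates how the fold-change in DNA content and the fibrin-versus-VTN difference in PEDF secretion could be analyzed. All numerical values, replicate counts, and variable names in the sketch are hypothetical placeholders rather than data from this study; the study itself performed its statistics with one-way ANOVA in SPSS (Section 2.13).

```python
import numpy as np
from scipy import stats

# Hypothetical DNA-content readings for hESC-RPE on fibrin at the measured time points
days = np.array([1, 3, 5, 7, 9, 12, 14])
dna_ng = np.array([3.1, 6.0, 9.4, 12.8, 15.0, 17.2, 18.9])  # illustrative values only
fold_change = dna_ng / dna_ng[0]
print(f"Fold-change in DNA content, day {days[-1]} vs day {days[0]}: {fold_change[-1]:.1f}x")

# Hypothetical PEDF ELISA replicates (ng/cm^2 at 48 h), n = 3 per substrate
pedf_fibrin = np.array([6500.0, 7200.0, 6600.0])
pedf_vtn = np.array([1800.0, 2100.0, 1900.0])

# One-way ANOVA across the two substrates, mirroring the analysis described in Section 2.13
f_stat, p_value = stats.f_oneway(pedf_fibrin, pedf_vtn)
print(f"PEDF on fibrin vs VTN: F = {f_stat:.2f}, p = {p_value:.4f}")
```

With only two groups, the one-way ANOVA F-test is equivalent to a two-sided t-test, so either approach yields the same p-value for the fibrin-versus-VTN comparison.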
HE staining confirmed the presence of the hydrogel 1 day after implantation. Subsequently, HE sections were examined to verify fibrin gel degradation and retinal integrity in the implant area ( D). The retinotomy site and the optic nerve head were used to locate the implant. At the location of the retinotomy, a scar thicker than the surrounding retina was observed. One week after implantation, eosinophilic hydrogel was present distal to the retinotomy site. The neurosensory retina overlying the gel was in good condition, with proper anatomical structure and photoreceptor outer segments in contact with the fibrin hydrogel. Furthermore, we did not find any rosette formation or immune cell infiltration. The fibrin appeared to retain a relatively smooth outer surface. The fibrin was degraded within 14 days ( E); after one week, degradation was mainly confined to thinning of the gel's outer edge. The seven groups of hydrogels with different concentrations were prepared as substrates for ARPE-19 as well as hESC-RPE cell growth. Only fibrin hydrogels allowed cells to form a uniform distribution at low concentrations, while the other types of hydrogels formed cell aggregates without adhesion that were easily washed off after two days. ( A ) Optical images of ARPE-19 on different hydrogels. ( B ) Optical images of hESC-RPEs on different hydrogels; damaged cells with characteristic vesicles are indicated by the red bars. ( C ) Representative confocal microscopy images of RPE on fibrin, stained with calcein and ethidium homodimer for the live/dead assay. ( D ) The DNA content of ARPE-19 on fibrin as well as the TCP control. ( E ) The DNA content of hESC-RPEs on fibrin as well as the VTN control. hESC-RPE cells matured over 2 weeks and secreted functional proteins on fibrin. ( A ) hESC-RPE cells were cultured on fibrin; representative phase-contrast and immunostained images are shown for ZO-1 and RPE-65 expression. ( B ) ELISA of PEDF secreted by hESC-RPE after differentiation ( n = 3). ( C ) ELISA of VEGF secreted by hESC-RPE after differentiation ( n = 3): ****: p < 0.0001. Immunogenicity and degradation of fibrin in the subretinal space. ( A ) Schematic of the subretinal space injection. ( B ) Macroscopic view of the injected fundus of a mouse eye. ( C ) Close-up image of HE-stained histological sections at the injection site, revealing a healthy retina. ( D ) Micrographs of HE-stained tissue sections of animals at 1, 7 and 14 days. The fibrin implants were eosinophilic and most of the gel was retained for 1 week. After two weeks, fibrin gels were no longer evident. Both time points reveal a healthy neuroretina at the implantation site (arrows indicate fibrin). ( E ) Graph showing the change in fibrin thickness over time in animals; thickness indicates the remaining fibrin implant. GCL: ganglion cell layer. ONL: outer nuclear layer. INL: inner nuclear layer. RPE: retinal pigment epithelium. Ch: choroid. ## 4. Discussion This work aimed to identify a scaffolding or covering material suitable for the implantation of hESC-RPE cells during subretinal transplantation. We screened hydrogels that would aid adhesion and differentiation, support survival through surgical implantation of hESC-RPE cells, and disintegrate rapidly within several days. Our results showed that fibrin is suitable for this application. Damage to the BM and RPE cell layers, such as stromal damage and drusen deposition, is an important cause of retinal diseases, including early AMD [ ].
It is difficult for donor RPE cells to attach to the recipient's diseased BM during RPE transplantation. Even if they can reattach to the recipient's lesioned BM, the differentiated function of the transplanted cells is significantly limited [ ]. Therefore, the quality and quantity of the BM that provides a scaffold for cell attachment could determine the fate of RPE grafts [ ]. Injecting retinal progenitor cell (RPC) suspensions leads to the survival of only a few cells [ ]. Biodegradable scaffolds enhance transplanted cell survival. An optimal BM replacement is one of the keys to the success of RPE transplantation; it should be biocompatible, biodegradable, and bioabsorbable and should guide proper cell adherence, differentiation, and proliferation [ ]. Traditionally, solid scaffolds composed of synthetic polymers such as poly(L-lactic acid)/poly(lactic-co-glycolic acid) (PLLA/PLGA), poly(ε-caprolactone) (PCL), and poly(glycerol sebacate) (PGS) have been used for this purpose [ ]. These scaffolds are rigid, and their implantation in the subretinal space is invasive and may result in retinal detachment [ , ]. Recent research has focused on biodegradable hydrogels as deliverable scaffold systems for stem cells [ ]. Hydrogels have the advantage of high water content; they can surround cells with a structure similar to the extracellular matrix, are permeable to nutrients, and are less invasive than solid scaffolds [ , , ]. This study investigated four biodegradable hydrogels as cell growth carriers: GelMA, HAMA, sodium alginate, and fibrin. ARPE-19 and hESC-RPE cells were seeded and evenly distributed on these hydrogels, which were intended to mimic the adhesion-, proliferation-, and growth-supporting functions of the BM. Although all materials were derived from natural sources, our hydrogel scaffolds were relatively thick and did not contain the five layers of the natural BM. Therefore, our chosen hydrogel might sustain RPE cells much as the natural BM does, but without its biophysical cues. GelMA is commonly used for cell culture and tissue engineering scaffolds [ , ]. Photocrosslinking improves GelMA stability at physiological temperatures and permits fine-tuning of its mechanical characteristics [ , ]. Hyaluronic acid supports only limited cell adhesion and has therefore been modified with collagen, fibrin, RGD peptides, and gelatin. One study combined GelMA and HAMA to seed human umbilical vein endothelial cells (HUVECs); this hydrogel combination proved promising for cardiovascular tissue engineering [ ]. RGD-alginate hydrogels have been used to transplant rat fetal retinal tissue [ ] and to stimulate neural differentiation in mouse ESCs [ ]. Alginate preserves the viability of primary and adult hRPE cells [ , ]. Encapsulation in 1% alginate hydrogel enhances the pigmented RPE phenotype of human and porcine primary adult RPE cells and the expression of RPE markers such as RPE65 and tyrosinase. Meanwhile, 3D culture of hPSCs in RGD-alginate hydrogel enhances the formation of retinal tissue [ ]. In addition, sodium alginate and hyaluronic acid are used in ophthalmic products, including intraocular products [ ], and are well tolerated by the eye. However, protein-based materials should be used at low concentrations to achieve a controlled onset of degradation and degradation rates within days to months after subretinal injection. There is evidence of ultrastructural and cellular damage in the inner retinal layers, as well as pre-retinal hemorrhage, with collagen-based support materials [ , ].
Unlike the other hydrogels, fibrin can be broken down by fibrinolytic enzymes that are already in routine clinical use in the eye. Ocriplasmin and alteplase are available in various doses and have only mild retinal side effects [ , ]. Moreover, fibrin hydrogels, which are degraded by peptidases, have been reported to be safe and degradable scaffolds for the subretinal implantation of iPSC-RPE [ ]. There have been several successful clinical trials of hESC-RPE cell suspension transplantation, which could potentially treat AMD. Although several promising hydrogels have been successfully used for the multiple biomedical applications described above, few studies have investigated cellular behavior and tissue responses to hydrogels as growth vehicles for hESC-derived RPE. However, the introduction of RPE cells in implants generates xenografts and thus triggers an immune response. Therefore, low concentrations of degradable scaffolds are urgently needed for rapid hydrogel degradation and successful integration. In our study, we compared the growth of hESC-RPE and ARPE-19 cells on seven formulations of four biodegradable hydrogels at different concentrations. We found no significant difference in the adhesion rates of the RPE cells between the fibrin and the conventional VTN groups after 4 h of inoculation at the same density. In contrast, the adhesion rate on the other materials was significantly lower. Moreover, when the medium was changed two days after inoculation, the hESC-RPE cells on most scaffolds other than fibrin were washed away because they had not adhered to the scaffold. Contact of hESC-RPE cells with HAMA also produced a change in cell morphology, which may be related to apoptosis. These findings indicated that fibrin was more suitable for the adherent growth of hESC-RPE and ARPE-19 cells. In the cell proliferation experiment, we found that the number of adherent cells in the fibrin group was lower than that in the control VTN group on the first day after inoculation; however, proliferation in the fibrin group was significantly faster than that in the control group by the seventh day. On day 14, cell proliferation peaked in both groups, with no significant difference between the fibrin and control groups. Notably, the number of hESC-RPE cells in the fibrin group was still lower than that in the control group on day 1, which may reflect differences in the cells' initial ability to adhere and then proliferate. Some investigators have argued that the proliferative ability of hESC-RPE cells on different supports is related to the structure of the matrix itself and to the interaction between cells and the matrix, including the morphology and chemical structure of the matrix membrane. The mechanical differences between the hydrogels could explain the differences in the adhesion and proliferation of RPE cells; the mechanical properties of fibrin were similar to those of the BM. However, increased cell adherence does not guarantee proliferation and differentiation ability. Therefore, further immunofluorescence and ELISA experiments were conducted for verification. ZO-1 is a tight junction protein that plays an essential role in the formation of epithelial cell polarity and is an important marker of epithelial cell differentiation, while RPE-65 is positively related to the maturity of RPE cells. As epithelial cells, differentiated RPE cells are normally polarized, and their tight junctions constitute the outer barrier of the retina. The expression of ZO-1 in RPE cells and the arrangement of the RPE monolayer in vitro were consistent with the characteristics of normally differentiated RPE cells in vivo.
We found that on the 14th day of culture, hESC-RPE cells in both the fibrin and control groups formed RPE monolayers; ZO-1 staining was complete and orderly, in most cases outlining individual hESC-RPE cells. These observations suggested that hESC-RPE cells on fibrin maintain typical epithelial differentiation characteristics, with polarity and tight junction function. Moreover, strong expression of RPE-65 was closely related to the proper differentiation and high purity of hESC-RPE cells on fibrin. Additionally, RPE cells secrete a variety of cytokines, such as PEDF and VEGF. hESC-RPE cells cultured on fibrin secreted more PEDF than those on the control VTN, which enhances RPE survival and facilitates retinal cell differentiation and maturation [ ]. Studies show that PEDF plays a neuroprotective role for retinal cells in the pathogenesis of neurodegenerative diseases. We implanted fibrin or PBS, without cells, into the subretinal space of healthy adult C57BL/6J mice. Histopathological sections showed that, although there was a certain degree of foreign-body reaction (monocyte infiltration), fibrin was generally accepted by the subretinal space of healthy adult C57BL/6J mice without apparent rejection. Preliminary results therefore indicate that fibrin hydrogel has acceptable biocompatibility in the subretinal space. In this study, we selected typical biodegradable natural hydrogels with good biocompatibility for hESC-RPE implantation. Given the existence of scaffold-free RPE microtissue delivery [ ] and of microcarriers used as vehicles for subretinal cell transplantation [ , ], the use of biogel materials need not be limited to monolayer cell transplantation scaffolds. We had previously conducted several studies related to microgel-based cell transplantation and attempted to make it feasible, for the first time, to use microgels as injectable fillers for retinal cell transplantation in the treatment of retinal diseases. Therefore, the premise of our material optimization was to ensure that the material is conducive to cell adhesion, differentiation, and growth as a coating material, regardless of the conditions (such as graft hardness and thickness) that would qualify it as a monolayer graft scaffold. Considering that introducing implants carrying RPE cells amounts to xenotransplantation, which elicits an immune response, a low hydrogel concentration is urgently needed for rapid degradation and successful integration. We further used conventional compression tests to evaluate the mechanical properties of the hydrogels. After selecting fibrin as the growth scaffold for hESC-RPE transplantation, and considering the different routes and purposes of future transplantation, we did not combine fibrin with cells or construct a monolayer cell–scaffold complex for transplantation; we carried out transplantation of pure fibrin hydrogel only, to explore its degradation and evaluate its biocompatibility in preparation for further animal experiments. Nevertheless, for understanding how hydrogels can be used in intraocular transplantation to optimize the biological function of transplanted cells, and in anticipation of further design and refinement of the material properties so that cells can be combined with the scaffold to jointly promote disease treatment, the simple fibrin hydrogel is still limited. This is a crucial issue that needs to be resolved through further animal experiments and clinical studies.
Future studies should also investigate the scaffold as a dual-use platform, combining regenerative cells with the delivery of drugs and biologics to ocular tissue. ## 5. Conclusions In this study, we investigated four typical biodegradable and biocompatible hydrogels as supports for hESC-RPE subretinal transplants as a potential treatment for AMD. hESC-RPE cells retained their cobblestone-like morphology, specific protein expression, polarized morphology, and maturation-related PEDF/VEGF secretion capability only on the fibrin hydrogels. After degradation of the fibrin hydrogel within 2 weeks, the retina appeared to reattach to the underlying RPE. To our knowledge, this is the first report on screening typical biodegradable hydrogels for the adhesion and proliferation of hESC-derived RPE cells. Our data suggest that biodegradable fibrin hydrogels have suitable mechanical properties for straightforward transscleral subretinal implantation and can be considered biocompatible scaffolds for functional hESC-RPE subretinal transplantation.
Catatonia is often a presentation of extreme anxiety and depression. Missing the diagnosis of catatonia would lead to improper treatment, which could be life-threatening. A thorough physical and psychiatric assessment is required to detect catatonic symptoms, especially mutism and negativism, in patients with depression. We discuss the case of a 58-year-old female who was incorrectly diagnosed with and treated for major depressive disorder (MDD). The patient was then correctly diagnosed with MDD with catatonic features and improved once a benzodiazepine (BZD) was started. The preferred BZD was lorazepam, with a success rate of complete remission of up to 80% in adults. Treatment was started with lorazepam 1–2 mg, and improvement was seen within the first ten minutes. We believe that the addition of a BZD to a psychotropic regimen can improve both catatonia and depression, and that it should be continued for 3–6 months to prevent relapses and recurrences. ## 1. Introduction The proportion of catatonic patients among acutely ill psychiatric inpatients varies from 7.6% to 38% [ ]. A higher proportion of catatonic patients have comorbid bipolar disorder (43%) [ ] and schizophrenia (30%) [ ]. Currently, there are three significant subtypes of catatonia, namely, retarded, excited, and malignant catatonia [ ]. The presentation of catatonia typically falls between the retarded and excited subtypes and rarely presents as a hallmark picture of either. Catatonia is often a presentation of extreme anxiety [ ]. Missing the diagnosis of catatonia would lead to improper treatment, which could be life-threatening as it may lead to arrhythmia or hyperthermia [ ]. Furthermore, treating catatonia with an antipsychotic increases the patient's risk of developing neuroleptic malignant syndrome [ ]. With an improper diagnosis, patients would also not be placed on appropriate preventive care against complications such as deep vein thrombosis, pulmonary embolism, contractures, and pressure ulcers [ ]. Here, we present a case of depression with catatonic symptoms diagnosed after meticulous observation and psychiatric evaluation. The change in the diagnosis led to the addition of a benzodiazepine (BZD) to inpatient management, which ultimately led to marked improvement in the patient's outcome. ## 2. Case Presentation This case report conformed to the Declaration of Helsinki. The case discussed herein is reported only after obtaining verbal consent from the patient and approval from the State Hospital. Ms. L, a 58-year-old Caucasian female, was transferred to us from another inpatient psychiatric facility for further management. She had a long-standing history of depression and anxiety disorders. As per their notes, she met the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V) diagnostic criteria for major depressive disorder (MDD), recurrent episode. They started the patient on mirtazapine 15 mg twice daily, fluoxetine 20 mg in the morning for depression, and buspirone 15 mg twice daily for anxiety. During our intake, the patient complained of anxiety resulting in a decline in her day-to-day functioning. The patient stated she had had limited fluid intake for the past three weeks. She described her mood as "I do not have moods; I lost the ability to cry or do anything." The patient's insight into her underlying illness and her judgment were limited. She was continued on her medication regimen and started on a soft diet and meal-replacement shakes due to her decreased appetite.
The next day, the patient was interviewed in her room as she refused to get out of her bed. She had a minimal verbal response and also showed negativism by refusing to participate with the treatment team. The patient diagnosis was changed to MDD with catatonic features. She was immediately started on lorazepam 1 mg twice daily, while continuing her other psychiatric medications. Within a few hours, the patient was seen outside her room and communicating with staff. The patient also stated that her appetite was returning. The patient continued to improve as her inpatient management continued, eventually denying anxiety and depression. She remained compliant with her psychotropic regimen. On day 6, the patient was taken off from suicide and self-harm precautions as she denied suicidal ideation. The patient was discharged with the antidepressant regimen and was continued on lorazepam for catatonic symptoms. ## 3. Discussion The findings of this case show the importance of a thorough physical and psychiatric assessment for detecting catatonic symptoms in patients with MDD. Our patient’s catatonic symptoms were either missed or not recognized by the initial treatment team. During her admission at our facility, we identified the patient’s reluctance to get off the bed, partial mutism, and negativism to fulfill criterion A of the DSM-V’s criteria for MDD subtype catatonia. MDD caused our patient’s catatonia with severe anxiety, and no associated delirium, which fulfilled criteria B, C, and D. Also, the patient was not functional due to the severity of the symptoms (meeting criteria E). There were two key differentials for this patient—serotonin syndrome and elective mutism. The patient did not meet the criteria for Hunter serotonin toxicity decision rules, thus ruling out serotonin syndrome. She also lacked a history of personality disorder that was often accompanied in patients of elective mutism. A prospective study by Worku and Fekadu (2015) stated that catatonia involved the reduction or inhibition of the GABA receptors that connected the basal ganglia with the cortex and thalamus in the right orbitofrontal lobe. Of note, among catatonic patients, only the right orbitofrontal activity was reduced; the left orbitofrontal lobe activity remained unchanged [ ]. Patients with anxiety also showed a reduction in activity in the orbitofrontal area [ ]. We believe the success of BZD in our patients could be due to the alleviation of anxiety. It could be hypothesized that lorazepam decreased our patient’s anxiety with improvement in catatonic symptoms, which indirectly led the patient to function at full potential. For patients that are being treated for MDD subtype catatonia, the preferred BZD is lorazepam, with a success rate of complete remission of up to 80% in adults and 65% in children [ ]. As per the guidelines, treatment is started with 1–2 mg of lorazepam every four to twelve hours [ ]. Improvement in patients can sometimes be seen within the first ten minutes after initiating lorazepam [ ]. Overall remission with BZD is achieved within four to ten days, although the patient is managed for a total 3–6 months with a slow tapering. And, for the patients who do not respond to BZD, Electroconvulsive therapy (ECT) or a combination of BZD and ECT was recommended [ ]. It is essential to obtain a detailed psychiatric history of patients with severe MDD to observe for certain catatonic features (including mutism and negativism), which can be often missed. 
Treatment for MDD subtype catatonia differs from that for MDD alone. For MDD subtype catatonia, adding a BZD has been shown to improve both catatonia and depression while also preventing relapses and recurrences. ## 4. Conclusions Timely intervention in catatonia is vital to prevent long-term morbidity. In patients with MDD subtype catatonia, the symptoms of catatonia may be confused with, or overlooked as, symptoms of MDD. Clinicians need to maintain a high level of suspicion. BZDs have a proven record of success in treating catatonia.
The aim of our study was to evaluate the role of morphogenetic variability in the functional outcome of patients with ischemic stroke. The prospective study included 140 patients with acute ischemic stroke, all of whom were tested at admission, at discharge, one month post-discharge, and three months post-discharge. Age was analyzed as well. The Functional Independence Measure (FIM) test and the Barthel Index (BI) were used for the evaluation of functional outcomes in the eligible participants. We analyzed the presence of 19 homozygous recessive characteristics (HRC) in the studied individuals. There was a significant change in FIM values at discharge ( p = 0.033) and in BI values upon admission ( p = 0.012) with regard to the presence of different HRCs. Age correlated significantly and negatively with the FIM score and BI values at discharge for the group with 5 HRCs ( p < 0.05), while for BI only, a significant negative correlation was noticed for the group with 5 HRCs at three months post-discharge ( p < 0.05), and for the group with 3 HRCs at one month post-discharge ( p < 0.05) and three months post-discharge ( p < 0.05). Morphogenetic variability might be one of potentially numerous factors that could have an impact on the response to defined treatment protocols for neurologically impaired individuals who have suffered an ischemic stroke. ## 1. Introduction Most stroke patients demonstrate functional improvements over time. Such improvements might be connected with compensatory processes that could, to a certain degree, be explained by brain plasticity [ ]. Previous studies have evaluated the role of candidate genes in neurological deficit, functional ability, and social participation in stroke patients [ , , ]. Further, it has been suggested that genetic variations might be used in the prediction of recovery from neural injury [ ]. It should be noted that the rehabilitation outcome of patients who suffer from neurological conditions rests on a complex interaction among the patient's baseline status, modifiable and non-modifiable individual factors, and the treatment program [ ]. However, the variability of functional outcomes after a stroke is still not fully elucidated. This might be explained by the fact that the affected brain region, the degree of neurological injury, and the comorbidities present all influence an individual's functional recovery potential. Therefore, functional outcome prediction is complex and difficult. Thus, factors specific to pathophysiological stroke subtypes, as well as biological and genetic factors, should be evaluated [ ]. Furthermore, a better understanding of biomarkers of motor recovery after a stroke would be of great value for creating personalized rehabilitation treatment modes and for proper selection of participants for trials dealing with rehabilitation interventions [ ]. The aim of our study was to evaluate the role of morphogenetic variability in the functional outcome of patients with ischemic stroke. ## 2. Methods ### 2.1. Study Group The prospective study included 140 patients who were diagnosed with and treated for acute ischemic stroke by a board-certified neurologist and referred to rehabilitation treatment conducted by a board-certified physiatrist at a specialty hospital for cerebrovascular diseases, "Sveti Sava", in Belgrade. The participants were tested on four different occasions (at admission—Group 1; upon discharge—Group 2; one month post-discharge—Group 3; and three months post-discharge—Group 4).
The age was analyzed, as well (participants were between 65–80 years old). Enrolled patients were included in a standard physiotherapy program after the stabilization of overall health parameters. The physiotherapy included kinesiotherapy procedures five times per week. All eligible participants and/or legal guardians were informed about the study protocol and consent was obtained. The study was approved by Institutional Review Board and followed the principles of good clinical practice. ### 2.2. Functional Status Estimation The Functional Independence Measure (FIM) test was performed for the evaluation of functional (motor and cognitive) status of the eligible participants. The FIM is composed of 18 items in total, where 13 refer to the motor subscale and five are cognitive subscales [ ]. The motor subscale gives the information regarding: self-care, sphincter control, transfers, and locomotion, while the cognitive subscale analyzes communication and social cognition. There is a seven-grade scoring system for every item, where the total sum can range between 18–126 [ ]. The Barthel Index (BI) was used to measure the functional outcome in the tested patients. The scale is composed of 10 tasks and was scored from 0 to 100, where lower scores represent greater nursing dependency [ ]. ### 2.3. Tested Homozygous Recessive Characteristics We implemented the homozygous recessive characteristics (HRC) test [ , , ] to estimate the degree of recessive homozygosity in the eligible participants. The HRC test was developed for the evaluation of the proportion of HRCs that are clearly expressed and considered as qualitative traits, thus being markers of chromosomal homozygosities in every individual [ , , , , , , ]. The studied HRCs are the markers of genes located on different chromosomes [ ]. We analyzed the presence of 19 HRCs in the studied individuals, only marking as the present trait characteristics that appeared extreme. In the region of the human head, we tested 13 HRCs: attached ear lobe (OMIM number 128900), continuous frontal hair line (OMIM number 194000), blue eyes (gene location 15q12, 15q13, OMIM number 227220; 5p13 OMIM number 227240; 14q32.1, OMIM number 210750; 9q23 OMIM number 612271), straight hair (1q21.3, OMIM number 139450), soft hair and blond hair (gene location 15q12, 15q13, OMIM number 227220; 14q32.1, OMIM number 210750; 12q21.3 OMIM number 611664; 11q13.3, OMIM number 612267), double hair whorl, opposite hair whorl orientation (OMIM number 139400), an inability to roll, fold, and curve the tongue (OMIM number 189300), ear without Darwinian notch, ability to produce a guttural “r”, and color blindness (gene location Xq28, OMIM number 303800). In human arms, we tested six HRCs: proximal thumb hyperextensibility, index finger longer than the ring finger (OMIM number 136100), left-handedness (gene location 2p12-q22, OMIM number 139900), right thumb over left thumb (hand clasping) (OMIM number 139800), top joint of the thumb >45°, and three tendons in the wrist (OMIM) [ ]. ### 2.4. Statistical Analysis The results were presented as mean values (MV) with standard deviation (SD). A one-way ANOVA test was performed to evaluate the presence of statistical significance between continuous variables. The Mann–Whitney U test was done to analyze the statistical significance in functional scores changes between two different times of observation. 
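As an illustration of the statistical workflow just described, the Python sketch below runs a one-way ANOVA on hypothetical FIM totals grouped by number of HRCs, derives the between-groups share of the total sum of squares expressed as a percentage (which corresponds to the η effect-size measure defined below in this section), and applies the Mann–Whitney U test to compare two times of observation. All scores, group sizes, and group labels are invented for illustration only and are not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical FIM totals (possible range 18-126) grouped by number of HRCs at one time point
fim_by_hrc = {
    3: np.array([98, 104, 91, 110]),
    4: np.array([95, 88, 101, 93]),
    5: np.array([84, 79, 90, 86]),
    6: np.array([80, 75, 83, 78]),
}

# One-way ANOVA across HRC groups
groups = list(fim_by_hrc.values())
f_stat, p_anova = stats.f_oneway(*groups)

# Between-groups sum of squares as a percentage of the total sum of squares
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
eta_percent = ss_between / ss_total * 100

# Mann-Whitney U test between two hypothetical times of observation (admission vs discharge)
fim_admission = np.array([62, 70, 58, 66, 73])
fim_discharge = np.array([84, 92, 75, 88, 95])
u_stat, p_mw = stats.mannwhitneyu(fim_admission, fim_discharge, alternative="two-sided")

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}; effect size = {eta_percent:.1f}%")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mw:.3f}")
```

The same pattern extends directly to the BI scores and to each of the four times of observation analyzed in the study.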
To estimate the degree of correlation between age and scores of functional tests that were performed in defined times of observation, we used Spearman’s correlation test. For evaluation and quantification of variability that can be explained between different functional scores in defined time of observation among individuals with different amount of HRCs, we introduced η = Sum of squares (Between groups)/Sum of squares (Total) × 100, where sum of squares was gained from the one-way ANOVA test, and the results were presented as percentage (%) [ ]. Statistical significance was set at p < 0.05. ## 3. Results There was no significant change in FIM values for Group 1 ( p = 0.077), Group 3 ( p = 0.141) and Group 4 ( p = 0.075) with regards to the presence of different amount of HRCs, while a significant change was noticed for Group 2 ( p = 0.033) ( and ). However, as the number of HRCs increased, there was a decrease in FIM score in Group 2 and Group 3; however, in Group 1 and Group 4, patients with three and four HRCs had higher FIM scores versus those with 5–8 HRCs, pointing to the unchanged trend that an increased level of genetic homozygosity leads to a less favorable FIM score. For all groups (η = 7.08%; η = 8.55%; η = 5.94%; and η = 7.12%), the low effects size of a different amount of HRCs was noticed to be associated with FIM scores ( ). We have shown that there was non-significant change in the BI values for Group 2 ( p = 0.074), Group 3 ( p = 0.117), and Group 4 ( p = 0.159) regarding the presence of different amounts of HRCs, while significant change was noticed for Group 1 ( p = 0.012) ( and ). However, as the number of HRCs increased, there was a decrease in BI values in Group 2; however, in Groups 1, 3, and 4, patients with three and four HRCs had higher BI scores versus those with 5–8 HRCs, pointing to the unchanged trend that an increased level of genetic homozygosity leads to a less favorable BI score. For all groups (η = 10.27%; η = 7.13%; η = 6.30%; and η = 5.70%), the low effects size of a different amount of HRCs was noticed to be associated with BI scores, with the highest effects size for Group 1 ( ). In , we presented a statistical interpretation of changes in functional scores for both FIM and BI between defined times of observation. There was significant change in the functional scores between admission and discharge, between admission and one month post-discharge, and between one and three months post-discharge, for both FIM and BI ( ). A significant difference in the functional scores for both FIM and BI between discharge and one month post-discharge, between discharge and three months post-discharge, and between one and three months post-discharge was noticed for individuals with a number of HRCs between 4–7 ( ). For the BI, a significant change in scores was noticed, as well, between discharge and three months post-discharge for groups of individuals with the number of HRCs of 3 and 8 ( ). Age significantly negatively correlated for FIM score values at discharge for the group of tested individuals with 5 HRCs ( r = −0.418; p < 0.05), while for BI, negative significant correlation was noticed for tested individuals with 5 HRCs at discharge ( r = −0.371; p < 0.05) and three months post-discharge ( r = −0.362; p < 0.05), and for tested individuals with 3 HRCs one month post-discharge ( r = −0.756; p < 0.05) and three months post-discharge ( r = −0.756; p < 0.05) ( ). ## 4. 
Discussion Although numerous genetic loci have been found to be associated with stroke risk, the role of genes in the functional outcome of stroke patients is less clear [ ]. This is of particular importance since 28% of stroke survivors are still dependent on others one year after the event [ ]. Stroke recovery biomarkers were proposed in the study by Boyd et al. for the purpose of better understanding and predicting long-term outcomes after a stroke [ ]. Therefore, we proposed the evaluation of morphogenetic variability in relation to functional outcome in patients with ischemic stroke as an additional tool that would bring a better understanding of the possible underlying mechanisms responsible for better outcomes in these patients. Our findings demonstrated that, as the number of tested HRCs increased in the studied individuals, there was a decrease in FIM and BI scores on all tested occasions. However, a significant change in these scores was noticed only for FIM at discharge and for BI at admission. The presence of such differences might be explained by the fact that the FIM and BI assess different determinants. When the data were presented graphically, different variations in FIM and BI scores at the defined time points were evident for patients with different numbers of tested HRCs. This supports the assumption that both genetic and individual (modifiable) factors could, to a certain degree, influence functional outcomes after a stroke. Moreover, we might stress that significant population-genetic differences could exist between stroke patients with various numbers of tested HRCs, with certain phenotypes preferentially associated with a greater potential for a better functional outcome. A higher degree of genetic homozygosity, accompanied by less functional improvement and greater variability of functional outcome, particularly three months post-discharge for both FIM and BI, might bring these patients into a specific state of genetic-physiological homeostasis in which certain mechanisms, together with pharmacological and rehabilitation treatment, influence the potential for functional recovery. Furthermore, it is worth mentioning that an increase in genetic (recessive) homozygosity could increase the genetic load, potentially causing a decrease in immune competence [ ] and narrowing the range of responses the organism can mount to treatment methods, thus reducing the potential for functional improvement. Numerous studies performed by our group show that a higher degree of genetic homozygosity leads to a decrease in genetic variability, together with an increase in genetic load and changes in genetic-physiological homeostasis [ , , , , , ]. We may presume that all of these influences decrease the number of possible responses the body can make to the action of numerous environmental factors. Taking this into consideration, we may explain the less favorable rehabilitation test results of individuals with a higher degree of genetic homozygosity compared with patients with a lower level of homozygosity. In our study, when age was introduced as a correlation variable, we noticed that the significant correlations differed between FIM and BI. It should be stressed that there are still opposing opinions regarding the sensitivity of FIM and BI scores in the evaluation of stroke patients. Dromerick et al. [ ] stated that the FIM is more sensitive, while Sangha et al.
[ ] stated that the BI was cited in trials of superior quality. Our study suggested that the BI was somewhat more sensitive than the FIM when age was correlated with scores across different numbers of HRCs. This might suggest that certain phenotypes, in relation to age, could have greater potential for particular aspects of functional outcome in ischemic stroke patients. Therefore, within different age groups at stroke onset, individuals with different numbers of tested HRCs might be better candidates for defined treatment modes aimed at achieving optimal improvement of certain functional aspects. Our findings support the proposal of personalized rehabilitation treatment modes for patients with ischemic stroke. There are several limitations to this study. The first refers to the study group, in which patients from a single population (Serbian) were studied. Another limitation to be considered is the potential for specific variations, modifiable and non-modifiable, across different populations and phenotype classes. Furthermore, the number of patients should be considered a limiting factor; therefore, a study with a larger sample is advised. ## 5. Conclusions In conclusion, our findings contribute to a better understanding of the potential determinants that could predict the response, and ultimately the outcome, of defined treatment protocols, particularly in neurological patients with various degrees of functional disability, thus providing important information to practitioners in clinical settings for establishing optimal clinical decision-making strategies. A better understanding of selected biomarkers and their role in stroke recovery could make clinicians less challenged by the dilemma of judging a patient's potential for recovery. Morphogenetic variability, therefore, might be one of potentially numerous factors that could have an impact on the response to defined treatment protocols for neurologically impaired individuals who have suffered an ischemic stroke.
Neuroimaging studies in the area of mindfulness research have provided preliminary support for the idea of fear extinction as a plausible underlying mechanism through which mindfulness exerts its positive benefits. Whilst brain regions identified in the fear extinction network are typically found at a subcortical level, studies have also demonstrated the feasibility of cortical measures of the brain, such as electroencephalogram (EEG), in implying subcortical activations of the fear extinction network. Such EEG studies have also found evidence of a relationship between brain reactivity to unpleasant stimuli (i.e., fear extinction) and severity of posttraumatic stress symptoms (PTSS). Therefore, the present paper seeks to briefly review the parallel findings between the neurophysiological literature of mindfulness and fear extinction (particularly that yielded by EEG measures), and discusses the implications of this for fear-based psychopathologies, such as trauma, and finally presents suggestions for future studies. This paper also discusses the clinical value in integrating EEG in psychological treatment for trauma, as it holds the unique potential to detect neuromarkers, which may enable earlier diagnoses, and can also provide neurofeedback over the course of treatment. ## 1. Introduction Interest and research into the area of mindfulness have had a steep growth trajectory in recent decades. Owing to this is the development of various mindfulness-based interventions as well as the growing body of evidence on its efficacy for a broad range of populations—be it to improve general well-being amongst healthy individuals [ , ], or in the treatment of clinical disorders such as depression and anxiety [ , ], chronic pain [ , ], insomnia [ , ], and substance abuse [ , ]. This surge in popularity has been paralleled in the media, with some authors criticizing the “hype” especially with regard to the overestimation of its clinical effectiveness [ ]. Arguably, this could also be perceived as true for any research area and/or intervention that is still in its inception; and as such, it demonstrates the invaluable theoretical and clinical importance in balancing the existing knowledge and claims of mindfulness with scientific and clinical evidence. Initiatives toward this end include neuropsychological studies that seek to explore the underlying mechanisms through which mindfulness may be yielding its suggested positive effects [ , , , , , , ]. Across the literature, there may be slight variations as to how specific mechanisms of mindfulness are referred to but, in general, the proposed theoretical framework of mindfulness is suggested to include (i) attention regulation, (ii) body regulation, (iii) change in perspective of self, and (iv) emotion regulation [ ]. Within this framework by Holzel et al. [ ], ‘emotion regulation’ was conceptualized to consist of (a) reappraisal, and (b) exposure, extinction, and reconsolidation, whereby the former refers to the non-judgmental response to emotions, and the latter refers to the process of being exposed to and consciously affected by adverse experiences, but without responding reactively whether through physical symptoms, thoughts, or feelings. Although these various mechanisms have been suggested, it could be perceived that the strength of the evidence base differs between one suggested mechanism to another. 
For instance, relatively more studies (although this, too, is still a growing research area) have investigated attention regulation as a mechanism of mindfulness than have explored the role of “exposure, extinction, and reconsolidation” (i.e., fear extinction) in mindfulness [ , , ]. As such, there is arguably a need to further explore fear extinction as one potential underlying mechanism of mindfulness, as this could hold important implications for the use of mindfulness-based interventions as an evidence-based practice for fear-based disorders such as anxiety, phobias, and some responses to trauma [ ]. The idea of fear extinction as a process that underlies mindfulness has been suggested in several papers [ , , ], and was the focus of a recent review [ ]. Specifically, that review discussed parallel findings in the neuroimaging literature of fear extinction and mindfulness, and argued that the emerging evidence holds important implications for trauma-based psychopathologies. It also discussed the importance of corroborating client-reported and/or clinician-rated effectiveness of mindfulness-based interventions with neuropsychological measures to augment the current literature, as this could more fully characterize how mindfulness may facilitate fear extinction. Therefore, the present paper aims to provide a brief conceptual review that extends this argument from the neuropsychological context into the neurophysiological literature of mindfulness and fear extinction—particularly, studies utilizing electroencephalogram (EEG) measures. The basis for specifically extending this argument into the EEG literature stems from considerations of the feasibility, clinical significance, and cost-effectiveness of EEG measures in contrast to neuroimaging techniques. Whilst neuroimaging techniques have contributed considerably to advancing theoretical and empirical knowledge of mindfulness, they present formidable challenges from a clinical perspective, as they are not always available in clinical settings; where they are, they may be a costly assessment for patients. Moreover, neuroimaging tools, such as the magnetic resonance imaging (MRI) scanner, have the potential to induce claustrophobic anxiety amongst patients, which in turn interferes with treatment progress [ ]. This is in contrast to the EEG, which is a non-invasive, neurophysiological method of passively monitoring and recording electrical activity in the brain (i.e., brain waves). As the EEG is arguably more feasible and cost-effective, it also holds greater clinical potential to be incorporated as an adjunct neurofeedback treatment in mental health settings [ ]; such neurofeedback has been demonstrated specifically with regard to enhancing mindfulness-related capacities [ ]. Put together, there are likely to be theoretical, empirical, and also clinical benefits in understanding the neurophysiological (i.e., EEG) workings of, and relationships between, fear extinction and mindfulness. Therefore, this paper takes this first step by providing a brief discussion of the following: (i) mindfulness, (ii) fear extinction, including neural correlates via neuroimaging and neurophysiological techniques, (iii) the link between mindfulness and fear extinction as illustrated through EEG findings on mindfulness, (iv) implications for trauma, and (v) future studies in this area.

### 1.1. Mindfulness
Historically, mindfulness has its roots in the 2500-year-old spiritual practice of Buddhism. However, it was not until the early 1980s that mindfulness was translated into a Western, non-religious context as a practice or a technique. Since then, mindfulness meditation has commonly been described as awareness and attention that is directed purposefully, in a non-judgmental manner, from one moment to the next [ ]. There are a number of ways through which mindfulness can be cultivated, such as through the mindful exercise of Qigong, Tai Chi, and/or yoga [ , ]. However, when reported in a research or clinical context, mindfulness meditation has been given predominance as an approach to developing mindfulness [ , ]. In a review by Lutz, Slagter, Dunne, and Davidson [ ], neuropsychological evidence was discussed in support of their suggested theoretical framework—that is, that mindfulness meditation encompasses two forms of meditation, specifically focused attention meditation (FA) and open monitoring meditation (OM). FA involves the deliberate focus of attention on an object (e.g., a sensation caused by breathing), with the recognition, and thus refocusing, of attention back toward the object as and when the mind wanders. OM, on the other hand, is practiced after the initial use of FA and entails the non-reactive monitoring of experiential phenomena (i.e., physiological sensations, thoughts, and/or emotions), moment to moment, without an explicit focus on any specific object. In this way, mindfulness meditation takes on a non-reactive stance toward interoceptive experiences, including aversive emotions and memories. Associated forms of meditation—specifically, loving-kindness/compassion-focused meditations—have also been suggested to incorporate both FA and OM [ , ]. Stemming from the concept and practice of mindfulness meditation, various mindfulness-based interventions have been developed, including the well-established mindfulness-based stress reduction program (MBSR) [ , ] and mindfulness-based cognitive therapy (MBCT) [ , ], frameworks that have in turn influenced the development of other mindfulness-based interventions (e.g., mindfulness-based relapse prevention for substance abuse and mindfulness-based eating awareness therapy for binge eating) [ , ]. Another mindfulness-based intervention, introduced specifically in the area of trauma, is mindfulness-based exposure therapy; this will be explored toward the end of this paper.

### 1.2. Fear Extinction

A brief background to the construct of fear extinction has been presented in a former paper [ ], and also more broadly elsewhere [ ]. It should nevertheless be reiterated that fear extinction does not imply the unlearning of the association formed between the initially neutral (conditioned) stimulus and the aversive (unconditioned) stimulus. Instead, fear extinction has been argued to involve the learning of a new memory that competes with, without erasing, the original fear memory [ , ]; alternatively, it has been suggested to involve the reconsolidation of the original fear memory with new contextual associations [ , ]. The differences between these two conceptualizations can best be understood with respect to the findings of Gershman et al. [ ].
Particularly in their study, rats who experienced an ‘abrupt’ extinction (i.e., removing the feared stimuli all at once) were suggested to have formed a new, competing memory that weakened over time and gave rise for the original, fear memory to resurface again. This is in contrast to rats who experienced gradual extinction (i.e., by gradually removing the feared stimuli), whereby the original fear memory was suggested to have been modified as opposed to forming a new competing memory. As a result, the rats who experienced a gradual removal of the feared stimuli had significantly lower rates of experiencing a return in symptoms following a lapse in duration. The understanding of this extinction paradigm could perhaps suggest a framework for exposure work with humans—specifically, by supporting humans gradually develop coping strategies that they can practice whilst being exposed to the feared stimuli, which hypothetically may then modify the original fear memory over a series of clinical sessions. In the context of mindfulness, these strategies would include cultivating mindful attention and awareness of thoughts, emotions, and bodily sensations when exposed to the stimuli or when they are brought to mind, whilst mindfully responding with non-reactivity, curiosity, and non-judgment; as opposed to triggering a reactivation of the threat system, or waiting out the threat response when exposed to the feared stimuli. This is further discussed later in this paper, in relation to the link between mindfulness and fear extinction. #### 1.2.1. Neural Correlates of Fear Extinction The neuropsychological mechanisms of fear extinction have been reviewed extensively in several reviews elsewhere [ , , ], and were also briefly reviewed in a recent paper [ ]. Of note, implicated brain regions include the amygdala (i.e., the brain region associated with emotional processing, including that of fear expression), the hippocampus (i.e., the brain region involved in memory consolidation and reconsolidation, and thus, in signaling the safety context of extinction), and the ventromedial prefrontal cortex (vmPFC; i.e., the brain region instrumental in decision making and emotion regulation, including the processing of risk and fear). Collectively, these brain regions have been implied in fear extinction through the harmonious down-regulation of the amygdala by the vmPFC and the hippocampus [ , ]. #### 1.2.2. Neurophysiological Literature on Fear Extinction using EEG Whilst the fear extinction network has typically been implied in subcortical brain regions using neuroimaging studies, neurophysiological studies utilizing EEG have also been employed. The ability of the brain to link an aversive stimulus to a neutral stimulus (which becomes a conditioned, fear stimulus) was theorized on early principles of association [ ]. According to this Hebbian principle, the linking process is initiated when a neuron continuously contributes to the firing of another, and that the synchronous activation of two neurons (or neuron systems), which may lie closely next to each other (i.e., a millimeter in range) or in distinct cortical lobes, strengthens the connection between them. Advances in neuroscience since then have been able to largely validate this theory (for a review, see [ ]). EEG analyses of brain activity are mainly grouped into two categories: the time domain or frequency domain of EEG. 
The former typically utilize event-related potential (ERP), which is the measure of brain response that is time-locked to the onset of an event (e.g., a sensory stimulus). ERPs reflect the EEG activity that is evoked by the presented stimuli/event. The frequency domain analyses of EEG include the analyses of spectral power (i.e., the magnitude of a measured signal against its frequency), event-related synchronization and desynchronization (ERS/ERD; i.e., a relative increase and decrease in power, respectively), as well as coherence/synchronization across brain regions (i.e., sources of brain activity that are approximately phase-locked with each other). Additionally, by adopting a source localization technique via, for example, low-resolution electromagnetic tomography (LORETA) [ ], the sources of brain activity associated with a certain event may be implicated. ##### Event-Related Potentials Studies using ERPs have demonstrated increased P300, which is a positive deflection in voltage with a latency of approximately 250–500 ms, in response to emotional stimuli, including threatening visual or auditory stimuli [ , , , ]. The P300 has been described to play a role in the processing of the stimulus context as well as levels of attention and arousal [ , ]. More specifically, the P300 family is made up of interacting subcomponents, P3a and P3b. P3a originates from frontal distribution to reflect stimulus-driven attention or working memory during task processing, whereas P3b originates from temporal–parietal distribution to reflect attention associated with memory-updating processes, and is relevant for future memory processing [ ]. Accordingly, it has been suggested that whilst P3a is related to task-irrelevant distractors, only P3b is related to the valence or arousal of targets [ ]. Therefore, in combination, robust evidence appears to suggest a hippocampal origin for the P300 potential, although the relative contribution of the hippocampus to the P300 potential is less clear [ ]. The P300 is also one of the ERP components that makes up a cluster referred to as the late positive potentials (LPP). Whilst there might be slight variations across studies as to what constitutes the LPP, it is typically computed as the average amplitude within the time window of 300–600 ms after a stimulus, across central (C3, C4, and Cz), parietal (P3, P4, and Pz), and occipital (O1, O2, and Oz) sites [ ]. Similar to the role of P300, the LPP has been suggested to reflect the deeper and motivated processing of emotional information [ , , , ]. As such, it is almost expected that the LPP has also routinely been implied in the processing of emotionally salient stimuli, including those that imply threat [ , , , ]. In combination, the P300 and LPP potentials allude to an overarching motivated attentional process to arousing stimuli, which may in part (and not exclusively) be threatening. ##### Source Localization As stated earlier, fear extinction is typically implied with activations of subcortical brain regions. As such, source localization analyses with EEG measures may be helpful in implying the subcortical regions that are involved—although it should also be noted that such source-based EEG analyses ought to be interpreted with caution [ , ]. 
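To make the ERP quantities just described more concrete before turning to the source-localized findings below, the following minimal Python sketch shows one way a mean component amplitude such as the LPP (300–600 ms, averaged over central, parietal, and occipital sites) could be computed from epoched EEG data. The array shapes, channel names, and helper function are illustrative assumptions, not the API of any particular toolbox.

```python
import numpy as np

def mean_erp_amplitude(epochs, ch_names, times, channels, t_start, t_stop):
    """Mean amplitude of an ERP component over a time window and channel cluster.

    epochs   : array (n_trials, n_channels, n_times) of epoched EEG
    ch_names : channel labels matching the channel axis
    times    : sample times in seconds, length n_times
    channels : labels of the cluster to average over (e.g., the LPP sites)
    t_start, t_stop : window limits in seconds (e.g., 0.300-0.600 for the LPP)
    """
    ch_idx = [ch_names.index(ch) for ch in channels]
    t_idx = np.where((times >= t_start) & (times <= t_stop))[0]
    evoked = np.asarray(epochs).mean(axis=0)          # average over trials
    return evoked[np.ix_(ch_idx, t_idx)].mean()       # average over cluster and window

# Hypothetical usage with the LPP window and sites described above:
# lpp = mean_erp_amplitude(epochs, ch_names, times,
#                          channels=["C3", "C4", "Cz", "P3", "P4", "Pz", "O1", "O2", "Oz"],
#                          t_start=0.300, t_stop=0.600)
```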
Alterations in vmPFC-localized (infralimbic in rodents) gamma activity were indicated in the extinction of conditioned fear, whilst anterior cingulate cortex (ACC)-localized (prelimbic in rodents) theta activity has been associated with the expression of conditioned fear [ , , ] (the studies by Fenton et al. [ , ] were conducted with rats, and therefore resulting findings in prelimbic and infralimbic cortex were suggested as the rodent homologs of the ACC and vmPFC in humans, respectively). Altered gamma activity, which was found in Mueller et al., [ ] was also found in the left hippocampus—a region that, as indicated earlier, is implied in the recall of fear extinction. The findings from Mueller et al. [ ] have been notable as the study was conducted with humans, and therefore is an impetus to elucidating the valuable use of EEG in fear extinction research in humans. In the amygdala, theta activity has been implied in response to emotional arousal [ ], including stimuli with a negative valence (e.g., threatening stimuli). Theta activity has also been suggested to couple with gamma activity in the amygdala during fear expression and extinction. Specifically, in periods of fear, theta–gamma coupling in the amygdala was enhanced, while gamma power was suppressed [ ]. On the contrary, periods of relative safety were related to an enhanced amygdala-localized gamma power, which showed a medial PFC–amygdala directionality and was also found to be a consequence of theta activity in the medial PFC. Together, these findings suggest that amygdala-localized gamma activity couples with amygdala-localized theta activity during fear expression, and medial PFC-localized theta activity during fear suppression. Moreover, by combining multiple site local field potential, studies conducted with mice found evidence of coupled theta activity in the amygdala–hippocampus–PFC cortical circuits during fear extinction [ , ]. Findings by Lesting et al. [ ] were further able to demonstrate a direction in this theta interaction, with PFC-localized activity in the lead of hippocampal-localized and amygdala-localized theta activity. The finding of an interaction between these regions is supported by functional neuroimaging studies, which similarly show the vmPFC, the hippocampus, and the amygdala to be implied in the fear extinction network, as discussed above [ , ]. ### 1.3. Link between Fear Extinction and Mindfulness Experiential avoidance—that is, the intolerance and/or maladaptive efforts to avoid distressing thoughts, emotions, and/or physiological sensations—plays a central role in the maintenance of learnt fear [ ]. In contrast to this, the practice of mindfulness encourages a non-judgmental and non-reactive monitoring of those distressing experiences within one’s conscious awareness. Accordingly, it has been suggested that the conscious awareness of one’s aversive thoughts, emotions, and/or bodily sensations, concurrent with the non-reactive response toward them, may desensitize the aversive strength of those experiences, leading to the extinction of a fear response toward them [ , , ]. 
In other words, mindfulness encourages an extinction of the feared response by altering how we relate to and experience the feared stimuli—that is, by embodying mindful attention and awareness of thoughts, emotions, and bodily sensations when exposed to the stimuli, and then, by mindfully choosing to respond to the stimuli with curiosity and non-judgment, as opposed to reacting to the stimuli with an automatic, fight/flight response. From this perspective, mindfulness has been stated to demonstrate similarities with the concept of ‘exposure and avoidance prevention’ seen in exposure therapy, and has therefore been proposed as a form of psychological exposure [ ]. Mindfulness also differs from mere habituation in the process of fear extinction, such that it cultivates increased self-awareness, non-judgment, and curiosity aspects, which may arguably enhance the modification of the original feared memory, as discussed in the context of the findings by Gershman et al. [ ]. To elaborate, the pairing between the neutral (e.g., loud bang) and objectively safe conditioned stimulus (e.g., setting) that typically results in a conditioned fear response would now be modified with an ability to be aware of and describe interoceptive responses (i.e., thoughts, emotions, and bodily responses), in a non-judgmental, non-reactive, and curious manner, resulting in a positive shift in the conditioned response. #### 1.3.1. Neurophysiological Literature on Mindfulness Using EEG Recent efforts have been made to review the neuroimaging findings of mindfulness with respect to exploring the link between mindfulness and fear extinction [ ]. Similarly, this paper strives to further explore this link, but with studies utilizing EEG methods instead. illustrates the various EEG studies on mindfulness conducted with non-clinical samples that are reviewed in this section. It is noted that the findings summarized here are only those that are deemed relevant to the purpose of the current paper in understanding the link between fear extinction and mindfulness. ##### Event-Related Potentials Implicated ERPs that have recurrently been found in studies that have investigated the mechanisms and/or effects of mindfulness are P300 and LPP [ ]. With specific regard to P300, studies have found mindfulness to be associated with an increase in P300 in response to targeted stimuli [ , , , , ] and a decrease in P300 in response to distractor stimuli [ , ] or in association to higher self-reports of ‘decentering’ [ ]. These results have in turn led to the idea of efficient distributed attention in mindfulness-based meditations, whereby meditators are better able to allocate attention between relevant and irrelevant stimuli as demanded by the task [ , ]. van Leeuwen et al. [ ] specifically demonstrated this by showing that among Zen meditators (comprising of both FA and OM), meditators had increased attention to small, detailed targets in comparison to controls with no meditation experience (the results mentioned here are those relevant to the P300 only. Complete results indicate that meditators processed small stimuli (embedded within a larger stimuli) at P1, N2, and P3, in comparison to controls, who only processed small stimuli at P1. Similarly, meditators processed large stimuli (that were made up of the smaller stimuli) at N1, N2, and P3, in comparison to controls, who only processed large stimuli at P3. Together, this indicates a greater ability among meditators to engage and disengage attention between spatial locations.). 
However, following a four-day OM-only based meditation, meditators with extensive FA meditation experience had reduced capacity to attend to the small, detailed targets, from pre-retreat to post-retreat. van Leeuwen et al. thereby concluded that whilst FA-based meditations cultivate the focusing of attention to expected stimuli, OM-based meditations train a more distributed attention with the ability to allocate and reallocate attention in response to the demands of a task. It is acknowledged that these findings pertain to the effects of mindfulness on attention regulation, whereby these findings specifically suggest the improved allocation of attention as indexed by increased P300 to relevant stimuli and a decrease in P300 to irrelevant stimuli. However, it is arguable that these findings may hold clinical relevance in the context of fear extinction—particularly, with how attention is allocated to arising stimuli in a fear context. To elaborate further, these findings, which suggest an improved allocation of attention following mindfulness, might imply that mindfulness-based meditations and practices may be helpful in cultivating and strengthening the skill of detaching or disengaging from arising stimuli that may otherwise trigger a threat-based reaction that could narrow one’s focus of attention on that particular target or feared stimuli. Stemming from this assumption, it would therefore be interesting to investigate how mindfulness training might alter these attentional resources in the fear extinction context, where attention toward an initially feared stimulus is expected to decrease, and would therefore be indicated by a decrease in P300. Given the overlap between the P300 and LPP in reflecting deeper and motivated processing of emotional information as described earlier, the LPP have also been indicated in EEG studies on mindfulness [ ]. In particular, an inverse correlation has been found between dispositional mindfulness and LPP in view of unpleasant and highly arousing images [ ]. Similarly, findings by Sobolewski, Holt, Kublik, and Wróbel [ ] have found meditators to experience lower LPP in response to negative valence stimuli, but were no different from controls in response to positive valence stimuli, suggesting that meditators were better able to regulate negatively arousing emotions. On the other hand, Egan, Hill, and Foti [ ] found increased LPP regardless of affective valence and arousal. Egan et al. [ ] attributed this finding to the nature of their study, such that the brief mindfulness instruction in their study requested participants to focus their attention to external stimuli (pictures on the screen), which would have in turn led to increased LPP to reflect emotional processing of the stimuli to which focus was directed. As such, the ERP findings thus far allude to the role of mindfulness meditation in the motivated allocation of attention resources, which could have important implications for how attention is allocated toward feared stimuli in the context of fear extinction. ##### Spectral Power and Coherence Further neurophysiological evidence on mindfulness typically suggests an increased oscillation in alpha and theta frequencies [ , , ]. Together, increased alpha and theta oscillations, with the latter mostly occurring in the frontal midline region (which includes the PFC and ACC), have been suggested to imply enhanced attentional processing toward internalized stimuli [ , ]. 
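As a concrete illustration of the spectral measures discussed here and in the next paragraph, a minimal band-power computation might look like the sketch below, which uses Welch's method to estimate theta, alpha, and gamma power and a simple gamma/theta ratio. The band edges, sampling rate, and channel choice are assumptions made for illustration; published studies differ in these settings.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "gamma": (30, 45)}  # Hz; edges vary across studies

def band_power(signal, fs, band):
    """Mean Welch PSD of a single EEG channel within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def gamma_theta_ratio(signal, fs):
    """Ratio of gamma to theta power, a simple analogue of the ratio discussed below."""
    return band_power(signal, fs, BANDS["gamma"]) / band_power(signal, fs, BANDS["theta"])

# Hypothetical usage on a frontal-midline channel sampled at 250 Hz:
# theta_fz = band_power(eeg_fz, fs=250, band=BANDS["theta"])
# ratio_fz = gamma_theta_ratio(eeg_fz, fs=250)
```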
However, of interest to the current paper are alterations in gamma and theta activity, which as elaborated under “Neurophysiological literature on fear extinction using EEG”, have been found to be associated with fear extinction and expression respectively. With specific attention to gamma activity, the majority of mindfulness-based studies have found an increase in gamma activity [ , , , , , , , ]. Increases in gamma power have further been revealed to be positively associated with years of meditation [ , , ]. Yet, there has also been evidence of decreased gamma activity following mindfulness meditation [ , ]. In their study, Berkovich-Ohana et al. [ ] found that the deactivation of the default mode network (DMN—the network associated with mind wandering) is indicated by a reduced overall inter-hemispheric gamma mean phase coherence when transitioning from resting state to a time production task, and therefore concluded their results to suggest a reduction in mind wandering in higher trait mindfulness (interhemispheric phase coherence refers to the alignment of oscillatory phases between homologous cortical regions (e.g., the left and right dorsolateral prefrontal cortex). In the context of the results by Berkovich-Ohana et al. [ ], the homologous cortical regions are the entire hemispheres). These results are also better understood when examined with respect to spectral power. Particularly, where the deactivation of the DMN was identified as a decrease in gamma power over frontal and midline regions, meditators showed lower trait frontal gamma power, indicating lower mind wandering. However, meditators were also found with greater trait and state posterior gamma power, which was attributed to greater attentional skills and awareness of arising interoceptive and external stimuli [ ]. These findings in the posterior regions have similarly been found in other studies as well [ , , ]. Interestingly, Lutz et al. [ ] found a ratio between gamma and theta activity, whereby, in contrast to controls, long-term practitioners were found to display a higher ratio of gamma to low frequency bands (i.e., theta and alpha bands) at an initial resting state, which was then enhanced during meditative practice and maintained at a post-meditation resting state. Long-term practitioners were also found to have a larger size of gamma synchrony patterns over lateral frontoparietal regions in comparison to controls; and that increases in synchrony size when shifting from resting to meditative states were greater for meditators than control. These findings could possibly reflect changes in attentional and affective processes as a result of mindfulness practice. Additionally, this ratio could also suggest an interactive role between gamma and theta activity, which has previously been discussed, are also implied in the extinction and expression of fear, respectively. Also worth noting are findings by Milz et al. [ ], which elucidated the difference between conventional, head-surface coherence and intracortical lagged coherence that utilizes EEG tomography. Specifically, Milz et al. concluded that functional connectivity using conventional, head-surface coherence shows increases in coherence (which included increased gamma coherence), whereas functional connectivity using intracortical lagged coherence resulted in decreases in coherence (which included lowered theta coherence). Milz et al. argued that in line with the findings by Lehmann et al. 
[ ], there may be no association between conventional, head-surface coherence and intracortical lagged coherence, and that a comparison of results yielded between head-surface conventional coherence and intracortical lagged coherence may not be possible. Of note, Miltz et al. interpreted the decreases in intracortical lagged coherence found between the cognitive control and sensory perception areas of the brain, to possibly imply focused attention on bodily sensations without the need for cognitive reasoning. On the other hand, the increases in conventional, head-surface coherence were hypothesized to possibly indicate increased source strength, as demonstrated by Pascual–Marqui [ ]; nonetheless, they also argued for the possibility of other contributing factors besides an increase in source strength. Therefore, in light of the mounting evidence from EEG studies on the mechanisms of mindfulness, the use of source localization measures to meaningfully clarify varied activity across frequency bands is of paramount importance. Moreover, as it is apparent that neurophysiological findings thus far primarily indicate the implications of mindfulness in attentional processes, the investigation of mindfulness within the fear extinction context, where specific subcortical regions are implied, would require analyses that address such specifications (e.g., the use of LORETA source estimation in [ ]). ### 1.4. Implications for Trauma Echoing the collated neuroimaging evidence on mindfulness and fear extinction [ ], the existing state of the EEG literature on mindfulness and fear extinction suggests promising evidence of a relationship between the two constructs, but nonetheless awaits future empirical studies for this relationship to be confirmed. As previously suggested [ ], such research efforts would also hold invaluable clinical importance, as it might shed some light on the efficacy of mindfulness-based interventions for clinical disorders such as posttraumatic stress disorder (PTSD) [ , , , , ], specifically where this is characterized by an impaired functioning of the fear extinction network. Evidence-based treatments that are commonly employed in the treatment of trauma include prolonged exposure, cognitive processing therapy, trauma-focused cognitive behavioral therapy, and eye-movement desensitization therapy, with the relative superiority of any one therapy yet to be definitive [ , , , ]. Of particular concern are the high dropout rates that are commonly observed with these trauma-focused approaches, as these patients then continue to suffer from symptoms of PTSD [ , ]. In contrast, King and Favorite [ ] noted that most patients (who in this context were veterans) showed high levels of engagement with mindfulness (i.e., MBCT), and had lower dropout rates than what is typically observed with trauma-focused approaches. This could possibly indicate the need for additional strategies, such as emotional regulation and distress tolerance—as cultivated through mindfulness training—to support and promote the engagement of patients in trauma-focused treatment [ , , ]. As has previously been contended [ ], the majority of studies that have found support for the efficacy of mindfulness-based treatments for PTSD are suggestive in nature, such that findings are largely limited to that of self-reported and/or clinician-rated measures of mindfulness, PTSD severity, and/or of secondary measures exploring functional status, general distress, or quality of life [ , , , , , , , , , ]. 
In view of this, the adoption of a multimodal approach (i.e., neurobiological and behavioral measures) is necessary to corroborate existing findings as well as to enable discrepancies between findings to be examined [ , ]. Importantly, the integration of neuropsychological measures in clinical studies could provide valuable information on the mechanisms of the studied mindfulness-based intervention, which could then inform how these interventions can be augmented as a treatment approach for this population. Thus far, the literature on mindfulness has examined various forms of mindfulness practices and/or interventions, as well as various mindfulness-based meditations, under the umbrella term of ‘mindfulness’. This is problematic, as comparisons between varied states, experiences, skills, and practices in mindfulness may make it difficult to collate findings, and may therefore lead to premature conclusions [ ]. For instance, in the neurophysiological literature, Lee et al. [ ] suggested that mindfulness training may lead to increases in gamma oscillations across multiple brain regions, but additionally argued that the specific brain region at which this occurs may depend on the type of meditation delivered. Building on this notion, it is likely that a mindfulness intervention targeted toward enhancing the learning and processing of fear extinction may in turn lead to an increase in gamma oscillations in brain regions relevant to the fear extinction network, particularly the vmPFC, as described in Mueller et al. [ ]. It has also been argued that participation in mindfulness interventions that are not tailored to specific mental health issues (e.g., PTSD) could possibly lead to deterioration or a worsening of symptoms [ ]. Van Dam et al. [ ] alluded to the potential risks to participants listed by the MBCT Implementation Resources [ ], which include the heightened likelihood of suicide, depression, negative emotions, and intrusive flashbacks amongst trauma patients. It was further made clear that mindfulness practices are not to replace standard psychiatric intervention for trauma, as mindfulness practices still lack clinical studies and evidence that clearly demonstrate their efficacy. Therefore, it is likely that the emerging evidence on the benefits of mindfulness may support its use as an adjunctive treatment instead. Currently, this is reflected in the inclusion of mindfulness-based practices as a component of dialectical behavior therapy for borderline personality disorder, a population with a traditionally high rate of trauma history [ ]. Beyond this, it should be acknowledged that the research area of mindfulness is still relatively in its infancy; therefore, its use as a standalone or first-line treatment first necessitates greater research and is currently limited to the controlled context of clinical studies.

#### 1.4.1. Mindfulness-Based Exposure Therapy

A novel therapy that has recently been introduced is mindfulness-based exposure therapy (MBET), which was developed by a team of clinicians and researchers [ ] at the Veterans Affairs Ann Arbor, Michigan, US, for the treatment of PTSD amongst veterans. MBET is a 16-week, non-trauma-focused therapy that incorporates exposure from prolonged exposure therapy, one of the standard interventions used with PTSD patients, supplemented with mindfulness training from MBCT, self-compassion exercises, and psycho-education on PTSD.
In vivo exposures conducted in MBET are conducted with avoided situations/activities that are deemed to be objectively safe, and with no imaginal exposure or processing of trauma histories. On the whole, the intervention consists of four modules: (i) PTSD psycho-education and relaxation strategies, (ii) mindfulness of body and breath exercises and in vivo exposure to feared but objectively safe stimuli (i.e., there is no processing of trauma memories), (iii) mindfulness of emotion and in vivo exposure, and (iv) self-compassion training. MBET has been trialed in two studies [ , ], and are influential such that they have incorporated pre- to post-neuroimaging measures to corroborate pre- to post-changes in PTSD symptom severity among veterans using the Clinician Administered PTSD Scale (CAPS) [ ]. However, instead of fear extinction, investigated changes in brain integrity were particular to social–emotional processing (i.e., the processing of emotional information from faces of other individuals [ ]) and the functional connectivity in the default mode network (DMN: the network associated with mind wandering [ ]). As expected, PTSD symptom improvement following MBET was associated with increased activity in the dorsal medial PFC [ ] and increases in the DMN (particularly, the posterior cingulate cortex (PCC)) resting state functional connectivity with dorsolateral PFC regions, and that this PCC–dorsolateral PFC connection was correlated with improvement in avoidant and hyperarousal symptoms of PTSD [ ] (the findings reported here are those that have been deemed relevant to the aim of this paper. Readers are directed to the original articles of both studies for further results). Together, these findings demonstrate how MBET might be influential for the brain network associated with the emotional regulatory processing of distressing internal experiences during mind wandering. #### 1.4.2. Neurophysiological Literature on PTSD Using EEG However, in the neurophysiological literature, no study (at least to our knowledge) has sought to explore the benefits of mindfulness for PTSD using EEG measures. Nonetheless, several reviews [ , , , ] have sought to explore the differences in EEG correlates between PTSD and non-PTSD individuals. Specifically, it was found that in comparison to individuals without PTSD, individuals with PTSD demonstrated increased amplitudes in the P50 and P300 family ERPs to aversive stimuli, as well as increased alpha rhythms, and that these increases were correlated with the severity of the posttraumatic stress symptoms (PTSS). Further discussion in the review by Karl et al. [ ] suggested the abnormal P300 amplitudes in PTSD to possibly indicate functional changes in the medial frontal–amygdala neural pathways, which as discussed, is implicated in the fear extinction network. Further support comes from the study by Lee, Yoon, Kim, Jin, and Chung [ ], which found decreased connection strength and communication efficiency in gamma and beta activity among individuals with PTSD; these were also significantly correlated with the severity and frequency of PTSD symptoms in general, as well as specific symptoms of re-experiencing and increased arousal. In view of the indication of gamma activity in the extinction of conditioned fear [ ], it could be argued that findings by Lee et al. [ ] may have reflected non-adaptive fear regulation, which led to increased PTSS, including that of re-experiencing and increased arousal. 
Building on existing neurophysiological findings of mindfulness (as listed in ), it would therefore be worth exploring the link between mindfulness and fear extinction, and how this link may play a role in altering the relationship between PTSS and its neurophysiological (i.e., EEG) markers. ### 1.5. Future Studies It is worth reiterating that the use of mindfulness practices in a clinical context still awaits greater studies of methodological rigor. As such—and particular to the area of trauma—future studies are warranted to examine the link between mindfulness and fear extinction for trauma-based symptoms by (i) exploring the pre- to post-changes in brain reactivity to fear-evoking stimuli amongst individuals with PTSD following the delivery of a mindfulness-based intervention, and (ii) to determine if the changes in brain reactivity are associated with changes in posttraumatic stress symptoms from pre- to post-intervention. Drawing from the findings of the extant literature that have been discussed, it is anticipated that participants will demonstrate decreased amplitudes at P300 and LPP ERPs—ERP components, which as discussed, have been identified as relevant to fear regulation when processing emotionally arousing visual stimuli in past studies [ , , ]. It is also expected that participants will exhibit increased gamma activity—that is, vmPFC, hippocampal [ ], and/or amygdala-localized [ ]—as well as lowered theta activity [ , , ]. Findings from such studies would be especially pertinent to advancing the theoretical knowledge of the link between mindfulness and fear extinction; consequently, they would be of clinical significance on the use of mindfulness-based interventions with clients presenting with PTSS in mental health settings. Moreover, the use of neurophysiological measures in the study could also elucidate its use as a neuromarker for PTSS severity, which may enable earlier intervention and better prognosis and/or prevention of more complex cases of PTSD. Therefore, implications from potential studies would be in line with suggestions by Graham and Milad [ ], on using the fear extinction model to enhance the current understanding of treatments for anxiety disorders (e.g., PTSD). They additionally argued that the neural circuits of fear extinction were ideal neuromarkers of symptom severity [ ], hence also supporting the integration of neurophysiological measures in future studies to feasibly track neural changes over the course of treatment. See . ## 2. Conclusions As illustrated through this brief review, EEG studies in the integrated research areas of mindfulness and fear extinction are still vastly limited, and we await further studies to build from and to confirm the preliminary findings documented here. This review, consequently, also hopes to have shed some light on the empirical and clinical value of EEG measures in confirming the link between fear extinction and mindfulness. Indeed, the integration of the neuropsychological research areas of mindfulness, fear extinction, and trauma is still in its early conception, but arguably holds invaluable clinical significance that could enhance treatments for fear-based and trauma-related disorders.
Transcranial direct current stimulation (tDCS) is the application of a weak direct electrical current (here 1.5 mA), which can modulate the spontaneous firing rates of cortical neurons by depolarizing or hyperpolarizing the neuronal resting membrane potential. In patients with depressive disorders, tDCS has proven to be an interesting therapeutic method that can potentially influence pathological mood states. With the exception of one study, however, no alterations in mood have been confirmed when applying tDCS in healthy participants. In this study, bifrontal or bioccipital stimulation was applied to 17 healthy subjects for 20 minutes at 1.5 mA in a placebo-controlled manner. Bifrontal stimulation consisted of anodal and cathodal placement over the right and left dorsolateral prefrontal cortex (DLPFC) in two separate sessions. Using a set of self-reported mood scales (SUDS, POMS-32, PANAS, BIS/BAS), no significant mood changes were observed with either bifrontal or bioccipital tDCS. In line with previous studies, we confirmed the minimal side effects and safety of this neuromodulation technique.
Transcranial focused ultrasound (tFUS) neuromodulation is a promising emerging non-invasive therapy for the treatment of neurological disorders. Many studies have demonstrated the ability of tFUS to elicit transient changes in neural responses. However, the ability of tFUS to induce sustained changes needs to be carefully examined. In this study, we used the long-term potentiation/long-term depression (LTP/LTD) model in the rat hippocampus, the medial perforant path (mPP) to dentate gyrus (DG) pathway, to explore whether tFUS is capable of encoding frequency-specific information to induce plasticity. Single-element focused transducers were used for tFUS stimulation, with an ultrasound fundamental frequency of 0.5 MHz and a nominal focal distance of 38 mm; tFUS stimulation was directed to the mPP. Synaptic connectivity was measured through the slope of field excitatory postsynaptic potentials (fEPSPs), which were elicited using bipolar electrical stimulation electrodes and recorded at the DG using extracellular electrodes to quantify the degree of plasticity. We applied pulsed tFUS stimulation with a total duration of 5 min, testing five levels of pulse repetition frequency (PRF), each administered at a 50 Hz sonication frequency, at the mPP. fEPSPs were recorded for 10 min before (baseline) and for more than 30 min after tFUS administration. In N = 16 adult wild-type rats, we observed sustained depression of the fEPSP slope after 5 min of tFUS focused at the presynaptic field of the mPP. Across all PRFs, no significant difference in maximum fEPSP slope change was observed; the average tFUS-induced depression was 19.6%. When compared to low-frequency electrical stimulation (LFS) of 1 Hz delivered to the mPP, the sustained changes induced by tFUS stimulation showed no statistical difference from LFS for up to 24 min after tFUS stimulation. When both the maximum depression effects and the duration of sustained effects are taken into account, a PRF of 3 kHz induced significantly larger effects than the other PRFs tested. tFUS stimulation was measured to have a spatial-peak pressure amplitude of 99 kPa, translating to an estimated temperature increase of 0.43 °C when assuming no loss of heat. The results suggest the ability of tFUS to encode sustained changes in synaptic connectivity through mechanisms that are unlikely to involve thermal changes.
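For readers wishing to sanity-check the kind of thermal estimate reported above, a rough plane-wave calculation of intensity and worst-case heating (no heat loss) can be sketched as follows. The tissue density, sound speed, absorption coefficient, heat capacity, and effective duty cycle used here are generic assumed values, not parameters reported by the study, so the result is only an order-of-magnitude illustration.

```python
# Rough plane-wave estimate of acoustic intensity and worst-case heating (no heat
# loss), using the standard relations I = p^2 / (2*rho*c) and dT = 2*alpha*I*t / (rho*C).
# All tissue parameters and the duty cycle below are generic assumed values,
# NOT values reported in the study; only the 99 kPa pressure comes from the text.
p = 99e3            # spatial-peak pressure amplitude, Pa (from the study)
rho = 1040.0        # brain tissue density, kg/m^3 (assumption)
c = 1540.0          # speed of sound in tissue, m/s (assumption)
alpha = 3.0         # amplitude absorption coefficient at 0.5 MHz, Np/m (assumption)
C = 3600.0          # tissue specific heat capacity, J/(kg*K) (assumption)
duty = 0.36         # effective duty cycle of the pulsed protocol (assumption)
t_on = 300.0 * duty # effective insonation time over the 5-min protocol, s

I_sp = p ** 2 / (2 * rho * c)                # W/m^2, spatial-peak intensity
heating_rate = 2 * alpha * I_sp / (rho * C)  # K/s while the ultrasound is on
delta_T = heating_rate * t_on                # K, upper bound assuming no heat loss
print(f"I = {I_sp / 1e4:.2f} W/cm^2, dT <= {delta_T:.2f} K")
```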
Elephants are thought to possess excellent long-term spatial-temporal and social memory, both memory types being at least in part hippocampus-dependent. Although the hippocampus has been extensively studied in common laboratory mammalian species and humans, much less is known about comparative hippocampal neuroanatomy, and specifically that of the elephant. Moreover, the data available regarding the hippocampal size of the elephant are inconsistent. The aim of the current study was to re-examine hippocampal size and provide a detailed neuroanatomical description of the hippocampus in the African elephant. In order to examine hippocampal size, the perfusion-fixed brains of three wild-caught adult male African elephants, aged 20-30 years, underwent MRI scanning. For the neuroanatomical description, brain sections containing the hippocampus were stained for Nissl, myelin, calbindin, calretinin, parvalbumin and doublecortin. This study demonstrates that the elephant hippocampus is not unduly enlarged, nor specifically unusual in its internal morphology. The elephant hippocampus has a volume of 10.84 ± 0.33 cm³ and is slightly larger than the human hippocampus (10.23 cm³). Histological analysis revealed the typical trilaminated architecture of the dentate gyrus (DG) and the cornu ammonis (CA), although the molecular layer of the dentate gyrus appears to have supernumerary sublaminae compared to other mammals. The three main architectonic fields of the cornu ammonis (CA1, CA2, and CA3) could be clearly distinguished. Doublecortin immunostaining revealed the presence of adult neurogenesis in the elephant hippocampus. Thus, the elephant exhibits, for the most part, what might be considered a typically mammalian hippocampus in terms of both size and architecture.
The ability of Mn²⁺ ions to follow Ca²⁺ pathways upon stimulation makes them a remarkable surrogate marker of neuronal activity in activity-induced manganese-dependent MRI (AIM-MRI). In the present study, precise monitoring of physiological parameters during MnCl2 and mannitol infusions improved the reproducibility of AIM-MRI, allowing an in-depth evaluation of the technique. Pixel-by-pixel T1 data were investigated using histogram distributions in the barrel cortex (BC) and the thalamus before and after Mn²⁺ infusion, after blood-brain barrier opening, and after BC activation. Mean BC T1 values dropped significantly upon trigeminal nerve (TGN) stimulation (-38%, P = 0.02), in accordance with previous literature findings. T1 histogram distributions showed that 34% of T1 values in the range 600-1500 ms after Mn²⁺ + mannitol infusion shifted to 50-350 ms after TGN stimulation, corresponding to a twofold increase in the percentage of pixels with the lowest T1 values in BC. Moreover, T1 changes in response to stimulation increased significantly from superficial cortical layers (I-III) to deeper layers (V-VI). Detection of cortical cytoarchitecture during a functional paradigm was also achieved, extending the potential of AIM-MRI. Quantitative AIM-MRI could thus offer a means to interpret local neural activity across cortical layers, while identification of the role of calcium dynamics in vivo during brain activation could play a key role in resolving neurovascular coupling mechanisms.
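As an illustration of the histogram-based analysis described above, the fraction of ROI pixels falling in a given T1 range before and after stimulation could be computed with a short sketch like the following; the variable names and the ROI masking convention are assumptions made for illustration.

```python
import numpy as np

def fraction_in_range(t1_map_ms, lo_ms, hi_ms):
    """Fraction of in-ROI pixels whose T1 (ms) falls within [lo_ms, hi_ms)."""
    t1 = np.asarray(t1_map_ms, dtype=float).ravel()
    t1 = t1[np.isfinite(t1) & (t1 > 0)]   # drop masked/background pixels (assumed NaN or 0)
    return np.mean((t1 >= lo_ms) & (t1 < hi_ms))

# Hypothetical usage with pixel-wise T1 maps (ms) from a barrel-cortex ROI:
# pre_fraction  = fraction_in_range(t1_bc_post_mn_mannitol, 600, 1500)
# post_fraction = fraction_in_range(t1_bc_post_stimulation,  50,  350)
```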
Evidence is considered as to whether behavioral criteria for diagnosis of post-traumatic stress disorder (PTSD) are applicable to that of traumatized animals and whether the phenomena of acquisition, extinction and reactivation of fear behavior in animals are also successfully applicable to humans. This evidence suggests an affirmative answer in both cases. Furthermore, the deficits in gray matter found in PTSD, determined with magnetic resonance imaging, are also observed in traumatized animals, lending neuropsychological support to the use of animals to probe what has gone awry in PTSD. Such animal experiments indicate that the core synaptic circuitry mediating behavior following trauma consists of the amygdala, ventral-medial prefrontal cortex and hippocampus, all of which are modulated by the basal ganglia. It is not clear if this is the case in PTSD as the observations using fMRI are equivocal and open to technical objections. Nevertheless, the effects of the basal ganglia in controlling glutamatergic synaptic transmission through dopaminergic and serotonergic synaptic mechanisms in the core synaptic circuitry provides a ready explanation for why modifying these mechanisms delays extinction in animal models and predisposes towards PTSD. In addition, changes of brain-derived neurotrophic factor (BDNF) in the core synaptic circuitry have significant effects on acquisition and extinction in animal experiments with single nucleotide polymorphisms in the BDNF gene predisposing to PTSD.
In mammals, the superior olivary complex (SOC) of the brainstem is composed of nuclei that integrate afferent auditory input originating from both ears. Here, the expression of different calcium-binding proteins in subnuclei of the SOC was studied in two distantly related mammals, the Mongolian gerbil (Meriones unguiculatus) and the gray short-tailed opossum (Monodelphis domestica), to gain a better understanding of the basal nuclear organization of the SOC. Combined immunofluorescence labeling of the calcium-binding proteins (CaBPs) parvalbumin, calbindin-D28k, and calretinin, as well as of pan-neuronal markers, displayed characteristic distribution patterns highlighting details of the neuronal architecture of SOC nuclei. Parvalbumin was found in almost all neurons of the SOC nuclei in both species, while calbindin and calretinin were restricted to specific cell types and axonal terminal fields. In both species, calbindin displayed a ubiquitous and mostly selective distribution in neurons of the medial nucleus of the trapezoid body (MNTB), including their terminal axonal fields in different SOC targets. In Meriones, calretinin and calbindin showed non-overlapping expression patterns in neuronal somata and terminal fields throughout the SOC. In Monodelphis, co-expression of calbindin and calretinin was observed in the MNTB, and hence both CaBPs were also co-localized in terminal fields within the adjacent SOC nuclei. The distribution patterns of CaBPs in both species are discussed with respect to the intrinsic neuronal SOC circuits as part of the auditory brainstem system that underlies the binaural integrative processing of acoustic signals as the basis for localization and discrimination of auditory objects.
During the period extending from 1910 to 1970, Oscar and C&#xe9;cile Vogt and their numerous collaborators published a large number of myeloarchitectonic studies on the cortex of the various lobes of the human cerebrum. In a previous publication [Nieuwenhuys et al (Brain Struct Funct 220:2551-2573, 2015; Erratum in Brain Struct Funct 220: 3753-3755, 2015)], we used the data provided by the Vogt-Vogt school for the composition of a myeloarchitectonic map of the entire human neocortex. Because these data were derived from many different brains, a standard brain had to be introduced to which all data available could be transferred. As such the Colin 27 structural scan, aligned to the MNI305 template was selected. The resultant map includes 180 myeloarchitectonic areas, 64 frontal, 30 parietal, 6 insular, 17 occipital and 63 temporal. Here we present a supplementary map in which the overall density of the myelinated fibers in the individual architectonic areas is indicated, based on a meta-analysis of data provided by Adolf Hopf, a prominent collaborator of the Vogts. This map shows that the primary sensory and motor regions are densely myelinated and that, in general, myelination decreases stepwise with the distance from these primary regions. The map also reveals the presence of a number of heavily myelinated formations, situated beyond the primary sensory and motor domains, each consisting of two or more myeloarchitectonic areas. These formations were provisionally designated as the orbitofrontal, intraparietal, posterolateral temporal, and basal temporal dark clusters. Recently published MRI-based in vivo myelin content mappings show, with regard to the primary sensory and motor regions, a striking concordance with our map. As regards the heavily myelinated clusters shown by our map, scrutiny of the current literature revealed that correlates of all of these clusters have been identified in in vivo structural MRI studies and appear to correspond either entirely or largely to known cytoarchitectonic entities. Moreover, functional neuroimaging studies indicate that all of these clusters are involved in vision-related cognitive functions.
Path integration is a navigation strategy that requires animals to integrate self-movements during exploration to determine their position in space. The medial entorhinal cortex (MEC) has been suggested to play a pivotal role in this process. Grid cells, head-direction cells, border cells as well as speed cells within the MEC collectively provide a dynamic representation of the animal position in space based on the integration of self-movements. All these cells are strongly modulated by theta oscillations, thus suggesting that theta rhythmicity in the MEC may be essential for integrating and coordinating self-movement information during navigation. In this study, we first show that excitotoxic MEC lesions, but not dorsal hippocampal lesions, impair the ability of rats to estimate linear distances based on self-movement information. Next, we report similar deficits following medial septum inactivation, which strongly impairs theta oscillations in the entorhinal-hippocampal circuits. Taken together, these findings demonstrate a major role of the MEC and MS in estimating distances to be traveled, and point to theta oscillations within the MEC as a neural mechanism responsible for the integration of information generated by linear self-displacements.
Rational decision theories posit that good choices should be based solely on information that is relevant to the choice at hand. However, introducing an inferior option that would never be chosen can influence choices among other relevant options, a phenomenon known as the decoy effect. We used functional magnetic resonance imaging (fMRI) combined with a simple gambling task to investigate the neural signature of decision-making under or against the influence of the decoy effect in inferior and superior phantom decoy conditions. The fMRI results show that, compared with choosing against the influence of the dominated phantom inferior option, choosing under the influence of the same option was associated with stronger activation in the bilateral caudate and weaker functional connectivity between the left ventral anterior cingulate cortex (vACC) and the left caudate. The phantom inferior effect selectively enhanced the connectivity from the caudate to the vACC, but not vice versa. Choosing under the influence of the dominated phantom superior option engaged greater activity in the left dorsal ACC and stronger functional connectivity between the left dACC and bilateral anterior insula. Furthermore, the direction of the phantom superior effect was restricted to the path from the left dACC to the anterior insula, but not vice versa. Our findings suggest that a phantom inferior decoy may boost the value of the target via the reward network, whereas a phantom superior decoy may diminish the value of the target option via the aversion network. Our study provides neural evidence to support the view that valuation is context dependent and delineates differential neural networks underlying the influence of unavailable inferior and superior decoy options on our decision-making.
Asymmetries in gray matter alterations raise important issues regarding the pathological co-alteration between hemispheres. Since homotopic areas are the most functionally connected sites between hemispheres and gray matter co-alterations depend on connectivity patterns, it is likely that this relationship is mirrored in homologous interhemispheric co-altered areas. To explore this issue, we analyzed data from patients with Alzheimer's disease, schizophrenia, bipolar disorder and depressive disorder from the BrainMap voxel-based morphometry database. We calculated a map showing the pathological homotopic anatomical co-alteration between homologous brain areas. This map was compared with the meta-analytic homotopic connectivity map obtained from the BrainMap functional database, so as to obtain a meta-analytic connectivity modeling map between homologous areas. We applied an empirical Bayesian technique to determine a directional pathological co-alteration on the basis of possible asymmetries in the conditional probabilities that homologous brain areas are co-altered. Our analysis provides evidence that: the hemispheric homologous areas appear to be anatomically co-altered; this pathological co-alteration is similar to the pattern of connectivity exhibited by the pairs of homologues; and the probability of finding alterations in areas of the left hemisphere seems to be greater when their right homologues are also altered than vice versa, an intriguing asymmetry that deserves to be further investigated and explained.
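As a simplified illustration of the directional comparison described above, the two conditional probabilities for one pair of homologous areas could be estimated as empirical frequencies across experiments, as in the sketch below. This is not the empirical Bayesian procedure used in the study, only a minimal frequency-count analogue.

```python
import numpy as np

def directional_coalteration(left_altered, right_altered):
    """Empirical conditional co-alteration frequencies for one homologous pair.

    left_altered, right_altered : boolean arrays, one entry per experiment/map,
    marking whether the left/right homologue was reported as altered.
    Returns (P(left | right), P(right | left)) as simple frequencies; this is an
    illustrative count only, not the empirical Bayesian procedure of the study.
    """
    left = np.asarray(left_altered, dtype=bool)
    right = np.asarray(right_altered, dtype=bool)
    p_left_given_right = left[right].mean() if right.any() else np.nan
    p_right_given_left = right[left].mean() if left.any() else np.nan
    return p_left_given_right, p_right_given_left

# Hypothetical usage over a set of VBM experiments for one pair of homologues:
# p_l_given_r, p_r_given_l = directional_coalteration(left_flags, right_flags)
```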
Narcolepsy is a chronic disorder of the sleep-wake cycle with pathological shifts between sleep stages. These abrupt shifts are induced by a sleep-regulating flip-flop mechanism which is destabilized in narcolepsy without obvious alterations in EEG oscillations. Here, we focus on the question of whether the pathology of narcolepsy is reflected in EEG microstate patterns. Thirty-channel awake and NREM sleep EEGs of 12 narcoleptic patients and 32 healthy subjects were analyzed. Fitting back the dominant amplitude topography maps into the EEG led to a temporal sequence of maps. Mean microstate duration, ratio total time (RTT), global explained variance (GEV) and transition probability of each map were compared between the two groups. Nine patients reached N1, five reached N2 and only four reached N3. All healthy subjects reached at least N2, and 19 also reached N3. Four dominant maps could be found during wakefulness and all NREM sleep stages in healthy subjects. During N3, narcolepsy patients showed an additional fifth map. The mean microstate duration was significantly shorter in narcoleptic patients than in controls, most prominently in deep sleep. Single maps' GEV and RTT were also altered in narcolepsy. Bearing in mind the limitation of our small sample size, narcolepsy patients showed wake-like features during sleep, as reflected in shorter microstate durations. These microstructural EEG alterations might reflect the intrusion of brain states characteristic of wakefulness into sleep and an instability of the sleep-regulating flip-flop mechanism, resulting not only in pathological switches between REM and NREM sleep but also within NREM sleep itself, which may lead to a microstructural fragmentation of the EEG.
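For readers unfamiliar with the microstate approach, the back-fitting step and the derived statistics (mean duration, RTT, GEV, transition probabilities) can be sketched roughly as follows. This is a simplified illustration on synthetic data, not the authors' pipeline, which would typically also include polarity-invariant clustering of the template maps and temporal smoothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_t, n_maps, fs = 30, 5000, 4, 250            # channels, samples, template maps, sampling rate
eeg = rng.standard_normal((n_ch, n_t))               # synthetic EEG (channels x time)
maps = rng.standard_normal((n_maps, n_ch))           # synthetic template topographies

# Back-fit: label each sample with the template of highest absolute spatial correlation
# (the absolute value makes the assignment polarity-invariant).
eeg_z = (eeg - eeg.mean(0)) / eeg.std(0)
maps_z = (maps - maps.mean(1, keepdims=True)) / maps.std(1, keepdims=True)
corr = maps_z @ eeg_z / n_ch                          # (maps x time) spatial correlations
labels = np.abs(corr).argmax(0)

# Global explained variance: GFP-weighted squared correlation of the winning map.
gfp = eeg.std(0)
gev = [np.sum((gfp[labels == k] * np.abs(corr[k, labels == k])) ** 2) / np.sum(gfp ** 2)
       for k in range(n_maps)]

# Ratio total time: fraction of samples assigned to each map.
rtt = np.bincount(labels, minlength=n_maps) / n_t

# Mean duration (ms) of uninterrupted runs of each label.
change = np.flatnonzero(np.diff(labels)) + 1
segments = np.split(labels, change)
duration_ms = {k: np.mean([len(s) for s in segments if s[0] == k]) * 1000 / fs
               for k in range(n_maps)}

# Transition probabilities between consecutive microstate segments.
seq = np.array([s[0] for s in segments])
trans = np.zeros((n_maps, n_maps))
for a, b in zip(seq[:-1], seq[1:]):
    trans[a, b] += 1
trans /= np.maximum(trans.sum(1, keepdims=True), 1)

print(duration_ms, rtt, gev)
```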
Brain waveforms reconstructed at the source level, as in beamforming, suffer from polarity indeterminacy, which precludes direct group averaging of the associated waveforms. We describe a polarity alignment method as an alternative to averaging rectified (i.e. absolute-value) waveforms. Using MEG from an auditory localisation task, we compare the ability of the two approaches to enable signal detection in the primary auditory cortex over increasing sample size. The two methods are comparable in signal detection sensitivity, but the alignment method preserves polarity alternation in the group average.
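As an illustration of the two approaches being contrasted, the sketch below (synthetic data, not the authors' MEG pipeline; the sign-flip criterion is one simple choice among several) flips each subject's source waveform when it correlates negatively with a reference and then averages, and compares this with averaging rectified waveforms:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.4, 400)                           # 400 ms epoch
template = np.sin(2 * np.pi * 10 * t) * np.exp(-t / 0.15)

# Simulate 20 subjects with random polarity flips plus noise
# (the beamformer polarity indeterminacy referred to in the text).
signs = rng.choice([-1, 1], size=20)
subjects = np.array([s * template + 0.5 * rng.standard_normal(t.size) for s in signs])

# (a) Rectified average: loses polarity alternation and inflates the baseline.
rectified_avg = np.abs(subjects).mean(0)

# (b) Polarity alignment: flip each waveform whose correlation with a reference
# (here simply the first subject) is negative, then average.
reference = subjects[0]
flips = np.sign(subjects @ reference)                  # dot-product sign as a correlation proxy
aligned_avg = (subjects * flips[:, None]).mean(0)

print("rectified mean amplitude:", rectified_avg.mean().round(3))
print("aligned mean amplitude:  ", aligned_avg.mean().round(3))
```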
Most motor mapping procedures using navigated transcranial magnetic stimulation (nTMS) follow the conventional somatotopic organization of the primary motor cortex (M1) by assessing the representation of a particular target muscle, disregarding the possible coactivation of synergistic muscles. By contrast, multiple reports describe a functional organization of M1 with overlap among motor representations acting together to execute movements. In this context, the degree of overlap among cortical representations of synergistic hand and forearm muscles remains an open question. This study aimed to evaluate the muscle coactivation and overlap of representations involved in the grasping movement, and their dependence on the stimulation parameters. The nTMS motor maps were obtained from one carpal muscle and two intrinsic hand muscles at rest. We quantified the overlap between motor maps using size (area and volume overlap degree) and topography (similarity and centroid Euclidean distance) parameters. We demonstrated that these muscle representations are highly overlapping and similar in shape. The overlap degrees involving the forearm muscle were significantly higher than those among the intrinsic hand muscles alone. Moreover, the stimulation intensity had a stronger effect on the size than on the topography parameters. Our study contributes to a more detailed picture of cortical motor representation, pointing towards a synergistic, functional arrangement of M1. Understanding muscle group coactivation may provide more accurate motor maps when delineating eloquent brain tissue during pre-surgical planning.
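The size and topography parameters named above can be illustrated with a small sketch on synthetic grid maps; the exact definitions used in the study may differ (for example, area overlap is sometimes normalized by the union rather than by the smaller map, as assumed here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic MEP-amplitude maps of two muscles on a 10 x 10 stimulation grid (a.u.).
map_a = np.clip(rng.normal(0.4, 0.3, (10, 10)), 0, None)
map_b = np.clip(rng.normal(0.4, 0.3, (10, 10)), 0, None)

# Binary map areas: stimulation sites with a response above a threshold.
thr = 0.5
area_a, area_b = map_a > thr, map_b > thr

# Area overlap degree: shared responsive sites relative to the smaller map area.
overlap_degree = (area_a & area_b).sum() / min(area_a.sum(), area_b.sum())

def centroid(m):
    """Amplitude-weighted centroid of a map, in grid coordinates."""
    ys, xs = np.indices(m.shape)
    return np.array([np.average(ys, weights=m), np.average(xs, weights=m)])

# Topography parameter: Euclidean distance between the two centroids (grid units).
centroid_dist = np.linalg.norm(centroid(map_a) - centroid(map_b))

print(f"overlap degree = {overlap_degree:.2f}, centroid distance = {centroid_dist:.2f}")
```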
Paired associative stimulation (PAS), a form of non-invasive cortical stimulation pairing transcranial magnetic stimulation (TMS) with a peripheral sensory stimulus, has been shown to induce neuroplastic effects in the human motor, somatosensory and auditory cortex. The current study investigated the effects of acoustic PAS on late auditory evoked potentials (LAEP) and the influence of tone duration and placebo stimulation. In two experiments, 18 participants underwent PAS with a 4 kHz paired tone of 400 ms duration using 200 pairs of stimuli (TMS pulse over the left auditory cortex 45 ms after tone onset) presented at 0.1 Hz. In Experiment 1 this protocol was contrasted with a protocol using a short paired tone of 23 ms duration (PAS-23 ms vs. PAS-400 ms). In Experiment 2 this PAS protocol was contrasted with sham stimulation (PAS-400 ms-sham vs. PAS-400 ms). Before and after PAS, LAEP were recorded for tones of 4 kHz (same carrier frequency as the paired tone) and 1 kHz as a control tone. In Experiment 1, there was a significant difference between LAEP amplitudes of the 4 kHz tone after PAS-23 ms and PAS-400 ms, with higher LAEP amplitudes after PAS-23 ms. Before both conditions, no difference could be detected. In Experiment 2 we observed a significant overall decrease in LAEP amplitudes from pre to post PAS. Unspecific decreases of LAEP following PAS with a long paired tone (PAS-400 ms) might be related to habituation effects due to the repeated presentation of sound stimuli; such decreases were not evident for PAS with a short paired tone (PAS-23 ms). Interpreting this result using the concept of temporal integration time allows us to discuss it in the context of spike-timing-dependent plasticity. ## Introduction Transcranial magnetic stimulation (TMS) can inhibit or facilitate neuronal activity via non-invasive electromagnetic stimulation. Electric currents are induced in superficial brain areas via rapidly changing magnetic fields generated by a coil of wires acting as an electromagnet (Barker et al. ; Di Lazzaro et al. ; Merton and Morton ; Siebner and Ziemann ). Applied repetitively, TMS (rTMS) induces neuroplastic changes via mechanisms of long-term potentiation (LTP) or depression (LTD) (Rossi and Rossini ; Thut and Pascual-Leone ). Depending on the frequency of the applied pulses, rTMS has inhibitory (up to 1 Hz) or facilitatory (over 1 Hz) effects on brain activity (Hallett ; Robertson et al. ; Siebner and Ziemann ). Paired associative stimulation (PAS) is one specific rTMS protocol which combines direct stimulation of the brain via very low-frequency rTMS (e.g. 0.1 Hz) with a corresponding peripheral sensory stimulation (e.g. direct stimulation of the area of the somatosensory cortex representing the hand via rTMS combined with electric stimulation of the hand) (Wolters et al. ). The timing of peripheral and central stimulation is crucial, as the assumed neuroplastic mechanism is spike-timing-dependent plasticity (STDP) (Wolters et al. ). According to the model of STDP, the synaptic strength between two neurons is enhanced if postsynaptic activity is preceded by presynaptic activity. Conversely, the link weakens if the postsynaptic neuron is activated prior to the presynaptic neuron (Markram et al. ). In the model of STDP the pairing needs to occur within a critical time period [“tens of milliseconds or less” (Markram et al. )] in order to induce changes in synaptic strength.
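To make this timing rule concrete, the sketch below evaluates a canonical exponential STDP window; the parameter values are purely illustrative and are not taken from the studies cited here. Δt is defined as the time of the postsynaptic spike minus that of the presynaptic spike, so positive Δt (pre before post) strengthens the synapse, negative Δt weakens it, and the effect decays within a few tens of milliseconds:

```python
import numpy as np

def stdp_weight_change(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Canonical exponential STDP window (illustrative parameters).

    delta_t_ms: t_post - t_pre in milliseconds.
    Positive delta_t (pre leads post) -> potentiation; negative -> depression.
    """
    delta_t_ms = np.asarray(delta_t_ms, dtype=float)
    return np.where(delta_t_ms >= 0,
                    a_plus * np.exp(-delta_t_ms / tau_plus),
                    -a_minus * np.exp(delta_t_ms / tau_minus))

# The change is largest for near-coincident spikes and fades beyond a few tens of
# milliseconds, matching the "critical time period" described above.
for dt in [-60, -20, -5, 5, 20, 60]:
    print(f"delta_t = {dt:+4d} ms -> weight change {stdp_weight_change(dt):+.4f}")
```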
In addition, the closer the pairing of pre- and post-synaptic activation is, the more pronounced are the effects (Levy and Steward ; Markram et al. ; Song et al. ). TMS pulses are considered to induce postsynaptic activity and peripheral stimulation is associated with presynaptic activity (Tzounopoulos et al. ). For the exact timing between TMS and peripheral stimulation, the transition time of the stimulus from the periphery to the cortex has to be considered. Several studies of PAS of the somatosensory and motor cortex have revealed results that concur with the concept of STDP (Stefan et al. , , ; Wolters et al. , ). These studies have also found that the effects of PAS can be measured for up to 90 min following the intervention (Stefan et al. ; Wolters et al. ). Besides the extensively investigated motor and somatosensory system, two PAS pilot studies of the auditory cortex have so far been conducted by our group (Engel et al. ; Schecklmann et al. ). In the first study, Schecklmann et al. showed that inhibitory effects of PAS were associated with a close timing between the cortical arrival of a sine tone and the application of a TMS pulse, as indicated by a decrease in the amplitude of the long-latency auditory evoked potentials (LAEP) N1 and P2. The N1 and P2 are part of the P1–N1–P2 complex, originating in the secondary auditory cortex with latencies of 50 ms, 100 ms and 200 ms, respectively (Buettner et al. ; Woods et al. ). With a temporal gap of 5 ms between the TMS pulse (45 ms) and the onset of the N1-component (50 ms) of the LAEP, the PAS condition with an inter-stimulus interval of 45 ms between tone onset and TMS pulse proved most effective in this study. Among several open questions, three were investigated by the subsequent work of Engel and colleagues in two experiments (Engel et al. ). One open question was whether the effects of auditory PAS take place at the level of the primary or secondary auditory cortex. This can be investigated by using auditory steady-state responses (ASSR) (Santarelli et al. ) elicited by tones of different amplitude modulations (AM) as the dependent variable rather than AEP. For the purposes of the current study it is sufficient to know that there is evidence that the ASSR elicited by tones with an AM of 40 Hz represents activity of the primary auditory cortex (Maurer et al. ) and that the ASSR of tones with an AM of 20 Hz represents activity of the secondary auditory cortex (Plourde et al. ; Presacco et al. ). Another open question was related to the importance of the duration of the paired tone. In comparison to motor/somatosensory PAS, which uses peripheral electric stimuli with a duration in the range of microseconds (Stefan et al. ; Wolters et al. ), the duration of the auditory stimulus (400 ms) in the study by Schecklmann et al. was rather long. A longer duration might focus attention on the peripheral stimulus itself, which has been shown to influence PAS effects in the motor cortex (Stefan et al. ). The duration of the paired tone was therefore varied, resulting in two conditions. The most effective protocol of the study by Schecklmann et al. ( ) (featuring an inter-stimulus interval of 45 ms and a paired tone of 400 ms duration) was tested against a protocol featuring a tone of the shortest possible length that allows for a pure sine tone (23 ms) to see if the decreases in LAEP amplitudes during the first study were related to attention processes.
The authors were especially interested in the effects of pure tones (even shorter durations produce click-like sounds), having in mind a potential application of auditory PAS in the treatment of tonal tinnitus. A third open question was whether or not the effects of the study by Schecklmann et al. were caused, or at least influenced, by mechanisms of habituation arising from the repetitive application of acoustic stimuli. Therefore, in another experiment, the PAS protocol derived from the study by Schecklmann et al. was contrasted with a sham condition. In sum, Engel and colleagues found a significant sham-controlled decrease specifically in the amplitude of the 20 Hz ASSR after PAS with a paired tone of 400 ms duration (Engel et al. ). This was interpreted as a carrier-frequency-specific inhibitory effect of auditory PAS taking place in the secondary auditory cortex. No effects of the PAS intervention were found for the amplitudes of the 40 Hz ASSR. This was explained by a possible lack of appropriate stimulation of the main generators of the 40 Hz ASSR due to the specifics of their anatomical origin and properties of the TMS stimulation (Engel et al. ). However, since only 40 Hz ASSR were used to measure the effects of the duration of the paired tone in this study, the influence of paired tone duration on PAS effects remains unclear. The study by Engel et al. and the study by Schecklmann et al. differed with respect to the dependent variables, which were ASSR and LAEP, respectively. Based on the abovementioned conclusions, auditory PAS should induce the same sham-controlled tone-specific effects for LAEP (comparable to the effects on the 20 Hz ASSR). The aim of the present work is a re-analysis of the data of Engel and colleagues (Engel et al. ), analyzing the LAEP, which were the dependent variable in the study by Schecklmann et al. According to the published LAEP and ASSR findings, we hypothesize sham-controlled reductions of LAEP induced by the PAS. Longer stimulus duration is associated with increased perception of, and therefore attention to, the stimulus (Debner and Jacoby ; Overgaard et al. ; Sandberg et al. ). Thus, we also hypothesize attention-related differences in the effects for short (PAS-23 ms) vs. long (PAS-400 ms) durations of the paired tone (larger effects for a longer duration). This hypothesis is based on the reported influence of attention on neuroplasticity in the human motor cortex induced by different non-invasive stimulation techniques such as rTMS (Conte et al. , ), tDCS (Antal et al. ), and PAS (Stefan et al. ). In particular, the 2004 study by Stefan et al. showed larger effects of PAS when attention was directed to the paired somatosensory stimulus. ## Methods ### Subjects With respect to sample size calculation, two statistical contrasts in the present study were the same as in the pilot study and were based on its most effective protocol [PAS(45 ms)] (Schecklmann et al. ). For the contrast post vs. pre PAS(45 ms), the effect size was d = 1.51; with a power of 80%, an alpha of 5% and two-tailed testing, this resulted in a minimum sample size of 6. For the comparison between effects on LAEP for the 4 kHz tone vs. the control tone (1 kHz) (pre-post PAS differences), the required sample size was 15, based on an effect size of d = 0.791.
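The sample-size figures quoted for these two contrasts can be checked with a standard one-sample/paired t-test power calculation (assuming, as the within-subject design implies, that the quoted d values are standardized paired differences). A minimal sketch using statsmodels, with the effect sizes, alpha and power given in the text, should reproduce the reported minima to within rounding:

```python
from math import ceil

from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample / paired t-test power analysis

for label, d in [("post vs. pre PAS(45 ms)", 1.51),
                 ("4 kHz vs. 1 kHz pre-post difference", 0.791)]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    # ceil() gives the minimum whole number of subjects.
    print(f"{label}: d = {d} -> minimum n = {ceil(n)}")
```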
Based on the comparison with effects for both control conditions during the pilot study, which served as the sample size estimate for the sham condition in this experiment, the minimum required sample size was n = 10 for the contrast against the 0.1 Hz control condition (d = 1.05) and n = 13 for the contrast against 1 Hz (d = 0.85). Based on these calculations, 18 healthy students (10 female, 8 male) of the University of Regensburg participated in the current study, all of whom completed both experiments. The participants' average age was 21.3 ± 2.4 (SD) years (range of 19–28 years). All participants were right-handed. Their intelligence quotient (IQ), as evaluated by the MWT-B questionnaire (a German-language questionnaire testing subjects' verbal IQ) (Lehrl ), was 118.4 ± 11.5 (range 104–143). One subject, for whom German was not their native language, had to be excluded from this particular assessment. Participants' hearing thresholds were tested prior to the experiment using a standard audiometer (Midimate 622D, Madsen Electronics, USA) for frequencies ranging from 125 Hz to 8 kHz. Hearing thresholds above 30 dB HL were an exclusion criterion. Directly prior to stimulus presentation in the first session of each experiment, the sensation level (SL) for all tones used as stimuli in the experimental procedures was measured using Adobe Audition 3.0 (Adobe Systems Inc., USA). All acoustic stimuli were presented via in-ear foam headphones at 60 dB SL (E-A-RLink Foam Ear tips for Insert Earphones, Microsonic Inc., USA). Further exclusion criteria were any history or presence of severe physical or mental disorders. All participating subjects were informed about any risks inherent in the experimental procedures before the first experimental session and gave written informed consent for all procedures they underwent. The experimental procedures were in agreement with the Declaration of Helsinki and approved by the Board of Ethics of the University of Regensburg (09-001). ### General Design of the Study The same 18 subjects were included in Experiments 1 and 2. Both experiments consisted of two different PAS conditions (Experiment 1: long vs. short paired tone duration; Experiment 2: verum vs. sham stimulation). For both experiments, the PAS conditions/sessions were conducted 1 week apart. The standard condition for both experiments was PAS with a 4 kHz sine tone of 400 ms duration paired with a TMS pulse 45 ms after tone onset (PAS-400 ms), which had been shown to be the most effective condition (i.e. maximal inhibition of LAEP) in the first auditory PAS study (Schecklmann et al. ). Dependent variables were late auditory evoked potentials (LAEP), which were measured before and after PAS. Before and after each PAS, LAEP were recorded for pure tones of 1 kHz and 4 kHz carrier frequency, 800 ms duration, with a rise- and fall-time of 75 ms. All tones were amplitude modulated at a frequency of 40 Hz during Experiment 1. During Experiment 2, two additional tones were included in the measurement of LAEP, featuring the same properties as the other two tones except for an amplitude modulation of 20 Hz. The tones were presented 70 times each in a randomized order with a randomly varying inter-stimulus interval of 2300–2800 ms at 60 dB SL. Analyses of the ASSR resulting from the AM have been presented in a separate publication (Engel et al. ). ### Experiment 1 During Experiment 1, two PAS conditions varying in tone duration were contrasted.
Both consisted of 200 pairs of stimuli (a pure tone of 4 kHz presented binaurally, followed by a TMS pulse over the left auditory cortex 45 ms after the tone onset) presented at a frequency of 0.1 Hz. The standard condition PAS-400 ms, with a tone duration of 400 ms, was contrasted with a condition with a tone of 23 ms duration (PAS-23 ms). See Fig.  for a visualization of the timing of the different stimuli of the PAS-400 ms and the PAS-23 ms condition. The order in which each subject was presented with the two different paradigms was pseudo-randomly assigned and balanced across all participants. To avoid any possible interaction of effects of the two different paradigms, the two experimental sessions were conducted exactly one week apart. Each stimulation protocol lasted about 30 min. The LAEP recordings took approximately 7 min. Visualization of the timing between acoustic stimulus and TMS during the PAS-400 ms (top) and the PAS-23 ms (bottom) condition ### Experiment 2 Experiment 2 contrasted PAS-400 ms with a sham stimulation (PAS-400 ms-sham). See Fig.  for a visualization of the timing between tone onset and TMS stimulation for the PAS-400 ms condition. For the sham condition, the back of the TMS coil was placed against the subjects' head. This resulted in exposure to only a very weak magnetic field (see Section " ") while retaining the feeling of the coil against the participants' head as well as the clicking noise of the TMS device, although the tactile sensation of the TMS pulse was absent. The order in which the different paradigms were administered was pseudo-randomized and balanced across all participants, with a 1-week interval between experimental sessions. LAEP were recorded before and after each PAS. In this experiment, LAEP were recorded for tones of 1 kHz and 4 kHz carrier frequency, each amplitude modulated at either 20 Hz or 40 Hz. The 20 Hz amplitude modulation was introduced in order to analyze the data in the context of another study (Engel et al. ) with respect to 20 Hz ASSR, which are generated primarily in the secondary auditory cortex, as opposed to 40 Hz ASSR, which originate mainly in the primary auditory cortex. As LAEP have neural generators in the secondary auditory cortex, it is not relevant to differentiate between PAS effects on LAEP evoked by tones with different amplitude modulations. For the present analysis, we averaged the LAEP of the 40 Hz and 20 Hz AM tones. Because of the inclusion of two new amplitude-modulated tones in the measurement of LAEP, this step took approximately 14 min during Experiment 2. See Fig.  for a visualization of the overall setup of Experiments 1 and 2. General design of Experiments 1 and 2 and intervals between experiments and experimental sessions ### TMS Settings At the first session of each experiment, the subjects' resting motor threshold (RMT) was determined following the protocol described by Pridmore et al. ( ). The term RMT describes the intensity at which a TMS pulse over the motor cortex elicits a visually perceptible muscle twitch in five out of ten trials. The intensity of the TMS pulses during the PAS was defined to be 110% of the RMT but not more than 60% of the maximal intensity of the stimulation device. This limitation was introduced out of concern for the participants' safety and comfort.
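The intensity rule described here (110% of the individual RMT, capped at 60% of maximum stimulator output) amounts to a one-line computation; a minimal sketch with illustrative RMT values:

```python
def pas_intensity(rmt_percent_mso: float, cap: float = 60.0) -> float:
    """Stimulation intensity in % of maximum stimulator output: 110% of RMT, capped."""
    return min(1.1 * rmt_percent_mso, cap)

# Illustrative values: a typical RMT and one that would hit the safety/comfort cap.
for rmt in (49.0, 57.0):
    print(f"RMT {rmt:.0f}% -> PAS intensity {pas_intensity(rmt):.1f}%")
```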
The coil position for stimulating the left auditory cortex was determined according to a standard protocol based on the international 10–20 EEG system (2.5 cm above T3 on the line between T3 and Cz and then 1.5 cm in the posterior direction perpendicular to the line T3-Cz) (Langguth et al. ). The same protocol was used in the 2011 study by Schecklmann et al. ( ). All experimental procedures used a water-cooled figure-of-eight coil (MAGPRO, Medtronic, USA) with a sixfold decreased magnetic field on the back side of the coil, as determined by unpublished measurements whose technical details are described in previous work (van Doren et al. ). This decrease in the magnetic field is relevant for the sham condition applied during Experiment 2. During Experiment 1, we recorded an average RMT of 49.2 ± 4.9% of the maximal output of the TMS device. The resulting average intensity of the TMS pulses administered during PAS (110% RMT) was 53.8 ± 4.8%. The corresponding values for Experiment 2 were an RMT of 49.6 ± 5.4% and a TMS intensity of 53.8 ± 5.1%. Comparing the participants' RMT between Experiment 1 and Experiment 2, no significant difference could be found (T = 0.414; df = 1,17; p = 0.684). Since 110% of their RMT would have exceeded 60% of the maximal intensity of the stimulation device, two subjects were stimulated with an intensity of 60% in Experiment 1 and three in Experiment 2 (due to the abovementioned concern about the participants' safety and comfort). ### Recording and Processing of EEG Data We used an EEG cap (62 channels) corresponding in size to the diameter of the subjects' heads (Braincap TMS, Brain Products GmbH, Germany). The EEG cap was connected to an amplifier (Brainamp DC, Brain Products GmbH, Germany), powered by a battery (Power Pack, Brain Products GmbH, Germany). The signal was recorded with BrainVision Recorder (Brain Products GmbH, Germany). During the EEG recordings, the FCz electrode was used as reference and the AFz electrode as ground electrode. The sampling rate of the EEG was 500 Hz and the impedance level of the EEG electrodes was kept below 10 kΩ. Preprocessing of EEG data consisted of segmentation of data from 2 s before until 2.5 s after each tone onset. Next, the EEG data were subjected to a high-pass filter of 0.1 Hz and a low-pass filter of 90 Hz. The segments were then visually examined for any artifacts, such as muscle contractions. Contaminated segments were manually rejected. EEG channels with low signal-to-noise ratio (no signal, 50 Hz artifacts, highly variable signal) were excluded from the next preprocessing steps. No more than five channels were allowed to be excluded (in most cases it was one or two channels, except for one measurement with three and one with five bad channels). Data were then subjected to independent component analysis (ICA) to identify further artifacts, such as eye blinks, which were rejected under visual control before back-transformation of the components. Finally, all segments were re-examined for any artifacts. After artifact rejection, all channels were re-referenced to an average reference, allowing reconstruction of the recording reference FCz. EEG channels that were excluded earlier were reconstructed by interpolation of the surrounding channels. Measurements of all subjects were examined with regard to the number of segments still remaining after this step. At least 59 segments remained for all sessions.
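The preprocessing chain described above (segmentation from −2 to 2.5 s around tone onset, 0.1–90 Hz filtering, channel and artifact rejection, ICA, average re-reference, interpolation of bad channels) was implemented in EEGLAB. For orientation, an analogous sketch in MNE-Python is shown below; the file name, event codes, bad-channel label, ICA settings and excluded component are purely illustrative:

```python
import mne
from mne.preprocessing import ICA

# Load a BrainVision recording (file name is a placeholder).
raw = mne.io.read_raw_brainvision("subject01_pre_pas.vhdr", preload=True)

# Band-pass roughly matching the reported 0.1-90 Hz filtering.
raw.filter(l_freq=0.1, h_freq=90.0)

# Epoch from 2 s before to 2.5 s after each tone onset (event codes are illustrative).
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-2.0, tmax=2.5,
                    baseline=None, preload=True)

# Mark noisy channels (illustrative), remove e.g. blink components with ICA,
# then re-reference to the common average and interpolate the bad channels.
epochs.info["bads"] = ["FT9"]
ica = ICA(n_components=20, random_state=42)
ica.fit(epochs)
ica.exclude = [0]            # component index chosen after visual inspection (illustrative)
ica.apply(epochs)

epochs.set_eeg_reference("average")
epochs.interpolate_bads()
```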
To make the measurements more comparable, only the first 59 artifact-free segments of each dataset were taken for further analysis, reducing the total number of recorded LAEP segments used for further interpretation from 70 to 59. Next, the segments were again filtered with a low-pass filter of 18 Hz in order to eliminate the steady-state responses that result from stimulation with amplitude-modulated tones. Processing to this point was conducted using the MATLAB (Mathworks, USA) toolbox EEGLAB (Delorme and Makeig ). For further analysis, the data were transferred to the FieldTrip toolbox (Oostenveld et al. ). The 59 trials of each measurement were averaged and baseline-corrected using the interval of 300 ms before tone onset. The resulting data were then examined with regard to their plausibility as LAEP components, considering the topography as well as the time course of fluctuations in the EEG. ### Identification of LAEP For reasons of plausibility, and in order to define the channels and the time windows of interest for data analyses, the EEG data recorded prior to PAS were inspected with regard to evoked activity. These pre-PAS data revealed clear auditory evoked activity, as indicated by a fronto-central topography and a negative deflection at 100 ms (LAEP N1) followed by a positive peak around 200 ms (LAEP P2) after tone onset. The topographies of the N1 and the P2 component for the tones of 40 Hz and 20 Hz amplitude modulation, as well as the trajectories of the recorded LAEP for the 20 Hz and 40 Hz amplitude-modulated tones in the region of interest, can be seen in Fig.  . Topographies (top) of the N1–P2-complex and trajectories (bottom) of the N1 and P2 for 40 Hz (left) and 20 Hz (right) amplitude modulation tones for measurements before PAS. Asterisks and lines mark the channels and time of interest which were averaged for subsequent analyses for all conditions The time windows of interest were chosen according to the points where the signals crossed the zero line, which was at 100 ms and 180 ms for the N1 and at 180 ms and 300 ms for the P2. Based on these topographies of pre-PAS evoked activity, the channels of the electrodes F3, F1, Fz, F2, F4, FC3, FC1, FCz, FC2, FC4, C3, C1, Cz, C2, C4, CP1, CPz and CP2 were chosen as channels of interest for further analyses. Times and channels of interest were averaged across the 20 Hz and 40 Hz AM tones, as trajectories and topographies were similar and as LAEP are considered to have neural generators in the secondary auditory cortex. The N1 and the P2 components are thought to originate from neural generators in the secondary auditory cortex, with some contribution from other cortical regions (Eggermont and Ponton ; Woods et al. ). No clear P1 peak was evident, which is similar to the findings of the study by Schecklmann et al. ( ). As in that study, we averaged the channels and time intervals of interest for the N1 and P2 and calculated the peak-to-peak difference of the two LAEP components (P2–N1), which was used as the dependent variable. ### Statistical Evaluation of the EEG Data The averages over channels and time intervals of interest were exported to SPSS (IBM Corp., USA) and assessed using repeated-measures analyses of variance (rm-ANOVAs). Overall rm-ANOVAs for Experiments 1 and 2 were calculated. For Experiment 1, a three-factorial rm-ANOVA was performed with the within-subject factors ‘time’ (pre PAS vs. post PAS), ‘tone duration’ (23 ms vs. 400 ms) and ‘frequency’ (4 kHz (same carrier frequency as the paired PAS tone) vs. 1 kHz control tone).
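The dependent variable, i.e. the P2–N1 peak-to-peak difference obtained from window means over the channels of interest, can be sketched as follows on synthetic evoked data (the time windows are those reported above; in the real analysis the trace is the trial- and channel-average of the recorded LAEP):

```python
import numpy as np

fs = 500                                     # sampling rate (Hz), as reported
times = np.arange(-0.3, 0.8, 1 / fs)         # seconds relative to tone onset

# Synthetic evoked response, already averaged over the 18 fronto-central
# channels of interest: an N1-like trough near 100 ms, a P2-like peak near 220 ms.
evoked = (-2.0 * np.exp(-((times - 0.10) / 0.03) ** 2)
          + 1.5 * np.exp(-((times - 0.22) / 0.05) ** 2))

def window_mean(signal, times, t_start, t_end):
    """Mean amplitude within a latency window (seconds)."""
    mask = (times >= t_start) & (times < t_end)
    return signal[mask].mean()

n1 = window_mean(evoked, times, 0.100, 0.180)    # N1 window: 100-180 ms
p2 = window_mean(evoked, times, 0.180, 0.300)    # P2 window: 180-300 ms
p2_n1 = p2 - n1                                  # peak-to-peak dependent variable

print(f"N1 = {n1:.2f} (a.u.), P2 = {p2:.2f} (a.u.), P2-N1 = {p2_n1:.2f} (a.u.)")
```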
For Experiment 2, a three-factorial rm-ANOVA was calculated with the within-subject factors ‘time’ (pre PAS vs. post PAS), ‘PAS’ (active vs. sham) and ‘frequency’ (4 kHz (same carrier frequency as the paired PAS tone) vs. 1 kHz control tone). LAEP for both 20 Hz and 40 Hz AM tones in Experiment 2 were averaged (see methods and results). Overall rm-ANOVAs were performed two-tailed with a statistical threshold of 5%. For post-hoc tests we used subsequent rm-ANOVAs and t-tests with a Bonferroni-adjusted significance threshold. In addition, we report effect sizes, i.e. partial eta-squared (η²) for rm-ANOVA (small effect: 0.02 ≤ η² < 0.13; medium effect: 0.13 ≤ η² < 0.26; large effect: η² > 0.26) and Cohen’s d for t-tests (small effect: 0.2 ≤ d < 0.5; medium effect: 0.5 ≤ d < 0.8; large effect: d > 0.8). All variables were normally distributed, as indicated by non-significant Kolmogorov–Smirnov tests, and thus fulfilled the assumptions of the parametric tests used. ## Results ### Experiment 1 The rm-ANOVA with the factors tone duration (PAS-400 ms vs. PAS-23 ms), time (pre vs. post PAS) and frequency [4 kHz (PAS paired frequency) vs. 1 kHz (control tone)] revealed a statistically significant main effect of tone frequency (F = 101.708; df = 1,17; p < 0.001; η² = 0.857), representing higher amplitudes for the 1 kHz than for the 4 kHz tone, which is a common finding (Maurer et al. ; Vesco et al. ). The main effects of tone duration and time were not statistically significant. In addition, we found a non-significant three-way interaction effect with medium effect size (F = 3.782; df = 1,17; p = 0.069; η² = 0.182). The non-significant interaction effects frequency by time (F = 3.562; df = 1,17; p = 0.076; η² = 0.173) and tone duration by time (F = 3.685; df = 1,17; p = 0.072; η² = 0.178), both with medium effect sizes, can be neglected in the light of the three-way interaction. Comparable to the study by Schecklmann et al., which also showed a non-significant overall interaction effect (p < 0.1), we conducted exploratory analyses based on the medium effect size for the three-way interaction and the a priori formulated effects of tone duration. Bonferroni-corrected post-hoc rm-ANOVAs were conducted for both frequencies and revealed a significant duration-by-time interaction effect with large effect size (F = 7.795; df = 1,17; p = 0.026; η² = 0.314) for the 4 kHz tone, but not for the 1 kHz control tone, for which the effect size was negligible (F = 0.197; df = 1,17; p > 0.999; η² = 0.011). Bonferroni-corrected post-hoc t-tests for the 4 kHz tone revealed a significant difference with medium effect size between LAEP amplitudes after PAS-400 ms and PAS-23 ms (T = 2.849; df = 17; p = 0.044; d = 0.672), with no difference before the PAS (T = 0.279; df = 17; p > 0.999; d = 0.066). This pattern was associated with a descriptive (statistically non-significant) increase of LAEP amplitudes after PAS-23 ms with small effect size (T = 1.762; df = 17; p = 0.384; d = 0.415) and a descriptive decrease after PAS-400 ms with small effect size (T = 2.015; df = 17; p = 0.240; d = 0.475). For details see Fig.  . Individual changes of the N1–P2-complex from pre to post PAS for Experiment 1. Mean of each condition is indicated by the bold lines. *Bonferroni-corrected statistically significant difference in LAEP amplitude between PAS-400 ms and PAS-23 ms for post PAS ### Experiment 2 The rm-ANOVA with the factors PAS (verum vs. sham), time (pre vs. post TMS) and frequency (4 kHz vs.
1 kHz) revealed a significant main effect of frequency with large effect size (F = 155.524; df = 1,17; p < 0.001; η² = 0.901), again reflecting higher amplitudes for the 1 kHz than for the 4 kHz tone, and a significant main effect of time with large effect size (F = 2.370; df = 1,17; p < 0.001; η² = 0.626), indicating a decrease of LAEP amplitudes from pre to post PAS. Importantly, this decrease occurred for all PAS conditions during Experiment 2, including the sham stimulation. Other main and interaction effects were not significant. For details see Fig.  . Individual changes of the N1–P2-complex for Experiment 2. Mean of each condition is indicated by the bold lines. 20 Hz and 40 Hz AM tones were averaged for each condition. *Statistically significant change in LAEP amplitude from pre to post PAS ## Discussion The aim of the current study was the replication of the former findings of Schecklmann et al. and the re-analysis of the work of Engel et al., both of which showed decreases in secondary auditory cortex activity after PAS-400 ms, as indicated by LAEP (Schecklmann et al. ) and 20 Hz ASSR (Engel et al. ), respectively. We expected differences in PAS effects as a result of different durations of the paired tone [larger effects for a longer duration of the paired tone (PAS-400 ms)] and sham-controlled decreases in LAEP amplitudes. There are two main results of this study. First, the post-PAS LAEP amplitudes of the 4 kHz tone differed significantly (large effect size) between short (PAS-23 ms) and long (PAS-400 ms) paired tones, with higher LAEP amplitudes after PAS-23 ms; this was mirrored by a non-significant increase (small effect size) in LAEP amplitudes from pre to post PAS-23 ms and a non-significant decrease (small effect size) from pre to post PAS-400 ms (Experiment 1). Second, LAEP amplitudes decreased unspecifically from pre to post stimulation, as can be seen from the significant main effect of time in Experiment 2 independent of the experimental factors, meaning reductions in LAEP amplitudes for both the active and the sham condition. These effects were similar in Experiment 1, but only on a descriptive level (see Figs.  and ). Therefore, a long duration of the paired tone (PAS-400 ms) seems to be associated with inhibitory effects (independent of TMS). A short duration of the paired tone (PAS-23 ms) seems to have at least no inhibitory effects on LAEP amplitudes. These findings are not in line with our hypotheses. Based on the analyses of the ASSR of the same data set, as well as findings from studies of the motor cortex, we expected to find sham-controlled, tone-specific decreases in amplitude for the LAEP, as well as an influence of the duration of the paired tone with larger effects for a longer stimulus duration (PAS-400 ms). Up to now, we have argued that all experimental findings of auditory PAS studies can be explained by assuming STDP as the governing mechanism of neuroplastic changes induced by PAS. In the following discussion, we try to provide possible explanations for these new, inconclusive results.
The decrease in LAEP amplitude from pre to post stimulation for both carrier frequencies (with the notable exception of the LAEP for the 1 kHz, 40 Hz AM tone) and after every PAS condition including the sham condition of Experiment 2 (and also, if only on a descriptive level, during Experiment 1, except for the LAEP for the 4 kHz tone after PAS-23 ms) may be caused by mechanisms of habituation or adaptation, as the repeated presentation of a sensory stimulus can lead to a decrease in the brain's response to that stimulus (Lanting et al. ). According to the model of stimulus-specific adaptation, processes of adaptation happen specifically for tones with the exact same features, including carrier frequency and amplitude modulation (Perez-Gonzalez and Malmierca ). This is in line with our finding that a decrease in LAEP amplitude occurred for all tones, since the numbers of presentations of the amplitude-modulated tones with different features were exactly the same. There is evidence (Pantev et al. ) that LAEP are more susceptible to habituation effects than ASSR, which would explain the discrepant findings between the analysis of ASSR (Engel et al. ) and the current LAEP findings. Another possible mechanism might be a decrease in vigilance over time, to which LAEP are susceptible. For ASSR (Engel et al. ), this seems to be less relevant. This effect may partially explain the difference between the current results and those of the study by Schecklmann et al. ( ), in which the subjects performed a simple visual attention task instead of listening passively. The lack of experimental control for the decrease of LAEP amplitude is therefore a main limitation of the analysis of LAEP. Another limitation is that the sham control was only done for PAS-400 ms (Experiment 2) and not for PAS-23 ms (Experiment 1). Nonetheless, this decrease in LAEP amplitude was observed only for the conditions using a 400 ms long 4 kHz paired tone during PAS, whereas a decrease was not evident for the LAEP amplitudes after the PAS condition using a 23 ms long 4 kHz paired tone, as mirrored by a non-significant increase with small effect size. This was found only for the LAEP evoked by a 4 kHz tone (paired with PAS) and not by the control tone of 1 kHz. This finding might be explained by the concept of “temporal integration time” (TIT) for AEP. The duration needed for a stimulus to induce particular AEP components increases with the latency of these components. It has been suggested that for at least some of the neuronal generators of the N1 and later components, tones of a duration of at least 24 ms are needed in order for those generators to be sufficiently activated (Alain et al. ). This implies that the tone of 23 ms duration we used in the PAS-23 ms condition was too short to activate these neuronal groups reliably. A tone of 23 ms duration is more likely to evoke only earlier AEP components, since the P1 components can be easily measured after short clicks used, for example, in P50 gating experiments (Wilde et al. ; Yadon et al. ). If, based on the concept of “TIT”, we assume that a 23 ms tone induces only AEP components with a latency of 50 ms and earlier, the majority of the neurons activated by such a short tone are stimulated before the TMS pulse that is administered 45 ms after tone onset. For these neurons, the temporal order of peripheral stimulation and TMS pulse is clearly in favor of facilitatory effects according to the model of STDP. As amplitudes of later components are dependent on effects in earlier components (Maurer et al.
), this effect might nonetheless be measured with LAEP. These considerations might explain the descriptive but frequency-specific increase in LAEP amplitudes for the PAS condition using a shorter paired tone (PAS-23 ms). In this case, the induced neuroplastic effect would even be strong enough to override the unspecific inhibitory effects seen for all other conditions. Our finding of a difference in PAS effects depending on the duration of the paired tone, as well as the lack of a difference in effects for the active and the sham condition, contrasts with the ASSR findings from the same data set, which showed no effects of the duration of the paired tone but sham-controlled decreases for the 20 Hz ASSR. We hypothesized that PAS specifically affects the secondary auditory cortex, as indexed by the 20 Hz ASSR, and does not act at the level of the primary auditory cortex, as indexed by the 40 Hz ASSR, since this cortical field is located too deep within the skull for the magnetic field of the TMS to reach it. Based on the superposition theory, ASSR are generated by the superposition of AEP of a very specific latency/frequency. In combination with the exact timing of the PAS, TMS effects were specific to the 20 Hz ASSR. 20 Hz ASSR were not measured for Experiment 1, which investigated the effects of different durations of the paired tone. LAEP amplitudes may not show sham-controlled effects because the neural generation of the N1–P2 complex is not based on auditory cortex activity alone, which may obscure specific PAS effects. Moreover, as outlined above, the LAEP findings reported in this study suggest effects of acoustic PAS only on AEP with latencies shorter than 50 ms, which show lower latency variability than AEP with longer latencies, are generated more exclusively in the auditory cortex, and are thus more susceptible to PAS effects. In conclusion, the results of the present study seem to suggest that most of the effects on LAEP recorded after PAS-400 ms are due to unspecific mechanisms, such as habituation or adaptation, rather than to mechanisms caused by the stimulation itself. This might, however, not be the case for PAS-23 ms. Short tones of 23 ms as peripheral PAS stimuli seem to activate mainly the P1 and earlier AEP components. In combination with TMS and an inter-stimulus interval of 45 ms, auditory PAS with a short paired tone (PAS-23 ms) leads to post-PAS LAEP amplitudes that differ significantly from those resulting from auditory PAS with a long paired tone (PAS-400 ms). However, further research is needed to examine how the duration of the paired tone influences the effects of PAS, possibly by comparing a wider range of durations as well as investigating the effects of auditory PAS on middle-latency AEP. Also, the influence of attention on the effects of auditory PAS remains unclear, since the reported difference in the effects of auditory PAS with a long (PAS-400 ms) and a short paired tone (PAS-23 ms) seems to be caused by mechanisms other than attention. Therefore, future studies of auditory PAS should explore different approaches to directing attention to the paired auditory stimulus, as well as their influence on PAS effects. Guiding the TMS with neuronavigation would help to answer the question of the site at which the assumed neuroplastic effects of acoustic PAS take place.
This scientific commentary refers to ‘Network localization of clinical, cognitive, and neuropsychiatric symptoms in Alzheimer’s disease’, by Tetreault et al. (doi:10.1093/brain/awaa058).
Cerebellar ataxia, neuropathy and vestibular areflexia syndrome (CANVAS) is a progressive, late-onset neurological disease. Recently, a pentanucleotide expansion in intron 2 of RFC1 was identified as the genetic cause of CANVAS. We screened an Asian-Pacific cohort for CANVAS and identified a novel RFC1 repeat expansion motif, (ACAGG)exp, in three affected individuals. This motif was associated with additional clinical features including fasciculations and elevated serum creatine kinase. These features have not previously been described in individuals with genetically-confirmed CANVAS. Haplotype analysis showed our patients shared the same core haplotype as previously published, supporting the possibility of a single origin of the RFC1 disease allele. We analysed data from >26 000 genetically diverse individuals in gnomAD to show enrichment of (ACAGG) in non-European populations.
Cortical superficial siderosis is an established haemorrhagic neuroimaging marker of cerebral amyloid angiopathy. In fact, cortical superficial siderosis is emerging as a strong independent risk factor for future lobar intracerebral haemorrhage. However, the underlying neuropathological correlates and pathophysiological mechanisms of cortical superficial siderosis remain elusive. Here we use an in vivo MRI, ex vivo MRI, histopathology approach to assess the neuropathological correlates and vascular pathology underlying cortical superficial siderosis. Fourteen autopsy cases with cerebral amyloid angiopathy (mean age at death 73 years, nine males) and three controls (mean age at death 91 years, one male) were included in the study. Intact formalin-fixed cerebral hemispheres were scanned on a 3 T MRI scanner. Cortical superficial siderosis was assessed on ex vivo gradient echo and turbo spin echo MRI sequences and compared to findings on available in vivo MRI. Subsequently, 11 representative areas in four cases with available in vivo MRI scans were sampled for histopathological verification of MRI-defined cortical superficial siderosis. In addition, samples were taken from predefined standard areas of the brain, blinded to MRI findings. Serial sections were stained for haematoxylin and eosin and Perls' Prussian blue, and immunohistochemistry was performed against amyloid-β and GFAP. Cortical superficial siderosis was present on ex vivo MRI in 8/14 cases (57%) and 0/3 controls (P = 0.072). Histopathologically, cortical superficial siderosis corresponded to iron-positive haemosiderin deposits in the subarachnoid space and superficial cortical layers, indicative of chronic bleeding events originating from the leptomeningeal vessels. Increased severity of cortical superficial siderosis was associated with upregulation of reactive astrocytes. Next, cortical superficial siderosis was assessed on a total of 65 Perls'-stained sections from MRI-targeted and untargeted sampling combined in cerebral amyloid angiopathy cases. Moderate-to-severe cortical superficial siderosis was associated with concentric splitting of the vessel wall (an advanced form of cerebral amyloid angiopathy-related vascular damage) in leptomeningeal vessels (P < 0.0001), but reduced cerebral amyloid angiopathy severity in cortical vessels (P = 0.048). In terms of secondary tissue injury, moderate-to-severe cortical superficial siderosis was associated with the presence of microinfarcts (P = 0.025), though not microbleeds (P = 0.973). Collectively, these data suggest that cortical superficial siderosis on MRI corresponds to iron-positive deposits in the superficial cortical layers, representing the chronic manifestation of bleeding episodes from leptomeningeal vessels. Cortical superficial siderosis appears to be the result of predominantly advanced cerebral amyloid angiopathy of the leptomeningeal vessels and may trigger secondary ischaemic injury in affected areas.