Geometric Constructions

In this mini-lesson, we will explore the world of geometric constructions in math. Geometric construction is the process of drawing a geometrical figure using two geometrical instruments, a compass and a ruler, and we will learn how to use these tools to construct lines, angles, and shapes. In addition to constructions, we will also explore geometry in the world around us and use it to inspire a creative extension piece. You can try your hand at a few interesting practice questions at the end of the page.

What is geometric construction? Geometric constructions, also called Euclidean constructions after the ancient Greek mathematician Euclid, are geometrically correct figures that are drawn using only a compass and a straightedge. They are accurate mathematical drawings made without the use of numbers: no measuring is needed at all, measurements of angles and lines are not taken, and rulers are not used except as straightedges. This is the "pure" form of geometric construction: no numbers involved! Not everyone who loves mathematics loves numbers. Geometric constructions help us to draw lines, angles, and shapes with simple tools; regular polygons, for instance, have equal angles and equal sides.

The tools. We use a ruler to draw line segments and measure their lengths, and a compass to draw arcs and circles, to mark off equal lengths, and to copy angles exactly; the compass is also what lets us draw big circles. The idealized ruler, known as a straightedge, is assumed to be infinite in length, have only one edge, and carry no markings. In practice you will need paper, a sharpened pencil, a straightedge to control your lines, and a drawing compass to swing arcs and scribe circles; the same basic instruments (a graduated scale, a pair of set-squares, a divider, a compass, and a ruler) are used in drafting technical drawings. A few tips: always keep 2 pencils in your geometry box, one for insertion in the compass and the other to draw lines and mark points, and be careful while doing geometric constructions.

Angle bisector. If a ray divides an angle into two equal angles, then it is an angle bisector. To bisect $$\angle PQR$$: let Q be the center and, with any radius, draw an arc intersecting the rays $$\overrightarrow{QP}$$ and $$\overrightarrow{QR}$$, say at the points E and D respectively; then draw equal arcs centered at E and D meeting at a point S, and draw the ray connecting S to Q. This ray divides $$\angle PQR$$ into two equal angles.

Perpendicular bisector. Suppose we have a line segment $$\overline{AB}$$. To carry out this construction, we use the fact that any point on the perpendicular bisector of a line segment is equidistant from the two end-points of the line segment. By the SAS criterion, the two triangles formed in the construction are congruent, which means that AO = BO and $$\angle APO$$ = $$\angle BPO$$; hence $$\angle AOP$$ = $$\angle BOP$$ = $$\dfrac{1}{2}\cdot 180^\circ = 90^\circ$$. The underlying principle is that two lines are perpendicular if they intersect to form congruent adjacent angles. A closely related construction gives the perpendicular line through a point P not on the line: draw arcs from P crossing the line, then equal arcs from those crossings meeting at a point Q, and draw a line through P and Q.

Parallel line through a point. Parallel lines are a pair of lines that never cut (intersect) or meet each other, and they lie on the same plane. To draw a line parallel to $$\overleftrightarrow{AB}$$ through a point P, we copy an angle. Step 1: Draw a transversal through P meeting $$\overleftrightarrow{AB}$$ at a point X. Step 2: Taking X as a center and any radius, draw an arc intersecting the segment $$\overline{PX}$$ at M and $$\overleftrightarrow{AB}$$ at point N. Step 3: Now, taking P as a center and the same radius, draw an arc EF intersecting the segment $$\overline{PX}$$ at Q. The remaining steps copy the arc MN onto the arc EF and draw the required line through P; all of these steps are used to draw a parallel line, and the compass is what makes sure the copied angle is exactly the same as the original, so the two lines are parallel to each other.

Worked example. The green and blue lines are parallel, and M and N are points on the green and blue lines respectively. What will be the shortest distance from N to the green line? The shortest distance is measured along the perpendicular from N to the green line, which here equals the distance between the two parallel lines. Hence, this distance is equal to 6 units.

Practice. 1. In $$\triangle ABC$$, AB = 5.5 cm, BC = 6 cm, CA = 4.5 cm; construct $$\triangle ABC$$. 2. Given $$\triangle ABC \sim \triangle LMN$$, construct $$\triangle ABC$$ and $$\triangle LMN$$ such that BC/MN = 5/4 (Maharashtra State Board Class 10 Maths, Chapter 4 Geometric Constructions, Practice Set 4.1, Question 1). 3. The kite shown has two angles bisected: $$\overleftrightarrow{KT}$$ divides the angles $$\angle EKI$$ and $$\angle ITE$$ in two equal angles respectively; can you find the measures of $$\angle EKI$$ and $$\angle ITE$$? 4. Partition a segment into n congruent segments. 5. Which geometric principle is used to justify a given construction? Other constructions to try include a triangle-circumscribing circle and a line tangent to a circle.

Let's summarize. The mini-lesson targeted the fascinating concept of geometric constructions: drawings that require no measuring at all, just a compass and straightedge. Key to Geometry workbooks introduce students to a wide range of geometric discoveries as they do step-by-step constructions, and compass-and-straightedge construction worksheets can be printed as handouts or practice sheets. At Cuemath, our team of math experts is dedicated to making learning fun for our favorite readers, the students; through an interactive and engaging learning-teaching-learning approach, the teachers explore all angles of a topic.
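To see numerically why the compass steps of the angle bisector work, here is a small Python check (not part of the original lesson; the coordinates of P, Q, and R are made up for illustration). It imitates the construction: mark points D and E at the same compass width from Q on the two rays, take S on the bisector as the midpoint of DE (the crossing point of the two equal arcs lies on the same ray from Q), and verify that ray QS makes equal angles with QP and QR.

    import math

    Q = (0.0, 0.0)          # vertex of the angle (illustrative coordinates)
    P = (4.0, 1.0)
    R = (1.0, 3.0)

    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)

    # Compass step: mark D and E at the same distance r from Q on each ray.
    r = 2.0
    uQP = unit((P[0] - Q[0], P[1] - Q[1]))
    uQR = unit((R[0] - Q[0], R[1] - Q[1]))
    D = (Q[0] + r * uQP[0], Q[1] + r * uQP[1])
    E = (Q[0] + r * uQR[0], Q[1] + r * uQR[1])

    # Straightedge step: the bisector is the ray from Q through S.
    S = ((D[0] + E[0]) / 2.0, (D[1] + E[1]) / 2.0)
    uQS = unit((S[0] - Q[0], S[1] - Q[1]))

    def angle_deg(u, v):
        return math.degrees(math.acos(u[0] * v[0] + u[1] * v[1]))

    print(angle_deg(uQS, uQP))   # these two printed angles
    print(angle_deg(uQS, uQR))   # come out equal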
provide further information of use when modelling the overall impact of increased value-added activities on primary agriculture.

Simulation of the Impact of Value-Adding on the Farm Sector using Dual Models

Agricultural economists have expended much effort toward evaluating the economic benefits from cost-reducing research in agriculture. Economic research in this area has focused on the multi-stage production system in a partial-equilibrium framework. Studies have examined the distribution of economic benefits from government policy such as investment in research and development. Other studies have examined the benefits from investments in commodity promotion and advertising. The literature provides important insights into the effects of different types of exogenous factors on commodity prices and quantities, as well as the effects on welfare of particular groups in the food production system. The effects of promotion and/or advertising are evaluated under the assumption that promotion and/or advertising shift the retail demand curve, while for research, the effects are evaluated under the assumption that research shifts the farm input supply curves. While this multi-stage approach is equally applicable to estimating the effects of value-adding investment, no attention has yet been given to economic research on this particular issue. This portion of the project extended the literature on the distribution of gains in a multi-stage production system to include gains/losses from investment in value adding in the post-farm-gate sector. The study followed and adapted the work of other researchers who have measured the impact of a technological change in the supply curve for farm commodities. This study was concerned with the impact of investment in value-added processing that may shift the derived demand curve for farm commodities. Five commodities were examined: wheat, feed barley, canola, slaughter cattle and slaughter hogs. Functional equations representing the supply and demand for the commodities were applied in experiments based on the assumption of increased demand for the commodities. The sector models were built using estimated coefficients from the primary farm sector and the processing sector models. Model results provide insights into the effects of investment in value adding on prices, quantities and farmers' welfare. Overall, the various simulation results suggest that farmers would be better off with increased prices of grains/oilseeds. However, the results indicate that increases in commodity prices cannot be realized in the short term from increased domestic demand for commodities.

Effects of a 20% Increase in Domestic Demand for Wheat

With an increase in domestic wheat demand, the price of wheat declined by 9.04% and barley by 2.81%. There was, however, an increase in the canola price. With the decline in prices, wheat and barley production experienced some decline. Canola production declined as well. The decline in barley price did not result in an increase in domestic demand for this grain. The increase in the price of canola caused the domestic demand for this oilseed to fall by 4.19%. Canola exports increased by 60%, which probably explains the increase in canola price. Wheat exports also increased by 10.78%. However, this change in export volume was not enough to result in a rise in wheat price. The changes in wheat and canola exports appear to be more pronounced than the changes in production of the commodities. The effects on barley were quite minimal. Although the price of barley declined by 2.81%, domestic demand declined and production did not increase. This solution may appear counter-intuitive, but considering that barley is used as feed for the livestock industry, we observe that the production of cattle and hogs does not increase. Changes in the hog industry were modest, and it appears that the cattle industry was not affected by the increase in domestic wheat demand. In terms of welfare, producer profits declined by 5.77%, which may be attributed to the unrealized increase in farm prices, particularly for the grains.

Effects of a 20% Increase in Domestic Demand for Canola

A 20% increase in the domestic demand for canola caused an increase in the price of canola by 5.45% but a decline in the prices of wheat and barley. With the increase in price, canola production increased by 21.06%. The production of wheat and barley declined, which may be attributed to the decline in price and to substitution effects in production with canola. Exports of canola increased by 50%. The decline in wheat price, however, caused an increase in domestic demand for wheat by 21.69%. The effect on barley was not significant. Unlike wheat, a significant amount of canola is processed locally. Thus, the finding of an increase in canola price and production with an increase in domestic demand may be in order. An increase in the domestic demand for canola resulted in an increase in hog price but a decrease in cattle price. Nonetheless, the production of cattle and hogs decreased by 0.32 and 11.11 percent, respectively. The domestic demand for the two commodities also declined; as for exports, hog exports increased by 3.25% while cattle exports decreased by 5.88%.

Effects of a 20% Increase in Domestic Demand for Cattle

With a 20% increase in domestic cattle demand, the price of cattle declined by 1.14%. The price decline is contrary to what would be expected. Nevertheless, there was an increase in cattle production of 16.9%, suggesting a positive net effect for the cattle industry. Exports of cattle decreased by 64.71%. The price of hogs fell by 0.18%, but hog production increased by 4.86%. However, the decrease in hog price resulted in a 42.86% increase in the domestic demand for hogs. Exports of hogs decreased by 1.63%. Changes in the prices and production of the crops were modest, but adjustments in the quantities exported were significant. The price of barley was unchanged, yet production and domestic demand decreased. This solution appears counter-intuitive when assessed relative to the increased production of both cattle and hogs, as it was expected that an increase in the production of cattle and hogs would result in an increase in domestic demand for barley. In terms of producer welfare, total profits increased by 5.09%. The significant increase in the production of cattle and hogs, coupled with relatively stable livestock prices, may have contributed to the increase in farmers' welfare. This solution may suggest that farmers would be better off with increased investments and capacity expansions in the domestic cattle slaughtering industry.

Effects of a 20% Increase in Domestic Demand for Hogs

Generally, a 20% increase in domestic demand for slaughter hogs resulted in price increases for all five commodities, ranging from 0.09% to 1.13%. The price rise did not cause significant change in commodity supply except for hog production. The production of hogs increased by 2.78%. There was no change in hog exports. With a price increase, the domestic demand for wheat, canola and cattle decreased. The export quantities for canola and cattle increased by 20 and 2.94%, respectively. The effects on barley were minimal. In terms of producer welfare, total profits increased by 4.72%, which may be attributed to the resulting increases in commodity prices. This solution is consistent with the solution from the cattle scenario above, in which farmers may be better off with capacity expansions in the domestic meat processing industry.

Simulation of the Impact of Value-Adding on the Farm Sector using the Canadian Regional Agricultural Model (CRAM)

The Canadian Regional Agricultural Model (CRAM) is a spatial equilibrium policy analysis model developed and maintained by Agriculture and Agri-Food Canada. It provides significant regional and commodity detail of the Canadian agricultural sector and is an important instrument for the analysis of policy changes on the Canadian agriculture industry at a disaggregated level, in terms of the impacts on production (i.e., supply) and demand. In this study, two case situations were
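As a purely illustrative sketch of the kind of demand-shift experiment described above, the following Python snippet solves a one-commodity linear market before and after a 20% outward shift in demand and reports the resulting changes. The supply and demand coefficients are invented for the example; they are not the estimated equations used in this study, and a single-market sketch necessarily ignores the cross-commodity and export linkages that drive the counter-intuitive price responses reported above.

    # Illustrative linear market: supply Qs = a + b*P, demand Qd = shift*c - d*P.
    # All coefficients are hypothetical; they only demonstrate the mechanics.
    a, b = 10.0, 2.0        # supply intercept and slope
    c, d = 100.0, 3.0       # demand intercept and slope

    def equilibrium(shift=1.0):
        p = (shift * c - a) / (b + d)   # price where supply equals demand
        q = a + b * p
        return p, q

    p0, q0 = equilibrium(1.0)   # baseline
    p1, q1 = equilibrium(1.2)   # 20% increase in demand
    print(f"price change:    {100 * (p1 - p0) / p0:+.2f}%")
    print(f"quantity change: {100 * (q1 - q0) / q0:+.2f}%")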
upper troposphere; combined, these processes lead to a large build-up of convective instability (Fig. 3f). Over the course of a few days, this build-up leaves the atmosphere primed for an intense precipitation event --- a powder keg ready to explode. The explosion of the powder keg is ultimately triggered from the top down by the influence of elevated convection. In the absence of subsidence warming and drying that would keep clear air unsaturated, radiative cooling aloft during the recharge phase leads to in-situ cooling, condensation, and elevated convection with cloud bases above 7 km. This elevated convection produces virga (precipitation that evaporates before reaching the ground), and as the recharge phase progresses, the base of the elevated convection moves lower in altitude and the virga falls lower in the atmosphere until it begins to evaporate within the radiatively-heated layer (Fig. 3a--c). The arrival of virga in the inhibition layer produces evaporative cooling rates that are approximately 20 times larger in magnitude than the antecedent radiative heating, rapidly cooling and humidifying the inhibitive cap (Fig. 3e). The sudden weakening of the inhibition serves as a triggering mechanism that allows a small amount of surface-based convection to penetrate into the upper troposphere for the first time in several days. Once the inhibitive cap is breached, a chain reaction ensues and the discharge phase commences: vigorous convection emanating from the near-surface layer produces strong downdrafts, which spread out along the surface as ``cold pools'' (i.e., gravity currents) and dynamically trigger additional surface-based deep convection\citep{torri2015,jeevanjee2015,Feng2015,Torri2019}. This process proceeds for a few hours, until enough convective instability has been released such that air from the near-surface layer is no longer highly buoyant in the upper troposphere. The precipitation outburst dies out, and the cycle restarts with the recharge phase. \begin{figure*}[ht!] \centerline{\includegraphics[width=\textwidth]{schematic_OSR.pdf}} \caption{\textbf{Schematic view of the phases of the relaxation oscillator convective regime.} (Bottom) Snapshots of outgoing solar radiation (OSR) during the (a) recharge, (b) triggering, and (c) discharge phases, obtained 1.95 days, 4 hours, and 0 hours before the next hour of peak precipitation ($t_\mathrm{peak}$), respectively. These snapshots are from the high-resolution fixed-SST simulation at a surface temperature of 330 K. High values of OSR indicate cloud cover. Neither the graphical width of the phases nor the vertical thickness of the atmospheric layers in this schematic are proportional to the amount of time or space they occupy.} \label{fig:schematic} \end{figure*} \section*{Comparison to parameterized convection} The convectively-resolved hothouse state has both similarities and differences to prior results from models with parameterized convection. An important difference is that the time-mean temperature profile in our oscillating simulations does not resemble the three-layered structure identified in previous work\citep{Wordsworth2013,Wolf2015,Kopparapu2016,Popp2016,Wolf2018}, with a significant surface-based temperature inversion capped by a deep non-condensing layer and an overlying condensing layer further aloft. Instead, our simulations have tropospheric lapse rates that fall somewhere between the dry and moist adiabats (Fig. 
E4a), consistent with prior evidence that entraining moist convection sets the temperature profile in the deeply-convecting tropics\citep{Singh2013,Seeley2015}. Lacking a significant surface-based temperature inversion, our hothouse climate simulations energetically balance LTRH primarily by the latent cooling of rain evaporation rather than sensible heating of the surface. To further assess the importance of precipitation evaporation in the hothouse climate state, we modified the microphysics parameterization in the model to prevent evaporation of precipitating hydrometeors (rain, snow, and graupel). In contrast to the corresponding case with default microphysics, LTRH in the model without hydrometeor evaporation induces a mean temperature profile closely resembling the three-layered structure from previous work with parameterized convection (Fig. E4a). This suggests that the effects of evaporating hydrometeors on convective triggering and/or tropospheric energetics are critical to hothouse climates. In some global climate models (GCMs), evaporation of precipitation is either neglected or parameterized in a highly idealized manner\citep{Zhao2016}, which may be why some previous studies concluded that surface-based inversions are a defining characteristic of hothouse atmospheres\citep{Wolf2015}. Our simulations also help clarify prior results from GCMs regarding changes in cloud cover and climate stability in hothouse states. Previous work has suggested that LTRH causes clouds to thin or disappear from the lower troposphere and thicken in a layer of elevated convection in the upper troposphere \citep{Popp2014,Wolf2015,Wolf2018}. Similarly, in our model elevated condensation and convection during the recharge phase significantly enhance (by a factor of 3--4) the mean upper-tropospheric cloud fraction (Fig. E4b), although shallow clouds do not disappear entirely. A further similarity between our simulations and results from GCMs is the existence of a transient climate instability (i.e., a temporary sign reversal of the climate feedback parameter) during the transition to the new state induced by LTRH\citep{Popp2016,Wolf2015,Wolf2018}. In our model, the instability consists of a clear-sky longwave feedback driven by enhanced upper-tropospheric RH, which is significantly amplified by the increase in upper-tropospheric cloud cover in the oscillatory state (Fig. E5). Even for resolved convection, the net cloud radiative effect is sensitive to model details such as the horizontal resolution and microphysics scheme\citep{Wing2020,Becker2020}, so the radiative effects of clouds in the oscillatory state deserve additional study. The region of enhanced climate sensitivity associated with the transition to the hothouse state is distinct from the climate sensitivity peak found in our model at lower temperatures\citep{Romps2020}, the latter of which has been explained in terms of clear-sky feedbacks that operate in the quasi-steady convective regime\citep{Seeley2021}. \section*{Analogy to spontaneous synchronization} Spatial self-aggregation of convection, in which precipitating clouds localize in the horizontal into large and persistent clusters despite spatially-uniform forcing and boundary conditions, has received considerable attention in recent years\citep{Wing2017}. 
The new relaxation oscillator regime revealed by our work is an analogous state of {\em temporal} convective self-aggregation: in the absence of any time-dependent forcing, deep precipitating convection becomes spontaneously synchronized (i.e., temporally localized). The oscillatory state is synchronized in the sense that subdomains separated by hundreds of kilometers exhibit boom-bust cycles of near-surface moist static energy and spikes of precipitation that are nearly in-phase (Fig. E6). The phenomenology of this synchronized atmospheric state closely resembles that of other natural systems that exhibit spontaneous synchronization\citep{Mirollo1990}, such as mechanical metronomes on a wobbly platform\citep{Pantaleone2002} and fields of flashing fireflies\citep{Buck1966}. In such systems, the key ingredient that allows for synchronization is a coupling that tends to align the phases of sub-components. In the atmosphere, there are two obvious sources of coupling between spatially-separated sub-domains: 1) gravity waves, which rapidly homogenize temperatures in the free troposphere\citep{Bretherton1989,Edman2017}, and 2) cold pools, which dynamically trigger additional deep convection in the neighborhood of prior deep convection\citep{torri2015,jeevanjee2015}. To investigate this analogy further, we constructed a simple two-layer model of radiative-convective equilibrium that resembles a network of noisy pulse-coupled oscillators\citep{Mirollo1990,methods}. Just as in the convection-resolving model, this two-layer model undergoes a steady-to-oscillatory transition when the amount of convective inhibition is increased (Fig. E7). \section*{Discussion} The hothouse convection described here bears similarities to today's climate in the Great Plains of the central United States, where elevated mixed layers transiently suppress surface-based convection until a triggering mechanism overcomes the inhibition and intense convection ensues\citep{Carlson1983,Schultz2014,Agard2017}. Our results indicate that in hothouse climates, widespread radiatively-generated convective inhibition may shift the spectrum of convective behavior away from the quasi-equilibrium regime\citep{Arakawa1974,Raymond1997} and toward an ``outburst'' regime more similar to that of the U.S. Great Plains. Since very warm climates have strongly reduced equator-pole temperature gradients\citep{Wolf2015,Popp2016}, tropical SSTs of between 330--340 K would be accompanied by moist and temperate high latitudes that might support LTRH and the convective outburst regime over a large fraction of Earth's surface. Nonetheless, an important avenue for future work is to understand how the convective outburst regime described here interacts with large-scale overturning circulations in the tropics, as well as how this regime is expressed at higher latitudes where planetary rotation and seasonal effects play an important role in atmospheric dynamics. Convection-resolving simulations on near-global domains could address these questions, and would also
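As a toy illustration of this analogy (in Python; it is not the two-layer radiative-convective model referred to above, and all parameter values are arbitrary), the sketch below evolves $N$ noisy pulse-coupled oscillators in which every firing advances the phases of the others by a coupling $\epsilon$. With the coupling large relative to the noise, the population locks into essentially simultaneous firing, whereas setting $\epsilon$ to zero leaves it incoherent.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, eps, noise, dt, steps = 50, 0.03, 0.002, 1e-3, 20000

phase = rng.random(N)                 # oscillator phases in [0, 1); "firing" at 1

for step in range(steps):
    phase += dt + noise * np.sqrt(dt) * rng.standard_normal(N)   # noisy drift
    fired = np.zeros(N, dtype=bool)
    newly = phase >= 1.0
    while newly.any():                # chain reaction within a single time step
        fired |= newly
        phase[~fired] += eps * newly.sum()   # each firing nudges the others forward
        newly = (phase >= 1.0) & ~fired
    phase[fired] = 0.0                # reset everyone who fired this step

# Kuramoto-style coherence: R close to 1 means the firing has synchronized
R = np.abs(np.exp(2j * np.pi * phase).mean())
print(f"phase coherence R = {R:.2f}")
\end{verbatim}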
\right\}^{1/p} \\ & = \left\{ \int_D \left( \int_D \beta^q(x) \mathbf{1}_{tx+(1-t)D}(z) \left|d\omega(z)\right|^{q} dx \right)^{p/q} dz \right\}^{1/p} \\ & = \left\{ \int_D \left( \int_D \beta^q(x) \mathbf{1}_{tx+(1-t)D}(z) dx \right)^{p/q} \left|d\omega(z)\right|^{p} dz \right\}^{1/p} \\ & \le \left( \sup_{z\in D} \int_D \beta^q(x) \mathbf{1}_{tx+(1-t)D}(z) dx \right)^{1/q} \left( \int_D \left| d\omega(z) \right|^p dz \right)^{1/p} \\ & = \sup_{z\in D} \| \beta(x) \mathbf{1}_{tx+(1-t)D}(z) \|_{L^q(D,dx)}\, \left\|d\omega\right\|_{L^p(D,\Lambda^{k+1})}. \end{align*} The proposition follows. \end{proof} \smallskip Estimate \[ C(k,p,q,n,\beta) =\int_{0}^{1} \sup_{z\in D} \| \beta(x) \mathbf{1}_{tx+(1-t)D}(z) \|_{L^q(D,dx)} t^k (1-t)^{-n/p} dt \] in particular cases. \begin{cor} Suppose that $D$ is a~convex set of~finite measure in~$\mathbb{R}^n$, $q\ge p\ge 1$, $\frac{1}{p}-\frac{1}{q} < \frac{1}{n}$, and the~weight $\beta(x)\equiv 1$. Then \[ C(k,p,q,n,1)\le |D|^{1/q}\int_{0}^{1}t^{k-n/q}(1-t)^{-n/p}\min(t^{n/q},(1-t)^{n/q})dt. \] \end{cor} \begin{rem} It is easy to see that the~integral of~the~corollary exists because of~the~conditions imposed on~$p$ and~$q$. \end{rem} \begin{proof} Using the change of variables $u=tx$, we obtain \begin{multline*} \int_{0}^{1} \sup_{z\in D}\left\| \mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^q(D,dx)} t^{k}(1-t)^{-n/p}dt \\ =\int_{0}^{1}\sup_{z\in D}\left\| \mathbf{1}_{u+(1-t)D}(z) \right\|_{L^{q}(tD,du)}t^{k-n/q}(1-t)^{-n/p}dt. \end{multline*} Note that $\left|tD\cap\left\{ u+(1-t)D\right\} \right|\leq\left|D\right|\min(t^{n},(1-t)^{n})$. It follows that \begin{gather*} \left\| \mathbf{1}_{u+(1-t)D}(z)\right\|_{L^q(tD,du)}\leq |D|^{1/q} \min(t^{n/q},(1-t)^{n/q}); \\ C(k,p,q,n,1)\leq |D|^{1/q}\int_{0}^{1}t^{k-n/q}(1-t)^{-n/p}\min(t^{n/q},(1-t)^{n/q})dt \end{gather*} \end{proof} \begin{cor}\label{cor:beta} Suppose that $U$ is a~convex set of finite measure $|U|$ in $\mathbb{R}^n$, $D=[a,b)\times U$, $q\ge p\ge 1$, $\frac{1}{p}-\frac{1}{q}< \frac{q-1}{q(n+1)}$, and $\beta:[a,b)\to \mathbb{R}$ is an~integrable positive function. If $\left\| \beta \right\|_{L^{q}([a,b))}< \infty$ then \[ C(k,p,q,n,\beta)\leq\left|U\right|^{1/q}\left\| \beta \right\|_{L^{q}([a,b))}. \] \end{cor} \begin{proof} If $x\in D$ then $x=(\tau,w)$, where $\tau\in [a,b)$ and $w\in U$. Using the special type of the weight $\beta(x):=\beta(\tau)$ and representing $z\in D$ as $z=(\eta,\zeta)$ with $\eta\in [a,b)$ and $\zeta\in U$, we obtain \begin{multline*} \int_{0}^{1} \sup_{z\in D} \left\| \beta(x)\mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k}(1-t)^{-\frac{n+1}{p}}dt \\ \leq \!\! \int_{0}^{1}\!\! \sup_{a \leq \eta <b} \!\left(\!\int_{a}^{b}\! \beta^q (\tau) \mathbf{1}_{t\tau+(1-t)[a,b)}(\eta) d\tau\! \right)^{\frac{1}{q}} \!\!\! \sup_{\zeta\in U} \!\! \left( \int_{U} \! \mathbf{1}_{tw+(1-t)U}(\zeta) dw \! \right)^{\frac{1}{q}} \!\! t^{k}(1-t)^{-\frac{n+1}{p}}dt, \end{multline*} where $x=(\tau,w)$. Using the change of variables $u=tw$ and the estimate $$ \left|tU\cap\{ u+(1-t)U\} \right| \leq |U| \min(t^{n},(1-t)^{n}), $$ we finally get \begin{multline*} \int_{0}^{1}\!\! \sup_{a \leq \eta <b} \!\left(\!\int_{a}^{b}\! \beta^q (\tau) \mathbf{1}_{t\tau+(1-t)[a,b)}(\eta) d\tau\! \right)^{\frac{1}{q}} \!\!\! \sup_{\zeta\in U} \!\! \left( \int_{U} \! \mathbf{1}_{tw+(1-t)U}(\zeta) dw \! \right)^{\frac{1}{q}} \!\! 
t^{k}(1-t)^{-\frac{n+1}{p}}dt \\ \leq |U|^{1/q}\|\beta\|_{L^{q}([a,b))} \int_{0}^{1} t^{k-n/q}(1-t)^{-(n+1)/p}\min(t^{n/q},(1-t)^{n/q})dt \end{multline*} The~conditions on~$p$ and~$q$ imply the~finiteness of~the~last integral. \end{proof} Corollary~\ref{cor:beta} is a~key ingredient in~the~proof of~out main result, Theorem~\ref{thm: main global}. Unfortunately, for being able to~``separate'' the~variable~$t$, we have to~impose the~stronger constraint $\frac{1}{p}-\frac{1}{q}< \frac{1}{n+1}-\frac{1}{q(n+1)}$ than the~condition $\frac{1}{p}-\frac{1}{q}< \frac{1}{n+1}$ given by~Proposition~\ref{bounded}. \section{A New Homotopy Operator for $q\ge p$. \\ The Case of a~Convex Domain in~$\mathbb{R}^n$}\label{new-homot} In the previous section, we considered the~homotopy operator on~$\Omega_{\mathrm{loc}}^*$ of~the~form $$ A_\alpha=\int_{D}\alpha(y)K_{y}\omega(x)dy $$ for a~convex set~$D$ in~$\mathbb{R}^n$. We will need to~modify~$A$ for obtaining some estimates. Consider the same operator $K_{y}$ as in~the~previous section: \[ \psi_{y}(x,t)=tx+(1-t)y,\quad K_{y}\omega(x)=\int_{0}^{1}(\psi_{y})_1^{*}\omega dt. \] Recall that $dK_{y}\omega+K_{y}d\omega=\omega$. Choose a~smooth positive function $\alpha:D\to \mathbb{R}$ such that $\int_D \alpha(x)dx=1$ and put \[ A_{\alpha}\omega(x):=\int_{D}\alpha(y)K_{y}\omega(x)dy, \quad \omega\in \Omega_{\mathrm{loc}}^*. \] By a~straightforward calculation, \[ dA_{\alpha}\omega=d\left(\int_{D}\alpha(y)K_{y}\omega(x)dy\right) =\int_{D}\alpha(y)d_{x}K_{y}\omega(x)dy;\] \[ A_{\alpha}d\omega=\int_{D}\alpha(y)K_{y}d\omega(x)dy;\] \[ dA_{\alpha}\omega+A_{\alpha}d\omega =\int_{D}\alpha(y)\left[d_{x}K_{y}\omega(x)+K_{y}d\omega(x)\right]dy =\int_{D}\alpha(y)\omega(x)dy=\omega. \] In particular, if $d\omega=0$ then $$ dA_\alpha\omega =\omega. $$ The~definition of~$A_\alpha$ easily implies the~following \begin{prop}\label{homot-sm} The~homotopy operator~$A_\alpha$ takes smooth forms to~smooth forms. \end{prop} \begin{defn} Call a~smooth positive function $\alpha:D\to \mathbb{R}$ an {\em admissible weight} for a~convex domain $D\subset \mathbb{R}^n$ and $p\geq 1$ if $$ \int_D \alpha(x)dx=1; \quad \|\alpha\|_{L^{p'}(D)} <\infty; \quad \|\alpha(y)|y|\,\|_{L^{p'}(D)} <\infty. $$ \end{defn} For $p\ge 1$, we as usual put $$ p' = \begin{cases} \frac{p}{p-1} & \text{~if~} p>1, \\ \infty & \text{~if~} p=1 \end{cases} $$ \begin{thm} \label{thm:one weight} Suppose that $q\ge p\geq 1$, $D\subset \mathbb{R}^n$ is a~convex set, $\beta:D\to \mathbb{R}$ is a positive smooth function, and $\alpha:D\to \mathbb{R}$ is an~admissible weight. If $$ C_{1}(k,p,q,n,\beta):=\int_{0}^{1} \sup_{z\in D} \| \beta(x) \mathbf{1}_{tx+(1-t)D}(z) \|_{L^q(D,dx)} t^k (1-t)^{-n/p} dt <\infty; $$ $$ C_{2}(k,p,q,n,\beta):=\int_{0}^{1} \sup_{z\in D} \| |x|\beta(x) \mathbf{1}_{tx+(1-t)D}(z) \|_{L^q(D,dx)} t^k (1-t)^{-n/p} dt < \infty $$ then for any $\omega\in C^\infty L^{p}(D,\Lambda^{k})$ we have \[ \| A_\alpha \omega\| _{L^{q}(D,\Lambda^{k-1},\beta)}\ \leq C(k,p,q,\alpha,\beta,n)\| \omega\| _{L^{p}(D,\Lambda^{k})} \] where $$ C(k,p,q,\alpha,\beta,n)= \| \alpha(y)|y|\| _{L^{p'}(D)} C_{1}(k,p,q,n,\beta) +\|\alpha\|_{L^{p'}(D)} C_{2}(k,p,q,n,\beta). $$ \end{thm} \begin{proof} Put $\xi:=A_{\alpha}\omega$. 
If $p>1$ then, by H\"older's inequality, we infer \begin{multline*} \left\| A_{\alpha}\omega\right\| _{L^{q}(D,\Lambda^{k-1},\beta)} =\left\| \beta(x)\int_{D}\alpha(y)K_{y}\omega(x)dy\right\| _{L^{q}(D,\Lambda^{k-1},dx)} \\ \leq \left\| \beta(x)\left\| \frac{K_{y}\omega(x)}{|x-y|} \right\| _{L^{p}(D,\Lambda^{k-1},dy)}\left\| \alpha(y)|x-y|\right\| _{L^{p'}(D,dy)} \right\| _{L^{q}(D,\Lambda^{k-1},dx)}. \end{multline*} The~above estimate also obviously holds for~$p=1$. By the triangle inequality, \[ \| \alpha(y)|x-y|\| _{L^{p'}(D,dy)} \leq |x| \| \alpha(y)\| _{L^{p'}(D,dy)}+ \| \alpha(y)|y|\,\| _{L^{p'}(D,dy)}. \] Therefore, \begin{multline*} \| A_{\alpha}\omega\| _{L^{q}(\beta,D,\Lambda^{k-1})} \\ \leq \| \alpha(y)|y| \,\| _{L^{p'}(D,dy)} \left\| \beta(x)\left\| \frac{K_{y}\omega(x)}{|x-y|}\right\| _{L^{p}(D,\Lambda^{k-1},dy)} \right\| _{L^{q}(D,\Lambda^{k-1},dx)} \\ +\| \alpha(y)\| _{L^{p'}(D,dy)} \left\| \beta(x)|x| \left\| \frac{K_{y}\omega(x)}{|x-y|}\right\| _{L^{p}(D,\Lambda^{k-1},dy)} \right\| _{L^{q}(D,\Lambda^{k-1},dx)}. \end{multline*} By Proposition~\ref{bounded}, \begin{gather*} \left\| \beta(x)\left\| \frac{K_{y}\omega(x)}{|x-y|}\right \| _{L^{p}(D,\Lambda^{k-1},dy)}\right\| _{L^{q}(D,\Lambda^{k-1},dx)} \leq C_{1}(k,p,q,n,\beta)\left\| \omega\right\| _{L^{p}(D,\Lambda^{k})}; \\ \left\| \beta(x)|x|\left\| \frac{K_{y}\omega(x)}{|x-y|}\right \| _{L^{p}(D,\Lambda^{k-1},dy)}\right\| _{L^{q}(D,\Lambda^{k-1},dx)} \leq C_{2}(k,p,q,n,\beta)\left\| \omega\right\| _{L^{p}(D,\Lambda^{k})}. \end{gather*} The theorem is proved. \end{proof} \begin{cor}\label{twow} Suppose that $q\ge p\geq 1$, $D\subset \mathbb{R}^n$ is a~convex set, $\alpha:[a,b)\to\mathbb{R}$ is an~admissible weight, $\beta,\gamma:D\to \mathbb{R}$ are positive smooth functions. If the conditions \begin{gather*} C_{1}(k,\overline{p},q,n,\beta):=\int_{0}^{1} \sup_{z\in D} \| \beta(x) \mathbf{1}_{tx+(1-t)D}(z) \|_{L^q(D,dx)} t^k (1-t)^{-n/\overline{p}} dt < \infty; \\ C_{2}(k,\overline{p},q,n,\beta):=\int_{0}^{1} \sup_{z\in D} \| |x| \beta(x) \mathbf{1}_{tx+(1-t)D}(z) \|_{L^q(D,dx)} t^k (1-t)^{-n/\overline{p}} dt < \infty; \\ Q(k,\overline{p},p,\gamma):=\left\| \gamma^{-1}\right\| _{L^{p\overline{p}/(p-\overline{p})}(D)}<\infty \end{gather*} are fulfilled for some $\overline{p}$, $1\le \overline{p}\le p$ {\rm(}for $\overline{p}=p$, we put $\frac{p\overline{p}}{p-\overline{p}}=\infty${\rm)}, then the inequality \[ \| A_\alpha \omega\| _{L^{q}(D,\Lambda^{k-1},\beta)} \leq C(k,p,q,\alpha,\beta,\gamma,n)\| \omega\| _{L^{p}(D,\Lambda^{k},\gamma)}, \] where \[ C(k,p,q,\alpha,\beta,\gamma,n)=Q(k,\overline{p},p,\gamma)\: C(k,\overline{p},q,n,\alpha,\beta), \] holds for any $\omega\in C^\infty L^{p}(D,\Lambda^{k})$. \end{cor} \begin{proof} By Theorem \ref{thm:one weight}, \[ \| A_\alpha \omega \| _{L^{q}(D,\Lambda^{k-1},\beta)} \leq C(k,\overline{p},q,n,\alpha,\beta)\left\| \omega\right\| _{L^{\overline{p}}(D,\Lambda^{k})}. \] If $\overline{p}<p$ then, using H\"older's inequality, we have \begin{equation}\label{est-gam} \left\| \omega\right\| _{L^{\overline{p}}(D,\Lambda^{k})} \leq\left\| \gamma\omega\right\| _{L^{p}(D,\Lambda^{k})} \left\| \gamma^{-1}\right\| _{L^{p\overline{p}/(p-\overline{p})}(D)}. \end{equation} Inequality~(\ref{est-gam}) also holds for~$\overline{p}=p$. The corollary follows. 
\end{proof} \begin{cor}\label{cor:two weights} Suppose that $q \ge p\geq 1$, $\frac{1}{p}-\frac{1}{q}<\frac{q-1}{q(n+1)}$, $U$ is a~bounded convex set in~$\mathbb{R}^n$, $D=[a,b)\times U$, $\alpha:[a,b)\to\mathbb{R}$ is an~admissible weight, and $\beta,\gamma:[a,b)\to \mathbb{R}$ are positive smooth functions. If the conditions $\|\beta\|_{L^{q}([a,b))}\!\!<\!\!\infty$, $\| \tau\beta(\tau)\|_{L^{q}([a,b))}\!\!<\!\!\infty$, and $\|\gamma^{-1}\|_{L^{p\overline{p}/(p-\overline{p})}([a,b))}<\infty$ are fulfilled for some $\overline{p}$, $1\le \overline{p}\le p$ {\rm(}for $\overline{p}=p$, we put $\frac{p\overline{p}}{p-\overline{p}}=\infty${\rm)}, then the inequality $$ \| A_\alpha \omega \| _{L^q(D,\Lambda^{k-1},\beta)} \leq \mathrm{const\,} \|\omega\|_{L^p(D,\Lambda^k,\gamma)} $$ with some constant depending $k$,$p$,$q$,$n$,$\alpha$,$\beta$, and $\gamma$ holds for any $\omega\in C^\infty L^{p}(D,\Lambda^{k},\gamma)$. \end{cor} \begin{proof} Suppose that a~number~$\overline{p}\le p$ satisfies the~conditions of~the~corollary. If $x\in D$ then $x=(\tau,w)$, where $\tau\in [a,b)$ and $w\in U$. By~Corollary~\ref{cor:beta}, since $\frac{1}{\overline{p}}-\frac{1}{q}<\frac{q-1}{q(n+1)}$ and $\|\beta\|_{L^{q}([a,b))}<\infty$, we have \begin{multline*} \int_{0}^{1} \sup_{z\in D} \left\| \beta(\tau)\mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k} (1-t)^{-(n+1)/\overline{p}}dt \\ \leq |U|^{1/q}\|\beta\|_{L^{q}([a,b))} \int_{0}^{1} t^{k-n/q}(1-t)^{-(n+1)/p}\min(t^{n/q},(1-t)^{n/q})dt. \end{multline*} On~the~other hand, since $\| \tau\beta(\tau)\|_{L^{q}([a,b))}<\infty$, we have by~Corollary~\ref{cor:beta}: \begin{multline*} \int_{0}^{1} \sup_{z\in D} \left\|\, |x|\beta(\tau)\mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k} (1-t)^{-(n+1)/\overline{p}}dt \\ = \int_{0}^{1} \sup_{z\in D} \left\|\, \sqrt{\tau^2+w^2}\beta(\tau)\mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k} (1-t)^{-(n+1)/\overline{p}}dt \\ \le \sqrt{2} \int_{0}^{1} \sup_{z\in D} \left\|\, (|\tau|+|w|)\beta(\tau)\mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k} (1-t)^{-(n+1)/\overline{p}}dt \\ \le \sqrt{2} \int_{0}^{1} \sup_{z\in D} \left\|\, |\tau|\beta(\tau) \mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k} (1-t)^{-(n+1)/\overline{p}}dt \\ + \sqrt{2} \int_{0}^{1} \sup_{z\in D} \left\|\, |w|\beta(\tau) \mathbf{1}_{tx+(1-t)D}(z)\right\|_{L^{q}(D,dx)}t^{k} (1-t)^{-(n+1)/\overline{p}}dt \\ \le \sqrt{2} |U|^{1/q} \|\tau\beta(\tau)\|_{L^{q}([a,b))} \int_{0}^{1} t^{k-n/q}(1-t)^{-(n+1)/\overline{p}}\min(t^{n/q},(1-t)^{n/q})dt \\ + \sqrt{2} \sup_{w\in U}|w| \, \|\beta\|_{L^{q}([a,b))} \int_{0}^{1} t^{k-n/q}(1-t)^{-(n+1)/\overline{p}}\min(t^{n/q},(1-t)^{n/q})dt<\infty. \end{multline*} The~relations $\| \tau\beta(\tau)\|_{L^{q}([a,b))}<\infty$ and $\| |x|\beta(\tau)\|_{L^{q}(D)}<\infty$ enable us to~apply Corollary~\ref{twow} and obtain the~desired assertion. \end{proof} \section{Globalization: the~Sobolev--Poincare Inequality on~a~Cylinder}\label{global} Here we globalize the~Sobolev--Poincar\'e inequality to~cylinders. The main assertion of~the~section is \begin{thm}\label{glob-sp-cyl} Suppose that $M$ is the~cylinder $[a,b)\times N$, where $N$ is a~closed manifold of~dimension~$n$, $q\ge p\geq 1$, $\frac{1}{p}-\frac{1}{q}<\frac{q-1}{q(n+1)}$, and $\beta, \gamma:[a,b)\to \mathbb{R}$ be positive smooth functions. Let $\omega$ be an~exact $k$-form in~$C^\infty L^p(M,\Lambda^k,\gamma)$. 
If the conditions $\|\beta\| _{L^{q}([a,b))}<\infty$, $\| t\beta(t)\|_{L^{q}([a,b))}<\infty$, and $\|\gamma^{-1}\|_{L^{p\overline{p}/(p-\overline{p})}([a,b))}<\infty$ are fulfilled for some $\overline{p}$, $1\le \overline{p}\le p$ {\rm(}for $\overline{p}=p$, we put $\frac{p\overline{p}}{p-\overline{p}}=\infty${\rm)}, then there exists a~$(k-1)$-form $\xi\in C^\infty L^q(M,\Lambda^{k-1},\beta)$ such that \begin{equation}\label{poin-cyl} d\xi=\omega \quad \text{and} \quad \|\xi\|_{L^q(M,\Lambda^{k-1},\beta)} \le \mathrm{const\,}\,\|\omega\|_{L^p(M,\Lambda^k,\gamma)}. \end{equation} \end{thm} \smallskip Let $\tilde{\mathcal{U}}=\{\tilde{U}_x\}$, $x\in N$, be a~coordinate open cover of~the~base~$N$. At~each point $x\in N$, consider a~geodesic ball $U_x$ that is geodesically convex (small balls are geodesically convex, see~\cite[Proposition~4.2]{Carm92}) and such that its closure (a~compact set) is contained in~$\tilde{U}_x$. Then $\mathcal{U}^0=\{U_x\}$ is an~open cover of~$N$. Extract a~finite subcover $\mathcal{U}=\{U_i\}$, $i=1,\dots,l$, from~$\mathcal{U}^0$. Since $\mathcal{U}$ consists of~geodesic balls, it is a~{\it good cover}, i.e., all finite intersections $U_I = U_{i_0}\cap\dots\cap U_{i_{s-1}}$, $I=(i_0,\dots,i_{s-1})$, are bi-Lipschitz diffeomorphic to~convex open sets with~compact closure in~$\mathbb{R}^n$. With such a~cover~$\mathcal{U}$, we associate the~corresponding cover $\mathcal{V}=\{V_i=[a,b)\times U_i\}$, $i=1,\dots,l$, of~$M$ and put $V_I = V_{i_0}\cap\dots\cap V_{i_{s-1}}$ for $I=(i_0,\dots,i_{s-1})$. Then each intersection~$V_I$ is bi-Lipschitz diffeomorphic to~a~cylinder of~the~form $[a,b)\times U_{\mathbb{R}^n}$, where $U_{\mathbb{R}^n}$ is a~convex set with~compact closure in~$\mathbb{R}^n$. By~analogy with~\cite{Shar2011}, we put $$ K^{k,0}:= C^\infty(M,\Lambda^k); \quad K^{k,s}:= \bigoplus_{i_0<\dots<i_{s-1}} C^\infty(V_I,\Lambda^k). $$ Given $\varkappa\in K^{k,s}$, denote by~$\varkappa_I$, $I=(i_0,\dots,i_{s-1})$, $i_0<\dots<i_{s-1}$, the~components of~$\varkappa$. Define a~coboundary operator $\delta: K^{k,s} \to K^{k,s+1}$ as follows: $$ (\delta\varkappa)_J = \left. \left( \sum_{r=0}^s (-1)^r \varkappa_{j_0 \dots \hat j_r \dots j_s} \right) \right|_{V_J}, \quad J=(j_0,\dots,j_s). $$ Let $L^q(K^{k,s},\beta)$ be the~space of~elements $\varkappa\in K^{k,s}$ with the~finite norm $$ \|\varkappa\|_{L^q(K^{k,s},\beta)} = \sum_{i_0<\dots<i_{s-1}} \|\varkappa_I\|_{L^q(V_I,\Lambda^k,\beta)}. $$ As usual, if $\varkappa\in K^{k,s}$ has components $\varkappa_I$, $I=(i_0,\dots,i_{s-1})$, $i_0<\dots<i_{s-1}$, and $\nu$ is a~permutation of~the~set $\{0,\dots,s-1\}$ then $\varkappa_{\nu(I)}= \varkappa_I\, \mathrm{sign}\,\nu$. The~following proposition is a~modification for~our case of~\cite[Proposition~3.6]{Shar2011}, which is in~turn an~adaptation of~\cite[Propositions~8.3 and 8.5]{BottTu}. \begin{prop}\label{exac} $(K^{k,\bullet},\delta)$ is an~exact complex. Moreover, if $\lambda\in L^q(K^{k,s+1},\beta)$ satisfies $\delta \lambda=0$ then there exists $\varkappa\in L^q(K^{k,s},\beta)$ such that $\lambda=\delta\varkappa$ and \begin{itemize} \item $\|\varkappa\|_{L^q(K^{k,s},\beta)} \le \mathrm{const\,} \|\lambda\|_{L^q(K^{k,s+1},\beta)}$ \item $\|d\varkappa\|_{L^q(K^{k+1,s},\beta)} \le \mathrm{const\,} \left( \|\lambda\|_{L^q(K^{k,s+1},\beta)} + \|d\lambda\|_{L^q(K^{k+1,s+1},\beta)} \right)$. \end{itemize} \end{prop} \begin{proof} The~fact that $(K^{k,\bullet},\delta)$ is an exact~complex was established in~\cite[Propositions~8.3 and~8.5]{BottTu} but we will give the~standard argument for~completeness.
If $\varkappa\in L^q(K^{k,s},\beta)$ then \begin{multline*} (\delta(\delta\varkappa))_{i_0\dots i_{s+1}} = \sum_r (-1)^r (\delta\varkappa)_{i_0\dots\hat i_r\dots i_{s+1}} \\ =\sum_{l<r} (-1)^r (-1)^l \varkappa_{i_0\dots\hat i_l \dots \hat i_r \dots i_{s+1}} + \sum_{l<r} (-1)^r (-1)^{l-1} \varkappa_{i_0\dots\hat i_l \dots \hat i_r \dots i_{s+1}}=0. \end{multline*} Suppose that $\lambda\in L^q(K^{k,s+1},\beta)$ is such that $\delta\lambda=0$. Let $\tilde\rho_j$ be a~partition of~unity subordinate to~the~cover $\{U_i\}$ of~$N$. Then the~functions $\rho_j: M \to \mathbb{R}$, $\rho_j(t,x)=\tilde\rho_j(x)$ for all $(t,x)\in M=[a,b)\times N$, constitute a~partition of~unity subordinate to~the~cover $\{V_i\}$ of~$M$. Put \begin{equation}\label{kappa} \varkappa_{i_0 \dots i_{s-1}} := \sum_j \rho_j \lambda_{j i_0 \dots i_{s-1}}. \end{equation} Show that $\delta\varkappa=\lambda$. We have $$ (\delta\varkappa)_{i_0 \dots i_s} = \sum_r (-1)^r \varkappa_{i_0 \dots \hat i_r \dots i_s} = \sum_{r,j} (-1)^r \rho_j \lambda_{j i_0\dots \hat i_r \dots i_s}. $$ Since $\lambda$ is a~cocycle, $$ (\delta\lambda)_{j i_0 \dots i_s} = \lambda_{i_0 \dots i_s} + \sum_r (-1)^{r+1} \lambda_{j i_0 \dots \hat i_r \dots i_s} =0. $$ Hence, $$ (\delta\varkappa)_{i_0 \dots i_s} = \sum_j \rho_j \sum_r (-1)^r \lambda_{j i_0 \dots \hat i_r \dots i_s} = \sum_j \rho_j \lambda_{i_0 \dots i_s} = \lambda_{i_0 \dots i_s}. $$ Thus, $(K^{k,\bullet},\delta)$ is indeed an~exact complex. The~element~$\varkappa$ defined by~(\ref{kappa}) admits the~estimates of~the~norms mentioned in~the~proposition. Indeed, we infer \begin{align*} \|\varkappa\|_{L^q(K^{k,s},\beta)} & = \sum_{i_0<\dots<i_{s-1}} \biggl\| \sum_j \rho_j \lambda_{j i_0 \dots i_{s-1}} \biggr\|_{L^q(V_I)} \\ &\le \sum_{i_0<\dots<i_{s-1}} \sum_j \| \rho_j \lambda_{j i_0 \dots i_{s-1}} \|_{L^q(V_I)} \\ &\le \sum_{i_0<\dots<i_{s-1}}
business cycles. The following graphs (from Uhlig 2003) demonstrate a big reason for the influence of the RBC model. Looking at those graphs, you might wonder why there is anything left for macroeconomists to do. Business cycles have been solved! However, as I will argue in part 4, the perceived closeness of model and data is largely an illusion. There are, in my opinion, fundamental issues with the RBC framework that render it essentially meaningless in terms of furthering our understanding of real business cycles.

### The Birth of Dynamic Stochastic General Equilibrium Models

Although many economists would point to the contribution of the RBC model in explaining business cycles on its own, most would agree that its greater significance came from the research agenda it inspired. Kydland and Prescott’s article was one of the first of what would come to be called Dynamic Stochastic General Equilibrium (DSGE) models. They are dynamic because they study how a system changes over time and stochastic because they introduce random shocks. General equilibrium refers to the fact that the agents in the model are constantly maximizing (consumers maximizing utility and firms maximizing profits) and markets always clear (prices are set such that supply and demand are equal in each market in all time periods). Due in part to the criticisms I will outline in part 4, DSGE models have evolved from the simple RBC framework to include many of the features that were lost in the transition from Keynes to Lucas and Prescott. Much of the research agenda in the last 30 years has aimed to resurrect Keynes’s main insights in microfounded models using modern mathematical language. As a result, they have come to be known as “New Keynesian” models. Thanks to the flexibility of the DSGE setup, adding additional frictions like sticky prices and wages, government spending, and monetary policy was relatively simple and has enabled DSGE models to become sufficiently close to reality to be used as guides for policymakers. I will argue in future posts that despite this progress, even the most advanced NK models fall short both empirically and theoretically.

## What’s Wrong With Modern Macro? Part 1: Before Modern Macro - Keynesian Economics

Part 1 in a series of posts on modern macroeconomics. This post focuses on Keynesian economics in order to set the stage for my explanation of modern macro, which will begin in part 2.

If you’ve never taken a macroeconomics class, you almost certainly have no idea what macroeconomists do. Even if you have an undergraduate degree in economics, your odds of understanding modern macro probably don’t improve much (they didn’t for me at least. I had no idea what I was getting into when I entered grad school). The gap between what is taught in undergraduate macroeconomics classes and the research that is actually done by professional macroeconomists is perhaps larger than in any other field. Therefore, for those of you who made the excellent choice not to subject yourself to the horrors of a first year graduate macroeconomics sequence, I will attempt to explain in plain English (as much as possible) what modern macro is and why I think it could be better. But before getting to modern macro itself, it is important to understand what came before. Keep in mind throughout these posts that the pretense of knowledge is quite strong here.
For a much better exposition that is still somewhat readable for anyone with a basic economic background, Michael De Vroey has a comprehensive book on the history of macroeconomics. I’m working through it now and it’s very good. I highly recommend it to anyone who is interested in what I say in this series of posts.

### Keynesian Economics

Although Keynes was not the first to think about business cycles, unemployment, and other macroeconomic topics, it wouldn’t be too much of an exaggeration to say that macroeconomics as a field didn’t truly appear until Keynes published his General Theory in 1936. I admit I have not read the original book (but it’s on my list). My summary here will therefore be based on my undergraduate macro courses, which I think capture the spirit (but probably not the nuance) of Keynes.

Keynesian economics begins by breaking aggregate spending (GDP) into four pieces. Private spending consists of consumption (spending by households on goods and services) and investment (spending by firms on capital). Government spending on goods and services makes up the rest of domestic spending. Finally, net exports (exports minus imports) is added to account for foreign expenditures. In a Keynesian equilibrium, spending is equal to income. Consumption is assumed to be a fraction of total income, which means that any increase in spending (like an increase in government spending) will cause an increase in consumption as well. An important implication of this setup is that increases in spending increase total income by more than the initial increase (called the multiplier effect). Assume that the government decides to build a new road that costs $1 million. This increase in expenditure immediately increases GDP by $1 million, but it also adds $1 million to the income of the people involved in building the road. Let’s say that all of these people spend 3/4 of their income and save the rest. Then consumption also increases by $750,000, which then becomes other people’s incomes, adding another $562,500, and the process continues. Some algebra shows that the initial increase of $1 million leads to an increase in GDP of $4 million (the multiplier is 1/(1 - 3/4) = 4). Similar results occur if the initial change came from investment or changes in taxes.

The multiplier effect also works in the other direction. If businesses start to feel pessimistic about the future, they might cut back on investment. Their beliefs then become self-fulfilling as the reduction in investment causes a reduction in consumption and aggregate spending. Although the productive resources in the economy have not changed, output falls and some of these resources become underutilized. A recession occurs not because of a change in economic fundamentals, but because people’s perceptions changed for some unknown reason – Keynes’s famous “animal spirits.” Through this mechanism, workers may not be able to find a job even if they would be willing to work at the prevailing wage rate, a phenomenon known as involuntary unemployment. In most theories prior to Keynes, involuntary unemployment was impossible because the wage rate would simply adjust to clear the market.

Keynes’s theory also opened the door for government intervention in the economy. If investment falls and causes unemployment, the government can replace the lost spending by increasing its own expenditure. By increasing spending during recessions and decreasing it during booms, the government can theoretically smooth the business cycle.

### IS-LM

The above description is Keynes at its most basic.
I haven’t said anything about monetary policy or interest rates yet, but both of these were essential to Keynes’s analysis. Unfortunately, although The General Theory was a monumental achievement for its time and probably the most rigorous analysis of the economy that had been written, it is not exactly the most readable or even coherent theory. To capture Keynes’s ideas in a more tractable framework, J.R. Hicks and other economists developed the IS-LM model. I don’t want to give a full derivation of the IS-LM model here, but the basic idea is to model the relationship between interest rates and income. The IS (Investment-Savings) curve plots all of the points where the goods market is in equilibrium. Here we assume that investment depends negatively on interest rates (if interest rates are high, firms would rather put their money in a bank than invest in new projects). A higher interest rate then lowers investment and decreases total income through the same multiplier effect outlined above. Therefore we
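As a quick check of the multiplier arithmetic from the Keynesian section above (a toy calculation, not data): with a marginal propensity to consume of 3/4, each spending round adds 3/4 of the previous one, and the rounds sum to 1/(1 - 0.75) = 4 times the initial impulse.

```python
mpc = 0.75                # marginal propensity to consume
initial = 1_000_000       # $1 million of new government spending

total, spending = 0.0, float(initial)
while spending > 0.01:    # keep adding rounds until they become negligible
    total += spending
    spending *= mpc       # each round respends 3/4 of what it received

print(round(total))           # ~4,000,000
print(initial / (1 - mpc))    # closed form gives the same answer
```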
\section{Introduction\label{sec:intro}} Recent advances in the science of complex systems aim for a better understanding of the higher-order connectivity as a possible basis for their emerging properties and complex functions. Beyond the framework of pairwise interactions, these connections described by simplexes of different sizes provide the geometry for higher-order interactions and simplex-related dynamical variables. One line of research consists of modelling and analysis of the structure of simplicial complexes in many complex systems, ranging from the human connectome \cite{Brain_weSciRep2019} to quantum physics \cite{Geometries_QuantumPRE2015} and materials science \cite{AT_Materials_Jap2016,we-SciRep2018}. Meanwhile, considerable efforts aim at understanding the impact of geometry on the dynamics. In this context, the research has been done on modelling of the simplex-based synchronisation processes \cite{HOC_Synchro_ArenasPRL2019,HOC_synchroexpl_Bianconi2019}, on studying the related spectral properties of the underlying networks \cite{we-SpectraPRE2019,Spectra-rapisarda2019}, as well as on the interpretation of the dynamics of the brain \cite{HomologyBrain_petriJRSI2014,HOC_Brain_ReimanFrontCompNeuro2017,Brain_Hubs-dynamicsReview2018} and other complex dynamical systems \cite{HOC_Kuehn2019}. Recently, mapping the brain imaging data \cite{BrainMapping_Zhang2018} to networks involved different types of signals across spatial and temporal scales; consequently, a variety of structural and functional networks have been obtained \cite{BrainConnectivityNets_2010,BrainNetParcellation_Shen2010,EEGnets_ICAreconstr_PLOSone2016,Xbrain_hyperscanning}. This network mapping enabled getting a new insight into the functional organisation of the brain \cite{Brain_phys1review,Brain_phys2}, in particular, based on the standard and deep graph theoretic methods \cite{Sporns_BrainNets_review2013,brainGraph_metods2014,GD_Bud_PLOS2015} and the algebraic topology of graphs \cite{Brain_weSciRep2019,Xbrain_wePLOS2016}. The type of network that we consider in this work is the whole-brain network \textit{human connectome}; it is mapped from the \textit{fMRI} data available from the human connectome project \cite{HCP_Neuroimage}, see Methods. The network nodes are identified as the grey-matter anatomical brain regions, while the edges consist of the white-matter fibres between them. Beyond the pairwise connections, the human connectome exhibits a rich structure of simplicial complexes and short cycles between them, as it was shown in \cite{Brain_weSciRep2019}. Furthermore, on a mesoscopic scale, a typical structure with anatomical modules is observed. It has been recognised \cite{brainNets_tasksPNAS2015,Brain_modularNNeurosci2017} that every module has an autonomous function, which contributes to performing complex tasks of the brain. Meanwhile, the integration of this distributed activity and transferring of information between different modules is performed by very central nodes (hubs) as many studies suggest, see a recent review \cite{Brain_Hubs-dynamicsReview2018} and references therein. Formally, hubs are identified as a group of four or five nodes in each brain hemisphere that appear as top-ranking according to the number of connections or another graph-centrality measure. Almost all formal criteria give the same set of nodes, which are anatomically located deep inside the brain, through which many neuronal pathways go. 
Recently, there has been an increased interest in the research of the hubs of the human connectome. The aim is to decipher their topological configuration and how they fulfil their complex dynamic functions. For example, it has been recognised that the brain hubs are mutually connected such that they make a so-called ``rich club'' structure \cite{Brain_Hubs-richclub2011}. Moreover, their topological configuration develops over time from the prenatal to childhood and adult brain \cite{Brain_Hubs-development2015,Brain_Hubs-development2019}. The hubs also can play a crucial role in the appearance of diseases when their typical configuration becomes destroyed \cite{Brain_Hubs-diseases2012}. Assuming that the higher-order connectivity may provide a clue of how the hubs perform their function, here we examine the organisation of simplicial complexes around eight leading hubs in the human connectome. Based on our work \cite{Brain_weSciRep2019}, we use the consensus connectomes that we have generated at the Budapest connectome server \cite{http-Budapest-server3.0,Hung2}. These are connectomes that are common for one hundred female subjects (F-connectome) and similarly for one hundred male subjects (M-connectome), see Methods. Accordingly, we determine the hubs as eight top-ranking nodes in the whole connectome, performing the ranking according to the number of simplexes of all orders in which the node participates. These are the Putamen, Caudate, Hippocampus and Thalamus-Proper in the left and similarly in the right brain hemisphere; they also appear as hubs according to several other graph-theory measures. We then construct \textit{core networks} consisting of these hubs and all simplexes attached to them in both female and male connectomes. We determine the simplicial complexes and the related topological entropy in these core structures. To highlight the weight-related heterogeneity of connections, the structure of these core networks is gradually altered by increasing the threshold weight above which the connections are considered as significant. We show that the connectivity up to the 6th order remains in both connectomes even at a high threshold. Meanwhile, the identity of edges and their weights appear to be different in the F- and M-connectomes. \section{Methods\label{sec:methods}} \subsubsection{Data description} We use the data for two \textit{consensus connectomes} that we have generated in \cite{Brain_weSciRep2019} at the Budapest connectome server 3.0 \cite{http-Budapest-server3.0,Hung2} based on the brain imaging data from Human Connectome Project \cite{HCP_Neuroimage}. Specifically, these are the weighted whole-brain networks that are common for 100 female subjects, \textit{F-connectome}, and similarly, \textit{M-connectome}, which is common for 100 male subjects. Each connectome consists of $N=1015$ nodes annotated as the anatomical brain regions, and weighted edges, whose weight is given by the number of fibres between the considered pair of brain regions normalised by the average fibre length. Here, we consider the largest number $10^6$ fibres tracked and set the minimum weight to four. The corresponding core networks \textit{Fc-network} and \textit{Mc-network} are defined as subgraphs of the F- and M-connectomes, respectively, containing the leading hubs and their first neighbour nodes as well as all edges between these nodes. Meanwhile, the hubs are determined according to the topological dimension criteria, as described below and in Results. 
\subsubsection{Topology analysis and definition of quantities} We apply the Bron-Kerbosch algorithm \cite{cliquecomplexes} to analyse the structure of simplicial complexes, i.e., clique complexes, in the core Fc- and Mc-connectomes. In this context, a \textit{simplex} of order $q$ is a full graph (clique) of $q+1$ vertices $\sigma_q=\left \langle i_0,i_1,i_2,...,i_{q}\right \rangle$. Then a simplex $\sigma_r$ of the order $r<q$ which consists of $r+1$ vertices of the simplex $\sigma_q$ is a \textit{face} of the simplex $\sigma_q$. Thus, the simplex $\sigma_q$ contains faces of all orders from $r=0$ (nodes), $r=1$ (edges), $r=2$ (triangles), $r=3$ (tetrahedrons), and so on, up to the order $r=q-1$. A set of simplexes connected via shared faces of different orders makes a \textit{simplicial complex}. The order of a simplicial complex is given by the order of the largest clique in this complex, and $q_{max}$ is the largest order of all simplicial complexes. Starting from the adjacency matrix of the graph, the algorithm builds the incidence matrix ${\Lambda}$, which contains the IDs of all simplexes and the IDs of the nodes that make up each simplex. With this information at hand, we compute three structure vectors \cite{jj-book,Qanalysis1} to characterise the architecture of simplicial complexes: \begin{itemize} \item The \textit{first structure vector (FSV):} $\mathbf{Q}=\{Q_0,Q_1,\cdots Q_{q_{max}-1}, Q_{q_{max}}\}$, where $Q_q$ is the number of $q$-connected components; \item The \textit{second structure vector (SSV):} $\mathbf{N_s}=\{n_0,n_1, \cdots n_{q_{max}-1},n_{q_{max}}\}$, where $n_q$ is the number of simplexes from the level $q$ upwards; \item The \textit{third structure vector (TSV):} the component $\hat{Q}_q \equiv 1-{Q_q}/{n_q}$ quantifies the degree of connectedness among simplexes \textit{at} the topology level $q$. \end{itemize} Furthermore, we determine the topological dimension of nodes and the topological entropy introduced in \cite{we-PRE2015}. The topological dimension $dimQ_i$ of a node $i$ is defined as the number of simplexes of all orders in which the corresponding vertex participates,
\begin{equation} dimQ_i \equiv \sum_{q=0}^{q_{max}}Q_q^i \ , \label{eq:dimQ} \end{equation}
where $Q_q^i$ is determined directly from the ${\Lambda}$ matrix by tracking the orders of all simplexes in which the node $i$ has a nonzero entry. Then, with this information, the entropy of a topological level $q$, defined as
\begin{equation} S_Q(q) = -\frac{\sum_i p_q^i \log p_q^i}{\log M_q} \; \label{eq-entropy-q} \end{equation}
is computed. Here, $p_q^i= \frac{Q_q^i}{\sum_i Q_q^i}$ is the node's occupation probability of the $q$-level, and the sum runs over all nodes. The normalisation factor $M_q=\sum_i\left(1-\delta_{Q_q^i,0}\right)$ is the number of vertices having a nonzero entry at the level $q$ in the entire graph. Thus the topological entropy (\ref{eq-entropy-q}) measures the degree of cooperation among vertices, resulting in a minimum at a given topology level. Meanwhile, towards the limits $q\to 0$ and $q\to q_{max}$, the occurrence of independent cliques results in a higher entropy at that level. In addition, we compute the vector $\mathbf{f}= \left\{ f_0, f_1, \cdots f_{q_{max}}\right\}$, which is defined \cite{we-PRE2015} such that $f_q$ represents the \textit{number of simplexes and faces at the topology level $q$}.
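As an illustration of how these quantities can be computed in practice, the following minimal Python sketch evaluates $Q_q^i$, $dimQ_i$ of Eq.~(\ref{eq:dimQ}) and $S_Q(q)$ of Eq.~(\ref{eq-entropy-q}) from the maximal cliques of a small toy graph; the \texttt{networkx} clique enumeration stands in for the Bron-Kerbosch/$\Lambda$-matrix machinery described above, and the edge list is purely illustrative.
\begin{verbatim}
import math
from collections import defaultdict
import networkx as nx

# Toy graph standing in for a core network; the edges are illustrative only.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)])

# Maximal cliques play the role of the simplexes listed in the Lambda matrix.
cliques = list(nx.find_cliques(G))

# Q_q^i: number of maximal simplexes of order q that contain node i.
Qqi = defaultdict(lambda: defaultdict(int))
for clique in cliques:
    q = len(clique) - 1                 # order of the simplex
    for node in clique:
        Qqi[q][node] += 1
q_max = max(Qqi)

# Topological dimension dimQ_i (sum of Q_q^i over all levels q).
dimQ = {i: sum(Qqi[q].get(i, 0) for q in list(Qqi)) for i in G.nodes()}

# Normalised entropy S_Q(q) of a topological level q.
def S_Q(q):
    counts = Qqi.get(q, {})
    total = sum(counts.values())
    M_q = len(counts)                   # nodes with a nonzero entry at level q
    if M_q <= 1 or total == 0:
        return 0.0
    return -sum(c / total * math.log(c / total) for c in counts.values()) / math.log(M_q)

print("dimQ:", dimQ)
print("S_Q:", {q: round(S_Q(q), 3) for q in range(q_max + 1)})
\end{verbatim}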
pred}) \right]^2, \end{equation} where $\delta(\Delta c_i^{\rm pred})$ and $\delta (\Delta s_i^{\rm pred})$ are the associated uncertainties on the model-predicted differences. Including the $K_{\rm L}^0\pi^+\pi^-$ final state improves sensitivity, particularly to $s_i^{(\prime)}$. The uncertainties on the model-predicted values of the differences, $\delta(\Delta c_i^{\rm pred})$ and $\delta (\Delta s_i^{\rm pred})$, are dominated by assumptions associated with the U-spin breaking parameters~\cite{ref:bes3prl_cisi, ref:bes3prd_cisi}. This uncertainty motivates an amplitude analysis of $D^0 \to K_{\rm L}^0\pi^+\pi^-$, which will test these assumptions and determine a well-defined, data-driven uncertainty. With the motivation for an amplitude analysis of $D^0\to K^0_{\rm L}\pi^+\pi^-$ described, we now provide the formalism for such an analysis. Any three-body decay $D\to abc$ can proceed via multiple quasi-independent two-body intermediate channels:
\begin{equation} \label{eq:decay_chain} D \to rc,~ r \to ab, \end{equation}
where $r$ is an intermediate resonance. The total effective amplitude of this decay topology is given by a coherent sum of all the contributing resonant channels. This approximation is called the {\it isobar model}, where the contributing intermediate amplitudes are referred to as the {\it isobars}. Isobars can be modeled with various complex dynamical functions, the choice of which depends on the spin and width of the resonance. In addition to the resonant modes, the total decay amplitude may also include a three-body {\it non-resonant} channel:
\begin{equation} \mathcal{A}(\textbf{x}) = \mathcal{A}_{\rm resonant} + \mathcal{A}_{\rm NR} = \sum_r a_r e^{i\phi_r}A_r(\textbf{x}) + a_0e^{i\phi_0}, \end{equation}
where $\mathcal{A}(\textbf{x})$ is the final decay amplitude at position $\textbf{x}$ in the DP. Here the complex coupling parameters $a_re^{i\phi_r}$ correspond to resonant contributions denoted by $r$ and provide relative magnitudes and phases to each of these resonant amplitudes. As the DP phase space is uniform, only the dynamical part $A_r(\textbf{x})$ of the total decay rate results in variations of event density over the DP. Nominally, the dynamics of the modes associated with well isolated and narrow resonant structures with spin one or two are described by relativistic {\it Breit-Wigner} functions. In contrast, the dynamics of broad overlapping resonant structures, as is usually the case for scalars, are parametrized using the K-matrix formulation borrowed from scattering theory. For the subsequent discussions on various parametrizations in the rest of this section, a generic decay chain will be referred to, as in eq.~\ref{eq:decay_chain}, with an angular-momentum transfer $J_{D} \to j_{r} + L$, where $J_D$ and $j_r$ denote the intrinsic spins of $D$ and $r$, and $L$ is the relative orbital angular momentum between $r$ and $c$. Relativistic {\it Breit-Wigner} functions are phenomenological descriptions of non-overlapping intermediate transitions that are away from threshold. Their dynamical structure takes the form
\begin{equation} \label{eq:bw} T_r(s) = \frac{1}{m_0^2 - s - im_0\Gamma(s)}, \end{equation}
where $m_0$ is the resonance mass and $\sqrt{s}$ is the resonance two-particle invariant mass. The momentum-dependent resonance width $\Gamma(s)$ relates to the pole width ($\Gamma_0$) as
\begin{equation} \label{eq:gamma_s} \Gamma(s) = \Gamma_0 \frac{m_0}{\sqrt{s}} \left( \frac{q}{q_0} \right)^{(2L+1)} \mathcal{B}_r^{L}(q,q_0).
\end{equation}
Pole masses and widths in this analysis are fixed to the PDG values~\cite{ref:pdg20}. The function $\mathcal{B}_r^L(q,q_0)$ is the centrifugal-barrier factor~\cite{ref:barrierhippel} in the decay $r \to ab$, where $q$ is the momentum transfer in the $r$ decay in its rest frame and $q_0$ is $q$ evaluated at $m_0$. The full Breit-Wigner amplitude description includes, in addition to the dynamical part $T_r(s)$, barrier factors corresponding to $P$- and $D$-wave decays of the initial state $D$ meson and resonance $r$ decays, $\mathcal{B}_{D}^L$ and $\mathcal{B}_r^L$ respectively, and an explicit spin-dependent factor ($\mathcal{Z}_L$):
\begin{equation} \label{eq:amp_bw} A_r^{BW} = T_r(s) \times \mathcal{B}_{r}^L(q,q_0) \times \mathcal{B}_D^L(p,p_0) \times \mathcal{Z}_L(J_D,j_r,{\bf p},{\bf q}). \end{equation}
Here $p$ denotes the momentum of the spectator particle $c$ in the resonance rest frame and $p_0$ is the corresponding on-shell value. Scaling the Breit-Wigner lineshape by the barrier factors regulates the enhancement or damping of the total amplitude, depending upon the relative orbital angular momentum (or the spin of the resonance) of the decay and the linear momenta of the particles involved. For resonances with spin greater than or equal to one and a small decay interaction radius (or impact parameter) of the order of 1~fm, large momentum transfer in the $a,b$ system is disfavoured because of the limited orbital angular momentum between $r$ and $c$. {\it Blatt-Weisskopf} form factors~\cite{ref:blattweisskopf}, normalized to unity at $q=q_0$, are used to parametrize the barrier factors, whose functional forms are given in table~\ref{tab:blatt}; there $d$ denotes the interaction radius of the parent particle. Similar expressions for $D$-decay barrier factors can be written in terms of the momentum of the spectator particle evaluated in the $D$ rest frame.
\begin{table}[tbp] \centering \large \begin{tabular}{|c|c|} \hline $L$ & Form factor $\mathcal{B}^L_r(q,q_0)$ \Tstrutsmall \Bstrutsmall \\ \hline \hline 0 & 1 \\ 1 & $\sqrt{\frac{1 + q_0^2d^2}{1 + q^2d^2}}$ \Tstrut \Bstrut \\ 2 & $\sqrt{\frac{9 + 3q_0^2d^2 + (q_0^2d^2)^2}{9 + 3q^2d^2 + (q^2d^2)^2}}$ \Tstrut \Bstrut \\ \hline \end{tabular} \caption{Normalized Blatt-Weisskopf barrier factors~\cite{ref:blattweisskopf} for resonance decay exhibiting spin and momentum-dependent effects.} \label{tab:blatt} \end{table}
The spin dependence of the decay amplitudes is derived using the covariant spin-tensor, or Rarita-Schwinger, formalism~\cite{ref:chungcernreport, ref:spinchung, ref:spintensor, ref:spinzoubugg}. The pure spin-tensors for spin 1 and 2 from the spin-projection operators $\Theta$ and the break-up four-momentum $k^{\mu} = a^{\mu} - b^{\mu}$ in the resonance rest frame (so that the three-momentum $\mathbf{k} = 2\mathbf{q}$) are given by
\begin{align} \mathit{S}_{\mu} &= \Theta_{\mu \nu}k^{\nu}, \\ \mathit{T}_{\mu \nu} &= \Theta_{\mu \nu \rho \sigma}k^{\rho}k^{\sigma}.
\end{align}
Using the above-defined spin tensors together with the orthogonality and spacelike conditions on Lorentz-invariant functions of rank one and two for $P$ and $D$ wave, respectively, when summed over all the polarization states, it is possible to arrive at the following definitions of the angular decay amplitude:
\begin{align} A(0 \to 1 + 1) &: \mathcal{Z}_{L=1} = \Theta^{\mu \nu} \mathit{S}_{\mu} \mathcal{L}_{\nu}, \\ A(0 \to 2 + 2) &: \mathcal{Z}_{L=2} = \Theta^{\mu \rho}\Theta^{\nu \sigma} \mathit{T}_{\mu \nu} \mathcal{M}_{\rho \sigma}, \end{align}
where $\mathcal{L}$ and $\mathcal{M}$ are normalized tensors describing the states of relative orbital angular momenta $L=1$ and $L=2$. Simplified expressions for the angular amplitudes in terms of the four-momenta of the states involved and their invariant masses are given in appendix~\ref{appensec:BW_spin}. Overlapping $S$-wave pole production with multiple channels in two-body scattering processes is best described by the $K$-matrix formulation~\cite{ref:KmatChung, ref:kmatparam, ref:focuskmat}. A sum of Breit-Wigner functions used to describe such broad resonant structures violates the unitarity of the transition matrix $\mathcal{T}$. The idea is to write the total effective $\mathcal{T}$ matrix in terms of the $K$ matrix
\begin{equation} \hat{\mathcal{T}} = ( 1 - i\hat{K}\omega )^{-1}\hat{K}. \end{equation}
The $K$-matrix contains contributions from all the poles, intermediate channels and all possible couplings. Here $\omega$ is a diagonal matrix whose elements are the phase-space densities of the various channels involved. As an example, in low-energy $\pi\pi \to \pi\pi$ scattering, the total amplitude carries a contribution from the coupling between the resonance $f_0(980)$ and a $KK$ channel, which is partly responsible for the sharp dip observed in the scattering amplitude near $1$ GeV. A simple Breit-Wigner function cannot explain this variation in the amplitude. This recipe can be translated to decay processes involving broad overlapping resonance structures produced in an $S$-wave. An initial state first couples to $K$-matrix poles with strength parametrized by $\beta_{\alpha}$ for pole $\alpha$, and these poles in turn couple to various intermediate channels $i$ in the $K$-matrix with strengths characterized by $g_{i}^{\alpha}$. Direct coupling between the initial state and these $K$-matrix channels is also possible; the corresponding strength is denoted by $f_{1i}^{\rm prod}$, which scales a slowly varying polynomial term in $s$ with an arbitrary parameter $s_0^{\rm prod}$, both fixed from a global analysis of $\pi\pi$ scattering data~\cite{ref:kmatparam}. Summing these contributions together results in the {\it production vector} $\hat{P}$:
\begin{equation} \hat{P}_i = \sum_{\alpha} \frac{\beta_{\alpha}g_i^{\alpha}}{m_{\alpha}^2-s} + f_{1i}^{\rm prod}\frac{1-s_0^{\rm prod}}{s-s_0^{\rm prod}}, \end{equation}
where $m_{\alpha}$ are the pole masses and $s$ is the kinematic variable, in this case the squared invariant mass of the two pions from the three-body decay. The structure of the $K$ matrix in a decay process with poles $\alpha$ and decay channels denoted by $i$ and $j$ is given by
\begin{equation} \label{Eq:kmat_def} K_{ij}(s) = \left( \sum_{\alpha} \frac{g_i^{\alpha}g_j^{\alpha}}{m_{\alpha}^2-s} + f_{ij}^{\rm scatt}\frac{1-s_0^{\rm scatt}}{s-s_0^{\rm scatt}} \right) f_{A0}(s). \end{equation}
The intermediate channels considered in the present case are $\pi\pi$, $KK$, $\pi\pi\pi\pi$, $\eta\eta$ and $\eta\eta'$.
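To illustrate eqs.~(\ref{eq:bw}) and (\ref{eq:gamma_s}) together with the barrier factors of table~\ref{tab:blatt}, the following Python sketch evaluates a spin-1 relativistic Breit-Wigner lineshape; the resonance parameters and the interaction radius are placeholder values and are not those used in this analysis.
\begin{verbatim}
import numpy as np

def blatt_weisskopf(L, q, q0, d=5.0):
    """Normalised Blatt-Weisskopf barrier factor B_r^L(q, q0); d ~ 1 fm in GeV^-1."""
    z, z0 = (q * d) ** 2, (q0 * d) ** 2
    if L == 0:
        return 1.0
    if L == 1:
        return np.sqrt((1 + z0) / (1 + z))
    if L == 2:
        return np.sqrt((9 + 3 * z0 + z0 ** 2) / (9 + 3 * z + z ** 2))
    raise ValueError("only L = 0, 1, 2 are tabulated")

def breakup_momentum(s, ma, mb):
    """Momentum of a (or b) in the r -> a b rest frame at invariant mass sqrt(s)."""
    lam = (s - (ma + mb) ** 2) * (s - (ma - mb) ** 2)
    return np.sqrt(np.maximum(lam, 0.0)) / (2.0 * np.sqrt(s))

def breit_wigner(s, m0, gamma0, ma, mb, L=1, d=5.0):
    """T_r(s) of eq. (bw) with the mass-dependent width of eq. (gamma_s)."""
    q, q0 = breakup_momentum(s, ma, mb), breakup_momentum(m0 ** 2, ma, mb)
    gamma_s = gamma0 * (m0 / np.sqrt(s)) * (q / q0) ** (2 * L + 1) * blatt_weisskopf(L, q, q0, d)
    return 1.0 / (m0 ** 2 - s - 1j * m0 * gamma_s)

# Placeholder numbers (GeV): a rho(770)-like resonance decaying to two pions.
s = np.linspace(0.5, 1.1, 4) ** 2
print(breit_wigner(s, m0=0.775, gamma0=0.149, ma=0.1396, mb=0.1396, L=1))
\end{verbatim}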
In addition to the pole terms, direct
super-cluster, of $0.1\mu{\rm G}$ strength and 10 Mpc coherence length corresponding to a hypothetical local turbulent eddy of comparable size \cite{Biermann,Tanco}. For these parameters as well we have $t_e\ll t_H$, and therefore even this field structure would not affect the neutrino bound. Nevertheless, two points should be made. First, if cosmic-rays are confined to our local super-cluster, one would expect the local cosmic-ray flux to be higher than average (since the production rate averaged over Hubble time should be higher than average in over-dense large scale regions), implying that the upper bound on the neutrino intensity is lower than $I_{\rm max}$ derived here. Second, it is hard to understand how the hypothesized magnetic field structure could have been formed. The overdensity in the local super-cluster is not large, and an equipartition magnetic field of strength $0.1\mu{\rm G}$ therefore corresponds to a turbulent velocity $v_t\sim10^3{\rm\ km/s}$. A turbulent eddy of this velocity coherent over tens of Mpc is inconsistent with local peculiar velocity measurements (e.g. \cite{Strauss} for a recent review). Moreover, the corresponding eddy turn-around time is larger than the Hubble time.
\section{AGN jet models}
We consider in this section some popular models for neutrino production in which high-energy neutrinos are produced in the jets of active galactic nuclei \cite{AGNnu}. In these models, the flux of high-energy neutrinos received at Earth is produced by ``blazars'', AGN jets nearly aligned with our line of sight. Since the predicted neutrino intensities for these models typically exceed the upper bound, Eq. (\ref{Fmax}), based on observed cosmic-ray fluxes, by two orders of magnitude, it is important to verify that the models satisfy the assumption on which Eq. (\ref{Fmax}) is based, i.e. optical depth $<1$ to $p-\gamma$ interaction. The neutrino spectrum and flux are derived in AGN jet models on the basis of the following key considerations. It is assumed that protons are Fermi accelerated in the jet to high energy, with energy spectrum $dN_p/dE_p\propto E_p^{-2}$. For a photon spectrum $dN_\gamma/dE_\gamma \propto E_\gamma^{-2}$, as typically observed, the number of photons with energy above the threshold for pion production is proportional to the proton energy $E_p$ (the threshold energy is inversely proportional to $E_p$). This implies that the proton photo-meson optical depth is proportional to $E_p$, and therefore, assuming that the optical depth is small, that the resulting neutrino spectrum is flatter than the proton spectrum, namely $dN_\nu/dE_\nu\propto E_\nu^{-1}$, as shown in Fig. 1. The spectrum extends to a neutrino energy which is $\approx5\%$ of the maximum accelerated proton energy, which is typically $10^{19}$eV in the models discussed. The production of charged pions is accompanied by the production of neutral pions, whose decay leads to the emission of high-energy gamma-rays. It has been claimed \cite{pionAGN} that the observed blazar emission extending to $\sim10$~TeV \cite{TeV} supports the hypothesis that the high-energy emission is due to neutral pion decay rather than to inverse Compton scattering by electrons. Thus, the normalization of the neutrino flux is determined by the assumption that neutral pion decay is the source of high-energy photon emission and that this emission from AGN jets produces the observed diffuse $\gamma$-ray background, $\Phi_\gamma(>100{\rm MeV})=10^{-8} {\rm erg/cm}^2{\rm s\,sr}$ \cite{EGRET}.
Under these assumptions the total neutrino energy flux is similar to the $\gamma$-ray background flux (see Fig. 1). In the AGN jet models discussed above, the proton photo-meson optical depth $\tau_{p\gamma}$ at $E_p\le10^{19}$eV is smaller than unity. This is evident from the neutrino energy spectrum shown in Fig. 1, which is flatter than the assumed proton spectrum at $E_p\le10^{19}$~eV, as explained above. In fact, it is easy to see that these models are constrained to have $\tau_{p\gamma}\le10^{-3}$ at $E_p\sim10^{19}$eV. The threshold energy of photons for pair-production in interaction with a 1~TeV photon is similar to the photon energy required for resonant meson production in interaction with a proton of energy $E_p=0.2{\rm GeV}^2/(0.5{\rm MeV})^2\times1\,{\rm TeV}=10^{18}$~eV. Emission of $\sim1$~TeV photons from blazars is now well established \cite{eTeV}, and there is evidence that the high-energy photon spectrum extends as a power-law at least to $\sim10$~TeV \cite{TeV}. This is the main argument used \cite{pionAGN} in support of the hypothesis that high-energy emission from blazars is due to pion decay rather than inverse Compton scattering. The observed high-energy emission implies that the pair-production optical depth for $\sim1$~TeV photons is small, and that $\tau_{p\gamma}\le10^{-4}(E_p/10^{18}{\rm eV})$, since the cross section for pair production is $\sim10^4$ times larger than the cross section for photo-meson production. This result guarantees that the upper bound $I_{\rm max}$ on the neutrino intensity is valid for AGN jet models.
\section{Gamma-ray bursts}
In the GRB fireball model for high-energy neutrinos, the cosmic ray observations are naturally taken into account and the upper limit on the high-energy neutrino flux is automatically satisfied. In fact, it was the similarity between the energy density in cosmic ray sources implied by the cosmic ray flux observations and the GRB energy density in high energy protons that led to the initial suggestion that GRBs are the source of high energy protons. Just as for AGN jets, the GRB fireballs are optically thin to $\gamma p$ interactions that produce pions, but--unlike the AGN jet models--the GRB model predicts a neutrino flux that satisfies the cosmic-ray upper bound discussed in Section II.
\subsection{Neutrinos at energies $\sim10^{14}$~eV}
In the GRB fireball model \cite{fireball}, which has recently gained support from GRB afterglow observations \cite{AG}, the observed gamma rays are produced by synchrotron emission of high-energy electrons accelerated in internal shocks of an expanding relativistic wind, with characteristic Lorentz factor $\Gamma\sim300$ \cite{lorentz}. In this scenario, observed gamma-ray flux variability on time scale $\Delta t$ is produced by internal collisions at radius $r_d\approx\Gamma^2c\Delta t$ that arise from variability of the underlying source on the same time scale \cite{internal}. In the region where electrons are accelerated, protons are also expected to be shock accelerated, and their photo-meson interaction with observed burst photons will produce a burst of high-energy neutrinos accompanying the GRB \cite{WnB}.
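The numerical correspondence quoted above is easy to check; the following short Python snippet reproduces the proton energy $E_p \simeq 10^{18}$~eV and the implied photo-meson optical depth at $E_p \sim 10^{19}$~eV from the numbers given in the text.
\begin{verbatim}
# Energies in eV.
GeV, MeV, TeV = 1e9, 1e6, 1e12

# Proton energy whose photo-meson threshold photons coincide with the
# pair-production threshold photons of a ~1 TeV gamma-ray.
E_p = 0.2 * GeV**2 / (0.5 * MeV) ** 2 * 1 * TeV
print("E_p ~ %.1e eV" % E_p)                     # ~8e17 eV, i.e. of order 10^18 eV

# With sigma(pair) ~ 1e4 x sigma(photo-meson), a small pair-production depth
# for ~TeV photons implies tau_pgamma <= 1e-4 * (E_p / 1e18 eV):
print("tau_pgamma(1e19 eV) <= %.0e" % (1e-4 * (1e19 / 1e18)))   # 1e-3, as stated
\end{verbatim}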
If GRBs are the sources of ultra-high-energy cosmic-rays \cite{GRB1,GRB2}, then the expected GRB neutrino intensity is \cite{WnB}
\begin{eqnarray} E_\nu^2\Phi_{\nu_\mu}&&\approx E_\nu^2\Phi_{\bar\nu_\mu} \approx E_\nu^2\Phi_{\nu_e}\approx {1\over2}f_\pi I_{\rm max}\cr &&\approx 1.5\times10^{-9}\left({f_\pi\over0.2}\right)\min\{1,E_\nu/E^b_\nu\} {\rm GeV\,cm}^{-2}{\rm s}^{-1}{\rm sr}^{-1},\quad E^b_\nu\approx10^{14} {\rm eV}. \label{JGRB} \end{eqnarray}
Here, $f_\pi$ is the fraction of energy lost to pion production by high-energy protons. The derivation of $f_\pi$ and $E^b_\nu$ and their dependence on GRB model parameters is given in the appendix [Eqs. (\ref{fpi}), (\ref{Enu})]. The intensity given by Eq. (\ref{JGRB}) is $\sim5$ times smaller than that given in Eq. (8) of ref. \cite{WnB}, due to the fact that in Eq. (8) of ref. \cite{WnB} we neglected the logarithmic correction $\ln(100)\approx5$ of Eq. (\ref{ECR}). The GRB neutrino intensity can be estimated directly from the observed gamma-ray fluence. The Burst and Transient Source Experiment (BATSE) measures the GRB fluence $F_\gamma$ over a decade of photon energy, $\sim0.1$MeV to $\sim1$MeV, corresponding to half a decade of radiating electron energy (the electron synchrotron frequency is proportional to the square of the electron Lorentz factor). If electrons carry a fraction $f_e$ of the energy carried by protons, then the muon neutrino fluence of a single burst is $E_\nu^2dN_\nu/dE_\nu\approx0.25(f_\pi/f_e)F_\gamma/\ln(3)$. The average neutrino flux per unit time and solid angle is obtained by multiplying the single-burst fluence by the GRB rate per solid angle, $\approx10^3$ bursts per year over $4\pi$~sr. Using the average burst fluence $F_\gamma\approx6\times10^{-6}{\rm erg/cm}^2$, we obtain a muon neutrino intensity $E_\nu^2\Phi_\nu\approx3\times10^{-9}(f_\pi/f_e) {\rm GeV/cm}^2{\rm s\,sr}$. Recent GRB afterglow observations typically imply $f_e\sim0.1$ \cite{AG}, and therefore $f_\pi/f_e\sim1$. Thus, the neutrino intensity estimated directly from the gamma-ray fluence agrees with the estimate (\ref{JGRB}) based on the cosmic-ray production rate.
\subsection{Neutrinos at high energy $>10^{16}$~eV}
The neutrino spectrum (\ref{JGRB}) is modified at high energy, where neutrinos are produced by the decay of muons and pions whose lifetime $\tau_{\mu,\pi}$ exceeds the characteristic time for energy loss due to adiabatic expansion and synchrotron emission \cite{WnB,RM98}. The synchrotron loss time is determined by the energy density of the magnetic field in the wind rest frame. For the characteristic parameters of a GRB wind, the muon energy for which the adiabatic energy loss time equals the muon lifetime, $E^a_\mu$, is comparable to the energy $E^s_\mu$ at which the lifetime equals the synchrotron loss time, $\tau^s_\mu$. For pions, $E^a_\pi>E^s_\pi$. This, and the fact that the adiabatic loss time is independent of energy and the synchrotron loss time is inversely proportional to energy, imply that synchrotron losses are the dominant effect suppressing the flux at high energy.
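The fluence-based estimate can be reproduced in a few lines; the numbers below are those quoted in the text (average burst fluence $6\times10^{-6}\,{\rm erg\,cm^{-2}}$, $\approx10^3$ bursts per year over $4\pi$~sr, $f_\pi/f_e\sim1$).
\begin{verbatim}
import math

erg_to_GeV = 624.15                    # 1 erg in GeV
year = 3.156e7                         # seconds per year

F_gamma = 6e-6                         # average burst fluence [erg / cm^2]
rate = 1e3 / year / (4 * math.pi)      # GRB rate per second per steradian
fpi_over_fe = 1.0                      # f_pi / f_e ~ 1 from afterglow observations

# Single-burst muon-neutrino fluence: E^2 dN/dE ~ 0.25 (f_pi/f_e) F_gamma / ln(3).
fluence = 0.25 * fpi_over_fe * F_gamma / math.log(3)

# Diffuse intensity: single-burst fluence times the burst rate per unit time and solid angle.
intensity = fluence * rate * erg_to_GeV
print("E^2 Phi ~ %.1e GeV cm^-2 s^-1 sr^-1" % intensity)
# ~2e-9, the same order as the ~3e-9 GeV cm^-2 s^-1 sr^-1 quoted above.
\end{verbatim}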
experimental limitations (staining efficiency/quality, image resolution, and detection of fluorescence signal) impedes identification of true spine boundaries, whereas our software provides the possibility to minimize user bias. Although this experiment exclusively used confocal light microscopy images of dendritic spines, the method may be extended in the future for use with other super-resolution imaging techniques, such as photo-activated localization microscopy. The present 3-D segmentation method may be used for different experimental protocols that study the structural dynamics of dendritic spines in vitro and in vivo under various physiological and pathological conditions. Moreover, we believe that this will be a research tool that enables the detection of even subtle changes in 3-D dendritic spine structure. Such methodological advances in spine morphological studies will allow more precise analyses and better interpretations of biological data regarding structural plasticity. The present method facilitates accurate, unbiased spine segmentation results and can significantly improve the way we study dendritic spine plasticity. However, which spines are incorporated into the analysis still depends strictly on the user's knowledge and the experimental design.

### Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding authors on reasonable request.
\section{Introduction}
In the clinical domain, the ability to conduct natural language inference (NLI) on unstructured, domain-specific texts such as patient notes, pathology reports, and scientific papers plays a critical role in the development of predictive models and clinical decision support (CDS) systems. Considerable progress in domain-agnostic NLI has been facilitated by the development of large-scale, crowdworker-constructed datasets, including the Stanford Natural Language Inference corpus (SNLI) and the Multi-Genre Natural Language Inference (MultiNLI) corpus~\citep{bowman2015large, williams2017broadcoverage}. MedNLI is a similarly-motivated, healthcare-specific dataset created by a small team of physician-annotators in lieu of crowdworkers, due to the extensive domain expertise required~\citep{romanov2018lessons}. \citet{poliak-etal-2018-hypothesis}, \citet{gururangan2018annotation}, \citet{tsuchiya-2018-performance}, and \citet{mccoy2019right} empirically demonstrate that SNLI and MultiNLI contain lexical and syntactic annotation artifacts that are disproportionately associated with specific classes, allowing a hypothesis-only classifier to significantly outperform a majority-class baseline model. The presence of such artifacts is hypothesized to be partially attributable to the priming effect of the example hypotheses provided to crowdworkers at annotation-time. \citet{romanov2018lessons} note that a hypothesis-only baseline is able to outperform a majority class baseline in MedNLI, but they do not identify specific artifacts. We confirm the presence of annotation artifacts in MedNLI and proceed to identify their lexical and semantic characteristics. We then conduct adversarial filtering to partition MedNLI into \emph{easy} and \emph{difficult} subsets~\citep{sakaguchi2020winogrande}. We find that the performance of off-the-shelf \texttt{fastText}-based hypothesis-only and hypothesis-plus-premise classifiers is lower on the \emph{difficult} subset than on the \emph{full} and \emph{easy} subsets~\citep{joulin2016bag}. We provide partition information for downstream use, and conclude by advocating alternative dataset construction strategies for knowledge-intensive domains.\footnote{See {https://github.com/crherlihy/clinical\_nli\_artifacts} for code and partition ids.}
\section{The MedNLI Dataset}
MedNLI is a domain-specific evaluation dataset inspired by general-purpose NLI datasets, including SNLI and MultiNLI~\citep{romanov2018lessons, bowman2015large, williams2017broadcoverage}. Much like its predecessors, MedNLI consists of premise-hypothesis pairs, in which the premises are drawn from the \texttt{Past Medical History} sections of a randomly selected subset of de-identified clinical notes contained in MIMIC-III~\citep{johnson2016mimicIII, goldberger2000physiobank}. MIMIC-III was created from the records of adult and neonatal intensive care unit (ICU) patients. As such, complex and clinically severe cases are disproportionately represented, relative to their frequency of occurrence in the general population. Physician-annotators were asked to write a \textit{definitely true}, a \textit{maybe true}, and a \textit{definitely false} hypothesis for each premise, corresponding to \textit{entailment}, \textit{neutral} and \textit{contradiction} labels, respectively. The resulting dataset has cardinality: $n_{\text{train}} = 11232; \ n_{\text{dev}} = 1395; \ n_{\text{test}} = 1422$.
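For readers who wish to reproduce a hypothesis-only baseline of the kind analysed in the next section, the following sketch uses the off-the-shelf \texttt{fastText} Python bindings; the file names and hyperparameter values are illustrative and are not those of the original study.
\begin{verbatim}
import fasttext

# train.txt / dev.txt hold one example per line in fastText format, e.g.
#   __label__entailment <hypothesis text>
# (the premise is deliberately withheld from the classifier).
model = fasttext.train_supervised(
    input="train.txt",            # hypothetical export of the MedNLI training split
    lr=0.5, epoch=25, wordNgrams=2, dim=100,
)

n, precision, recall = model.test("dev.txt")
print(n, precision, recall)       # with one label per example, precision == recall == micro-F1

print(model.predict("patient has a normal heart rhythm"))
\end{verbatim}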
\section{MedNLI Contains Artifacts} \label{sec:mednliContainsArtifacts} To determine whether MedNLI contains annotation artifacts that may artificially inflate the performance of models trained on this dataset, we train a simple, premise-unaware, \texttt{fastText} classifier to predict the label of each premise-hypothesis pair, and compare the performance of this classifier to a majority-class baseline, in which all training examples are mapped to the most commonly occurring class label~\citep{joulin2016bag, poliak-etal-2018-hypothesis, gururangan2018annotation}. Note that since annotators were asked to create an entailed, contradictory, and neutral hypothesis for each premise, MedNLI is class-balanced. Thus, in this setting, a majority class baseline is equivalent to choosing a label uniformly at random for each training example. The micro F1-score achieved by the \texttt{fastText} classifier significantly exceeds that of the majority class baseline, confirming the findings of \citet{romanov2018lessons}, who report a micro-F1 score of 61.9 but do not identify or analyze artifacts: \begin{table}[!bth] \centering \small \begin{tabular}{lcc} \hline {} & \textbf{dev} & \textbf{test} \\ \hline majority class & 33.3 & 33.3 \\ \texttt{fastText} & \textbf{64.8} & \textbf{62.6} \\ \hline \end{tabular} \caption{Performance (micro F1-score) of the \texttt{fastText} hypothesis-only classifier.} \label{tab:fastTextF1} \end{table} As the confusion matrix for the test set shown in \hyperref[tab:confusionMatrix]{Table 2} indicates, the \texttt{fastText} model is most likely to misclassify entailment as neutral, and neutral and contradiction as entailment. Per-class precision and recall on the test set are highest for contradiction (73.2; 72.8) and lowest for entailment (56.7; 53.8). \begin{table}[!htb] \resizebox{\columnwidth}{!}{% \centering \begin{tabular}{lccc} \hline {} & entailment & neutral & contradiction \\ \hline entailment & \textbf{255} & 151 & 68 \\ neutral & 126 & \textbf{290} & 58 \\ contradiction & 69 & 60 & \textbf{345} \\ \hline \end{tabular}% } \caption{Confusion matrix for \texttt{fastText} classifier.} \label{tab:confusionMatrix} \end{table} \section{Characteristics of Clinical Artifacts} In this section, we conduct class-specific lexical analysis to identify the clinical and domain-agnostic characteristics of annotation artifacts associated with each set of hypotheses in MedNLI. \subsection{Preprocessing} We cast each hypothesis string in the MedNLI training dataset to lowercase. We then use a \texttt{scispaCy} model pre-trained on the \texttt{en\_core\_sci\_lg} corpus for tokenization and clinical named entity recognition (CNER)~\citep{neumann2019scispacy}. One challenge associated with clinical text, and scientific text more generally, is that semantically meaningful entities often consist of spans rather than single tokens. To mitigate this issue during lexical analysis, we map each multi-token entity to a single-token representation, where sub-tokens are separated by underscores. \subsection{Lexical Artifacts} \label{sec:lexicalArtifacts} Following \citet{gururangan2018annotation}, to identify tokens that occur disproportionately in hypotheses associated with a specific class, we compute token-class pointwise mutual information (PMI) with add-50 smoothing applied to raw counts, and a filter to exclude tokens appearing less than five times in the overall training dataset. Table \hyperref[tab:wordChoice]{3} reports the top 15 tokens for each class. 
\begin{equation} \noindent \nonumber \texttt{PMI}(\text{token, class}) = log_2 \frac{p(\text{token, class})}{p(\text{token}, \cdot ) p(\cdot, \text{class})} \end{equation} \begin{table*}[!htb] \small \centering \begin{tabular}{lclclc} \toprule entailment & \% & neutral & \% & contradiction & \% \\ \midrule just & 0.25\% & cardiogenic\_shock & 0.33\% & no\_history\_of\_cancer & 0.27\% \\ high\_risk & 0.26\% & pelvic\_pain & 0.30\% & no\_treatment & 0.27\% \\ pressors & 0.25\% & joint\_pain & 0.30\% & normal\_breathing & 0.27\% \\ possible & 0.26\% & brain\_injury & 0.32\% & no\_history\_of\_falls & 0.27\% \\ elevated\_blood\_pressure & 0.26\% & delerium & 0.30\% & normal\_heart\_rhythm & 0.28\% \\ responsive & 0.25\% & intracranial\_pressure & 0.30\% & health & 0.26\% \\ comorbidities & 0.26\% & smoking & 0.42\% & normal\_head\_ct & 0.26\% \\ spectrum & 0.27\% & obesity & 0.41\% & normal\_vision & 0.26\% \\ steroid\_medication & 0.25\% & tia & 0.32\% & normal\_aortic\_valve & 0.27\% \\ longer & 0.26\% & acquired & 0.31\% & bradycardic & 0.26\% \\ history\_of\_cancer & 0.26\% & head\_injury & 0.31\% & normal\_blood\_sugars & 0.27\% \\ broad & 0.26\% & twins & 0.30\% & normal\_creatinine & 0.28\% \\ frequent & 0.25\% & fertility & 0.30\% & cancer\_history & 0.26\% \\ failed & 0.26\% & statin & 0.30\% & cardiac & 0.33\% \\ medical & 0.29\% & acute\_stroke & 0.30\% & normal\_chest & 0.28\% \\ \bottomrule \end{tabular} \caption{Top 15 tokens by \texttt{PMI}(token, class); \% of \emph{class} training examples that contain the token.} \label{tab:wordChoice} \end{table*} \paragraph{Entailment} Entailment hypotheses are characterized by tokens about: (1) patient status and response to treatment (e.g., \emph{responsive}; \emph{failed}; \emph{longer} as in \emph{no longer intubated}); (2) medications and procedures which are common among ICU patients (e.g., \emph{broad\_spectrum}; \emph{antibiotics}; \emph{pressors}; \emph{steroid\_medication}; \emph{underwent}; \emph{removal}); (3) generalized versions of specific words in the premise (e.g., \emph{comorbidities}; \emph{multiple\_medical\_problems}), which \citet{gururangan2018annotation} also observe in SNLI; and (4) modifiers related to duration, frequency, or probability (e.g., \emph{frequent}, \emph{possible}, \emph{high\_risk}). \paragraph{Neutral} Neutral hypotheses feature tokens related to: (1) chronic and acute clinical conditions (e.g., \emph{obesity}; \emph{joint\_pain}; \emph{brain\_injury}); (2) clinically relevant behaviors (e.g., \emph{smoking}; \emph{alcoholic}; \emph{drug\_overdose}); and (3) gender and reproductive status (e.g., \emph{fertility}; \emph{pre\_menopausal}). Notably, the most discriminative conditions tend to be commonly occurring within the general population and generically stated, rather than rare and specific. This presumably contributes to the relative difficulty that the hypothesis-only \texttt{fastText} model has distinguishing between the entailment and neutral classes. \paragraph{Contradiction} Contradiction hypotheses are characterized by tokens that convey normalcy and good health. Lexically, such sentiment manifests as: (1) explicit negation of clinical severity, medical history, or in-patient status (e.g., \emph{denies\_pain}; \emph{no\_treatment}; \emph{discharged\_home}), or (2) affirmation of clinically unremarkable findings (e.g., \emph{normal\_heart\_rhythm}; \emph{normal\_blood\_sugars}), which would generally be rare among ICU patients. 
This suggests a heuristic of inserting negation token(s) to contradict the premise, which \citet{gururangan2018annotation} also observe in SNLI. \subsection{Syntactic Artifacts} \paragraph{Hypothesis Length} In contrast to \citet{gururangan2018annotation}'s finding that entailed hypotheses in SNLI tend to be shorter while neutral hypotheses tend to be longer, hypothesis sentence length does not appear to play a discriminatory role in MedNLI, regardless of whether we consider merged- or separated-token representations of multi-word entities, as illustrated by Table \hyperref[fig:hypLenTable]{4}: \begin{table}[!htb] \label{fig:hypLenTable} \resizebox{\columnwidth}{!}{% \begin{tabular}{l|c|c|c|c|c|c|} \cline{2-7} & \multicolumn{2}{c|}{\textbf{entailment}} & \multicolumn{2}{c|}{\textbf{neutral}} & \multicolumn{2}{c|}{\textbf{contradiction}} \\ \cline{2-7} & mean & median & mean & median & mean & median \\ \hline \multicolumn{1}{|c|}{\textbf{separate}} & 5.6 & 5.0 &
The European Physical Journal D - Atomic, Molecular, Optical and Plasma Physics (ISSN 1434-6060 print; 1434-6079 online), published by Springer-Verlag.

• Harmonics generated in circularly polarized laser fields: a study on spin angular momentum conservation

Abstract: Harmonics generated from the electronic transition among continuum states are studied by a quantum scattering theory.
The analytical formulas of the harmonics generated in one- and two-color circularly polarized (CP) laser fields are obtained rigorously. The harmonic generation amplitude is described by a pair of phased generalized Bessel functions, by which the conservation relations of the spin angular momentum are derived on a solid basis and in a straightforward way. It is found that, owing to spin angular momentum conservation, high-order harmonics cannot be generated in one-color CP fields, but can be generated in bichromatic CP fields. For a given harmonic order, two generation channels exist: one generates a right CP harmonic, while the other generates a left CP harmonic. When the driving CP modes are of opposite helicity, the generated right CP harmonic is of comparable strength to the left CP harmonic. When the driving CP modes are of the same helicity, the harmonic with the same helicity as the driving fields is much stronger than that with the opposite helicity. The selection rules of the harmonics are different in these two kinds of bichromatic CP driving fields.

Graphical abstract: the figure depicts the calculated harmonic spectra generated from Ar atoms. For the counter-rotating driving fields, the harmonic spectra exhibit clear plateau and sharp cutoff structures, for both the right and left CP harmonics. The right CP harmonics are of the 3n-1 series in order, while the left CP harmonics are of the 3n+1 series in order. Generally, the right CP harmonics are stronger than the adjacent left CP harmonics. For the co-rotating driving fields, the harmonic rate decreases quickly as the harmonic order increases; thus the plateau and cutoff structures are not as clear as their counterparts in the previous case, for both right CP and left CP harmonics. The harmonic spectra change continuously in order. Each harmonic order has its right CP and left CP components, but generally, the left CP components are much smaller than the right CP components. It is also clear that the harmonics generated from counter-rotating BCP fields are much stronger than those generated from co-rotating BCP fields.

PubDate: 2023-01-27

• The influence of the structure of atomic systems on the dynamics of electron exchange in ion–ion collision

Abstract: One-electron exchange between the Rydberg states of ions is elaborated within the time-symmetrized framework of the two-wave-function model. It was observed that, in an atomic collision, the population of ionic states by electron exchange is strongly conditioned by the structure of the subsystem itself, especially at intermediate velocities. This circumstance is particularly pronounced when determining the ion–ion distances at which the charge exchange is most likely. The specificity of the model is reflected in the fact that the determination of the electron capture distance is carried out at fixed initial and final states of the system under consideration. As an illustrative example, XeVIII was used as the target of the collision process, i.e. the ion Xe $$^{8+}$$ initially populated in the Rydberg state $$\nu _B=(n_B=8,l_B=0,m_B=0)$$, while argon ions Ar $$^{Z_A+}$$ were used as projectiles in the core charge range $$Z_A\in [3,9]$$.
Graphical abstract

PubDate: 2023-01-25

• Dominant Lyapunov mapping in phase and parameters spaces corresponding to the thermodynamic limit of the two-mode Bose–Hubbard model applied to the control of collapses of Josephson oscillations

Abstract: In this work, we propose a mapping of the phase space and parameter space of a classical counterpart of a system of spinless interacting bosonic particles trapped in a double-well potential subjected to a temporal modulation of the on-site atom–atom interaction. To do this, we present a mapping of the dominant Lyapunov exponent of the equations of motion governed by the classical Hamiltonian corresponding to the thermodynamic limit of the two-mode Bose–Hubbard model. Such a proposal can be used to determine parameter ranges and regions of atomic coherent states that are amenable to the control of collapses of Josephson oscillations. Our findings suggest that states and parameters associated with the regular regime in the classical domain are amenable to this control, whereas states settled in chaotic regions are not eligible.

Graphical abstract

PubDate: 2023-01-25

• Atomistic modeling of thermal effects in focused electron beam-induced deposition of Me $$_2$$ Au(tfac)

Abstract: The role of thermal effects in the focused electron beam-induced deposition (FEBID) of Me $$_2$$ Au(tfac) is studied by means of irradiation-driven molecular dynamics simulations. The
# Optimizing a NengoDL model

Optimizing Nengo models via deep learning training methods is one of the important features of NengoDL. This functionality is accessed via the Simulator.train method. For example:

    with nengo.Network() as net:
        <construct the model>

    with nengo_dl.Simulator(net, ...) as sim:
        sim.train(<data>, <optimizer>, n_epochs=10, objective=<objective>)

When the Simulator is first constructed, all the parameters in the model (e.g., encoders, decoders, connection weights, biases) are initialized based on the functions/distributions specified during model construction (see the Nengo documentation for more detail on how that works). What the Simulator.train method does is then further optimize those parameters based on some inputs and desired outputs. We'll go through each of those components in more detail below.

## Simulator.train arguments

### data

The first argument to the Simulator.train function is the training data. This generally consists of two components: input values for Nodes, and target values for Probes.

**inputs**

We can think of a model as computing a function $$y = f(x, \theta)$$, where $$f$$ is the model, mapping inputs $$x$$ to outputs $$y$$ with parameters $$\theta$$. These values specify the values for $$x$$. In practice what that means is specifying values for the input Nodes in the model. A Node is a Nengo object that inserts values into a Network, usually used to define external inputs. Simulator.train will override the normal Node values with the training data that is provided.

This is specified as a dictionary {<node>: <array>, ...}, where <node> is the input node for which training data is being defined, and <array> is a numpy array containing the training values. This training array should have shape (n_inputs, n_steps, node.size_out), where n_inputs is the number of training examples, n_steps is the number of simulation steps to train across, and node.size_out is the dimensionality of the Node.

When training a NengoDL model the user must specify the minibatch_size to use during training, via the Simulator(..., minibatch_size=n) argument. This defines how many inputs (out of the total n_inputs defined above) will be used for each optimization step.

Here is an example illustrating how to define the input values for two input nodes:

    with nengo.Network() as net:
        a = nengo.Node([0])
        b = nengo.Node([1, 2, 3])
        ...

    n_inputs = 1000
    minibatch_size = 20
    n_steps = 10

    with nengo_dl.Simulator(net, minibatch_size=minibatch_size) as sim:
        sim.train(data={a: np.random.randn(n_inputs, n_steps, 1),
                        b: np.random.randn(n_inputs, n_steps, 3),
                        ...},
                  ...)

Note that inputs can only be defined for Nodes with no incoming connections (i.e., Nodes with size_in == 0). Any Nodes that don't have data provided will take on the values specified during model construction.

**targets**

Returning to the network equation $$y = f(x, \theta)$$, the goal in optimization is usually to find a set of parameter values such that given inputs $$x$$ and target values $$t$$, an error value $$e = o(y, t)$$ is minimized. These values specify those target values $$t$$.

This works very similarly to defining inputs, except instead of assigning input values to Nodes it assigns target values to Probes. We add {<probe>: <array>, ...} entries to the data dictionary, where <array> has shape (n_inputs, n_steps, probe.size_in). Those target values will be passed to the objective function $$o$$ for each timestep. For example:

    with nengo.Network() as net:
        ...
        ens = nengo.Ensemble(10, 2)
        p = nengo.Probe(ens)

    n_inputs = 1000
    minibatch_size = 20
    n_steps = 10

    with nengo_dl.Simulator(net, minibatch_size=minibatch_size) as sim:
        sim.train(data={..., p: np.random.randn(n_inputs, n_steps, 2)},
                  ...)

Note that these examples use random inputs/targets, for the sake of simplicity. In practice we would do something like targets=my_func(inputs), where my_func is a function specifying what the ideal outputs are for the given inputs.

### optimizer

The optimizer is the algorithm that defines how to update the network parameters during training. Any of the optimization methods implemented in TensorFlow can be used in NengoDL; more information can be found in the TensorFlow documentation.

An instance of the desired TensorFlow optimizer is created (specifying any arguments required by that optimizer), and that instance is then passed to Simulator.train. For example:

    import tensorflow as tf

    with nengo_dl.Simulator(net, ...) as sim:
        sim.train(optimizer=tf.train.MomentumOptimizer(
                      learning_rate=1e-2, momentum=0.9, use_nesterov=True),
                  ...)

### objective

As mentioned, the goal in optimization is to minimize some error value $$e = o(y, t)$$. The objective is the function $$o$$ that computes an error value $$e$$, given $$y$$ and $$t$$. This argument is specified as a dictionary mapping Probes to objective functions, indicating how the output of that probe is mapped to an error value.

The default objective in NengoDL is the standard mean squared error. This will be used if the user doesn't specify an objective.

Users can specify a custom objective by creating a function that implements the $$o$$ function above. Note that the objective is defined using TensorFlow operators. It should accept Tensors representing outputs and targets as input (each with shape (minibatch_size, n_steps, probe.size_in)) and return a scalar Tensor representing the error. This example manually computes mean squared error, rather than using the default:

    import tensorflow as tf

    def my_objective(outputs, targets):
        return tf.reduce_mean((targets - outputs) ** 2)

    with nengo_dl.Simulator(net, ...) as sim:
        sim.train(objective={p: my_objective}, ...)

Some objective functions may not require target values. In this case the function can be defined with one argument:

    def my_objective(outputs):
        ...

Finally, it is also possible to specify None as the objective. This indicates that the error is being computed outside the simulation by the modeller. In this case the modeller should directly specify the output error gradient as the targets value. For example, we could apply the same mean squared error update this way:

    with nengo_dl.Simulator(net, ...) as sim:
        sim.run(...)

        error = 2 * (sim.data[p] - my_targets)

        sim.train(data={..., p: error}, objective={p: None}, ...)

Note that it is possible to specify multiple objective functions like objective={p0: my_objective0, p1: my_objective1}. In this case the error will be summed across the probe objectives to produce an overall error value to be minimized. It is also possible to create objective functions that depend on multiple probe outputs, by specifying objective={(p0, p1): my_objective}. In this case, my_objective will still be passed parameters outputs and targets, but those parameters will be lists containing the output/target values for each of the specified probes.

Simulator.loss can be used to check the loss (error) value for a given objective.

See Objectives for some common objective functions that are provided with NengoDL for convenience.
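Putting the pieces together, the following sketch shows one way these arguments might be combined in a single Simulator.train call (the network, target function, and hyperparameter values here are illustrative placeholders, not recommendations from this documentation):

    import numpy as np
    import tensorflow as tf
    import nengo
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0])              # 1-D input node
        ens = nengo.Ensemble(50, 1)        # illustrative ensemble
        nengo.Connection(inp, ens)
        p = nengo.Probe(ens)

    n_inputs, n_steps = 1000, 10
    inputs = np.random.uniform(-1, 1, size=(n_inputs, n_steps, 1))
    targets = inputs ** 2                  # illustrative target function

    def mse(outputs, targets):
        return tf.reduce_mean((targets - outputs) ** 2)

    with nengo_dl.Simulator(net, minibatch_size=20) as sim:
        sim.train(data={inp: inputs, p: targets},
                  optimizer=tf.train.MomentumOptimizer(
                      learning_rate=1e-2, momentum=0.9),
                  objective={p: mse},
                  n_epochs=10)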
### truncation¶ When optimizing a simulation over time we specify inputs and targets for all $$n$$ steps of the simulation. The gradients are computed by running the simulation forward for $$n$$ steps, comparing the outputs to the targets we specified, and then propagating the gradients backwards from $$n$$ to 0. This is known as Backpropagation Through Time (BPTT). However, in some cases we may not want to run BPTT over the full $$n$$ steps (usually because it requires a lot of memory to store all the intermediate values for $$n$$ steps of gradient calculation). In this case we choose some value $$m < n$$, run the simulation for $$m$$ steps, backpropagate the gradients over those $$m$$ steps, then run the simulation for $$m$$ more steps, and so on until we have run for the total $$n$$ steps. This is known as Truncated BPTT. The truncation argument is used to specify $$m$$, i.e. sim.train(..., truncation=m). If no value is given then full un-truncated BPTT will be performed. In general, truncated BPTT will result in worse performance than untruncated BPTT. Truncation limits the range of the temporal dynamics that the network is able to learn. For example, if we tried to learn a function where input $$x_t$$ should influence the output at $$y_{t+m+1}$$ that would not work well, because the errors from step $$t+m+1$$ never make it back to step $$t$$. More generally, a truncated system has less information about how outputs at $$t$$ will affect future performance, which will limit how well that system can be optimized. As mentioned, the main reason to use truncated BPTT is in order to reduce the memory demands during training. So if you find yourself running out of memory while training a model, consider using the truncation argument (while ensuring that the value of $$m$$ is still large enough to capture the temporal dynamics in the task). ### summaries¶ It is
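Returning to the truncation argument, the sketch below (reusing the placeholder `net`, `inp`, `p`, `inputs`, and `targets` objects from the earlier example, now imagined with a longer simulation length) shows that switching between full and truncated BPTT is only a matter of passing `truncation`:

```python
opt = tf.train.MomentumOptimizer(learning_rate=1e-2, momentum=0.9)

with nengo_dl.Simulator(net, minibatch_size=20) as sim:
    # full (untruncated) BPTT: gradients flow across all n_steps steps,
    # at the cost of storing intermediate values for every step
    sim.train(data={inp: inputs, p: targets}, optimizer=opt, n_epochs=10)

    # truncated BPTT: gradients are backpropagated in windows of 50 steps,
    # reducing memory but discarding dependencies longer than the window
    sim.train(data={inp: inputs, p: targets}, optimizer=opt, n_epochs=10,
              truncation=50)
```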
If you are interested in a video with some additional insight, a proof, and some further examples, have a look here.A number of linear regression for machine learning implementations are available, examples of which include those in the popular Scikit-learn library for Python and the formerly-popular Weka Machine Learning Toolkit.. Iterative Reweighted Least Squares in python. The fit parameters are $A$, $\gamma$ and $x_0$. Analyst 135 (5), 1138-1146 (2010). . The methods and algo-rithms presented here can be easily extended to the complex numbers. Multivariate function fitting. WLS Estimation. Python method: import numpy as np import pandas as pd # import statsmodels. Search online and you might find different rules-of-thumb, like “the highest variability shouldn’t be greater than four times that of the smallest”. Well, the good news is that OLS can handle a certain level of heteroskedasticity. . Moving least squares is a method of reconstructing continuous functions from a set of unorganized point samples via the calculation of a weighted least squares measure biased towards the region around … Notes “leastsq” is a wrapper around MINPACK’s lmdif and lmder algorithms. At Metis, one of the first machine learning models I teach is the Plain Jane Ordinary Least Squares (OLS) model that most everyone learns in high school. Weighted least squares (WLS), also known as weighted linear regression, is a generalization of ordinary least squares and linear regression in which the errors covariance matrix is allowed to be different from an identity matrix.WLS is also a specialization of generalized least squares … I created my own YouTube algorithm (to stop me wasting time), All Machine Learning Algorithms You Should Know in 2021, 5 Reasons You Don’t Need to Learn Machine Learning, Building Simulations in Python — A Step by Step Walkthrough, 5 Free Books to Learn Statistics for Data Science, A Collection of Advanced Visualization in Matplotlib and Seaborn with Examples, As age increases, net worths tend to diverge, As company size increases, revenues tend to diverge, Or, as infant height increases, weight tends to diverge. It builds on and extends many of the optimization methods of scipy.optimize. .8 2.2 Some Explanations for Weighted Least Squares . f(x) = \frac{A \gamma^2}{\gamma^2 + (x-x_0)^2}, In other words we should use weighted least squares with weights equal to $$1/SD^{2}$$. One of the biggest disadvantages of weighted least squares, is that Weighted Least Squares is based on the assumption that the weights are known exactly. And as always, thanks for reading, connecting, and sharing! Make learning your daily ritual. Python Ordinary Least Squares (OLS) Method for Polynomial Fitting. This will affect OLS more than WLS, as WLS will de-weight the variance and its“penalty”. Figure 2 shows the WLS (weighted least squares) regression output. Lecture 24{25: Weighted and Generalized Least Squares 36-401, Fall 2015, Section B 19 and 24 November 2015 Contents 1 Weighted Least Squares 2 2 Heteroskedasticity 4 2.1 Weighted Least Squares as a Solution to Heteroskedasticity . Over on Stackoverflow, I am trying calculate the Weighted Least Squares (WLS) of a data set in a python library called Numpy as compared to using a library called Statsmodels.However, I noticed something very mysterious. least_squares. These examples are extracted from open source projects. 
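To make the numpy-versus-statsmodels comparison above concrete, here is a small self-contained sketch (synthetic data and invented coefficients, not any dataset discussed here) that fits the same heteroskedastic data with ordinary and with weighted least squares, using weights proportional to the inverse variance:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# synthetic heteroskedastic data: the noise standard deviation grows with x
n = 200
x = np.linspace(1, 10, n)
sd = 0.5 * x                      # known (or estimated) noise level per point
y = 2.0 + 3.0 * x + rng.normal(scale=sd)

X = sm.add_constant(x)            # design matrix [1, x]

ols_fit = sm.OLS(y, X).fit()
wls_fit = sm.WLS(y, X, weights=1.0 / sd**2).fit()   # weights = 1/SD^2

print(ols_fit.params)             # unweighted estimates
print(wls_fit.params)             # weighted estimates (downweights noisy points)
```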
Ordinary Least Squares is the simplest and most common estimator in which the two (beta)s are chosen to minimize the square of the distance between the predicted values and the actual values. I don't read python but I've tried to reproduce this result in R and cannot do … Sometime we know that different observations have been measured by different instruments that have some (known or estimated) accuracy. Weighted Least Squares. Least Squares Regression In Python. In particular, I have a dataset X which is a 2D array. As the figure above shows, the unweighted fit is seen to be thrown off by the noisy region. Moreover, we can solve the best estimate x of the unknown resistance given a linear model.In these two situations, we use all of the measurements y to solve the best estimate x.But what about if our measurement data is very large or we must compute the “running estimate” x as the measurements y “stream in”? 7-10. Obviously by picking the constant suitably large you can get the weighting quite accurate. However, OLS is only one of a distinguished family tree: Weighted Least Squares (WLS) is the quiet Squares cousin, but she has a unique bag of tricks that aligns perfectly with certain datasets! 1We use real numbers to focus on the least squares problem. Why does least squares linear regression perform so bad when switching from 2D to 3D line? Many fitting problems (by far not all) can be expressed as least-squares problems. Weighted Least Squares Weighted Least Squares Contents. The Python Scipy library includes a least squares function, which is included in the xlw-SciPy spreadsheet. Least-squares minimization applied to a curve-fitting problem. Need help? The noise is such that a region of the data close to the line centre is much noisier than the rest. The right side of the figure shows the usual OLS regression, where the weights in column C are not taken into account. 1 Weighted Least Squares 1 2 Heteroskedasticity 3 2.1 Weighted Least Squares as a Solution to Heteroskedasticity . A weighted version has now been added: The Alglib library also has a least squares function, including both unweighted and weighted versions: Least squares fitting with Numpy and Scipy nov 11, 2015 numerical-analysis optimization python numpy scipy. Let’s see below how the high outlier is suppressed in WLS. Calculating Least Squares with np.polyfit() function Here, we will use the .polyfit() function from the NumPy package which will perform the least … Coming from the ancient Greek hetero, meaning “different”, and skedasis, meaning “dispersion”, it can also be found in the anglicized “Heteroscedasticity” (notice the additional ‘c’) form. 3.1 Least squares in matrix form E Uses Appendix A.2–A.4, A.6, A.7. When features are correlated and the columns of the design matrix $$X$$ have an approximate linear dependence, the design matrix becomes close to singular and as a result, the least-squares estimate becomes highly sensitive to random errors in the observed target, producing a large variance. a, b = scipy.linalg.lstsq(X, w*signal)[0] I know that signal is the array representing the signal and currently w is just [1,1,1,1,1...]. Weighted least squares is an efficient method that makes good use of small data sets. $$1 Weighted Least Squares 1 2 Heteroskedasticity 3 2.1 Weighted Least Squares as a Solution to Heteroskedasticity . Implementation of the exponentially weighted Recursive Least Squares (RLS) adaptive filter algorithm. Notes “leastsq” is a wrapper around MINPACK’s lmdif and lmder algorithms. 
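The Lorentzian fit mentioned above (parameters $A$, $\gamma$ and $x_0$, with extra noise near the line centre) can be reproduced in outline with `scipy.optimize.least_squares`; the data generation below is an invented stand-in, and the point is only how the weights enter the residuals:

```python
import numpy as np
from scipy.optimize import least_squares

def lorentzian(x, A, gamma, x0):
    return A * gamma**2 / (gamma**2 + (x - x0)**2)

rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 400)
y_true = lorentzian(x, A=5.0, gamma=1.5, x0=0.0)

# noisier near the line centre, as in the example described above
sigma = 0.1 + 0.4 * np.exp(-x**2 / 4)
y = y_true + rng.normal(scale=sigma)

def residuals(params):
    A, gamma, x0 = params
    # weighted residuals: each point is divided by its noise level
    return (lorentzian(x, A, gamma, x0) - y) / sigma

fit = least_squares(residuals, x0=[1.0, 1.0, 0.5])
A_fit, gamma_fit, x0_fit = fit.x
```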
Sums of residuals; squared Euclidean 2-norm for each column in b-a*x.If the rank of a is < N or M <= N, this is an empty array. Want to Be a Data Scientist? . In this section, we will be running a simple demo to understand the working of Regression Analysis using the least squares regression method. This blog on Least Squares Regression Method will help you understand the math behind Regression Analysis and how it can be implemented using Python. The resulting fitted equation from Minitab for this model is: Progeny = 0.12796 + 0.2048 Parent. . That is, Octave can find the parameter b such that the model y = x*b fits data (x,y) as well as possible, assuming zero-mean Gaussian noise. 25.4 Linear Least Squares. - Do a least square fit on this new data set. WLS Estimation. . Adding a custom constraint to weighted least squares regression model. Both Numpy and Scipy provide black box methods to fit one-dimensional data using linear least squares, in the first case, and non-linear least squares, in the
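One detail worth spelling out: the snippet `scipy.linalg.lstsq(X, w*signal)` quoted earlier only rescales the right-hand side. The standard way to reduce a weighted problem to an ordinary one is to scale both the design matrix and the observations by the square root of the weights, which minimizes $\sum_i w_i (y_i - X_i \beta)^2$. A minimal sketch with numpy (invented data):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])   # design matrix for y = b0 + b1*x
sd = 0.1 + x                           # heteroskedastic noise level
y = 1.0 + 2.0 * x + rng.normal(scale=sd)

w = 1.0 / sd**2                        # weights = 1/SD^2
sw = np.sqrt(w)

# weighted least squares as ordinary least squares on rescaled data
beta, res, rank, sv = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(beta)                            # approximately [1.0, 2.0]
```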
–simplify-merges, the graph includes all of the necessary information: .-A---M--. N / / \ / I B R \ / / \ / / ---X-- Notice that since M is reachable from R, the edge from N to M was simplified away. However, N still appears in the history as an important commit because it “pulled” the change R into the main branch. #+end_quote The –simplify-by-decoration option allows you to view only the big picture of the topology of the history, by omitting commits that are not referenced by tags. Commits are marked as !TREESAME (in other words, kept after history simplification rules described above) if (1) they are referenced by tags, or (2) they change the contents of the paths given on the command line. All other commits are marked as TREESAME (subject to be simplified away). ### Bisection Helpers –bisect Limit output to the one commit object which is roughly halfway between included and excluded commits. Note that the bad bisection ref refs/bisect/bad is added to the included commits (if it exists) and the good bisection refs refs/bisect/good-* are added to the excluded commits (if they exist). Thus, supposing there are no refs in refs/bisect/, if #+begin_quote $git rev-list --bisect foo ^bar ^baz outputs midpoint, the output of the two commands $ git rev-list foo ^midpoint $git rev-list midpoint ^bar ^baz would be of roughly the same length. Finding the change which introduces a regression is thus reduced to a binary search: repeatedly generate and test new midpoint’s until the commit chain is of length one. #+end_quote –bisect-vars This calculates the same as –bisect, except that refs in refs/bisect/ are not used, and except that this outputs text ready to be eval’ed by the shell. These lines will assign the name of the midpoint revision to the variable bisect_rev, and the expected number of commits to be tested after bisect_rev is tested to bisect_nr, the expected number of commits to be tested if bisect_rev turns out to be good to bisect_good, the expected number of commits to be tested if bisect_rev turns out to be bad to bisect_bad, and the number of commits we are bisecting right now to bisect_all. –bisect-all This outputs all the commit objects between the included and excluded commits, ordered by their distance to the included and excluded commits. Refs in refs/bisect/ are not used. The farthest from them is displayed first. (This is the only one displayed by –bisect.) This is useful because it makes it easy to choose a good commit to test when you want to avoid to test some of them for some reason (they may not compile for example). This option can be used along with –bisect-vars, in this case, after all the sorted commit objects, there will be the same text as if –bisect-vars had been used alone. ### Commit Ordering By default, the commits are shown in reverse chronological order. –date-order Show no parents before all of its children are shown, but otherwise show commits in the commit timestamp order. –author-date-order Show no parents before all of its children are shown, but otherwise show commits in the author timestamp order. –topo-order Show no parents before all of its children are shown, and avoid showing commits on multiple lines of history intermixed. For example, in a commit history like this: #+begin_quote ---1----2----4----7 \ \ 3----5----6----8--- where the numbers denote the order of commit timestamps, git rev-list and friends with –date-order show the commits in the timestamp order: 8 7 6 5 4 3 2 1. 
With –topo-order, they would show 8 6 5 3 7 4 2 1 (or 8 7 4 2 6 5 3 1); some older commits are shown before newer ones in order to avoid showing the commits from two parallel development track mixed together. #+end_quote –reverse Output the commits chosen to be shown (see Commit Limiting section above) in reverse order. Cannot be combined with –walk-reflogs. ### Object Traversal These options are mostly targeted for packing of Git repositories. –objects Print the object IDs of any object referenced by the listed commits. –objects foo ^bar thus means “send me all object IDs which I need to download if I have the commit object bar but not /foo/”. –in-commit-order Print tree and blob ids in order of the commits. The tree and blob ids are printed after they are first referenced by a commit. –objects-edge Similar to –objects, but also print the IDs of excluded commits prefixed with a “-” character. This is used by *git-pack-objects*(1) to build a “thin” pack, which records objects in deltified form based on objects contained in these excluded commits to reduce network traffic. –objects-edge-aggressive Similar to –objects-edge, but it tries harder to find excluded commits at the cost of increased time. This is used instead of –objects-edge to build “thin” packs for shallow repositories. –indexed-objects Pretend as if all trees and blobs used by the index are listed on the command line. Note that you probably want to use –objects, too. –unpacked Only useful with –objects; print the object IDs that are not in packs. –object-names Only useful with –objects; print the names of the object IDs that are found. This is the default behavior. –no-object-names Only useful with –objects; does not print the names of the object IDs that are found. This inverts –object-names. This flag allows the output to be more easily parsed by commands such as *git-cat-file*(1). –filter=<filter-spec> Only useful with one of the –objects*; omits objects (usually blobs) from the list of printed objects. The <filter-spec> may be one of the following: The form –filter=blob:none omits all blobs. The form –filter=blob:limit=<n>[kmg] omits blobs larger than n bytes or units. n may be zero. The suffixes k, m, and g can be used to name units in KiB, MiB, or GiB. For example, blob:limit=1k is the same as blob:limit=1024. The form –filter=object:type=(tag|commit|tree|blob) omits all objects which are not of the requested type. The form –filter=sparse:oid=<blob-ish> uses a sparse-checkout specification contained in the blob (or blob-expression) <blob-ish> to omit blobs that would not be required for a sparse checkout on the requested refs. The form –filter=tree:<depth> omits all blobs and trees whose depth from the root tree is >= <depth> (minimum depth if an object is located at multiple depths in the commits traversed). <depth>=0 will not include any trees or blobs unless included explicitly in the command-line (or standard input when –stdin is used). <depth>=1 will include only the tree and blobs which are referenced directly by a commit reachable from <commit> or an explicitly-given object. <depth>=2 is like <depth>=1 while also including trees and blobs one more level removed from an explicitly-given commit or tree. Note that the form –filter=sparse:path=<path> that wants to read from an arbitrary path on the filesystem has been dropped for security reasons. Multiple –filter= flags can be specified to combine filters. Only objects which are accepted by every filter are included. 
The form –filter=combine:<filter1>+<filter2>+…<filterN> can also be used to combined several filters, but this is harder than just repeating the –filter flag and is usually not necessary. Filters are joined by + and individual filters are %-encoded (i.e. URL-encoded). Besides the + and % characters, the following characters are reserved and also must be encoded: ~!@#$^&*()[]{}\;“,<>? as well as all characters with ASCII code <= 0x20, which includes space and newline. Other arbitrary characters can also be encoded. For instance, combine:tree:3+blob:none and combine:tree%3A3+blob%3Anone are equivalent. –no-filter Turn off any previous –filter= argument. –filter-provided-objects Filter the list of explicitly provided objects, which would otherwise always be printed even if they did not match any of the filters. Only useful with –filter=. –filter-print-omitted Only useful with –filter=; prints a list of the objects omitted by the filter. Object IDs are prefixed with a “~” character. –missing=<missing-action> A debug option to help with
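As a small illustration of the object filters above (using only options documented here), the following Python sketch drives git rev-list with a blob-size filter and separates the kept object IDs from the omitted ones reported by --filter-print-omitted; the repository path is a placeholder:

```python
import subprocess

def filtered_objects(repo=".", ref="HEAD"):
    """List object IDs reachable from ref, omitting blobs larger than 1 KiB."""
    cmd = [
        "git", "-C", repo, "rev-list",
        "--objects",
        "--filter=blob:limit=1k",
        "--filter-print-omitted",
        ref,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    kept, omitted = [], []
    for line in out.splitlines():
        if not line:
            continue
        # omitted objects are prefixed with "~"; object names (paths) may
        # follow the ID and are dropped here
        oid = line.split()[0].lstrip("~")
        (omitted if line.startswith("~") else kept).append(oid)
    return kept, omitted

# kept, omitted = filtered_objects("/path/to/repo")
```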
and G1 is such that when the magnet M1 moves Free Electricity degree, the other magnet which came in the position where M1 was, it will be repelled by the magnet of Fixed-disk as the magnet on Fixed-disk has moved 360 degrees on the plate above gear G1. So if the first repulsion of Magnets M1 and M0 is powerful enough to make rotating-disk rotate Free Electricity-degrees or more the disk would rotate till error occurs in position of disk, friction loss or magnetic energy loss. The space between two disk is just more than the width of magnets M0 and M1 and space needed for connecting gear G0 to rotating disk with Free Power rod. Now I’ve not tested with actual objects. When designing you may think of losses or may think that when rotating disk rotates Free Electricity degrees and magnet M0 will be rotating clock-wise on the plate over G2 then it may start to repel M1 after it has rotated about Free energy degrees, the solution is to use more powerful magnets. No “boing, boing” … What I am finding is that the abrupt stopping and restarting requires more energy than the magnets can provide. They cannot overcome this. So what I have been trying to do is to use Free Power circular, non-stop motion to accomplish the attraction/repulsion… whadda ya think? If anyone wants to know how to make one, contact me. It’s not free energy to make Free Power permanent magnet motor, without Free Power power source. The magnets only have to be arranged at an imbalanced state. They will always try to seek equilibrium, but won’t be able to. The magnets don’t produce the energy , they only direct it. Think, repeating decimal….. Does the motor provide electricity? No, of course not. It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists we need to be sure we are using the correct phrases and terms. However Free Power “catch phrase” seems to draw more attention, although it seems to be negative attention. You say, that it is not possible to build Free Power magnetic motor, that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions that you see hundreds on the internet, but I would like something that I could BUY and use here in my apartment, like today, or if we have an Ice storm, or have no power for some reason. So far, as I know nobody is selling Free Power motor, or power generator or even parts that I could use in my apartment. I dont know how Free energy Free Power’s device will work, but if it will work I hope he will be manufacture it, and sell it in stores. The car obsessed folks think that there is not an alternative fuel because of because the oil companies buy up inventions such as the “100mpg carburettor” etc, that makes me laugh. The biggest factors stopping alternate fuels has been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping it there. Once developed people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries. No “boing, boing” … What I am finding is that the abrupt stopping and restarting requires more energy than the magnets can provide. They cannot overcome this. 
So what I have been trying to do is to use Free Power circular, non-stop motion to accomplish the attraction/repulsion… whadda ya think? If anyone wants to know how to make one, contact me. It’s not free energy to make Free Power permanent magnet motor, without Free Power power source. The magnets only have to be arranged at an imbalanced state. They will always try to seek equilibrium, but won’t be able to. The magnets don’t produce the energy , they only direct it. Think, repeating decimal….. Of all the posters here, I’m certain kimseymd1 will miss me the most :). Have I convinced anyone of my point of view? I’m afraid not, but I do wish all of you well on your journey. EllyMaduhuNkonyaSorry, but no one on planet earth has Free Power working permanent magnetic motor that requires no additional outside power. Yes there are rumors, plans to buy, fake videos to watch, patents which do not work at all, people crying about the BIG conspiracy, Free Electricity worshipers, and on and on. Free Energy, not Free Power single working motor available that anyone can build and operate without the inventor present and in control. We all would LIKE one to be available, but that does not make it true. Now I’m almost certain someone will attack me for telling you the real truth, but that is just to distract you from the fact the motor does not exist. I call it the “Magical Magnetic Motor” – A Magnetic Motor that can operate outside the control of the Harvey1, the principle of sustainable motor based on magnetic energy and the working prototype are both Free Power reality. When the time is appropriate, I shall disclose it. Be of good cheer. Ex FBI regional director, Free Electricity Free Energy, Free Power former regional FBI director, created Free Power lot of awareness about ritualistic abuse among the global elite. It goes into satanism, pedophilia, and child sex trafficking. Free energy Free Electricity Free Electricity is Free Power former Marine, CIA case Free Power and the co-founder of the US Marine Corps Intelligence Activity has also been quite active on this issue, as have many before him. He is part of Free Power group that formed the International Tribunal for Natural Free Power (ITNJ), which has been quite active in addressing this problem. Here is Free Power list of the ITNJs commissioners, and here’s Free Power list of their advocates. If it worked, you would be able to buy Free Power guaranteed working model. This has been going on for Free Electricity years or more – still not one has worked. Ignorance of the laws of physics, does not allow you to break those laws. Im not suppose to write here, but what you people here believe is possible, are true. The only problem is if one wants to create what we call “Magnetic Rotation”, one can not use the fields. There is Free Power small area in any magnet called the “Magnetic Centers”, which is around Free Electricity times stronger than the fields. The sequence is before pole center and after face center, and there for unlike other motors one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields is the reason Free Power PM drive is very slow, because the fields dont allow kinetic creation by limit the magnetic center distance. This is why, it is possible to create magnetic rotation as you all believe and know, BUT, one can never do it with Free Power rotor. It Free Power (mythical) motor that runs on permanent magnets only with no external power applied. How can you miss that? 
It’s so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Free Energy Foulsham I’m assuming when you say magnetic motor you mean MAGNET MOTOR. That’s like saying democratic when you mean democrat.. They are both wrong because democrats don’t do anything democratic but force laws to create other
\nonumber \\ &= \frac{d}{1 + p_v p_c}\left( \left(\frac{R}{d} - \frac{1}{1 + p_v p_c}\right)\right. \nonumber \\ &\hspace{25mm}+ \left. p_vp_c\left(\frac{R}{d} - \frac{2 + p_v p_c}{1 + p_v p_c}\right)x_2^0 \right) \label{eq:dif_u1} \end{align} and \begin{align} &u_2(x_1, e_2^2) - u_2(x_1, e_2^1) \nonumber \\ &= \frac{d}{1 + p_v p_c}\left( p_v p_c \left(\frac{R}{d} - \frac{p_c}{1 + p_v p_c}\right)\right. \nonumber \\ &\hspace{25mm}+ \left. \left(\frac{R}{d} - \frac{2p_v p_c + 1}{p_v (1 + p_v p_c)}\right)x_1^0 \right). \label{eq:dif_u2} \end{align} From \eqref{eq:dif_u1} and \eqref{eq:dif_u2}, for another miner $\ell \in \mathcal{N} \setminus \{k\}$, the best response $\tilde{\beta}_k(x)$ of miner $k$ changes depending on the value range for $g_{\ell}(R / d)$. We thus consider five cases: $g_{\ell}(R / d) < 0$, $g_{\ell}(R / d) = 0$, $0 < g_{\ell}(R / d) < 1$, $g_{\ell}(R / d) = 1$, and $g_{\ell}(R / d) > 1$. When $0 < g_{\ell}(R/d) < 1$, the sign of the difference between miner $k$'s expect payoff for $s_k = 1$ and $s_k = 0$ changes depending on the value of $x_{\ell}^0$ as follows\footnote{We present the case where $k = 1$ and $\ell = 2$, but the same applies when $k = 2$ and $\ell = 1$.}. \begin{equation} \begin{cases} u_k(e_2^2, x_{\ell}) - u_k(e_2^1, x_{\ell}) < 0 & \mbox{if} \; \; g_{\ell}(R/d) < x_{\ell}^0 \leq 1, \\ u_k(e_2^2, x_{\ell}) - u_k(e_2^1, x_{\ell}) = 0 & \mbox{if} \; \; x_{\ell}^0 = g_{\ell}(R/d), \\ u_k(e_2^2, x_{\ell}) - u_k(e_2^1, x_{\ell}) > 0 & \mbox{if} \; \; 0 \leq x_{\ell}^0 < g_{\ell}(R/d). \end{cases} \nonumber \end{equation} Therefore, the best response $\tilde{\beta}_{k}(x)$ is \begin{equation} \tilde{\beta}_k(x) = \begin{cases} \{e_2^1\} & \mbox{if} \; \; g_{\ell}(R / d) < x_{\ell}^0 \leq 1, \\ X_k & \mbox{if} \; \; x_{\ell}^0 = g_{\ell}(R/d), \\ \{e_2^2\} & \mbox{if} \; \; 0 \leq x_{\ell}^0 < g_{\ell}(R / d). \end{cases}\label{eq:bt_pat3} \end{equation} Similarly, we obtain the best response $\tilde{\beta}_{k}(x)$ depending on the value range for $g_{\ell}(R/d)$ as follows: \begin{itemize} \item When $g_{\ell}(R / d) < 0$, \begin{equation} \tilde{\beta}_k(x) = \{e_2^1\}. \label{eq:bt_pat1} \end{equation} \item When $g_{\ell}(R / d) = 0$, \begin{equation} \tilde{\beta}_k(x) = \begin{cases} X_k & \mbox{if} \; \; x_{\ell}^0 = 0, \\ \{e_2^1\} & \mbox{if} \; \; 0 < x_{\ell}^0 \leq 1. \end{cases}\label{eq:bt_pat2} \end{equation} \item When $g_{\ell}(R / d) = 1$, \begin{equation} \tilde{\beta}_k(x) = \begin{cases} X_k & \mbox{if} \; \; x_{\ell}^0 = 1, \\ \{e_2^2\} & \mbox{if} \; \; 0 \leq x_{\ell}^0 < 1. \end{cases}\label{eq:bt_pat4} \end{equation} \item When $g_{\ell}(R / d) > 1$, \begin{equation} \tilde{\beta}_k(x) = \{e_2^2\}. \label{eq:bt_pat5} \end{equation} \end{itemize} We next derive the range of $R/d$ satisfying $0 < g_k(R/d) < 1, \, k = 1, 2$. From \eqref{eq:gz1} and \eqref{eq:gz2}, it is easily shown that $g_1$ and $g_2$ are monotonically increasing functions. Therefore, we obtain \begin{align} &0 < g_1(R / d) < 1 \Rightarrow \frac{p_c}{1 + p_v p_c} < \frac{R}{d} < \frac{1}{p_v}, \label{eq:g1_range} \\ &0 < g_2(R/d) < 1 \Rightarrow \frac{1}{1 + p_v p_c} < \frac{R}{d} < 1. \label{eq:g2_range} \end{align} Assuming $p_v \geq 1$, the magnitude relationship among $p_c/(1 + p_v p_c)$, $1 / p_v$, $1 / (1 + p_v p_c)$, and $1$ depends on the value of $p_c$, as follows: \begin{enumerate} \item When $p_c \geq 1$, \label{enu:case1} \begin{equation} \frac{1}{1 + p_v p_c} \leq \frac{p_c}{1 + p_v p_c} < \frac{1}{p_v} \leq 1. 
\nonumber \end{equation} \item When $1 - 1/p_v \leq p_c < 1$, \label{enu:case2} \begin{equation} \frac{p_c}{1 + p_v p_c} < \frac{1}{1 + p_v p_c} \leq \frac{1}{p_v} \leq 1. \nonumber \end{equation} \item When $0 < p_c < 1 - 1/p_v$, \label{enu:case3} \begin{equation} \frac{p_c}{1 + p_v p_c} < \frac{1}{p_v} < \frac{1}{1 + p_v p_c} < 1. \nonumber \end{equation} \end{enumerate} We derive the set of Nash equilibria in the case of \ref{enu:case1}). Then, we have nine cases depending on the value range of $R/d$. Table~\ref{tab:rdnash} shows the relation between $R/d$ and the set of Nash equilibria. Therefore, \eqref{eq:NEG2_1} is the set of Nash equilibria. We can similarly prove \eqref{eq:NEG2_2} and \eqref{eq:NEG2_3}. \end{proof} \begin{table*}[t] \centering \caption{Relation between $R/d$ and the set of Nash equilibria.} \begin{tabular}{|c|c|c|c|}\hline $R/d$ value & $\alpha_1$ & $\alpha_2$ & Set of Nash equilibria \\ \hline $0 < R/d < 1 / (1 + p_v p_c)$ & $\alpha_1 < 0$ & $\alpha_2 < 0$ & $\{(e_2^1, e_2^1)\}$ \\ $R/d = 1 / (1 + p_v p_c)$ & $\alpha_1 < 0$ & $\alpha_2 = 0$ & $\{(e_2^1, e_2^1)\}$ \\ $1 / (1 + p_v p_c) < R / d < p_c / (1 + p_v p_c)$ & $\alpha_1 < 0$ & $0 < \alpha_2 < 1$ & $\{(e_2^1, e_2^1)\}$ \\ $R/d = p_c / (1 + p_v p_c)$ & $\alpha_1 = 0$ & $0 < \alpha_2 < 1$ & $\{(e_2^1, e_2^1), (e_2^2, (\gamma_2, 1 - \gamma_2)^{\mathrm{T}})\}, \; \; \gamma_2 \in [0, g_2(p_c/(1 + p_v p_c))]$ \\ $p_c / (1 + p_v p_c) < R / d < 1 / p_v$ & $0 < \alpha_1 < 1$ & $0 < \alpha_2 < 1$ & $\{(e_2^1, e_2^1), (e_2^2, e_2^2), ((\alpha_1, 1 - \alpha_1)^{\mathrm{T}}, (\alpha_2, 1 - \alpha_2)^{\mathrm{T}})\}$ \\ $R/d = 1 / p_v$ & $\alpha_1 = 1$ & $0 < \alpha_2 < 1$ & $\{(e_2^2, e_2^2), (e_2^1, (\delta_2, 1 - \delta_2)^{\mathrm{T}})\}, \; \; \delta_2 \in [g_2(1 / p_v), 1]$ \\ $1 / p_v < R / d < 1$ & $\alpha_1 > 1$ & $0 < \alpha_2 < 1$ & $\{(e_2^2, e_2^2)\}$ \\ $R/d = 1$ & $\alpha_1 > 1$ & $\alpha_2 = 1$ & $\{(e_2^2, e_2^2)\}$ \\ $R/d > 1$ & $\alpha_1 > 1$ & $\alpha_2 > 1$ & $\{(e_2^2, e_2^2)\}$ \\ \hline \end{tabular} \label{tab:rdnash} \end{table*} For a given $p_v$, $p_c$ affects change in the Nash equilibria depending on $R/d$. Fig.~\ref{fig:pcR} shows the $p_c-R/d$ parameter plane where $p_v \geq 1$ is fixed. If the pair $(p_c, R/d)$ is in region~(a), (b), (c), or (d), set $\mbox{NE}(G_2)$ satisfies $\mbox{NE}(G_2) = \{(e_2^2, e_2^2)\}$, $\mbox{NE}(G_2) = \{(e_2^1, e_2^1), (e_2^2, e_2^2), ((\alpha_1, 1 - \alpha_1)^{\mathrm{T}}, (\alpha_2, 1 - \alpha_2)^{\mathrm{T}})\}$, $\mbox{NE}(G_2) = \{(e_2^1, e_2^1)\}$, or $\mbox{NE}(G_2) = \{(e_2^1, e_2^2)\}$, respectively\footnote{$\mbox{NE}(G_2)$ for the boundary between regions~(a) and (b) is given by the fourth equation in \eqref{eq:NEG2_1} and the fourth equation in \eqref{eq:NEG2_2}. Similarly, $\mbox{NE}(G_2)$ for the boundaries of regions~(b) and (c), (c) and (d), and (d) and (a) are given by the second equation in \eqref{eq:NEG2_1} if $p_c \geq 1$ and the second equation in \eqref{eq:NEG2_2} if $1 - 1/p_v < p_c < 1$, the second equation in \eqref{eq:NEG2_3}, and the fourth equation in \eqref{eq:NEG2_3}, respectively. $\mbox{NE}(G_2)$ when the pair $(p_c, R/d) = (1 - 1/p_v, 1/p_v)$ is given by $\mbox{NE}(G_2) = \{((\varepsilon_1, 1 - \varepsilon_1)^{\mathrm{T}}, e_2^2), (e_2^1, (\delta_2, 1 - \delta_2)^{\mathrm{T}})\}$, where $\varepsilon_1 \in [0, 1]$ and $\delta_2 \in [0, 1]$.}. 
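For completeness, the ordering used in case \ref{enu:case1}) can be checked directly: with $p_v \geq 1$ and $p_c \geq 1$,
\begin{align}
p_c \geq 1 &\Rightarrow \frac{1}{1 + p_v p_c} \leq \frac{p_c}{1 + p_v p_c}, \nonumber \\
p_v p_c < 1 + p_v p_c &\Rightarrow \frac{p_c}{1 + p_v p_c} < \frac{1}{p_v}, \nonumber \\
p_v \geq 1 &\Rightarrow \frac{1}{p_v} \leq 1, \nonumber
\end{align}
and the orderings in cases \ref{enu:case2}) and \ref{enu:case3}) follow from the same elementary manipulations.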
\begin{figure}[tb] \centering \includegraphics[clip, width = 7.5cm]{./fig/theo/pcR_1.eps} \caption{The $p_c-R/d$ parameter plane where $p_v \geq 1$ is fixed.} \label{fig:pcR} \end{figure} Figs.~\ref{fig:eq_1}--\ref{fig:eq_3} show the relation between $R/d$ and $x_k^0$ ($k = 1, 2$) in Nash equilibria. The blue (left) and red (right) lines represent values for $x_1^0$ and $x_2^0$, respectively. \begin{figure}[tb] \centering \includegraphics[clip, width = 7.5cm]{./fig/theo/theo_2_0.8.eps} \caption{The case when $(p_v, p_c) = (2, 0.8)$.} \label{fig:eq_1} \end{figure} Fig.~\ref{fig:eq_1} shows the case where $p_c > 1 - 1/p_v$ is fixed. Consider the case where $1 - 1/p_v < p_c < 1$. The pure strategy profile $s = (0, 0)$ is a Nash equilibrium if $R/d$ is smaller than $1/p_v$. If $R/d$ is larger than $1/p_v$, the pure Nash equilibrium $s = (0, 0)$ disappears and only the pure strategy profile $s = (1, 1)$ is a Nash equilibrium. The pure strategy profile $s = (1, 1)$ is a Nash equilibrium if $R/d$ is larger than $1/(1 + p_v p_c)$. If $R/d$ is smaller than $1/(1 + p_v p_c)$, the pure Nash equilibrium $s = (1, 1)$ disappears and only the pure strategy profile $s = (0, 0)$ is a Nash equilibrium. This change in Nash equilibria due to change in $R/d$ implies that when the mining reward exceeds some value, all miners will decide to participate in the mining, after which they continue for a while even if the reward decreases to the boundary of region~(b). Thus, a hysteresis phenomenon exists in the region. Moreover,
the \expr{matched(X)} method, where X is the number of a group defined by regular expression pattern: \haxe{assets/ERegGroups.hx} Note that group numbers start with 1 and \expr{r.matched(0)} will always return the whole matched substring. The \expr{r.matchedPos()} will return the position of this substring in the original string: \haxe{assets/ERegMatchPos.hx} Additionally, \expr{r.matchedLeft()} and \expr{r.matchedRight()} can be used to get substrings to the left and to the right of the matched substring: \haxe{assets/ERegMatchLeftRight.hx} \subsection{Replace} \label{std-regex-replace} A regular expression can also be used to replace a part of the string: \haxe{assets/ERegReplace.hx} We can use \expr{\$X} to reuse a matched group in the replacement: \haxe{assets/ERegReplaceGroups.hx} \subsection{Split} \label{std-regex-split} A regular expression can also be used to split a string into several substrings: \haxe{assets/ERegSplit.hx} \subsection{Map} \label{std-regex-map} The \expr{map} method of a regular expression object can be used to replace matched substrings using a custom function. This function takes a regular expression object as its first argument so we may use it to get additional information about the match being done and do conditional replacement. For example: \haxe{assets/ERegMap.hx} \subsection{Implementation Details} \label{std-regex-implementation-details} Regular Expressions are implemented: \begin{itemize} \item in JavaScript, the runtime is providing the implementation with the object RegExp. \item in Neko and C++, the PCRE library is used \item in Flash, PHP, C\# and Java, native implementations are used \item in Flash 6/8, the implementation is not available \end{itemize} \section{Math} \label{std-math} Haxe includes a floating point math library for some common mathematical operations. Most of the functions operate on and return \type{floats}. However, an \type{Int} can be used where a \type{Float} is expected, and Haxe also converts \type{Int} to \type{Float} during most numeric operations (see \Fullref{types-numeric-operators} for more details). Here are some example uses of the math library: \haxe{assets/MathExample.hx} \paragraph{Related content} \begin{itemize} \item See the \href{https://api.haxe.org/Math.html}{Math API documentation} for all available functions. \item \href{http://code.haxe.org/tag/math.html}{Haxe snippets and tutorials about math} in the Haxe Code Cookbook. \end{itemize} \subsection{Special Numbers} \label{std-math-special-numbers} The math library has definitions for several special numbers: \begin{itemize} \item NaN (Not a Number): returned when a mathematically incorrect operation is executed, e.g. Math.sqrt(-1) \item POSITIVE_INFINITY: e.g. divide a positive number by zero \item NEGATIVE_INFINITY: e.g. divide a negative number by zero \item PI : 3.1415... \end{itemize} \subsection{Mathematical Errors} \label{std-math-mathematical-errors} Although neko can fluidly handle mathematical errors, like division by zero, this is not true for all targets. Depending on the target, mathematical errors may produce exceptions and ultimately errors. \subsection{Integer Math} \label{std-math-integer-math} If you are targeting a platform that can utilize integer operations, e.g. integer division, it should be wrapped in \emph{Std.int()} for improved performance. The Haxe Compiler can then optimize for integer operations. 
An example: \begin{lstlisting} var intDivision = Std.int(6.2/4.7); \end{lstlisting} \todo{I think C++ can use integer operatins, but I don't know about any other targets. Only saw this mentioned in an old discussion thread, still true?} \subsection{Extensions} \label{std-math-extensions} It is common to see \Fullref{lf-static-extension} used with the math library. This code shows a simple example: \haxe{assets/MathStaticExtension.hx} \haxe{assets/MathExtensionUsage.hx} \section{Lambda} \label{std-Lambda} \define{Lambda}{define-lambda}{Lambda is a functional language concept within Haxe that allows you to apply a function to a list or \tref{iterators}{lf-iterators}. The Lambda class is a collection of functional methods in order to use functional-style programming with Haxe.} It is ideally used with \expr{using Lambda} (see \tref{Static Extension}{lf-static-extension}) and then acts as an extension to \type{Iterable} types. On static platforms, working with the \type{Iterable} structure might be slower than performing the operations directly on known types, such as \type{Array} and \type{List}. \paragraph{Lambda Functions} The Lambda class allows us to operate on an entire \type{Iterable} at once. This is often preferable to looping routines since it is less error prone and easier to read. For convenience, the \type{Array} and \type{List} class contains some of the frequently used methods from the Lambda class. It is helpful to look at an example. The exists function is specified as: \begin{lstlisting} static function exists( it : Iterable, f : A -> Bool ) : Bool \end{lstlisting} Most Lambda functions are called in similar ways. The first argument for all of the Lambda functions is the \type{Iterable} on which to operate. Many also take a function as an argument. \begin{description} \item[\expr{Lambda.array}, \expr{Lambda.list}] Convert Iterable to \type{Array} or \type{List}. It always returns a new instance. \item[\expr{Lambda.count}] Count the number of elements. If the Iterable is a \type{Array} or \type{List} it is faster to use its length property. \item[\expr{Lambda.empty}] Determine if the Iterable is empty. For all Iterables it is best to use the this function; it's also faster than compare the length (or result of Lambda.count) to zero. \item[\expr{Lambda.has}] Determine if the specified element is in the Iterable. \item[\expr{Lambda.exists}] Determine if criteria is satisfied by an element. \item[\expr{Lambda.indexOf}] Find out the index of the specified element. \item[\expr{Lambda.find}] Find first element of given search function. \item[\expr{Lambda.foreach}] Determine if every element satisfies a criteria. \item[\expr{Lambda.iter}] Call a function for each element. \item[\expr{Lambda.concat}] Merge two Iterables, returning a new List. \item[\expr{Lambda.filter}] Find the elements that satisfy a criteria, returning a new List. \item[\expr{Lambda.map}, \expr{Lambda.mapi}] Apply a conversion to each element, returning a new List. \item[\expr{Lambda.fold}] Functional fold, which is also known as reduce, accumulate, compress or inject. \end{description} This example demonstrates the Lambda filter and map on a set of strings: \begin{lstlisting} using Lambda; class Main { static function main() { var words = ['car', 'boat', 'cat', 'frog']; var isThreeLetters = function(word) return word.length == 3; var capitalize = function(word) return word.toUpperCase(); // Three letter words and capitalized. 
trace(words.filter(isThreeLetters).map(capitalize)); // [CAR,CAT] } } \end{lstlisting} This example demonstrates the Lambda count, has, foreach and fold function on a set of ints. \begin{lstlisting} using Lambda; class Main { static function main() { var numbers = [1, 3, 5, 6, 7, 8]; trace(numbers.count()); // 6 trace(numbers.has(4)); // false // test if all numbers are greater/smaller than 20 trace(numbers.foreach(function(v) return v < 20)); // true trace(numbers.foreach(function(v) return v > 20)); // false // sum all the numbers var sum = function(num, total) return total += num; trace(numbers.fold(sum, 0)); // 30 // get highest number trace(numbers.fold(Math.max, numbers[0])); // 8 // get lowest number trace(numbers.fold(Math.min, numbers[0])); // 1 } } \end{lstlisting} \paragraph{Related content} \begin{itemize} \item See the \href{https://api.haxe.org/Lambda.html}{Lambda API documentation} for all available functions. \end{itemize} \section{Template} \label{std-template} Haxe comes with a standard template system with an easy to use syntax which is interpreted by a lightweight class called \href{https://api.haxe.org/haxe/Template.html}{haxe.Template}. A template is a string or a file that is used to produce any kind of string output depending on the input. Here is a small template example: \haxe{assets/Template.hx} The console will trace \ic{My name is Mark, 30 years old}. \paragraph{Expressions} An expression can be put between the \ic{::}, the syntax allows the current possibilities: \begin{description} \item[\ic{::name::}] the variable name \item[\ic{::expr.field::}] field access \item[\ic{::(expr)::}] the expression expr is evaluated \item[\ic{::(e1 op e2)::}] the operation op is applied to e1 and e2 \item[\ic{::(135)::}] the integer 135. Float constants are not allowed \end{description} \paragraph{Conditions} It is possible to test conditions using \ic{::if flag1::}. Optionally, the condition may be followed by \ic{::elseif flag2::} or \ic{::else::}. Close the condition with \ic{::end::}. \lang{none}\begin{lstlisting} ::if isValid:: valid ::else:: invalid ::end:: \end{lstlisting} Operators can be used but they don't deal with operator precedence. Therefore it is required to enclose each operation in parentheses \ic{()}. Currently, the following operators are allowed: \ic{+}, \ic{-}, \ic{*}, \ic{/}, \ic{>}, \ic{<}, \ic{>=}, \ic{<=}, \ic{==}, \ic{!=}, \ic{\&\&} and \ic{||}. For example \ic{::((1 + 3) == (2 + 2))::} will display true. \lang{none}\begin{lstlisting} ::if (points == 10):: Great! ::end:: \end{lstlisting} To compare to a string, use double quotes \ic{"} in the template. \lang{none}\begin{lstlisting} ::if (name == "Mark"):: Hi Mark ::end:: \end{lstlisting} \paragraph{Iterating} Iterate on a structure by using \ic{::foreach::}. End the loop with \ic{::end::}. \lang{xml}\begin{lstlisting} Name Age ::name:: ::age:: ::foreach users:: ::end:: \end{lstlisting} \paragraph{Sub-templates} To include templates in other templates, pass the sub-template result string as a parameter. \begin{lstlisting}
area measurements indicate a high degree of proportional regularity. As noted, for any given specimen, the relationship between the maximum width at the ears and the minimum width across the stem is extremely constant (rs=.99). Maximum stem thickness is less highly correlated with other stem dimensions, with rs=.75 with maximum width across the ears and rs=.77 with minimum width across the stem. These results suggest that beyond the desired physical size of a stem, i.e., the appropriate width and thickness, that pro- portion or shape was also critical to the haft mechanism. As expected, maximum length is poorly correlated to the basal area meas- ures (Table 2). There is a moderately high relationship with blade width TAMPA BAY (rs=.66), indicating that longer points have some tendency to have wider blades One result which was not anticipated is the very high relationship of maximum blade width with maximum width at the ears (rs=.89) and minimum width across the stem (rs=.91) (Table 2). This indicates that the width of the blade did not change much during blade reduction since blade width matches the stability of the haft area, the latter having been argued to be unchanged since the preform. Table 2. Spearman Rank Correlation Coefficients Between Attributes Maximum = 11 = 67.95 = 17.99 Maximum n = 19 x = 30.6 s = 12.49 r = .66 s n = 11 p < .02 Max. r = .42 s n = 11 (n.s.) r = .89 n = 18 (p < .0001) r = .39 s n = 11 (n.s.) r = .71 s n = 18 (p < .001) r = .75 n = 26 (p < .0001) Max. Stem Thick Functional Imolications The current primary method of determining the uses of stone tools is through microscopic analysis of utilized edges (e.g., Keeley 1980). In the present study, a microscopic analysis was not performed due to time limita- tions. Furthermore, it was suspected that owing to the nature and condition of the lithic raw material, such an approach would not be very rewarding. This is not to say that lithic materials from Florida are not amenable to Max. Width Ears n = 26 x = 30.6 s = 11.62 Max. Stem Thick n = 26 x = 7.03 s = 2.27 Max. Length Min. Width @ stem n = 26 x = 28.65 s = 11.60 r = .45 s n = 11 (n.s.) .91 18 .0001) Max. Width Ears .99 26 .0001) .77 26 .0001) GOODYEAR, UPCHURCH, BROOKS AND GOODYEAR microwear analysis. However, sophisticated studies 'that control for the re- action of these cherts to use and the chemical and physical alterations these immature cherts sustained subsequent to their deposition in the archeological record are only in their infancy (Purdy 1981). Macroscopically, there appears to be a general light dulling on the blade margins of many of the specimens. Crisp, sharp serrated edges such as that possessed by two of the Simpsons (Fig. 3, h,k), were not common in our sample. It may be significant that these two specimens were from upland terrestrial sites and 18 of the 27 cases are from dredges, fills and beaches. Although none of the specimens is water worn or tumbled, the fact that they came from inundated environments may affect the preservation of fine micro edges. Althouc only a few have whole undamaged tips, in those cases where the tip is intact, care seems to have been taken to make it sharp. In a few cases, however, tips seem somewhat rounded presumably from use. Extreme weathering on a few of the specimens prevents any sort of wear analysis (Table 1). Some observations relative to use can be preferred, however, that rest on morphological and statistical patterns. 
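The Spearman rank coefficients discussed above are simple to compute on comparable measurement data; purely as a modern illustration (the values below are placeholders, not the Tampa Bay measurements):

```python
import numpy as np
from scipy.stats import spearmanr

# placeholder measurements (mm) for a set of specimens; in the study these
# would be maximum width at the ears and minimum width across the stem
max_width_ears = np.array([38.1, 25.4, 41.0, 30.2, 22.8, 35.6])
min_width_stem = np.array([36.0, 23.9, 39.2, 28.7, 21.5, 33.8])

rs, p_value = spearmanr(max_width_ears, min_width_stem)
print(f"rs = {rs:.2f}, p = {p_value:.4f}")
```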
It is significant that in every case where a tip survives, care was taken in both the manufacture and reworking to maintain a symmetrical blade. The skewing of blade margins as they form a tip, a pattern common in later Florida stemmed bifaces, is nonexistent. This symmetry is in keeping with Paleo-Indian point outlines elsewhere in the East. Although reworking and resharpening of the blade is obvious from many fluted point sites, care was taken to maintain a sharp tip and avoid a break in the blade outline as it joins the stem or haft area. This stands in contrast with later Dalton and notched point forms of the Early Archaic, the hallmark of which is intensive resharpening of blade edges, often leaving a shoulder (Goodyear 1982). In this study, overall length is poorly correlated to any other attribute, indicating that reduction in the length of the blade was relatively common. Thus, the blade, particularly the tip, would appear to be the area of the tool receiving the greatest maintenance, hence, the area receiving the greatest use. In this way, these lanceolates are like other eastern U.S. Paleo-Indian points. Among the latter, variability in length is usually the greatest compared to other attributes (e.g., Gardner and Verrey 1979). A powerful clue as to projectile use is the presence or absence of distal impact fractures. On the High Plains, many of the hafted bifaces used for projectiles or thrusting implements possess flute-like scars on the blade face and lateral burin scars on blade margins which originate from the distal end (e.g., Frison 1978). None of the specimens in this study exhibited such impact scars. Such scars have been noticed on two Suwannee points in the collection of Ben Waller. Obviously, re-tipping of broken blades would tend to erase any evidence of impact fractures, assuming that a point could be salvaged. An example of a probable re-tipped Simpson is pictured in Figure 3 (k). The final data on the function(s) of the lanceolates pertains to their overall size. The Suwannee, especially the larger ones (Fig. 2), give the impression of being used as heavy cutting tools. The relatively large width of their blades and stems would not seem conducive to the penetration required for spearing and thrusting. Smaller Suwannees with identical forms (Fig. 3) occur at the lower and middle ranges of the size spectrum (Fig. 7) and would not be eliminated as projectile points based on size. The smaller sizes of the Simpson and more slender lanceolate forms such as Beaver Lake and Santa Fe, and the lozenge-shaped specimens (Figs. 3 and 4), suggest that projectile functions would be possible.

[Figure 7. Scattergram showing distribution of various lanceolate types along a size axis (minimum width across stem, mm); symbols: P = preform, O = Suwannee, S = Simpson, * = other lanceolates.]

The variability in the size of the stem width of the Suwannees versus the Simpson and smaller lanceolates (Fig. 7) could perhaps be a function of the hafting mechanisms of each. The high variation in Suwannees might reflect a wooden haft, since wood is easy to shape and can be worked to any size. The small widths of the other forms, as well as their tighter statistical range, might reflect hafts based on bone, antler or ivory, where it is easier to chip stone to match natural diameters of more intractable materials (cf. Judge 1973).
A larger sample of all of these forms will be necessary in order to document impact fractures, use-wear from cutting and scraping, patterns in basal grinding, and tip condition by type. For example, among the 51 whole specimens observed in Ben Waller's collection, the smaller, more delicate examples only had light grinding or rubbing on their stems, while the large Suwannees were thoroughly ground. Heavier grinding might be necessary where a biface is used in a cutting mode (cf. Goodyear 1974:32). Private collections with large sample sizes provide data adequate to the investigation of these patterns and suspected associations of attributes. A detailed microwear study which cross-cuts all size and shape factors would be a next step, since multiple or secondary functions seem likely for these curated implements. Multiple use and recycling is a pattern typical of eastern United States Paleo-Indian lithic technologies (Goodyear 1979).

Raw Material Analysis

The study of types and sources of lithic raw material has proven to be a highly productive area of research in archeology. Paleo-Indian lithic raw material patterns are distinctive in that high-quality, easily flaked cryptocrystalline silicates were the focus of their technology (Wilmsen 1970; Gardner 1974). Gardner (1974) has noted that, geographically, the density of fluted points is dependent on the availability of cryptocrystalline sources. His point is well borne out in Florida, as the distribution of Suwannee points tracks very neatly the chert exposures of the state (Waller and Dunbar 1977:Fig. 1; Purdy 1981:Maps 1 and 6). Another fascinating characteristic of North American Paleo-Indian chipped stone technologies is the long distance transport of lithic artifacts, resulting in the
$v_B>0$. When an edge of the operator reaches the end of a finite spin chain, then the operator stops spreading, allowing the region of the operator adjacent to this end to become maximally entangled, as illustrated in Fig.~\ref{fig:S_op_cartoon} (Right). We have explored the entanglement of spreading operators numerically in the quantum chaotic Ising spin chain with longitudinal and transverse fields, Eq.~(\ref{eq:ham}), with $L$ sites for $L$ up to 14. We diagonalize this Hamiltonian exactly to obtain the operator dynamics. We will present results for the spreading of the initially local operators $Z_1$ and $Z_{L/2}$, one of which starts near the end of the chain and the other near the center. Other initially local operators behave essentially the same as these two examples. The behavior of $\hat S(x,t)$ for the operator $Z_1(t)$ in a chain of length $L=14$ is shown in Fig.~\ref{fig:SZ1}. (The figure shows $t=1,2,\ldots, 10$ and $t=100$.) By starting the operator at the end of the chain, we are able to watch it spread in one direction over a distance of $(L-1)$ sites. The operator spreads across the chain, while promptly getting locally maximally entangled near the end of the chain where it started. This sets up the spreading profile with a roughly linear $\hat S$ vs. $x$ over the central region of the chain, and a steady production of entanglement. \begin{figure}[t] \centering{ \includegraphics[width=\linewidth]{SZmid_L14_gridon} } \caption{ Operator entanglement of the operator $Z_{L/2}(t)$ which starts near the centre of the chain, for times ${t=1,2,\ldots,10}$ and $t=100$.} \label{fig:SZmid} \end{figure} We obtain the steady-state slope $s_\text{spread} = |\partial \hat S/\partial x|$ by measuring the slope of the right-hand linear section of the entanglement profile. The $L=14$ data at $t=L/2$ shows a slope $s_\text{spread} \sim 0.87$. Appendix~\ref{app:extrapolationsspr} compares results for $L=8,10,12,14$ to give an idea of the finite-size effects. We estimate that $s_\text{spread}$ lies in the range \begin{equation} s_\text{spread} \in (0.9,1.0). \end{equation} This is well below the maximal entanglement of \textit{two} bits per site, which is attained in the long time limit (after reaching both ends of the chain). The entanglement of $Z_{L/2}(t)$ is shown in Fig.~\ref{fig:SZmid}. The features are similar but finite size effects set in sooner. \subsection{Spacetime picture} Let us consider $\hat S(x,t)$ for a spreading, initially localized operator in the minimal surface picture. We assume that the same effectively local description, in terms of a line or surface tension $\mathcal{E}(v)$, applies within the doubled spacetime patch representing the action of both $U(t)^\dag$ and $U(t)$ in the Heisenberg evolution of the operator. We further assume that the effects of operator spreading can be taken into account simply by ``truncating'' the circuit to the lightcone defined by $v_B$ (see Fig.~\ref{fig:spreadingoperatorcut}). That is, we assume that unitaries in $U(t)$ outside this lightcone effectively cancel with their partner in $U(t)^\dag$ to leave the identity. \begin{figure}[t] \centering{ \includegraphics[width=0.5\linewidth]{spreadingoperatorcut} } \caption{ Minimization for spreading operator giving Eq.~\ref{spreadingoperatorminimization}.} \label{fig:spreadingoperatorcut} \end{figure} By symmetry, the minimal cut configuration determining $\hat S(x,t)$ for $x>0$ is then that shown in Fig.~\ref{fig:spreadingoperatorcut}. 
This gives Eq.~\ref{operator_pyramid}, with \begin{equation}\label{spreadingoperatorminimization} s_\text{spread} = 2 s_\text{eq} \min_v \frac{\mathcal{E}(v)}{v+v_B}, \end{equation} where $v$ is the inverse slope of the non-vertical sections. This is equivalent by (\ref{legendre_transform}) to the expression above in terms of $\Gamma(s)$. For an operator initiated at the origin we may consider the entanglement of the set of sites within a distance $r$ from the origin. At short times this is just twice the value above (since we have two separate cuts of the type discussed above) while at late times the entanglement saturates to { $4s_\text{eq} r$}. This implies that this set of $2r$ sites is fully entangled with the exterior if ${r<v_\text{core} t}$, where { $v_\text{core} = v_B s_\text{spread}/(2 s_\text{eq}+s_\text{spread})$}. The speed $v_\text{core}$ sets the size of the ``fully-entangled'' core of the operator. The entanglement $S(0,t)$ across the midpoint of an operator started at 0 in an infinite system is ${S(0,t)=s_\text{spread} v_B t}$. Therefore storing $A(t)$ in matrix product operator form is less expensive than storing a ``thermalized'' operator of the same size, which has entanglement $2 s_\text{eq} v_B t$ across its midpoint. However $s_\text{spread} v_B t$ is still always larger than the \textit{state} entanglement generated by a quench from a product state in the same amount of time, $s_\text{eq} v_E t$ (see below). Therefore to compute the expectation value $\<A(t)\>$ numerically following a quench from a product state it is likely to be more efficient to use the Schrodinger picture than the Heisenberg picture (time evolving the state rather than the operator). This should be contrasted with certain integrable chains (including Ising and XY and perhaps XXZ), in which the entanglement of a spreading operator grows only logarithmically with time \cite{prosenpizorn, prosenpizorn2,dubail}. For those systems, storing the evolving operator in matrix product form is much more efficient than storing the evolving state. In general Eq.~\ref{eq:vbconstraint} gives the bounds $s_\text{spread} / s_\text{eq} \leq \min \{ 2 v_E/v_B, 1\}$ and $s_\text{spread}/ s_\text{eq} \geq 2 v_E / (v_E + v_B)$. For the random circuit in Eq.~\ref{randomstructureexample} these bounds read $2/3 \leq s_\text{spread} / s_\text{eq} \leq 1$, and the actual value is $s_\text{spread}/s_\text{eq} = 2 (\sqrt 2 - 1)\simeq 0.83$, smaller than but similar to the value we find numerically for the nonintegrable Ising model. The picture above generalizes directly to higher dimensions. For an initially local operator in a rotationally invariant system we must solve a membrane minimization problem in a cone-shaped spacetime region. \section{Extensions} \subsection{Higher-gradient corrections} \label{higher_gradient} So far we have discussed the leading order coarse-grained entanglement dynamics, but subleading effects are needed to understand the more detailed features of our numerics. It is natural to expect that in many situations the dominant such effects will be described by higher spatial derivative corrections to Eq.~\ref{eq:state} and the comparable formulas for operators. 
Explicit calculations for the higher Renyi entropies, and for the von Neumann entropy in certain limits, show that such higher-derivative corrections are present for random unitary circuits (where the presence of randomness also leads to universal subleading fluctuations).\cite{nahum,zhounahum} The first such subleading term is $\partial^2 S/\partial x^2$ with a coefficient that depends on $\partial S/\partial x$ (which is not necessarily small in this regime). In this section we argue that such higher-derivative corrections explain differences in finite-time entropy growth rates for the various initial states/operators we have considered. To begin with let us compare the state entanglement for two different initial conditions. First, the entanglement following a quench from an initial unentangled product state, $\Psi = \otimes_i \Psi_i$, which we denote $S_{\otimes \Psi}(x,t)$. The leading order dynamics is Eq.~\ref{eq:state} with the initial condition ${S_{\otimes \Psi}(x,0) = 0}$. Second, the state entanglement following a quench from an initial state $\Psi= \Psi_A\otimes \Psi_B$ which is the product of independent Page--random states in the left and right halves of the system. The initial ${S_{\Psi_A\otimes\Psi_B}(x,0)}$ now has a two-pyramid structure. For $t< L/(4 v_B)$, ${S_{\Psi_A\otimes\Psi_B}(x,0)}$ resembles Fig.~\ref{fig:joiningprocess}. In both cases $\partial S/\partial x|_{x=L/2}$ vanishes for $t>0$, so in the leading order treatment the entanglement across the central bond grows at the same rate, \begin{align} S_{\otimes \Psi}(0,t) & \sim S_{\Psi_A\otimes\Psi_B}(0,t) \sim s_\text{eq} v_E t. \end{align} However while the product state initial condition gives a flat entanglement profile, the two-pyramid initial condition gives a positive curvature $\partial^2S_{\Psi_A\otimes\Psi_B}/\partial x^2$ at the central bond, which (from the leading order dynamics) decreases like $1/t$. Therefore if the first subleading correction to $\partial S/\partial t$ is $\partial^2S/\partial x^2$ with a positive coefficient we expect $\partial S_{\Psi_A\otimes\Psi_B}/\partial t$ to tend to $s_\text{eq} v_E$ from above like $1/t$. The available system sizes do not allow us to check this power law, but we do find that $\partial S_{\Psi_A\otimes\Psi_B}/\partial t$ is greater than $\partial S_{\otimes \Psi}/\partial t$ at early times. The time derivatives are shown in Fig.~\ref{fig:joiningprocess}, along with the lattice curvature $(\partial^2 S/ \partial x^2)_\text{lattice}$ at $x=L/2$.\footnote{ $(\partial^2 S / \partial x^2)_\text{lattice}\equiv {\widetilde S_U(L/2+1) - 2 \widetilde S_U(L/2) + \widetilde S_U(L/2-1)}$.} \begin{figure}[t] \centering{ \includegraphics[width=\linewidth]{dSdt_Sxpm1_3protocols} } \caption{ Growth rate $\partial S/\partial t$ of entanglement across the central bond (left), and lattice approximation to $\partial^2S/\partial x^2$ (right), for three protocols discussed in Sec.~\ref{higher_gradient}: an initial
front-view RGB images. The driving conditions include various lighting (daytime and nighttime), lane occlusions (up to 6 occluded lane lines), and road structures (merging, diverging, curved lanes). In addition, we provide the development kits for K-Lane, including the annotation, visualization, training, and benchmarking tools. We also introduce a baseline network for Lidar lane detection, which we term LLDN-GFC. LLDN-GFC utilizes self-attention mechanisms to extract lane features via global correlation, and shows superior performance compared to the conventional CNN-based LLDNs. In addition, we show the importance of Lidar lane detection networks, which suffer only a small performance degradation between daytime and nighttime detection, in contrast to camera-based lane detection networks. As such, we expect this work to pave the way for further studies in the field of Lidar lane detection and to improve the safety of autonomous driving.
\section*{Acknowledgment} This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C3008370).
\crefname{section}{Sec.}{Secs.}
\Crefname{section}{Section}{Sections}
\Crefname{table}{Table}{Tables}
\crefname{table}{Tab.}{Tabs.}
\newcounter{appdxTableint}
\newcounter{appdxFigureint}
\newcommand\tabcounterint{%
\refstepcounter{appdxTableint}%
\renewcommand{\thetable}{\arabic{appdxTableint}}%
}
\newcommand\figcounterint{%
\refstepcounter{appdxFigureint}%
\renewcommand{\thefigure}{\arabic{appdxFigureint}}%
}
\setcounter{appdxFigureint}{7}
\setcounter{appdxTableint}{2}
{\Large\noindent\textbf{Appendix}} \vspace{2mm} We provide a detailed description of the K-Lane dataset and the development kits (devkits), and the detailed structure of the proposed LLDN-GFC with its CNN-based counterparts, in Sections A and B, respectively. In addition, Section C shows an ablation study of the network hyper-parameters of the proposed LLDN-GFC (i.e., Proj28-GFC-T3), its low-computation alternative (i.e., Pillars-GFC-M5), and the counterparts. Furthermore, Section D shows qualitative lane detection results for K-Lane, and visualization of both the feature heatmaps and the attention scores. Lastly, Section E shows the comparison between LLDN-GFC and heuristic lane detection methods. \vspace{5mm} {\large\noindent\textbf{A. K-Lane and Devkits}} \vspace{2mm} Section A contains technical details that may help researchers in using the K-Lane dataset and devkits. \vspace{2mm} {\large\noindent\textbf{A.1. Details of K-Lane and Devkits}} \vspace{2mm} In this section, we present three additional details about K-Lane: the sequence distribution, the dataset composition, and the criteria for the driving-condition annotations. {\noindent\textbf{Sequence Distribution.}} The K-Lane dataset consists of fifteen sequences with different sets of road conditions. The details of the sequences are shown in Table \ref{tab:sequence}. For the test data, we provide additional driving-condition annotations for each frame (i.e., curve, occlusion, merging, and number of lanes) with the annotation tool shown in Section A.2. {\noindent\textbf{Conditions Criteria.}} To evaluate the LLDN performance depending on data characteristics, we provide 13 different categories of driving conditions, as shown in Table \ref{tab:a_4}. Examples of each condition are shown in Fig. \ref{klane}, and each frame can have two or more conditions, for example, daytime and occlusion.
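For condition-wise evaluation it is convenient to group frame-level scores by these condition tags. The snippet below is only an illustration of that bookkeeping and is not part of the released devkits: the directory layout, the comma-separated-tags format of the per-frame condition files, and the \texttt{frame\_f1} scores are hypothetical placeholders.
\begin{verbatim}
# Hypothetical sketch: average frame-level F1 scores per driving-condition tag.
# Assumes one text file per frame with comma-separated tags, e.g. "daytime, occlusion 1";
# this file format and the paths are assumptions, not the released K-Lane spec.
from collections import defaultdict
from pathlib import Path

def per_condition_f1(condition_dir, frame_f1):
    """frame_f1: dict mapping frame id -> F1 score; returns mean F1 per tag."""
    sums, counts = defaultdict(float), defaultdict(int)
    for frame_id, f1 in frame_f1.items():
        tags = Path(condition_dir, f"{frame_id}.txt").read_text().split(",")
        for tag in (t.strip().lower() for t in tags if t.strip()):
            sums[tag] += f1
            counts[tag] += 1
    return {tag: sums[tag] / counts[tag] for tag in sums}

# Example with made-up scores:
# per_condition_f1("./test/condition", {"000123": 0.82, "000124": 0.79})
\end{verbatim}
Because a frame can carry several tags, its score simply contributes to each of the corresponding conditions.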
{\noindent\textbf{Dataset Composition.}} The K-lane is divided into fifteen directories, each representing a sequence. Each directory has one associated file that describe the driving condition of the frames in the sequence, and contains files for the collected point cloud data, BEV point cloud tensor (i.e., stacked pillars shown in Fig. \ref{enc_detail}), BEV label, front (camera) images, and calibration parameters, as shown in Table \ref{tab:a_3}. Pedestrians’ faces are blurred on the front images for privacy protection. Interface for pre-processing the files is provided in Section A.2. \begin{table}[htb!] { \centering \small \tabcounterint \begin{tabular}{cccc} \hlineB{3} \begin{tabular}[c]{@{}c@{}}Seq-\\ uence\end{tabular} & \begin{tabular}[c]{@{}c@{}}Num. \\ Frames\end{tabular} & Location & Time \\ \hlineB{2} 1 & 1708 & Urban roads {[}Sejong{]} & Night \\ 2 & 803 & Urban roads {[}Daejeon{]} & Day \\ 3 & 549 & Urban roads {[}KAIST{]} & Day \\ 4 & 1468 & Urban roads {[}KAIST{]} & Day \\ 5 & 251 & Urban roads {[}Daejeon{]} & Day \\ 6 & 132 & Urban roads {[}Daejeon{]} & Day \\ 7 & 388 & Urban roads {[}Daejeon{]} & Day \\ 8 & 357 & Urban roads {[}Daejeon{]} & Day \\ 9 & 654 & Urban roads {[}Daejeon{]} & Day \\ 10 & 648 & Urban roads {[}Daejeon{]} & Day \\ 11 & 1337 & Urban roads {[}Daejeon{]} & Night \\ 12 & 370 & Urban roads {[}Daejeon{]} & Night \\ 13 & 2991 & Highway {[}Daejeon to Cheongju{]} & Day \\ 14 & 1779 & Highway {[}Daejeon to Cheongju{]} & Night \\ 15 & 1947 & Highway {[}Cheongju to Daejeon{]} & Night \\ \hlineB{3} \end{tabular} \caption{Sequences in K-Lane.} \label{tab:sequence} } \end{table} \begin{table*}[htb!] { \tabcounterint \small \centering \begin{tabular}{c|l|c} \hlineB{3} Conditions & \multicolumn{1}{c|}{Explanation} & Num. Frames \\ \hlineB{3} Urban & Data acquired from city or university & 8607 \\ \hline Highway & Data acquired on Highway & 6775 \\ \hline Night & Data acquired at night (approximately 20:00-2:00) & 7139 \\ \hline Daytime & Data acquired during the daytime (about 12:00-16:00) & 8243 \\ \hline Normal & Data without curved or merging lanes (mostly straight lanes) & 11065 \\ \hline Gentle Curve & Data with curved lanes whose radius of curvature is greater than 160 {[}m{]} & 1804 \\ \hline Sharp Curve & Data with curved lanes whose radius of curvature is less than 160 {[}m{]} & 1431 \\ \hline Merging & Data with a converging or diverging lane at the rightmost or leftmost lane & 982 \\ \hline No Occlusion & Data without occluded lanes based on the lane label & 9443 \\ \hline Occlusion 1 & Data with one occluded lane based on the lane label & 2660 \\ \hline Occlusion 2 & Data with two occluded lanes based on the lane label & 2112 \\ \hline Occlusion 3 & Data with three occluded lanes based on the lane label & 793 \\ \hline Occlusion 4-6 & \begin{tabular}[c]{@{}l@{}}Data with four to six occluded lanes based on the lane label; \\ Since there are few samples of data with five or six occluded lanes, \\ they are integrated as a single condition (i.e., four to six occluded lanes).\end{tabular} & 374 \\ \hlineB{3} \end{tabular} \caption{Condition details} \label{tab:a_4} } \end{table*} \begin{table*}[htb!] 
{ \tabcounterint \small \centering
\begin{tabular}{c|c|l|l}
\hlineB{3}
Datum Type & Extension & \multicolumn{1}{c|}{Format} & \multicolumn{1}{c}{Comment} \\ \hlineB{2}
Point cloud & .pcd & Point cloud with 131072 points & \begin{tabular}[c]{@{}l@{}}Input to point projector and \\ heuristic technique\end{tabular} \\ \hline
BEV point cloud tensor & .pickle & $N_g \times N_c \times N_p$ size array & Input to pillar encoder \\ \hline
BEV label & .pickle & $H_{bev} \times (W_{bev}+6)$ size array & \begin{tabular}[c]{@{}l@{}}Lane label including unlabeled lane \\ per row (6 columns are for the possible \\ row-wise detection-based approaches)\end{tabular} \\ \hline
Front image & .png & RGB image & For annotation and visualization \\ \hline
Calibration parameters & .txt & Intrinsic and extrinsic parameters & For Lidar-camera projection \\ \hline
Condition & .txt & Condition (e.g., night and day) & For evaluation \\ \hlineB{3}
\end{tabular}
\caption{Dataset Composition}
\label{tab:a_3} }
\end{table*}
\vspace{2mm} {\large\noindent\textbf{A.2. Details of Development Kits}} \vspace{2mm} In addition to the K-Lane dataset, we also provide the devkits, which can be used to expand the dataset and to develop further LLDNs. The devkits are available to the public in the form of three programs: (1) TPC - Total Pipeline Code for training and evaluation, (2) GLT - Graphical User Interface (GUI)-based Labeling Tools, and (3) GDT - GUI-based Development Tools for evaluation, visualization, and additional condition annotations. {\noindent\textbf{Total Pipeline Code.}} TPC is a complete neural network development code that supports pre-processing of the input data and labels, training the network, and performing evaluation based on the F1 metric. TPC handles inputs and outputs as Python dictionaries and supports modularization of the LLDN (BEV encoder, GFC, detection head), thereby providing comprehensive and flexible support to developers. {\noindent\textbf{GUI-based Labeling Tools.}} GLT provides an easy way to develop a labeled dataset for a Lidar and a front-view camera, regardless of the Lidar and camera models. As shown in Fig. \ref{glt} (left), GLT simplifies labeling by showing the intensity of the point cloud as a BEV image. Fig. \ref{glt} (middle) shows a synchronized front camera image for easy labeling of the point cloud, and Fig. \ref{glt} (right) shows the saved labeled point cloud. {\noindent\textbf{GUI-based Development Tools.}} GDT is a GUI program used together with TPC. GDT provides visualization of the inference results for each scene as a point cloud or a camera image with projected lanes (Fig. \ref{gdt}-b), high-accuracy calibration of the camera and Lidar sensors with specific points of the lanes (Fig. \ref{gdt}-c), and annotation of each frame with a set of buttons (Fig. \ref{gdt}-d).
\begin{figure*}[htb!] { \figcounterint \centering \includegraphics[width=1.9\columnwidth]{glt_process} \caption{GUI-based Labeling Tool (GLT): (a) Overall components of GLT, (b) Labeling process of a point cloud, (c) Finalizing and saving the label.} \label{glt} } \end{figure*}
\begin{figure*}[htb!] { \figcounterint \centering \includegraphics[width=2\columnwidth]{gdt_process} \caption{GUI-based Development Tools (GDT): (a) overall components of GDT and loading data, (b) visualization of the LLDN inference results, (c) calibrating Lidar
t_{0}$ for any natural number $n > 2$, (Split-DIN-AVD) performs from this point of view better than (DIN-AVD) without increasing the complexity of the governing operator. \end{remark} \subsection{The case $A = 0$} Let us return to (Split-DIN-AVD) dynamics $(\ref{mainsystem})$. Set $A = 0$, and for every $t\geq t_{0}$ take $\gamma(t) = \gamma \in (0, 2\beta)$ and $\eta(t) = \eta t^{2}$ with $\eta = \lambda/\gamma$. Then, associated to the problem \[ \mbox{find} \ x \in \mathcal{H} \ \mbox{such that} \ B(x)=0, \] we obtain the system \begin{equation}\label{system2} \ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \xi \frac{d}{dt}\left(\frac{1}{\eta(t)}Bx(t)\right) + \frac{1}{\eta(t)}Bx(t) = 0. \end{equation} The conditions $\lambda > \frac{2}{(\alpha - 1)^{2}}$ and $\gamma\in (0, 2\beta)$ imply \[ \eta = \frac{\lambda}{\gamma} > \frac{2}{\gamma(\alpha - 1)^{2}} > \frac{2}{2\beta(\alpha - 1)^{2}} = \frac{1}{\beta(\alpha - 1)^{2}}. \] With the previous observation, we are able to state the following theorem. \begin{theorem}\label{TheoremCaseAzero} Let $B: \mathcal{H}\to \mathcal{H}$ be a $\beta$-cocoercive operator for some $\beta > 0$ such that $\zer B\neq \emptyset$. Assume that $\alpha > 1$, $\xi \geq 0$ and $\eta(t) = \eta t^{2}$ for $\eta > \frac{1}{\beta(\alpha - 1)^{2}}$ and all $t\geq t_{0}$. Take $x : [t_{0}, +\infty)\to \mathcal{H}$ a solution to $(\ref{system2})$. Then, the following hold: \begin{enumerate}[(i)] \item $x$ is bounded, and $x(t)$ converges weakly to an element of $\zer B$ as $t \rightarrow +\infty$. \item We have the estimates \[ \int_{t_{0}}^{+\infty}t\|\dot{x}(t)\|^{2}dt < +\infty, \quad \int_{t_{0}}^{+\infty}t^{3}\|\ddot{x}(t)\|^{2}dt < +\infty, \quad \int_{t_{0}}^{+\infty}\frac{1}{t}\left\|Bx(t)\right\|^{2}dt < \infty. \] \item We have the convergence rates \begin{equation*} \|\dot{x}(t)\| = o\left(\frac{1}{t}\right), \quad \|\ddot{x}(t)\| = \mathcal{O}\left(\frac{1}{t^{2}}\right) \end{equation*} as well as the limit \[ \|Bx(t)\| \to 0 \] as $t\to +\infty$. \end{enumerate} \end{theorem} \begin{proof} Since $\eta > \frac{1}{\beta(\alpha - 1)^{2}}$, we can find $\epsilon\in (0, \beta)$ such that $\eta > \frac{1}{(\beta - \epsilon)(\alpha - 1)^{2}}$, equivalently, $2(\beta - \epsilon)\eta > \frac{2}{(\alpha - 1)^{2}}$. Since $(\ref{system2})$ is equivalent to (Split-DIN-AVD) with $A = 0$ and parameters $\lambda = 2(\beta - \epsilon)\eta > \frac{1}{(\alpha - 1)^{2}}$ and $\gamma(t) \equiv 2(\beta - \epsilon) \in (0, 2\beta)$, the conclusion follows from Theorem \ref{maintheorem}. \end{proof} \begin{remark} (a) As we mentioned in the introduction, the dynamical system \eqref{system2} provides a way of finding the zeros of a cocoercive operator directly through forward evaluations, instead of having to resort to its Moreau envelope when following the approach in \cite{AttouchLaszlo}. (b) The dynamics $(\ref{system2})$ bear some resemblance to the system $(\ref{systemBotCsetnek})$ (see also \cite{BotCsetnek}) with $\mu(t) = \frac{\alpha}{t}$ and $\nu(t) = \frac{1}{\eta(t)}$, with an additional Hessian-driven damping term. In our case, since $\eta > \frac{1}{\beta(\alpha - 1)^{2}}$, the parameters satisfy \[ \dot{\mu}(t) = -\frac{\alpha}{t^{2}} \leq 0, \quad \frac{\mu^{2}(t)}{\nu(t)} = \frac{\alpha^{2}\eta t^{2}}{t^{2}} = \alpha^2 \eta> \frac{1}{\beta} \quad \forall t\geq t_{0}. 
\] However, we have \[ \dot{\nu}(t) = -\frac{2}{\eta t^{3}} \leq 0 \quad \forall t\geq t_{0}, \] so one of the hypotheses needed in $(\ref{systemBotCsetnek})$ is not fulfilled, which shows that one cannot treat the dynamical system $(\ref{system2})$ as a particular case of it; indeed, for $(\ref{systemBotCsetnek})$ a vanishing damping is not allowed. With our system, we obtain convergence rates for $\dot{x}(t)$ and $\ddot{x}(t)$ as $t\to +\infty$, which are not obtained in \cite{BotCsetnek}. \end{remark} \section{Structured convex minimization}\label{sec4} We can specialize the previous results to the case of convex minimization, and additionally show the convergence of the functional values along the generated trajectories to the optimal objective value at a rate that will depend on the choice of $\gamma$. Let $f:\mathcal{H}\to \mathbb{R}\cup\{+\infty\}$ be a proper, convex and lower semicontinuous function, and let $g:\mathcal{H}\to \mathbb{R}$ be a convex and Fr\'echet differentiable function with an $L_{\nabla g}$-Lipschitz continuous gradient. Assume that $\argmin_{\mathcal{H}}(f + g)\neq\emptyset$, and consider the minimization problem \begin{equation}\label{ConvexMinimization} \min_{x\in\mathcal{H}} f(x) + g(x). \end{equation} Fermat's rule tells us that $\overline x$ is a global minimizer of $f + g$ if and only if \[ 0\in\partial (f + g)(\overline x) = \partial f(\overline x) + \nabla g(\overline x). \] Therefore, solving $(\ref{ConvexMinimization})$ is equivalent to solving the monotone inclusion $0\in (A + B)(x)$ addressed in the first section, with $A = \partial f$ and $B = \nabla g$. Moreover, recall that if $\nabla g$ is $L_{\nabla g}$-Lipschitz then it is $\frac{1}{L_{\nabla g}}$-cocoercive (the Baillon-Haddad Theorem, see \cite[Corollary 18.17]{BC}). Therefore, associated to the problem $(\ref{ConvexMinimization})$ we have the dynamics \begin{equation} \label{systemFunctions} \ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \xi \frac{d}{dt}\left(\frac{\gamma(t)}{\lambda(t)}\left(\nabla f_{\gamma(t)}(u(t)) + \nabla g(x(t))\right)\right) + \frac{\gamma(t)}{\lambda(t)}\left(\nabla f_{\gamma(t)}(u(t)) + \nabla g(x(t)) \right) = 0, \end{equation} where, for convenience, we have denoted $u(t) = x(t) - \gamma(t)\nabla g(x(t))$ for all $t\geq t_{0}$. \begin{theorem}\label{minimizar f + g} Let $f: \mathcal{H}\to \mathbb{R}\cup\{+\infty\}$ be a proper, convex and lower semicontinuous function, and let $g : \mathcal{H}\to \mathbb{R}$ be a convex and Fr\'echet differentiable function with an $L_{\nabla g}$-Lipschitz continuous gradient such that $\argmin_{\mathcal{H}}(f + g)\neq \emptyset$. Assume that $\alpha > 1$, $\xi \geq 0$, $\lambda(t) = \lambda t^{2}$ for $\lambda > \frac{2}{(\alpha - 1)^{2}}$ and all $t\geq t_{0}$, and that $\gamma : [t_{0}, +\infty)\to \left(0, \frac{2}{L_{\nabla g}}\right)$ is a differentiable function that satisfies $\frac{\dot{\gamma}(t)}{\gamma(t)} = \mathcal{O}(1/t)$ as $t\to +\infty$. Then, for a solution $x : [t_{0}, +\infty) \to \mathcal{H}$ to $(\ref{systemFunctions})$, the following statements hold: \begin{enumerate}[(i)] \item $x$ is bounded. \item We have the estimates \begin{gather*} \int_{t_{0}}^{+\infty}t\|\dot{x}(t)\|^{2}dt < +\infty, \quad \int_{t_{0}}^{+\infty}t^{3}\|\ddot{x}(t)\|^{2}dt < +\infty, \\ \int_{t_{0}}^{+\infty} \frac{\gamma^{2}(t)}{t}\left\|\nabla f_{\gamma(t)}\Big[x(t) - \gamma(t)\nabla g(x(t))\Big] + \nabla g(x(t))\right\|^{2}dt < +\infty.
\end{gather*} \item We have the convergence rates \begin{align*} & \|\dot{x}(t)\| = o\left(\frac{1}{t}\right), \ \|\ddot{x}(t)\| = \mathcal{O}\left(\frac{1}{t^{2}}\right), \\ & \left\|\nabla f_{\gamma(t)}\Big[x(t) - \gamma(t)\nabla g(x(t))\Big] + \nabla g(x(t))\right\| = o\left(\frac{1}{\gamma(t)}\right), \\ & \left\|\frac{d}{dt}\left(\nabla f_{\gamma(t)}\Big[x(t) - \gamma(t)\nabla g(x(t))\Big] + \nabla g(x(t))\right)\right\| = \mathcal{O}\left(\frac{1}{t\gamma(t)}\right) + o\left(\frac{t^{2}\left|\frac{d}{dt}\frac{\gamma(t)}{\lambda(t)}\right|}{\gamma^{2}(t)}\right) \end{align*} as $t\to +\infty$. \item If $0 < \inf_{t \geq t_{0}}\gamma(t) \leq \sup_{t\geq t_{0}}\gamma(t) < \frac{2}{L_{\nabla g}}$, then $x(t)$ converges weakly to a minimizer of $f + g$ as $t \rightarrow +\infty$. \item Additionally, if $0 < \gamma(t) \leq \frac{1}{L_{\nabla g}}$ for every $t\geq t_{0}$ and we set $u(t) := x(t) - \gamma(t) \nabla g(x(t))$, then \[ f\left(\prox_{\gamma(t)f}(u(t))\right) + g\left(\prox_{\gamma(t)f}(u(t))\right) - \min\nolimits_{\mathcal{H}}(f + g) = o\left(\frac{1}{\gamma(t)}\right) \] as $t\to +\infty$. Moreover, $\left\|\prox_{\gamma(t)f}(u(t)) - x(t)\right\|\to 0$ as $t\to +\infty$. \end{enumerate} \end{theorem} \begin{proof} Parts (i)-(iv) are a direct consequence of Theorem \ref{maintheorem}. For checking (v), first notice that for all $t\geq t_{0}$ we have \begin{align} T_{\lambda(t), \gamma(t)}(x(t)) &= \frac{1}{\lambda(t)}\Big[\id - J_{\gamma(t)\partial f}\circ(\id - \gamma(t)\nabla g)\Big](x(t)) = \frac{1}{\lambda(t)}\Big[x(t) - \prox_{\gamma(t)f}(u(t))\Big]. \label{aux27} \end{align} Now, let $\overline{x}\in \argmin_{\mathcal{H}}(f+g)$. According to \cite[Lemma 2.3]{FISTA}, for every $t\geq t_{0}$, we have the inequality \begin{align*} &\ f\left(\prox_{\gamma(t)f}(u(t))\right) + g\left(\prox_{\gamma(t)f}(u(t))\right) - \min\nolimits_{\mathcal{H}}(f + g) \\ \leq & \ f\left(\prox_{\gamma(t)f}(u(t))\right) + g\left(\prox_{\gamma(t)f}(u(t))\right) - f(\overline{x}) - g(\overline{x}) \\ \leq &\ -\frac{1}{2\gamma(t)}\left\|\prox_{\gamma(t)f}(u(t)) - x(t)\right\|^{2} + \frac{1}{\gamma(t)}\left\langle x(t) - \overline{x}, x(t) - \prox_{\gamma(t)f}(u(t))\right\rangle. \end{align*} After adding the squared norm term to both sides and using the Cauchy-Schwarz inequality, for every $t\geq t_{0}$ we obtain \begin{align*} & \ \frac{1}{2\gamma(t)}\left\|\prox_{\gamma(t)f}(u(t)) - x(t)\right\|^{2} \\ \leq & \ f\left(\prox_{\gamma(t)f}(u(t))\right) + g\left(\prox_{\gamma(t)f}(u(t))\right) + \frac{1}{2\gamma(t)}\left\|\prox_{\gamma(t)f}(u(t)) - x(t)\right\|^{2} - \min\nolimits_{\mathcal{H}}(f + g) \\ \leq & \ \left\langle\frac{1}{\gamma(t)}\Big(x(t) - \prox_{\gamma(t)f}(u(t))\Big), x(t) - \overline{x}\right\rangle \leq \left\|\frac{1}{\gamma(t)}\Big(x(t) - \prox_{\gamma(t)f}(u(t))\Big)\right\| \|x(t) - \overline{x}\| \\ = & \ \frac{\lambda(t)}{\gamma(t)}\left\|T_{\lambda(t), \gamma(t)}(x(t))\right\|\|x(t) - \overline{x}\| \\ = & \ o\left(\frac{1}{\gamma(t)}\right) \quad \text{as} \quad t\to +\infty, \end{align*} which follows as a consequence of $x$ being bounded and $\left\|T_{\lambda(t), \gamma(t)}(x(t))\right\| = o\left(\frac{1}{t^{2}}\right)$ as $t\to +\infty$. \end{proof} \begin{remark} It is also worth mentioning the system we obtain in the case where $g \equiv 0$, since we also get some improved rates for the objective functional values when we compare (Split-DIN-AVD) to (DIN-AVD) \cite{AttouchLaszlo}.
In this case, we have the system \begin{equation}\label{system minimizar f} \ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \xi \frac{d}{dt}\left(\frac{\gamma(t)}{\lambda(t)}\nabla f_{\gamma(t)}(x(t))\right) + \frac{\gamma(t)}{\lambda(t)}\nabla f_{\gamma(t)}(x(t)) = 0 \end{equation} attached to the convex optimization problem $$\min_{x\in\mathcal{H}}f(x).$$ If we assume $\lambda > \frac{1}{(\alpha - 1)^{2}}$, allow $\gamma : [t_{0}, +\infty) \to (0, +\infty)$ to be unbounded from above and otherwise keep the hypotheses of Theorem \ref{minimizar f + g}, then for a solution $x : [t_{0}, +\infty) \to \mathcal{H}$ to $(\ref{system minimizar f})$ the following statements hold: \begin{enumerate}[(i)] \item $x$ is bounded, \item We have the estimates \[ \int_{t_{0}}^{+\infty}t\|\dot{x}(t)\|^{2}dt < +\infty, \quad \int_{t_{0}}^{+\infty}t^{3}\|\ddot{x}(t)\|^{2}dt < +\infty,\quad \int_{t_{0}}^{+\infty}\frac{\gamma^{2}(t)}{t}\left\|\nabla f_{\gamma(t)}(x(t))\right\|^{2}dt < +\infty, \] \item We have the convergence rates \begin{align*} & \|\dot{x}(t)\| = o\left(\frac{1}{t}\right), \ \|\ddot{x}(t)\| = \mathcal{O}\left(\frac{1}{t^{2}}\right), \\ & \left\|\nabla f_{\gamma(t)}(x(t))\right\| = o\left(\frac{1}{\gamma(t)}\right), \ \left\|\frac{d}{dt}\nabla f_{\gamma(t)}(x(t))\right\| = \mathcal{O}\left(\frac{1}{t\gamma(t)}\right) + o\left(\frac{t^{2}\left|\frac{d}{dt}\frac{\gamma(t)}{\lambda(t)}\right|}{\gamma^{2}(t)}\right) \end{align*} as $t\to +\infty$. \item If $0 < \inf_{t\geq t_{0}}\gamma(t)$, then $x(t)$ converges weakly to a minimizer of $f$
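As a purely illustrative aside (not an algorithm proposed here), the dynamics above can be explored with a crude explicit discretization. The sketch below takes $\xi = 0$ (allowed, since only $\xi \geq 0$ is required), $f(x) = |x|$ on $\mathbb{R}$, a constant $\gamma$, $\lambda(t) = \lambda t^{2}$, and evaluates the Moreau-envelope gradient through the proximal map, $\nabla f_{\gamma}(x) = \frac{1}{\gamma}\big(x - \prox_{\gamma f}(x)\big)$; the step size and parameter values are arbitrary choices.
\begin{verbatim}
# Illustrative explicit discretization of the system above with xi = 0:
#   x'' + (alpha/t) x' + (gamma / (lambda t^2)) grad f_gamma(x) = 0,
# for f(x) = |x| on the real line, where prox_{gamma f} is soft-thresholding and
# grad f_gamma(x) = (x - prox_{gamma f}(x)) / gamma.  All numbers are arbitrary.
import numpy as np

def prox_abs(x, gamma):              # prox of gamma*|.| (soft-thresholding)
    return np.sign(x) * max(abs(x) - gamma, 0.0)

def moreau_grad(x, gamma):           # gradient of the Moreau envelope of |.|
    return (x - prox_abs(x, gamma)) / gamma

alpha, lam, gamma = 3.0, 0.5, 1.0    # alpha > 1 and lam > 1/(alpha-1)^2 = 0.25
t, x, v = 1.0, 5.0, 0.0              # t0 = 1; start away from the minimizer x* = 0
h = 1.0e-3
for _ in range(200_000):             # integrate up to t ~ 201
    a = -(alpha / t) * v - (gamma / (lam * t**2)) * moreau_grad(x, gamma)
    v += h * a                       # update v first, then x (semi-implicit Euler)
    x += h * v
    t += h
print(x, v)   # x creeps toward 0 and v decays; convergence is slow because
              # the forcing term decays like 1/t^2
\end{verbatim}
In higher dimensions, or for $g \neq 0$, one would simply replace the Moreau-envelope gradient of $f$ by $\nabla f_{\gamma}\big(x - \gamma \nabla g(x)\big) + \nabla g(x)$, as in $(\ref{systemFunctions})$.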
import dash import dash_html_components as html import dash_core_components as dcc import altair as alt import pandas as pd import dash_table import dash_bootstrap_components as dbc from dash.dependencies import Input, Output, State from datetime import datetime, timedelta import plotly as py import plotly.express as px import plotly.graph_objects as go from flask_sqlalchemy import SQLAlchemy from flask import Flask from scipy import stats import plotly.io as pio # pio.templates.default = "simple_white" alt.data_transformers.disable_max_rows() ########## Additional Data Filtering ########################################### # df = pd.read_csv('data/processed/business_data.csv', sep=';') #data/processed/cleaned_data.csv ############################################################################### def create_card(header='Header', content='Card Content'): if header: card = dbc.Card([dbc.CardHeader(header), dbc.CardBody(html.Label([content]))]) elif not header: card = dbc.Card([dbc.CardBody(html.Label([content]))]) return card def within_thresh(value, businesstype, column, data, sd_away=1): '''returns the lower and upper thresholds and whether the input falls within this threshold ''' if column == 'Total Fees Paid': a = data.groupby('businessname').sum().reset_index() b = data.loc[:,['businessname','BusinessType']] data = pd.merge(a, b, how="left", on="businessname").drop_duplicates() column = 'FeePaid' mean_df = data.groupby('BusinessType').mean() sd_df = data.groupby('BusinessType').std() mean = mean_df.loc[businesstype, column] sd = float(sd_df.loc[businesstype, column]) upper_thresh = mean + sd_away*sd lower_thresh = mean - sd_away*sd if lower_thresh < 0: lower_thresh = 0 if float(value) > upper_thresh or float(value) < lower_thresh: return lower_thresh, upper_thresh, False else: return lower_thresh, upper_thresh, True server = Flask(__name__) app = dash.Dash(__name__ , external_stylesheets=[dbc.themes.BOOTSTRAP], server=server) app.title = "Fraudulent Business Detection" app.server.config['SQLALCHEMY_DATABASE_URI'] = "postgres+psycopg2://postgres:[email protected]:5432/postgres" app.server.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False db = SQLAlchemy(app.server) df = pd.read_sql_table('license_data', con=db.engine) address_quantile_df = pd.read_sql_table('address_quantiles', con=db.engine) address_count_df = pd.read_sql_table('address_counts', con=db.engine) address_frequencies_df = pd.read_sql_table('address_frequencies', con=db.engine) registered_url_df = pd.read_sql_table('registered_urls', con=db.engine) possible_url_df = pd.read_sql_table('possible_urls', con=db.engine) url_info_df = pd.read_sql_table('url_info', con=db.engine) jb_emp_post = pd.read_sql_table('jobbank_employer_posting', con=db.engine) jb_emp_post = jb_emp_post.rename(columns={'business_name': 'BusinessName'}) # business_info = pd.read_sql_table('business_info', con=db.engine) # business_info = business_info.rename(columns={'business_name': 'BusinessName'}) colors = { 'background': "#00000", 'text': '#522889' } collapse = html.Div( [ dbc.Button( "Learn more", id="collapse-button", className="mb-3", outline=False, style={'margin-top': '10px', 'width': '150px', 'background-color': 'white', 'color': '#522889'} ), ] ) @app.callback( Output("collapse", "is_open"), [Input("collapse-button", "n_clicks")], [State("collapse", "is_open")], ) def toggle_collapse(n, is_open): if n: return not is_open return is_open app.layout = dbc.Container([ dbc.Row([ dbc.Col([ html.H1('Fraudulent Business Detection', style={'text-align': 
'center', 'color': 'white', 'font-size': '40px', 'font-family': 'Georgia'}), dbc.Collapse(html.P( """ The dashboard will help you with your wine shopping today. Whether you desire crisp Californian Chardonnay or bold Cabernet Sauvignon from Texas, simply select a state and the wine type. The results will help you to choose the best wine for you. """, style={'color': 'white', 'width': '70%'} ), id='collapse'), ], md=10), dbc.Col([collapse]) ], style={'backgroundColor': '#0F5DB6', 'border-radius': 3, 'padding': 15, 'margin-top': 22, 'margin-bottom': 22, 'margin-right': 11}), # dcc.Tabs([ dcc.Tab([ dbc.Row([ dbc.Col([ html.Br(), html.Label([ 'Company Name'], style={ 'color': '#0F5DB6', "font-weight": "bold" }), dcc.Dropdown( id='business-name', options=[{'label': name, 'value': name} for name in list(df['businessname'].dropna().unique())], style={'width': '100%', 'height': 30}, placeholder='Select a State', value = 'Time Education Inc' ), html.Br(), html.Label(['Street Address'], style={'color': '#0F5DB6', "font-weight": "bold"} ), html.Br(), html.Label(id='address'), # Not capturing unit number html.Br(), html.Label(['Search Url'], style={'color': '#0F5DB6', "font-weight": "bold"}), dcc.Textarea(style={'width': '100%', 'height': 30}), dbc.Button('Web Search', id = 'scrape-btn', n_clicks=0, className='reset-btn-1'), ], style={'border': '1px solid', 'border-radius': 3, 'margin-top': 22, 'margin-bottom': 15, 'margin-right': 0, 'height' : 350}, md=4, ), # dbc.Col([], md=1), # dbc.Col([ # html.Br(), # html.Br(), # dbc.Row([dbc.Card([ # dbc.CardHeader('Features Beyond 1-SD away from mean'), # dbc.CardBody(id='score', style={'color': '#0F5DB6', 'fontSize': 18, 'height': '70px'}), # ] # )]), # html.Br(), # ], md = 2), dbc.Col([ dcc.Graph(id='pie-chart', figure = {'layout': go.Layout(margin={'b': 0})}) ],) ]), dbc.Row([ dbc.Col([ html.Br(), dbc.CardColumns([ dbc.Card([ dbc.CardHeader(html.H4('Website URL')), dbc.CardBody(id='insight-1'), ], color = 'primary', outline=True), dbc.Card([ dbc.CardHeader(html.H4('Website Validity')), dbc.CardBody(id='insight-2') ], color = 'primary', outline=True), dbc.Card([ dbc.CardHeader('Features Beyond 1-SD away from mean'), dbc.CardBody(id='score'), ], color = 'primary', outline=True), dbc.Card([ dbc.CardHeader(html.H4('Addresses Listed')), dbc.CardBody(id='insight-4'), ], color = 'primary', outline=True), dbc.Card([ dbc.CardHeader(html.H4('Common Addresses')), dbc.CardBody(id='insight-5'), ], color = 'primary', outline=True) # dbc.CardHeader('Key Insights:', # style={'fontWeight': 'bold', 'color':'white','font-size': '22px', 'backgroundColor':'#0F5DB6', 'height': '50px'}), ]), ], md = 6, style={'border': '1px solid', 'border-radius': 3, 'margin-top': 22, 'margin-bottom': 15, 'margin-right': 0}), dbc.Col([ dbc.Row([ dcc.Graph(id='histogram'), ]), dcc.Dropdown( id='feature_type', value='Number of Employees', options = [{'label': col, 'value': col} for col in ['Fee Paid', 'Number of Employees', 'Total Fees Paid', 'Missing Values']], placeholder='Select a Feature', multi=False ), dcc.Dropdown( id='std', options = [{'label': col, 'value': col} for col in [1,2,3]], placeholder='Select a standard dev', value='', multi=False ), html.Br(), ],md = 6), ]), ]), ]) @app.callback(Output('insight-1', 'children'), Input('business-name', 'value')) def url_presence(business): try: website = registered_url_df[registered_url_df['businessname'] == business].url.iat[0] if website is None: raise IndexError('none') return f"{website}" except IndexError: try: print(possible_url_df.head(1)) 
print(business) print(possible_url_df[possible_url_df['businessname'] == business]) possible_website = possible_url_df[possible_url_df['businessname'] == business].iloc[0] return f"{possible_website['url']} *estimate: {possible_website['prob']}% confident" except: return f"No website found" @app.callback(Output('insight-2', 'children'), Input('business-name', 'value')) def time_online(business): try: filtered_url_df = registered_url_df[registered_url_df['businessname'] == business] url = filtered_url_df['url'].iat[0] print(url) url_time_df = url_info_df[url_info_df['url'] == url] url_time_df = url_time_df.set_index('url') reg_time = pd.to_datetime(url_time_df.loc[url, 'register_date']) reg_time = reg_time.strftime('%B') + ' ' + reg_time.strftime('%Y') exp_time = pd.to_datetime(url_time_df.loc[url, 'expire_date']) if datetime.now() < exp_time: conj = 'has' exp_time = 'present' else: conj = 'was' exp_time = pd.to_datetime(url_time_df.loc[url, 'expire_date']) exp_time = exp_time.strftime('%B') + ' ' + exp_time.strftime('%Y') insight = f"The website {conj} been online from {reg_time} to {exp_time}" return insight except (KeyError, IndexError): return 'No website information available' # @app.callback(Output('insight-3', 'children'), # Input('business-name', 'value')) # def website_online(business): # number_addresses = '' # # business_df = df.query('BusinessName == @business') # # domain_length = business_df.iloc[-1, 'time_online'] # if time_online: # insight = f"The website has been online: {time_online}" # if time_online: # insight = 'No website available' # return insight @app.callback(Output('insight-4', 'children'), Input('business-name', 'value')) def address_quantile(business): count = address_count_df[address_count_df.BusinessName == business].full_adress.iat[0] biztype = df[df.businessname == business].BusinessType.iloc[0] quantile = address_quantile_df[address_quantile_df.BusinessType == biztype]['quantile'].iat[0] if count >= quantile: return f"This business has {count} addresses. This is in the top 1% in the {biztype} category" else: return f"This business has {count} addresses." @app.callback(Output('insight-5', 'children'), Input('business-name', 'value')) def address_frequency(business): address = df[df.businessname == business].iloc[0] address_text = ' '.join(filter(None, address[['House', 'Street', 'City', 'Province','Country','PostalCode']])) try: address_f = address_frequencies_df[address_frequencies_df.full_adress == address_text]['BusinessName'].iat[0] if stats.percentileofscore(address_frequencies_df.BusinessName.values, address_f) >= 99: return f'This address has {address_f} businesses listed at it. 
That is in the top 1%' else: return f'This address has {address_f} businesses listed at it' except IndexError: return f'No addresses for this business' def calculate_scores(business): business = 'Time Education Inc' toy3 = {'BusinessName':"Time Education Inc", 'num_posting' : '0'} toy1 = {'BusinessName':"Time Education Inc", 'url' : 'www.google.ca'} toy2 = {'url' : 'www.google.ca', 'register_date' : '1999-01-02', 'expire_date': '2022-09-29'} url_df = pd.DataFrame(toy1, index=[1]) url_info = pd.DataFrame(toy2, index=[1]) jb_emp_post = pd.DataFrame(toy3, index=[1]) filtered_url_df = url_df.query('BusinessName == @business') # has registered website # longevity of website # has more than 1 employee # number of missing inputs ############################### # length of time online if url_df.loc[1,'url'] == url_df.loc[1,'url']: # checks if nan url_color = 'green' else: url_color = 'red' url = url_df.loc[1, 'url'] url_time_df =url_info.query('url == @url') url_time_df = url_time_df.set_index('url') reg_time = pd.to_datetime(url_time_df.loc[url, 'register_date']) exp_time = pd.to_datetime(url_time_df.loc[url, 'expire_date']) time_diff = exp_time - reg_time if time_diff < timedelta(7) or url_color == 'red': longevity_color = 'red' elif time_diff < timedelta(365): longevity_color = 'yellow' else: longevity_color='green' ############################## # Job Posting # jb_emp_post = pd.merge(df, jb_emp_post, how='left', on='businessname') # num_posting = jb_emp_post.query('BusinessName == @business').iloc[0,-1] # print(num_posting) # if int(num_posting) > 0: # job_post_color = 'green' # else: # job_post_color = 'red' ############################# scores = [url_color, longevity_color, 'job_post_color', 'red'] return scores @app.callback(Output('pie-chart', 'figure'), [Input('feature_type', 'value'), Input('business-name', 'value')]) def plot_donut(score, business): score_list = calculate_scores(business) df_dict = {'feat': ['website', 'reviews', 'job posting', 'other'], 'size': [25, 25 ,25,25], 'score' : score_list} pie_df = pd.DataFrame(df_dict) fig = go.Figure(data=[go.Pie(labels=pie_df['feat'], values=[25,25,25,25])]) fig.update_traces(hoverinfo='label+percent', textinfo='label', textfont_size=20, marker=dict(colors=pie_df['score'], line=dict(color='#000000', width=1))) fig.update_layout(showlegend=False) return fig @app.callback(Output('address', 'children'), [Input('business-name', 'value')]) def update_address(business): address = df[df.businessname == business].iloc[0] address_text = ' '.join(filter(None, address[['House', 'Street', 'City', 'Province','Country','PostalCode']])) return address_text @app.callback(Output("histogram", "figure"), [Input('feature_type', 'value'), Input('business-name', 'value'), Input('std', 'value')]) def plot_hist(xaxis, business,sd): xrange = None ci_color = 'black' business_df = df.query('businessname == @business') type_value = business_df.iloc[0, 9] #business_join = pd.merge(business_info, df, how='left', on='BusinessName') if sd == '': sd = 1 if xaxis == 'Fee Paid': clean_name = 'Fees Paid' xaxis = 'FeePaid by' index = -4 estimate = business_df.iloc[-1, index] # use -4 for employees, -3 for FeesPaid hist_data = df.query('BusinessType == @type_value').loc[:, xaxis] lower_thresh, upper_thresh, _ = within_thresh(estimate, type_value, xaxis, df, sd) xrange=[0, upper_thresh*1.25] elif xaxis == 'Number of Employees': clean_name = 'Number of Employees at' xaxis = 'NumberofEmployees' index = -5 estimate = business_df.iloc[-1, index] hist_data = df.query('BusinessType 
== @type_value').loc[:, xaxis] lower_thresh, upper_thresh, _ = within_thresh(estimate, type_value, xaxis, df, sd) xrange=[0, upper_thresh*1.25] elif xaxis ==
# Category Archives: physics ## 2012-2013 Year In Review – Learning Standards This is the second post reflecting on this past year and I what I did with my students. My first post is located here. I wrote about this year being the first time I went with standards based grading. One of the most important aspects of this process was creating the learning standards that focused the work of each unit. ### What did I do? I set out to create learning standards for each unit of my courses: Geometry, Advanced Algebra (not my title – this was an Algebra 2 sans trig), Calculus, and Physics. While I wanted to be able to do this for the entire semester at the beginning of the semester, I ended up doing it unit by unit due to time constraints. The content of my courses didn’t change relative to what I had done in previous years though, so it was more of a matter of deciding what themes existed in the content that could be distilled into standards. This involved some combination of concepts into one to prevent the situation of having too many. In some ways, this was a neat exercise to see that two separate concepts really weren’t that different. For example, seeing absolute value equations and inequalities as the same standard led to both a presentation and an assessment process that emphasized the common application of the absolute value definition to both situations. ### What worked: • The most powerful payoff in creating the standards came at the end of the semester. Students were used to referring to the standards and knew that they were the first place to look for what they needed to study. Students would often ask for a review sheet for the entire semester. Having the standards document available made it easy to ask the students to find problems relating to each standard. This enabled them to then make their own review sheet and ask directed questions related to the standards they did not understand. • The standards focus on what students should be able to do. I tried to keep this focus so that students could simultaneously recognize the connection between the content (definitions, theorems, problem types) and what I would ask them to do with that content. My courses don’t involve much recall of facts and instead focus on applying concepts in a number of different situations. The standards helped me show that I valued this application. • Writing problems and assessing students was always in the context of the standards. I could give big picture, open-ended problems that required a bit more synthesis on the part of students than before. I could require that students write, read, and look up information needed for a problem and be creative in their presentation as they felt was appropriate. My focus was on seeing how well their work presented and demonstrated proficiency on these standards. They got experience and got feedback on their work (misspelling words in student videos was one) but my focus was on their understanding. • The number standards per unit was limited to 4-6 each…eventually. I quickly realized that 7 was on the edge of being too many, but had trouble cutting them down in some cases. In particular, I had trouble doing this with the differentiation unit in Calculus. To make it so that the unit wasn’t any more important than the others, each standard for that unit was weighted 80%, a fact that turned out not to be very important to students. ### What needs work: • The vocabulary of the standards needs to be more precise and clearly communicated. 
I tried (and didn’t always succeed) to make it possible for a student to read a standard and understand what they had to be able to do. I realize now, looking back over them all, that I use certain words over and over again but have never specifically said what they mean. What does it mean to ‘apply’ a concept? What about ‘relate’ a definition? These explanations don’t need to be in the standards themselves, but it is important that they be somewhere and be explained in some way so students can better understand them. • Example problems and references for each standard would be helpful in communicating their content. I wrote about this in my last post. Students generally understood the standards, but wanted specific problems that they were sure related to a particular standard. • Some of the specific content needs to be adjusted. This was my first year being much more deliberate in following the Modeling Physics curriculum. I haven’t, unfortunately, been able to attend a training workshop that would probably help me understand how to implement the curriculum more effectively. The unbalanced force unit was crammed in at the end of the first semester and worked through in a fairly superficial way. Not good, Weinberg. • Standards for non-content related skills need to be worked into the scheme. I wanted to have some year-long or semester-long skills standards. For example, unit 5 in Geometry included a standard (not listed in my document below) on creating and presenting a multimedia proof. This was to provide students with opportunities to learn to create a video in which they clearly communicate the steps and content of a geometric proof. They could create their video, submit it to me, and get feedback to make it better over time. I would also love to include some programming or computational thinking standards that students can work on long term. These standards need to be communicated and cultivated over a long period of time. They will otherwise be just like the others in terms of the rush at the end of the semester. I’ll think about these this summer. You can see my standards in this Google document: 2012-2013 – Learning Standards I’d love to hear your comments on these standards or on the post – comment away please! ## Speed of sound lab, 21st century version I love the standard lab used to measure the speed of sound using standing waves. I love the fact that it’s possible to measure a physical quantity associated with a process far too fast to visualize directly. This image from the 1995 Physics B exam describes the basic set-up: The general procedure involves holding a tuning fork at the opening of the top of the tube and then raising and lowering the tube in the graduated cylinder of water until the tube ‘sings’ at the frequency of the tuning fork. The shortest height at which this occurs corresponds to the fundamental mode of vibration of the air column in the tube, and this can be used to find the speed of sound waves in the air. The problem is in the execution. A quick Google search shows that speed of sound labs for high school and university settings all use tuning forks as the frequency source. I have always found that the same problems come up every time I have tried to do this experiment with tuning forks: • Not having enough tuning forks for the whole group. Sharing tuning forks is fine, but it raises the minimum time the whole group needs to complete the experiment. • Not enough tuning forks at different frequencies for each group to measure. 
At one of my schools, we had tuning forks of four different frequencies available. My current school has five. Five data points for making a measurement is not ideal, particularly for showing a linear (or other functional) relationship. • The challenge of simultaneously keeping the tuning fork vibrating, raising and lowering the tube, and making height measurements is frustrating. This (together with sharing tuning forks) is why this lab can take so long just to get five data points. I'm all for giving students
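One nice payoff once the data is collected: for a tube closed at the water surface, the shortest resonant length is about a quarter wavelength, so L ≈ v/(4f), and fitting L against 1/(4f) gives the speed of sound as the slope. A quick sketch of that fit, with made-up sample data and the end correction ignored:

```python
# Speed of sound from resonance-tube data: for a tube closed at the water
# surface, the shortest resonant length is L ≈ v/(4f) (end correction ignored).
# The frequencies and lengths below are made-up sample data, not real measurements.
import numpy as np

f = np.array([256.0, 320.0, 384.0, 426.7, 512.0])  # tuning fork frequencies (Hz)
L = np.array([0.335, 0.268, 0.224, 0.201, 0.168])   # shortest resonant lengths (m)

v, intercept = np.polyfit(1.0 / (4.0 * f), L, 1)    # slope of L vs 1/(4f) is v
print(round(v), "m/s")                               # ~343 m/s for this sample data
```

Each group only needs its (frequency, length) pairs; the rest is one line of fitting.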
with an off-center axial load, how would you calculate the stiffness of the beam and the resultant deformation? >> >>12566244 Thank you for the explanation. >perhaps she was referring to phenomenological results That's what I'm assuming. Do let me know if you hear of anything, please. >> >>12559550 >insults someone for asking a stupid question Dude fuck off >> >>12565998 Completely normal. Fortunately, if you’re in a position where you’ll need calculus, you’ll have to keep using these tools again and again in subsequent classes. Eventually it’ll become as natural as HS algebra Really, with any class you take, the point isn’t to learn everything and have it stick forever. The point is to keep developing your ‘’maturity’ with the field and to learn enough so that if you ever need to use the subject again, you can relearn it quickly. The only way to really internalize a complicated subject is to use it again and again >> >>12565603 It shouldn’t: try drawing from a multivariate Gaussian where x1 is uncorrelated with x2, and both x1 and x2 are correlated with x3. Apply LASSO. If my intuition is right, you shouldn’t have either coefficient go to 0 >> Are there any easy to visualize homeomorphisms from the sphere to itself with a single fixed point? Consider $f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined by $f(x, y) = (x+1, y)$. $f$ has no fixed points, obviously. Since the Alexandroff one-point compactification is a functor $AOP$ (name invented right now, there's probably something else that's used more often)(see wikipedia for a source) we can get $AOP(f): S^2 \rightarrow S^2$, which is a homeomorphism and only fixes the point at infinity. But this example is super fucking jarring, I want something cuter. >> >>12566576 I've just noticed I somehow misread wikipedia and it isn't really a functor. The construction still works tho. >> what's a good, free software for plots? I need one that marks the intersection points with at least 8 digit precision..., bonus if it can show all at the same time >> File: yukari_and_okina3.jpg (1.37 MB, 2927x4096) 1.37 MB JPG >>12566587 $f\in C(X)$ pushes forward to the compactification $\overline{X}$ only if $f\in C_0(X)$, i.e. vanishing outside a compact set (specifically $f\neq0$ around open neighborhoods around $\infty$). Translations obviously don't do this, >> >>12566659 just use python and matplotlib >> File: yukari_cone.png (601 KB, 1548x877) 601 KB PNG >>12566576 To answer your question, homotopies of maps from one sphere to the other can be interpreted as a (pinned; i.e. satisfying certain homotopy conditions) path into the wedge product $S^2\wedge S^2$. This path "drags" one sphere into another, so to speak. >>12566684 Meant $f=0$ on open sets around $\infty$. >> >>12565726 I'm fairly sure that you should assume that the probability of failure in any given time interval is constant, so the probability that a specific computer works at any point in time follows an exponential decay. >> >>12566205 $\sum_{k=1}^{n+1}ar^k=(ar+ar^2+...+ar^n+ar^{n+1})+(a-a)=(ar^0+ar+ar^2+...+ar^n)+ar^{n+1}-a=\sum_{k=0}^{n}ar^k+ar^{n+1}-a$ >> >>12565891 >>12566009 Yes I don't expect to be spoonfed the answer, anyway it should work out to 7/20 if the lifespan stuff scales "uniformly". Should I read a stats inference book to learn more about this stuff? So far I've just been doing regular probability problems >> >>12566870 what if computer lifespan is normally distributed? uniformly distributed? exponentially distributed? 
all of these options give different answers to your question, yet there's no way to know which one to use >> >>12566964 >>12566827 yea that's what I thought, the problem is from a shit tier source so they didn't specify but after doing some research and staring at the answers in this thread I agree it's probably assumed to be exponentially distributed so should just be 2^(-8/5). fuck it >> rolling for set.seed() >> >>12566846 >>12566222 Thank you. I understand the full proof now. I think what confused me was the change of index and setting k = j+1. What helped me was to instead just look at the expanded form of the summation after we distributed r, and that made it easy to see that the necessary relation to S (it’s the same thing but with the r^0 term removed and an r^(n+1) term added). Everything after that is just normal algebra. >> How do I compute [eqn]\int_C \frac{z^5}{(z-1)(z+1)^3} [/eqn] where C is the circle of radius 3 centered at 0 >> >>12566964 > yet there's no way to know which one to use The fact that the mean lifespan is the only piece of information given regarding the distribution implies that it is the only piece of information required, which in turn implies that the hazard rate is constant. >> >>12567392 Partial fraction decomposition and residue theorem. I get 8πi. >> >>12567392 I got 3/8pi i. I could've fucked up but you just have to evaluate this closed contour on the poles of your function, you have a.. oh i did the wrong question. But anyways, here's the method anyways, you have a first order pole at z = 1 and a third order pole at z = -1, so when you evaluate the residues you will need to take the (order-1)^th derivative of the integrand and add up those residues. Then you multiply the result by 2pi * i. If I recall correctly, this is cauchy's residue theorem. >> What chemistry is happening for a girl's hair to smell so good? Is it the shampoo they wear? I don't think it is. I believe there's body odor made in their hair and it smells nice. What chemicals and what chemistry is happening for a girl's hair to smell so good? >> >>12536556 You're just getting too high, smoke CBD flower instead to see the difference >> >>12561802 >>12564057 Thanks, at least I know I'm not missing something stupid >> Can someone tell me what is calculus used in physics called and why is it so fucking different from the math courses that I did? Why are there weird notations such as du/dy in fluid mechanics, like does this mean for every step dy I take the speed changes by du? And this is completely different from say dx/dt which actually makes sense. I tried finding on the web how physicists use calculus because there isn't the slightest connection to the math calc. course I had. I managed to find some site mentioning there are "amount" and "change" differentials, and I think this is my problem. I've never been taught there even is a difference, I've been taught derivative is a slope of the tangent at a point of a curve. I want to be able to understand physics proofs that use calculus, but it's used in such an alien way, like a strip of infinitesimal width is dx, and its mass is dm, and it's represented as dm/dx. This surely cannot mean that at some point mass changes by dm i I move by dx. Because what is then the difference between m(x), where I plug an x and get the exact mass there, and dm/dx(x), where I plug in an x and get.... I don't even know what it represents. 
>> >>12568474 you nudge y an infinitesimally small amount and see how much u changes, that's all >> >>12568474 A straight line has a slope, right? For y = mx + b, this slope is m, and it's the same everywhere on the line. The m tells you how much y changes when x changes, so if you go 1 to the right you go up by m. Well, any curve has a slope, only their slope is different. For example, for y = x^2 the function is flat at x=0 so the slope is 0, and the higher x gets, the steeper the slope becomes. To obtain the value of the slope at f(x) = x^2, we look at some interval h and see how much y changes so "slope at x" = (f(x + h) - f(x))/h = (x^2 + 2xh + h^2 - x^2)/h = 2x + h. Now all we have to do is to make h infinitely small since we want to get the approximation as accurate as possible, so we find that "slope of f(x) at x" = 2x. This x-dependent slope function is called the derivative of f, and it is often written f'(x) or dy/dx, where dx =
X^0$, $\tob = \{ X^m \cap X^0 : m \in \mathbf{Z} \}$, and $\dop \tob = \{ 0, Y \cap X^0, V \}$. If we take $B: = \tob$ and $\a = Y \cap X^0$, then the lefthand side of \eqref{eq_algebraic_condition} equals $V$, but the righthand side equals $\a = Y \cap X^0 < V$. \end{counterexample} \begin{corollary} \label{cor_inf_dim_vec_space_bifilt_dont_extend} The following are equivalent for any vector space $V$: \begin{enumerate*} \item Every pair of chains $\dop \tob, \tob \subseteq \Sub(V)$ extends to a CD sublattice $\dop \tob \sqcup \tob \subseteq \latb$. \item The space $V$ is finite-dimensional. \end{enumerate*} \end{corollary} \begin{proof} If $V$ is finite-dimensional then every bifiltration extends to a CD sublattice by Theorem \ref{thm_simplifiedcdfextensioncriterion}. If $V$ is not finite-dimensional, then it contains a subspace $U$ isomorphic to the space of finitely-supported functions $\mathbf{Z} \to \fielda$. In this case we may apply the construction of Counterexample \ref{cx_infinite_dimensional_bifiltrations_cd} to find a bifiltration of $U$ that does not extend to a CD sublattice; since $\Sub(U)$ is a $\mathring \forall$-complete sublattice of $\Sub(V)$, this bifiltration cannot extend to a CD sublattice of $\Sub(V)$. The desired conclusion follows. \end{proof} \begin{reptheorem}{thm_woextensioninmodularuc} Each pair of well ordered chains in a complete, modular, upper continuous lattice $\lata$ extends to a complete, completely distributive sublattice of $\lata$. \end{reptheorem} \begin{proof} We defer the proof to \S\ref{sec_wo_cd_extension}. \end{proof} Theorem \ref{thm_saecularexistencecriteria} gives several existence criteria specific to the saecular CDF homomorphism. Here, for each $\tela \in \tob$, each lattice homomorphism $\cdf: \dla \to \Sub_\mathfrak$, and each subset $\seta \subseteq \dla$, we set \begin{align*} \cdf[\seta]_\tela : = \{ \cdf(\ela)_\tela : \ela \in \seta \} \end{align*} \begin{theorem}[Existence of the saecular CDF homomorphism] \label{thm_saecularexistencecriteria} The saecular CD homomorphism exists if any one of the following criteria is satisfied. \begin{enumerate*} \item[] \emph{(Height criterion)} Lattice $\Sub(\mathfrak_\tela)$ has finite height, for each $\tela \in \tob$. \item[] \emph{(Well-ordered criterion)} Lattice $\Sub (\mathfrak_\tela)$ is complete and upper continuous, and chains $ \dop {\mathrm K}_f[\tob]_\tela$ and $\mathrm{K}_f[\tob]_\tela$ are well ordered for each $\tela \in \tob$. \item[] \emph{(Algebraic criterion)} For each $\tela \in \tob$, lattice $\Sub (\mathfrak_\tela)$ is complete and algebraic, and \begin{align*} \textstyle \bigwedge_{\lelb \in B} (\lela \vee \lelb) = \lela \vee \bigwedge_{\lelb \in B} \lelb \end{align*} whenever $ X, Y \in \{ \dop {\mathrm K}_f[ \tobc]_\tela , \mathrm{K}_f[\tobc]_\tela\}, $ $ \lela \in X $ , and $ B \subseteq Y. $ \item[] \emph{(Split criterion)} $\ecata \in \{ R \mathsf{Mod}, \mathsf{Mod} R\}$, and $\dga$ splits as a direct sum of interval functors. \end{enumerate*} \end{theorem} \begin{proof} Suppose that $\dga$ is a direct sum $\bigoplus_\itva \dgb(\itva)$, where $\itva$ runs over all intervals in $\tob$ and $\dgb(\itva) \in \allifun(\itva)$ for each $\itva$. Then there exists a complete lattice homomorphism $ \cdf : \pslh \axlh(\dop \tob \sqcup \tob) \to \Sub_\dga $ such that $\cdf \{\dop \seta \sqcup \seta\} = \dgb(\seta - \dop \seta)$ for every down set of $\dop \tob \sqcup \tob$. 
In fact $ \dop {\mathrm K}_f = \cdf \dop \cpph$ and $\mathrm{K}_f = \cdf \dop \cpph$ so, by uniqueness, $\FCD( \dop {\mathrm K}_f, \mathrm{K}_f)$ is the restriction of $\cdf$ to $\axlh^2(\dop \tob \sqcup \tob)$. This establishes the split criterion. For the algebraic and well-ordered criteria, recall that a subset $\seta \subseteq \Sub_\mathfrak$ extends to a complete CD sublattice iff $\{\ela_\tela : \ela \in \seta \}$ extends to a complete CD sublatice of $\Sub(\mathfrak_\tela)$ for all $\tela$. The desired conclusion thus follows from Theorems \ref{thm_simplifiedcdfextensioncriterion} and \ref{thm_woextensioninmodularuc}. The height criterion implies the well ordered one. \end{proof} \begin{corollary} Every functor from a totally ordered set to the (full) subcategory of Artinian modules over a commutative ring $\ringa$ has a saecular CDF homomorphism. \end{corollary} \begin{proof} This holds by the well ordered criterion. \end{proof} \subsection{Proof of Theorems \ref{thm_cdf_is_proper} and \ref{thm_saecular_factors_are_interval}} \label{sec_prove_factors_are_intervals} Recall that a saecular homomorphism is \emph{natural} at $\seta$ if \begin{align} \f(\aa \dople \a)\di |\seta|_\aa = |\seta \wedge \nu(\aa)|_\a = |\seta|_\a \wedge \dop {\mathrm K}_f(\aa)_\a \tag{\ref{eq_dicap}} \\ \f(\aa \dople \a)\ii |\seta|_\a = |\seta \vee \nu(\a) |_\aa = |\seta|_\aa \vee \mathrm{K}_f(\a)_\aa \tag{\ref{eq_iicup}} \end{align} A saecular homomorphism is natural if it is natural at every element of its domain. \begin{lemma} \label{lem_pushpull_basic} If $\mathsf{SBD}(\dga)$ exists then it is natural on $\Im(\nu)$. Concretely, if (i) $L = \mathrm{K}_f$ and $\lelc \in \tob$, or (ii) $L = \dop {\mathrm K}_f$ and $\lelc \in \dop\tob$, then for each $\aa \preceq \a$ \begin{align} \f(\aa \dople \a)\di \L(\lelc)_\aa = \L(\lelc)_\a \wedge \dop {\mathrm K}_f(\aa)_\a \label{eq_dicap_reduced} \\ \f(\aa \dople \a)\ii \L(\lelc)_\a = \L(\lelc)_\aa \vee \mathrm{K}_f(\a)_\aa \label{eq_iicup_reduced} \end{align} \end{lemma} \begin{proof} Equation \eqref{eq_dicap_reduced} is essentially trivial when $\L = \dop {\mathrm K}_f$: in particular, $\aa \le \lelc$ implies $\L(\lelc)_\aa = \mora_\aa$, hence both sides of \eqref{eq_dicap_reduced} equal $ \dop {\mathrm K}_f(\aa)_\a$, while $ \lelc < \a$ implies that both sides equal $ \dop {\mathrm K}_f(\lelc)_\a$. Equation \eqref{eq_dicap_reduced} is likewise trivial when $\L = \mathrm{K}_f$ and $\c \le \aa$, for in this case both sides equal 0. On the other hand, when $\aa \le \c$ one has \begin{align*} \f(\aa \dople \a)\di \mathrm{K}_f(\c)_\aa = \f(\aa \dople \a)\di \f(\aa \dople \a)\ii \mathfrak(\a \preceq \c)\ii 0 = \mathrm{K}_f(\c)_\a \wedge \dop {\mathrm K}_f(\aa)_\a. \end{align*} This establishes \eqref{eq_dicap_reduced} in full generality. The proof of \eqref{eq_iicup_reduced} is dual. \end{proof} For ease of reference, let us recall Theorem \ref{thm_cdf_is_proper}: \begin{reptheorem}{thm_cdf_is_proper} Suppose that $\mora$ admits a saecular BDH. \begin{enumerate*} \item $\mathsf{SCD}(\mora)$ is natural, if it exists. \item $\mathsf{SBD}(\mora)$ is natural if $\mathsf{SCD}(\mora)$ exists. \item $\mathsf{SBD}(\mora)$ is natural iff it is natural at each $\seta \in \Im(\dop \cpph) \cup \Im(\cpph)$. \end{enumerate*} \end{reptheorem} \begin{proof}[Proof of Theorem \ref{thm_cdf_is_proper}] Suppose that $\mathsf{SCD}(\mora)$ exists. 
We can regard the formulae on the righthand sides of \eqref{eq_dicap} and \eqref{eq_iicup} as functions of $\seta \in \axlh^2(\dop \tob \sqcup \tob)$; in fact, by complete distributivity, these formulae determine $\forall$-complete lattice homomorphisms from $\axlh^2(\dop \tob \sqcup \tob)$ to $[0, \dop {\mathrm K}_f(\aa)_\a] \subseteq \Sub(\mora_\a)$ and $[\mathrm{K}_f(\a)_\aa, 1] \subseteq \Sub(\mora_\aa)$, respectively. Since $\axlh^2(\dop \tob \sqcup \tob)$ is the free CD lattice on $\dop \tob \sqcup \tob$, it therefore suffices to verify \eqref{eq_dicap} and \eqref{eq_iicup} in the special case where $\seta \in \Im(\nu)$. Lemma \ref{lem_pushpull_basic} supplies this fact. This establishes assertion 1. If $\mathsf{SCD}(\mora)$ exists, then it restricts to $\mathsf{SBD}(\mora) = \mathsf{SCD}(\mora)|_{\dxlh \axlh (\dop \tob \sqcup \tob)}$, so $\mathsf{SBD}(\mora)$ is s-natural as a special case. This establishes assertion 2. If \eqref{eq_dicap} and \eqref{eq_iicup} hold for each $\seta \in \Im(\dop \cpph) \cup \Im(\cpph)$ then they hold for every $\seta \in \dxlh \axlh(\dop \tob \sqcup \tob)$, since every element of $\dxlh \axlh(\dop \tob \sqcup \tob)$ can be obtained from $\Im(\dop \cpph) \cup \Im(\cpph)$ via a finite number of meet and join operations, and both $\f(\aa \dople \a)\di$ and $\f(\aa \dople \a)\ii$ preserve binary meets and joins, by Lemma \ref{lem_direct_image_complete_on_CD_complete_sublattice}. This establishes assertion 3, and completes the proof. \end{proof} \begin{corollary} \label{cor_kerofsub} Let $\setb \le \seta \in \axlh^2(\dop \tob \sqcup \tob)$ be given. If $\mathsf{SCD}(\mora)$ exists then $\mathsf{SCD}\left (\xfrac{\seta}{\setb} \right)$ exists, and\footnote{The numerator of \eqref{eq_inducedkfiltrations} requires no parentheses, by the modular law. } \begin{align} \mathrm{K}_{ |\seta| /| \setb| }(\setc ) = \frac {|\seta \wedge (\cpph \setc) \vee \setb| } {|\setb|} && \dop {\mathrm{K}}_{ |\seta| /| \setb| }( \dop \setc ) = \frac {|\seta \wedge (\dop \cpph \dop \setc) \vee \setb| } {|\setb|} \label{eq_inducedkfiltrations} \end{align} for each $\dop \setc \in \axlh^2(\dop \tob)$ and $\setc \in \axlh^2(\tob)$. More generally, \begin{align} \sfun X = \xfrac{\seta \wedge X \vee \setb}{\setb} \label{eq_subquo_mapping_formula} \end{align} for $X \in \axlh^2(\dop \tob \sqcup \tob)$, where $\sfun: = \mathsf{SCD} (\xfrac{\seta}{\setb})$ \end{corollary} \begin{proof} Theorem \ref{thm_cdf_is_proper} implies the last equality in each of the following two lines, for any $\aa \le \a \in \tob$. \begin{align} \mathrm{K}_{|\seta| /| \setb| }(\a)_\aa = \frac{|\seta|}{|\setb|} (\aa \le \a) \ii 0 & = \frac{|\seta|(\aa \le \a)\ii |\setb|_\a}{|\setb|_\aa} = \frac{|\seta|_\aa \wedge \f(\aa \dople \a)\ii |\setb|_\a}{|\setb|_\aa} = \frac{|\seta|_\aa \wedge \mathrm{K}_f(\a)_\aa \vee |\setb|_\aa }{|\setb|_\aa} \label{eq_subquo_formula_k} \\ \dop {\mathrm{K}}_{|\seta| /| \setb| }(\aa)_\a = \frac{|\seta|}{|\setb|} (\aa \le \a) _{\sbt} 1 & = \frac{|\seta|(\aa \le \a)_{\sbt} 1 \vee |\setb|_\a}{|\setb|_\a} = \frac{|\seta|_\a \wedge \dop {\mathrm K}_f(\aa)_\a \vee |\setb|_\a}{|\setb|_\a}. \label{eq_subquo_formula_kk} \end{align} Therefore \eqref{eq_inducedkfiltrations} holds when $\setc = \fem(\a)$ and $\dop \setc = \dop \fem(\aa)$. 
If we regard the righthand side of \eqref{eq_subquo_mapping_formula} as a function $h(X)$, then this function can be regarded as a composite $h = q_\bullet \circ p \circ \mathsf{SCD}( \mora)$, where $p(\soa) = \soa \wedge |\seta|$ and $q$ is the quotient map $|\seta| \to |\seta| /|\setb|$. Function $p$ restricts to a $\mathring \forall$-complete lattice endomorphism on $\Im(\mathsf{SCD}(\mora))$ by complete distributivity, and $q _{\sbt} $ restricts to a $\forall$-complete lattice homomorphism on $\Im(\mathsf{SCD}(\mora))$ by Lemma \ref{lem_direct_image_complete_on_CD_complete_sublattice}, Thus $h$ is a $\forall$-complete lattice homomorphism. Moreover, $h \circ \nu = \dop {\mathrm{K}}_{|\seta| /| \setb| } \sqcup \mathrm{K}_{|\seta| /| \setb| }$, by \eqref{eq_subquo_formula_k} and \eqref{eq_subquo_formula_kk}. Thus $h = \mathsf{SCD}\left (\xfrac{\seta}{\setb} \right)(X)$, as desired. \end{proof} \begin{corollary} \label{cor_kerofsub_bd} Suppose that $\sfun = \mathsf{SBD}(\dga)$ exists, and that $\sfun$ is natural at $\seta$ and $\setb$, where $\setb
and \begin{equation*} \partial^m A \subseteq \partial^\ast A. \end{equation*} Employing an argument similar to \cite[Lemma 5.1]{MR3978264}, one can still prove that if $ A \subseteq \mathbf{R}^{n+1}$ is an arbitrary set, then $ \partial^m A $ is a Borel subset of $ \mathbf{R}^{n+1}$ and $ \bm{n}(A, \cdot) $ is a Borel function. We recall that an $ \mathcal{L}^{n+1}$ measurable subset $ A $ of $ \mathbf{R}^{n+1}$ is a set of locally finite perimeter in $ \mathbf{R}^{n+1}$ if the characteristic function $ \bm{1}_A$ is a function of locally bounded first variation (see \cite[Chapter 3]{MR1857292}). If $ A \subseteq \mathbf{R}^{n+1}$ is a set of finite perimeter, we denote with $ \mathcal{F}A $ the {\em reduced boundary} of $ A $ (see \cite[3.54]{MR1857292}). An important result of De Giorgi, see \cite[Theorem 3.59]{MR1857292}, implies that \begin{equation*} \mathcal{F}A \subseteq \partial^m A. \end{equation*} A result of Federer (see \cite[Theorem 3.61]{MR1857292}) yields that if $ A $ is a set of locally finite perimeter, then \begin{equation}\label{eq: reduced an essential boundary} \mathcal{H}^n(\partial^\ast A \setminus \mathcal{F} A) =0. \end{equation} Another result of Federer (see \cite[Theorem 4.5.11]{MR0257325}) implies that if $ A \subseteq \mathbf{R}^{n+1}$ and $ \mathcal{H}^n(K \cap \partial A) < \infty$ for every compact set $ K \subset \mathbf{R}^{n+1}$, then $ A $ is a set of locally finite perimeter. \begin{Definition} Let $A\subset\mathbf{R}^{n+1}$ be a Borel set with locally finite perimeter, and let $\phi $ be a uniformly convex $ \mathcal{C}^2 $-norm on $ \mathbf{R}^{n+1} $. The \emph{$ \phi$-perimeter} of $A$ is the Radon measure $\mathcal{P}^\phi(A, \cdot)$ supported in $ \partial A $ such that \begin{equation*} \mathcal{P}^\phi(A, S) = \int_{S \cap \partial^mA}\phi(\bm{n}(A,x))\, d\mathcal{H}^n(x) \qquad \textrm{for Borel sets $ S \subseteq \mathbf{R}^{n+1} $.} \end{equation*} The total measure is denoted by $ \mathcal{P}^\phi(A, \mathbf{R}^{n+1}) = \mathcal{P}^\phi(A, \partial^m A) = \mathcal{P}^\phi(A) \in [0,\infty]$. \end{Definition} \medskip Clearly, we have $\mathcal{P}^\phi(A)>0$ if and only if $\mathcal{H}^n(\partial^m A)>0$. \medskip The following lemma will be needed in Section \ref{Section: positive reach}. We refer to this section and the references provided there, for the definition and the basic facts concerning sets of positive reach. \begin{Lemma}\label{lem:finite perimeter and positive reach} \begin{enumerate} \item[{\rm (a)}] \label{lem:finite perimeter and positive reach:1} If $A\subset\mathbf{R}^{n+1}$ is a Borel set of locally finite perimeter such that $ 0< \mathcal{L}^{n+1}(A) < \infty$, then $\mathcal{H}^n(\mathcal{F} A)>0$. \item[{\rm (b)}] \label{lem:finite perimeter and positive reach:2} If $A\subseteq \mathbf{R}^{n+1}$ is a set of positive reach, then $ \mathcal{H}^n(K \cap \partial A) < \infty$ for every compact set $ K \subseteq \mathbf{R}^{n+1}$, and consequently $ A $ is a set of locally finite perimeter. \end{enumerate} \end{Lemma} \begin{proof} Noting \cite[Theorem 3.59]{MR1857292} and \cite[Theorem 3.36]{MR1857292}, the statement in (a) directly follows from the isoperimetric inequality in \cite[Theorem 3.46]{MR1857292}. We now prove (b). We fix $ 0 < r < \reach(A)$ and note that $ S(A,r)$ is a closed $ \mathcal{C}^{1}$-hypersurface and $ \xi : =\bm{\xi}_A | S(A,r)$ is a Lipschitz map with $ \xi(S(A,r)) = \partial A$. 
Since $ \partial A \cap{B}(0,s) \subseteq \xi (\xi^{-1}(\partial A \cap {B}(0,s)))$ and $\xi^{-1}(\partial A \cap {B}(0,s)) \subseteq S(A,r) \cap {B}(0, r+s) $ for $ s > 0 $, we infer that \begin{equation*} \mathcal{H}^n({B}(0,s) \cap \partial A) \leq \Lip(\xi)^n \mathcal{H}^n(S(A,r) \cap {B}(0, s+r)) \quad \textrm{for $ s > 0$}. \end{equation*} The right-hand side is evidently finite, since $ S(A,r) \cap {B}(0, s+r)$ is a compact subset of the closed $ \mathcal{C}^1$-hypersurface $ S(A,r)$. \end{proof} It will be sometimes useful to consider another notion of boundary: if $ A \subseteq \mathbf{R}^{n+1}$ is a closed set, then we define the \emph{viscosity boundary} of $ A $ by \begin{equation*} \partial^{v}A = \{a\in\partial A : \mathcal{H}^0(N(A,a)=1\}. \end{equation*} This is precisely the set of boundary points $a\in \partial A$ for which there is a unique (outer unit normal) \textbf{v}ector $u\in\mathbf{S}^n$ with $(a,u)\in N(A) $. For $s > 0$ we also define $ \partial^v_sA$ to be the set of points $ a \in \partial^v A$ such that there exists a closed Euclidean ball $ B $ of radius $ s $ such that $ B \subseteq A$ and $ a \in \partial B$. We set \begin{equation*} \partial^v_+A = \bigcup_{s >0} \partial^v_sA. \end{equation*} \begin{Remark}\label{rmk: viscosity boundary} We notice that $\partial^v_+ A \subseteq \partial^m A \cap \bm{p}(N(A)) \subseteq \partial^v A$ and \begin{equation*} N(A,a) = \{ \bm{n}(A,a) \} \qquad \textrm{for $ a \in \partial^m A \cap \bm{p}(N(A))$.} \end{equation*} Moreover, $ N(\mathbf{R}^{n+1} \setminus \Int(A), a) = \{-\bm{n}(A,a)\}$ for every $ a \in \partial^v_+A$. Finally, if $ s > 0 $ then $ \partial^v_s A $ is a closed subset of $ \partial A $ and $ B(a - s \bm{n}(A,a), s) \subseteq A $ for every $ a \in \partial^v_s A$. \end{Remark} \medskip The following lemma (or rather the consequence of it discussed in Remark \ref{rem:lipnorm}) will be relevant in the special case of sets with positive reach in Section \ref{sec:5.3} (see Remark \ref{rem:5.13}). \begin{Lemma}\label{lem:lipbound} Let $A \subseteq \mathbf{R}^{n+1}$ be a closed set and $x,y\in\partial A$. Let $0<r<s/2$. Suppose that $u,v \in \mathbf{S}^n$ are such that \begin{equation*} {B}(x - ru, r) \subseteq A,\quad {U}(x+su,s)\cap A=\emptyset, \quad {B}(y - rv, r) \subseteq A,\quad{U}(y+sv,s)\cap A=\emptyset. \end{equation*} Then $$ |u-v| \leq \max \left\{ \frac{2(s - 2r)}{r(s-r)}, \sqrt{\frac{2}{r(s-r)}}\right\} |x-y|. $$ \end{Lemma} \begin{proof} Since $y\in A$, we have $y\notin{U}(x+su,s)$, hence $|y-x-su|^2\ge s^2$, which yields $$|y-x|^2+s^2-2s(y-x)\bullet u\ge s^2$$ or \begin{equation}\label{ref1a1} (y-x)\bullet u\le \frac{|y-x|^2}{2s}. \end{equation} By symmetry, we also have \begin{equation}\label{ref1a2} (x-y)\bullet v\le \frac{|x-y|^2}{2s}. \end{equation} Noting that $x - ru + rv \in {B}(x-ru, r) \subseteq A$, we conclude from \eqref{ref1a2} that \begin{flalign*} (x-ru+rv-y) \bullet v & \leq \frac{|x-y + r(v-u)|^2}{2s} \\ & \leq \frac{1}{2s}|x-y|^2 + \frac{r^2}{2s}|u-v|^2 + \frac{r}{s}(x-y) \bullet (v-u). \end{flalign*} Exchanging $ x$ and $ y $ (and using \eqref{ref1a1}), we also get \begin{equation*} (y-rv+ru-x) \bullet u \leq \frac{1}{2s}|x-y|^2 + \frac{r^2}{2s}|u-v|^2 + \frac{r}{s}(y-x) \bullet (u-v). 
\end{equation*} Now we sum the last two inequalities to obtain \begin{equation*} (x-y) \bullet (v-u) + r|v-u|^2 \leq \frac{1}{s}|x-y|^2 + \frac{r^2}{s}|u-v|^2 + \frac{2r}{s}(x-y) \bullet (v-u), \end{equation*} and we infer \begin{equation}\label{ref:rem} r\Big( 1 - \frac{r}{s}\Big)|u-v|^2 \leq \frac{1}{s}|x-y|^2 + \Big(1 - \frac{2r}{s}\Big) |x-y||u-v|. \end{equation} If $\frac{1}{s}|x-y|^2 \leq \big(1 - \frac{2r}{s}\big) |x-y||u-v|$, then \begin{equation*} |u-v|^2 \leq \frac{2(s-2r)}{r(s-r)}|x-y||u-v|. \end{equation*} If $\frac{1}{s}|x-y|^2 \geq \big(1 - \frac{2r}{s}\big) |x-y||u-v|$, then \begin{equation*} |u-v|^2 \leq \frac{2}{r(s-r)}|x-y|^2, \end{equation*} which yields the asserted upper bound. \end{proof} \begin{Remark}\label{rem:convexcase} For convex bodies, Lemma \ref{lem:lipbound} is provided in \cite[Lemma 1.28]{Hug99} (see also \cite[Lemma 2.1]{MR1416712} for a less explicit statement and the literature cited there). In this special case, it can be seen from \eqref{ref:rem} that the Lipschitz constant is bounded from above by $1/r$ (with $s=\infty$). \end{Remark} \begin{Remark}\label{rem:lipnorm} For a closed set $A\subset\mathbf{R}^{n+1}$ and $r,s>0$, let $X_{r,s}(A)$ denote the set of all $a\in\partial A$ such that ${B}(a - ru, r) \subseteq A$ and ${U}(a+su,s)\cap A=\emptyset$ for some $u\in\mathbf{S}^n$. Then $X_{r,s}(A)\subset\partial^m A$, for any $a\in X_{r,s}(A)$ the unit vector $u$ is equal to $\bm{n}(A,a)$ (and uniquely determined) and $\{\bm{n}(A,a)\}=N(A,a)\cap \mathbf{S}^n$. If $0<r\le s/4$, then Lemma \ref{lem:lipbound} yields that $\bm{n}(A,\cdot)|X_{r,s}(A)$ is Lipschitz continuous with Lipschitz constant bounded from above by $3/r$, since $$ \frac{2(s-2r)}{r(s-r)}\le \frac{2s}{r(s-s/4)}=\frac{2s}{r\frac{3}{4}s}=\frac{8}{3}\frac{1}{r}<\frac{3}{r} $$ and $$ \sqrt{\frac{2}{r(s-r)}}\le \sqrt{\frac{2}{r\frac{3}{4}s}}=\sqrt{\frac{8}{3}\frac{1}{r2r}}= \frac{2}{\sqrt{3}}\frac{1}{r}<\frac{2}{r}. $$ \end{Remark} \section{A Steiner-type formula for arbitrary closed sets}\label{sec: steiner formula} Throughout this section, we assume that $ \phi $ is a uniformly convex $ \mathcal{C}^2 $-norm. Recalling Theorem \ref{theo: distance twice diff} we start by introducing the following definition. \subsection{Normal bundle and curvatures} We start introducing the principal curvature of the level sets $ S^\phi(A,r) $ of the distance function $ \bm{\delta}^\phi_A$ taking the eigenvalues of the normal vector field $ \bm{\nu}^\phi_A $ defined in equation \eqref{eq: nu function}. \begin{Definition} Suppose $\varnothing\neq A \subseteq \mathbf{R}^{n+1} $ is closed, $ x \in \dmn (\Der \bm{\nu}^\phi_A) $ and $ r = \bm{\delta}^\phi_A(x) $. Then the eigenvalues (counted with their algebraic multiplicities) of $ \Der \bm{\nu}^\phi_A(x)| \Tan(S^\phi(A,r), x) $ are denoted by \begin{equation*} \rchi^\phi_{A,1}(x) \leq \ldots \leq \rchi_{A,n}^\phi(x). \end{equation*} \end{Definition} \begin{Lemma}\label{rem: borel meas of chi} The set $ \dmn (\Der \bm{\nu}^\phi_A)\subseteq \Unp^\phi(A) $ is a Borel subset of $ \mathbf{R}^{n+1} $ and the functions $ \rchi^\phi_{A,i} : \dmn (\Der \bm{\nu}^\phi_A) \rightarrow \mathbf{R} $ are Borel functions for $ i\in\{1,\ldots,n\}$. \end{Lemma} \begin{proof} Let $ \mathcal{X} $ be the set of all $ \varphi \in \Hom(\mathbf{R}^{n+1}, \mathbf{R}^{n+1}) $ with real eigenvalues. 
For each $ \varphi \in \mathcal{X} $ we define $ \lambda_0(\varphi) \leq \ldots \leq \lambda_n(\varphi) $ to be the eigenvalues of $ \varphi $ counted with their algebraic multiplicity, and then we define the map $ \lambda : \mathcal{X} \rightarrow \mathbf{R}^{n+1} $ by \begin{equation*} \lambda(\varphi) = (\lambda_0(\varphi), \ldots , \lambda_n(\varphi)) \qquad \textrm{for $ \varphi \in\mathcal{X} $.} \end{equation*} We observe that $\mathcal{X}$ is a Borel set and $ \lambda $ is a continuous map by \cite[Theorem
routing, which need to be powered on despite not computing anything for the application. $\gamma$ and $\alpha$ are hyperparameters. Some CGRAs have tile-level power gating that enables the ability to turn off tiles that are used for neither the application's computation nor as pass-through tiles. A higher value of $\gamma$ penalizes pass-through tiles more, which encourages the placement algorithm to use already-used tiles for routing, rather than powering on otherwise unused tiles. A higher value for $\alpha$ will penalize longer potential routes, thereby encouraging shorter critical paths after routing. We find that sweeping $\alpha$ from 1 to 20 and choosing the best result post-routing results in short application critical paths. \vspace{-0.2em} \begin{equation} \label{eqn:detailedplacement-function} \text{Cost}_{net} = (\text{HPWL}_{net} - \gamma \times (\text{Area}_{net} \cap \text{Area}_{existing}))^\alpha \end{equation} After global and detailed placement, we route using an iteration-based routing algorithm~\cite{swartz1998fast}. During each iteration, we compute the slack on a net and determine how critical it is given global timing information. Then we route using the A* algorithm on the weighted graph. The weights for each edge are based on historical usage, net slack, and current congestion. This allows us to balance both routing congestion and timing criticality. Similarly to detailed placement, we also adjust the wire cost functions to discourage the use of unused tiles in favor of tracks within already-used tiles. We finish routing when a legal routing result is produced. \section{Evaluation} \label{sec:results} \vspace{-0.4cm} We evaluate the Canal system by first exploring the optimizations of interconnect FIFOs described in Section~\ref{sec:compile-hardware} and then by using the Canal system to conduct design space exploration of a CGRA interconnect. \subsection{Interconnect FIFO Optimizations} \label{sec:results-fifo} \vspace{-0.4cm} We evaluate the effect of introducing FIFOs in the interconnect on switch box area. As described in Section~\ref{sec:compile-hardware}, we need to include FIFOs in the configurable routing when running applications with ready-valid signaling. As a baseline, we compare against a fully static interconnect with five 16-bit routing tracks containing PEs with two outputs and four inputs, synthesized in Global Foundries 12 nm technology. As shown in Fig.~\ref{fig:sb-fifo-area}, adding these depth two FIFOs to the baseline design introduces a 54\% area overhead. Splitting the FIFO between multiple switch boxes results in only a 32\% area overhead over the baseline. This optimization allows for much more efficient implementation of an interconnect that supports ready-valid signaling. \subsection{Interconnect Design Space Exploration} \vspace{-0.4cm} We use Canal to explore three important design space axes of a configurable interconnect: switch box topology, number of routing tracks, and number of switch box and connection box port connections. We find that Canal's automation greatly simplifies the procedure to explore each option in the following subsections. \begin{figure} \centering \includegraphics[width=0.23\textwidth]{figs/sb_tracks_area.pdf} \includegraphics[width=0.23\textwidth]{figs/cb_tracks_area.pdf} \vspace{-1em} \caption{Left: Area of a switch box as the number of tracks increases. 
Right: Area of a connection box as the number of tracks increases.} \label{fig:sb-topo-tracks-area} \vspace{-1em} \end{figure} \subsubsection{Exploring Switch Box Topologies and Number of Routing Tracks} \vspace{-0.4cm} The switch box topology defines how each track on each side of the switch box connects to the tracks on the remaining sides of the switch box. The choice of topology affects how easily nets can be routed on the interconnect. High routability generally corresponds to shorter routes and shorter critical paths in applications. This allows the CGRA to be run at higher frequencies, which decreases application run time. For these experiments we investigate two different switch box topologies illustrated in Fig.~\ref{fig:sb-topos}: Wilton~\cite{wilton} and Disjoint~\cite{disjoint}. These switch box topologies have the same area, as they both connect each input to each of the other sides once. \begin{figure} \centering \vspace{-1em} \includegraphics[width=0.45\textwidth]{figs/tracks_runtime.pdf} \vspace{-1em} \caption{Application run time comparison on CGRAs with switch boxes that have different number of tracks.} \label{fig:sb-topo-tracks-runtime} \vspace{-1em} \end{figure} We found that the Wilton topology performs much better than the Disjoint topology, which failed to route in all of our test cases. The Disjoint topology is worse for routability because every incoming connection on track $i$ has a connection only to track $i$ on the three other sides of the SB. This imposes a restriction that if you want to route a wire from any point on the array to any other point on the array starting from a certain track number, you must only use that track number. In comparison, the Wilton topology does not have this restriction resulting in many more choices for the routing algorithm, and therefore much higher routability~\cite{imran}. We also vary the number of routing tracks in the interconnect. This directly affects the size of both the connection box and switch box and the amount of routing congestion. For these experiments we measure the area of the connection box and switch box as well as the run time of applications running on the CGRA. In this experiment we use an interconnect with five 16-bit tracks and PE tiles that have 4 inputs and 2 outputs. As shown in Fig.~\ref{fig:sb-topo-tracks-area}, the area of both the switch box and connection box scale with the number of tracks. From Fig.~\ref{fig:sb-topo-tracks-runtime}, we can see that the run time of the applications generally decreases as the number of tracks increases, although the benefits are less than 25\%. \subsubsection{Exploring Switch Box and Connection Box Port Connections} \vspace{-0.4cm} Finally, we explore how varying the number of switch box and connection box port connections affects the area of the interconnect and the run time of applications executing on the CGRA. In Canal, we have the ability to specify how many of the incoming tracks from each side of the tile are connected to the inputs/outputs of the PE/MEM cores. Decreasing these connections should reduce the area of the interconnect, but may decrease the number of options that the routing algorithm has. For these experiments, we vary the number of connections from the incoming routing tracks through the connection box to the inputs of the PE/MEM core, and vary the number of connections from the outputs of the core to the outgoing ports of the switch box. 
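To make the detailed-placement cost of Eq.~\eqref{eqn:detailedplacement-function} concrete, here is a minimal sketch (not Canal's actual code; the tile coordinates and the overlap measure standing in for $\text{Area}_{net} \cap \text{Area}_{existing}$ are illustrative assumptions):

```python
# Sketch of Cost_net = (HPWL_net - gamma * overlap)^alpha from the detailed placer.
def hpwl(pins):
    """Half-perimeter wirelength of a net's pin locations (tile coordinates)."""
    xs, ys = zip(*pins)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def overlap(pins, used_tiles):
    """Assumed stand-in for Area_net ∩ Area_existing: count already-used tiles
    inside the net's bounding box."""
    xs, ys = zip(*pins)
    box = {(x, y) for x in range(min(xs), max(xs) + 1)
                  for y in range(min(ys), max(ys) + 1)}
    return len(box & used_tiles)

def net_cost(pins, used_tiles, gamma=1.0, alpha=2):
    return (hpwl(pins) - gamma * overlap(pins, used_tiles)) ** alpha

pins = [(0, 0), (3, 1), (1, 4)]      # illustrative pin locations of one net
used = {(1, 1), (2, 2), (5, 5)}      # tiles already used by the application
for alpha in (1, 5, 10, 20):         # the text sweeps alpha from 1 to 20
    print(alpha, net_cost(pins, used, gamma=0.5, alpha=alpha))
```

A higher gamma rewards placements whose routes can reuse tiles that are already powered on, while a higher alpha penalizes long bounding boxes more aggressively, which is the trade-off the sweep explores.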
At maximum, we can have 4 SB sides, with connections from the core output to the four sides of the switch box. We then decrease this by removing the connections facing east for a total of three sides with connections, and finally we also remove the connections facing south for a total of two sides with connections. This is shown in Fig.~\ref{fig:sb-conns}. We do the same for the connection box. \begin{figure} \centering \vspace{-0.5em} \includegraphics[width=0.35\textwidth]{figs/sb_conns.pdf} \vspace{-0.5em} \caption{Reducing the number of connections from the outputs of the PE to the outgoing ports of the switch box.} \label{fig:sb-conns} \vspace{-1em} \end{figure} \begin{figure} \centering \vspace{-0.2em} \captionsetup{justification=centering} \begin{subfigure}[b]{.22\textwidth} \includegraphics[width=1\textwidth]{figs/port_conns_sb_area.pdf} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{.22\textwidth} \includegraphics[width=0.97\textwidth]{figs/port_conns_cb_area.pdf} \end{subfigure} \vspace{-1em} \captionsetup{justification=justified} \caption{Area comparison of a switch box and a connection box that have varying number of connections with the four sides of the tile.} \label{fig:port-conn-area} \vspace{-1em} \end{figure} As shown in Fig.~\ref{fig:port-conn-area}, as the number of connections from the core to the switch box decreases, we see a decrease in switch box area. From Fig.~\ref{fig:port-conn-sb-runtime}, we can see that this generally has a small negative effect on the run time of the applications. In this case, a designer could choose to trade some performance for a decrease in switch box area. As shown in Fig.~\ref{fig:port-conn-area}, as the number of connections from the connection box to the tile inputs decreases, we see a larger decrease in connection box area. From Fig.~\ref{fig:port-conn-cb-runtime}, we can see that this has a larger negative effect on the run time of the applications. Again, a designer could choose to trade some performance for a decrease in connection box area. \section{Conclusion} \vspace{-0.4cm} We have developed Canal, a domain-specific language and interconnect generator for coarse-grained reconfigurable arrays. The Canal language allows a designer to easily specify a complex configurable interconnect, while maintaining control over the low-level connections. The hardware generator, placer and router, and bitstream generator help to facilitate design space exploration of CGRA interconnects. We demonstrate the flexibility of Canal by creating a hybrid ready-valid interconnect and demonstrate the
\section{Introduction} Traditionally, atom tracking is used in chemistry to understand the underlying reactions and interactions behind some chemical or biological system. In practice, atoms are usually tracked using isotopic labeling experiments. In a typical isotopic labeling experiment, one or several atoms of some educt molecule of the chemical system we wish to examine are replaced by an isotopic equivalent (e.g. $^{12}$C is replaced with $^{13}$C). These compounds are then introduced to the system of interest, and the resulting product compounds are examined, e.g. by mass spectrometry \parencite{isotope-ms} or nuclear magnetic resonance \parencite{isotope-nmr}. By determining the positions of the isotopes in the product compounds, information about the underlying reactions might then be derived. From a theoretical perspective, characterizing a formal framework to track atoms through reactions is an important step towards understanding the possible behaviors of a chemical or biological system. In this contribution, we introduce such a framework based on concepts rooted in semigroup theory. Semigroup theory can be used as a tool to analyze biological systems such as metabolic and gene regulatory networks \parencite{nehaniv2015symmetry,egri2008hierarchical}. In particular, Krohn-Rhodes theory \parencite{rhodes2009applications} was used to analyze biological systems by decomposing a semigroup into simpler components. The networks are modeled as state automata (or ensembles of automata), and their characteristic semigroup, i.e., the semigroup that characterizes the transition function of the automaton \parencite{algebraic-automata}, is then decomposed using Krohn-Rhodes decompositions or, if not computationally feasible, the holonomy decomposition variant \parencite{egri2015computational}. The result is a set of symmetric natural subsystems and an associated hierarchy between them, which can then be used to reason about the system. In \parencite{IsotopeLabel2019}, algebraic structures were employed for modeling atom tracking: graph transformation rules are iteratively applied to sets of undirected graphs (molecules) in order to generate the hyper-edges (the chemical reactions) of a directed hypergraph (the chemical reaction network) \parencite{andersen2016software,chemicalmotifs}. A semigroup is defined by using the (partial) transformations that naturally arise from modeling chemical reactions as graph transformations. Utilizing this particular semigroup, so-called pathway tables can be constructed, detailing the orbit of single atoms through different pathways to help with the design of isotopic labeling experiments. In this work, we show that we can gain a deeper understanding of the analyzed system by considering how atoms move in relation to each other. To this end, we briefly introduce, in Section \ref{sec:preliminaries}, useful terminology from graph transformation theory as well as semigroup theory. In Section \ref{sec:chemical-networks-and-algebraic-structures} we show how the possible trajectories of a subset of atoms can be intuitively represented as the (right) Cayley graph \parencite{denes1966connections} of the associated semigroup of a chemical network. Moreover, we define natural subsystems of a chemical network in terms of reversible atom configurations and show how they naturally relate to the strongly connected components of the corresponding Cayley graph.
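As a toy illustration of these objects (the paper itself works with the partial transformations arising from DPO rules; the maps below are made-up total maps on three atoms), the following sketch builds the right Cayley graph of the generated semigroup and extracts its strongly connected components:

```python
# Toy right Cayley graph: vertices are the semigroup elements generated by a few
# maps on {0,1,2} (encoded as tuples), with an edge s -> s*g for each generator g;
# SCCs are computed by mutual reachability.
def compose(s, g):                     # apply s first, then g
    return tuple(g[i] for i in s)

gens = {'a': (1, 2, 0),                # a cyclic "atom relabelling"
        'b': (0, 0, 2)}                # a non-invertible map

elements = set(gens.values())          # close under right multiplication
frontier = set(gens.values())
while frontier:
    new = {compose(s, g) for s in frontier for g in gens.values()} - elements
    elements |= new
    frontier = new

edges = {(s, compose(s, g)) for s in elements for g in gens.values()}

def reachable(v):
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for (a, b) in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

reach = {v: reachable(v) for v in elements}
sccs = {frozenset(u for u in elements if v in reach[u] and u in reach[v])
        for v in elements}
print(len(elements), "elements,", len(sccs), "strongly connected components")
```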
We show the usefulness of our approach in Section \ref{sec:results} by using the constructions defined in Section \ref{sec:chemical-networks-and-algebraic-structures} to differentiate chemical pathways, based on the atom trajectories derived from each pathway. We then show how the Cayley graph additionally provides a natural handle for the analysis of cyclic chemical systems such as the TCA cycle \parencite{biochem-textbook}. \section{Preliminaries} \label{sec:preliminaries} \smallskip \noindent \textbf{Graphs:} In this contribution we consider directed as well as undirected connected graphs $G=(V,E)$ with vertex set $V(G)\coloneqq V$ and edge set $E(G)\coloneqq E$. A graph is vertex or edge labeled if its vertices or edges, respectively, are equipped with a labeling function. If it is both vertex and edge labeled, we simply call the graph labeled. We write $l(x)$ for the vertex labels $(x \in V(G))$ and edge labels $(x \in E(G))$. Given two (un)directed graphs $G$ and $G'$ and a bijection $\varphi: V(G) \rightarrow V(G')$, we say that $\varphi$ is edge-preserving if $(v, u) \in E(G)$ if and only if $(\varphi(v), \varphi(u)) \in E(G')$. Additionally, if $G$ and $G'$ are labeled, $\varphi$ is label-preserving if $l(v) = l(\varphi(v))$ for any $v \in V(G)$ and $l(v, u) = l(\varphi(v), \varphi(u))$ for any $(v, u) \in E(G)$. The bijection $\varphi$ is an isomorphism if it is edge-preserving and, in the case that $G$ and $G'$ are labeled, label-preserving. If $G = G'$, then $\varphi$ is also an automorphism. Given a (directed) graph $G$ we call $G$ (strongly) connected if there exists a path from any vertex $u$ to any vertex $v$. We call the subgraph $H$ of $G$ a (strongly) connected component if $H$ is a maximal (strongly) connected subgraph. Since the motivation of this work is rooted in chemistry, sometimes it is more natural to talk about the undirected labeled graphs as molecules, their vertices as atoms (with labels defining the atom type), and their edges as bonds (whose labels distinguish single, double, triple, and aromatic bonds, for instance), while still using common graph terminology for mathematical precision. \noindent \textbf{Graph Transformations:} As molecules are modeled as undirected labeled graphs, it is natural to think of chemical reactions as graph transformations, where a set of educt graphs is transformed into a set of product graphs. We model such transformations using the double pushout (DPO) approach. For a detailed overview of the DPO approach and its variations see \parencite{habel2001double}. Here, we will use DPO as defined in \parencite{andersen2016software}, which specifically describes how to model chemical reactions as rules in a DPO framework. A rule $p$ describing a transformation of a graph pattern $L$ into a graph pattern $R$ is denoted as a span $L \xleftarrow{l}{} K \xrightarrow{r}{} R$, where $K$ is the subgraph of $L$ remaining unchanged during rewriting and $l$ and $r$ are the subgraph morphisms from $K$ to $L$ and $R$, respectively. The rule $p$ can be applied to a graph $G$ if and only if (i) $L$ can be embedded in $G$ (i.e., $L$ is subgraph monomorphic to $G$) and (ii) the graphs $D$ and $H$ exist such that the diagram depicted in Fig.~\ref{fig:derivation} commutes.
\begin{figure}[h] \centering \begin{tikzpicture} \node[] (L) at (0, 0){L}; \node[] (K) at (1.5, 0){K}; \node[] (R) at (3, 0){R}; \node[] (G) at (0, -1.5){G}; \node[] (D) at (1.5, -1.5){D}; \node[] (H) at (3, -1.5){H}; \draw[->] (K) -- node[above] {l} (L); \draw[->] (K) -- node[above] {r} (R); \draw[->] (L) -- node[right] {$m$} (G); \draw[->] (K) -- node[right] {} (D); \draw[->] (R) -- node[right] {} (H); \draw[->] (D) -- node[below] {l'} (G); \draw[->] (D) -- node[below] {r'} (H); \end{tikzpicture} \caption{A direct derivation.} \label{fig:derivation} \end{figure} The graphs $D$ and $H$ are unique if they exist \parencite{habel2001double}. The graph $H$ is the resulting graph obtained by rewriting $G$ with respect to the rule $p$. We call the application of $p$ on $G$ to obtain $H$ via the map $m: L \rightarrow G$ a direct derivation and denote it as $G \xRightarrow{p,m}{} H$, or $G \xRightarrow{p}{} H$ if $m$ is not important. We note that $m$ is not necessarily unique, i.e., there might exist a different map $m'$ such that $G \xRightarrow{p,m'}{} H$. For a DPO rule $p$ to model chemistry, we follow the modeling in \parencite{chemicalmotifs} and impose three additional conditions that $p$ must satisfy. (i) All graph morphisms must be injective (i.e., they describe subgraph isomorphisms). (ii) The restrictions of the graph morphisms $l$ and $r$ to the vertices must be bijective, ensuring atoms are conserved through a reaction. (iii) Changes in charges and edges (chemical bonds) must conserve the total number of electrons. In the above framework, a chemical reaction is a direct derivation $G \xRightarrow{p,m}{} H$, where each connected component of $G$ and $H$ corresponds to the educt and product molecules, respectively. Conditions (i) and (ii) ensure that $l$ and $r$, and by extension $l'$ and $r'$, are bijective mappings when restricted to the vertices. As a consequence, we can track each atom through a chemical reaction modeled as a direct derivation by the map $l'^{-1}\circ r'$. We note that, like $m$, $l'$ and $r'$ might not be unique for a given direct derivation $G \xRightarrow{p}{} H$. We define the set of all such maps $l'^{-1}\circ r'$ for all possible maps $l'$ and $r'$ obtained from $G \xRightarrow{p}{} H$ as $tr(G \xRightarrow{p}{} H)$. An example of a direct derivation representing a chemical reaction is depicted in Fig.~\ref{fig:direct-der}. \begin{figure} \centering \includegraphics[width=1\textwidth]{./figs/2.pdf} \caption{An example of a direct derivation. The mappings $l$, $r$, $l'$ and $r'$ are implicitly given by the depicted positions of the atoms. Given a chemical network, each hyper-edge directly corresponds to such a direct derivation. \label{fig:direct-der} } \end{figure} \noindent \textbf{Chemical Networks:} We consider a directed hypergraph where each edge $e=(e^+,e^-)$ is a pair of subsets of vertices. Moreover, we let $Y_e = e^+\cup e^-$ denote the set of vertices that are contained in the start-vertex $e^+$ and end-vertex $e^-$ of $e$. In short, a chemical network $\mathrm{CN}$ is a hypergraph
\section{Introduction} Modified gravity theories (MGT) have been extensively investigated as possible solutions of some of the unsolved puzzles of the observed Universe, such as the nature of dark energy or dark matter \cite{DeFelice:2010aj, Starobinsky:2007hu, Hu:2007nk, Nicolis:2008in, Burrage:2016yjm, Clifton:2011jh, Frusciante:2018aew, Kase:2018nwt}, and can also provide models of cosmic inflation \cite{Starobinsky:1980te, Tsujikawa:2014mba, Ohashi:2012wf, DeFelice:2011jm, Bamba:2015uma} in very good agreement with observations \cite{Akrami:2018odb}. It is therefore important to set constraints on MGT using different types of observations, and one important test is provided by the stability of cosmic structure \cite{Bhattacharya:2016vur}, in particular the turn around radius, i.e. the maximum size of a spherically symmetric gravitationally bound object in the presence of dark energy. The effects of the modification of gravity were considered previously in the case of the Brans-Dicke theory \cite{Bhattacharya:2016vur,Bhattacharya:2015iha} and some classes of Galileon theories \cite{Bhattacharya:2015chc}, while here we consider a wider class of scalar-tensor theories, including among others $f(R)$, Generalized Brans-Dicke and quintessence theories. For convenience in the comparison with observational data, in this paper we compute a closely related quantity, the \textit{gravitational stability mass} (GSM), i.e. the mass necessary to ensure the stability of a gravitationally bound structure of given radius. The first calculations of the turn around radius were based on the use of static coordinates \cite{Bhattacharya:2015iha, Pavlidou:2013zha}, which can be related to cosmological perturbations with respect to the Friedmann metric via an appropriate background coordinate transformation, making it possible to establish a gauge invariant definition of the turn around radius \cite{Velez:2016shi}. Here we show that the use of cosmological perturbation theory is more convenient and allows us to find general theoretical predictions which can be applied to a wide class of scalar-tensor theories. The theoretical predictions are compared to different sets of observations, finding that MGT are in better agreement with observations for a few data points, but without strong evidence of a tension with GR; the largest deviation is $\approx2.6 \,\sigma$ for the galaxy cluster NGC5353/4. \section{Effective gravitational constant in scalar-tensor theories} We will focus on the class of scalar-tensor theories defined by the action \cite{0705.1032}: \begin{equation}\label{accion} \mathcal{S}=\int d^4x \sqrt{-g}\left[\frac{1}{2}f(R,\phi,X)-2\Lambda+\mathcal{L}_m\right], \end{equation} where $\Lambda$ is the bare cosmological constant, $\phi$ is a scalar field and $X=-\frac{1}{2}\partial_\mu\phi \partial^\mu \phi$ is the scalar field's kinetic term, and we use a system of units in which $c=1$.
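For later orientation, a small numerical sketch of the GR stability mass $m_{GR}(r_{obs})=\Lambda r_{obs}^3/3G$ used below; the value of $\Lambda$ and the example radius and mass are illustrative assumptions, and the factors of $c$ (set to 1 in the text) are restored explicitly:

```python
# GR stability mass m_GR = Lambda * c^2 * r^3 / (3 G) and the implied bound on
# Delta = G / G_eff for an observed (mass, radius) pair.  Illustrative numbers.
G     = 6.674e-11      # m^3 kg^-1 s^-2
c     = 2.998e8        # m s^-1
Lam   = 1.1e-52        # m^-2, assumed value of the cosmological constant
Mpc   = 3.086e22       # m
M_sun = 1.989e30       # kg

def m_gr(r_obs):
    return Lam * c**2 * r_obs**3 / (3.0 * G)

def delta_bound(m_obs, r_obs):
    return m_obs / m_gr(r_obs)

r = 2.0 * Mpc                               # example cluster radius
print(m_gr(r) / M_sun)                      # minimum stable mass at r, in M_sun
print(delta_bound(1.0e14 * M_sun, r))       # example upper bound on Delta
```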
For non-relativistic matter with energy-momentum tensor \begin{equation} \delta T_0^0=\delta\rho_m, \,\, \delta T_i^0=-\rho_m v_{m,i}, \end{equation} where $v_m$ is the matter velocity potential, and using the metric for scalar perturbations in the Newton gauge \begin{equation}\label{Newton} ds^2=-(1+2\Psi)dt^2+a^2(1-2\Phi)\delta_{ij}dx^i dx^j, \end{equation} the Fourier transform of the Einstein equations gives the modified Poisson equation \cite{0705.1032} \begin{equation}\label{Psi} \Psi_k=-4\pi \tilde{G}_{eff}\frac{a^2}{k^2}\rho_m\delta_k, \end{equation} where $k$ is the comoving wave number, the subscript $k$ denotes the corresponding Fourier modes, and $\delta_k$ is the gauge-invariant matter density contrast. The quantity $\tilde{G}_{eff}$, normally interpreted as the effective gravitational ``constant", is given by \cite{0705.1032} \begin{equation}\label{Geff} \tilde{G}_{eff}=\frac{1}{8\pi F}\frac{f_{,X}+4\left(f_{,X}\frac{k^2}{a^2}\frac{F_{,R}}{F}+\frac{F_{,\phi}^2}{F}\right)}{f_{,X}+3\left(f_{,X}\frac{k^2}{a^2}\frac{F_{,R}}{F}+\frac{F_{,\phi}^2}{F}\right)}, \end{equation} where $F=\frac{\partial f}{\partial R}$. \section{Gravitational stability mass} According to \cite{Velez:2016shi} and \cite{Faraoni:2015zqa} the turn around radius can be computed from the gauge-invariant Bardeen potentials by solving the equation \begin{equation} \ddot{a}r-\frac{\Psi^\prime}{a}=0,\label{rtar} \end{equation} where the dot and the prime denote derivatives with respect to time and the radial coordinate, respectively. Note that the above condition is independent of the gravity theory since it is only based on the use of the metric of cosmological perturbations in the Newton gauge, and \textit{it does not} assume any gravitational field equation. We can take advantage of the generality of eq.(\ref{rtar}) and apply it to any gravity theory, in particular to the theories defined in eq.(\ref{accion}). For the theories we will consider and in the sub-horizon limit, we can then take the inverse Fourier transform of eq.(\ref{Psi}) to get a real space modified Poisson equation of the form \begin{equation} \Laplace \Psi =- 4\pi G_{eff} \rho_m \delta \,. \label{PoissR} \end{equation} The gravitational potential outside a spherically symmetric object of mass $m$ is then obtained by integrating the modified Poisson equation (\ref{PoissR}) \begin{equation}\label{PsiSol} \Psi=-\frac{G_{eff} m}{r}, \end{equation} which, substituted in eq.\eqref{rtar}, allows us to derive a general expression for the turn around radius for all the scalar-tensor theories defined in eq.(\ref{accion}) \begin{equation}\label{TAR} r_{TAR}=\sqrt[3]{\frac{3 G_{eff}m}{\Lambda}} \,. \end{equation} It is convenient to define the ratio between the Newton constant $G$ and the effective gravitational constant as $\Delta=G/G_{eff}$ and the gravitational stability mass (GSM) as: \begin{equation} m_{gs}=\frac{\Lambda r_{obs}^3}{3G_{eff}}=m_{GR}\Delta \,, \end{equation} where $m_{GR}(r_{obs})=\Lambda r_{obs}^3/3 G$ is the value of the GSM predicted by GR. Any object of mass $m_{obs}$ should have a radius $r_{obs}<r_{TAR}(m_{obs})$, or, vice versa, any gravitationally bound object of radius $r_{obs}$ should have a mass larger than $m_{gs}$ \begin{equation} m_{obs}(r_{obs})>m_{gs}(r_{obs})=\frac{\Lambda r_{obs}^3}{3 G_{eff}}=m_{GR}(r_{obs})\Delta\, \label{mgsm}.
\end{equation} In fact, objects of size $r_{obs}$ with a mass smaller than $m_{gs}(r_{obs})$ would not be gravitationally stable, since the effective force due to dark energy will dominate the attractive gravitational force. In order to compare theories to experiment, it is important to establish the size of gravitationally bound structures, and for this purpose the caustic method has been developed \cite{Yu:2015dqa}, showing good accuracy when applied to simulated data. In the rest of this paper we will use the results of the application of this method to set constraints on the parameters of the different MGT. Galaxy cluster data \cite{Lee,Rines:2012np} can be used to set upper bounds on the GSM, and consequently to set constraints on $G_{eff}$, since from eq.(\ref{mgsm}) we get \begin{equation} \Delta<\frac{m_{obs}}{m_{GR}}\, . \end{equation} \section{Gravity theory independent constraints} Before considering the constraints on specific gravity theories in the next sections, we can derive some general gravity theory independent constraints for $G_{eff}$. Note that since the GSM only gives a lower bound for the mass of an object of given radius, we cannot fit the data points one by one, since each different gravity theory predicts a range of masses $m>m_{gs}(r_{obs})$, not a single value. Consequently, most of the constraints come from the objects with the lowest masses. Most of the data are consistent with GR, except for a few data points corresponding to the galaxy clusters A655, A1413 and NGC5353/4, which give respectively $\Delta<0.9162 \pm 0.2812$, $\Delta<0.9723 \pm 0.0151$ and $\Delta<0.0969\substack{+0.3215\\ -0.0178}$, as shown in fig.(\ref{fig:my_label}-\ref{fig:my_label2}). The errors have been obtained by Gaussian propagation from the errors on $m_{obs}$ corresponding to $r_{MAX}$ in \cite{Rines:2012np} for A655 and A1413, and from the probability distribution for the size of NGC5353/4 in \cite{Lee} using the normalization relation \begin{equation} \int \rho_r(r) dr=\int \rho_\Delta[\Delta(r)]d\Delta=1 \end{equation} where $\rho_r$ and $\rho_\Delta$ are the probability density functions of the size $r$ and of $\Delta$ respectively. The lower and upper bounds are taken from the symmetric two-tail limits on the distribution, and the central value is taken as the maximum likelihood estimate for $\Delta$. The tightest constraints for GR come from A1413 and NGC5353/4, whose deviations from GR are of order $1.84 \,\sigma$ and $2.61 \,\sigma$ respectively, implying that there is no strong evidence of the need for a modification of GR. \begin{figure}[h] \includegraphics[scale=0.37]{fitMax1} \caption{Observed masses and radii of galaxy clusters are compared to the GR prediction (black line). Vertical green lines represent the errors on the estimation of the masses from \cite{Rines:2012np} and blue lines correspond to other cosmological structures in \cite{Pavlidou:2013zha}. The object with the most significant deviation is NGC5353/4, plotted in red \cite{Lee}, which is shown in more detail in fig.(\ref{fig:my_label2}).} \label{fig:my_label} \end{figure} \begin{figure}[h] \includegraphics[scale=0.3]{fitMax2} \caption{Observed masses and radii of the A655, A1413 and NGC5353/4 galaxy clusters. These are the objects with the most significant deviations from the GR prediction, respectively of order $0.19\sigma$ for A655, $1.84\sigma$ for A1413 (see inset) and $2.61\,\sigma$ for NGC5353/4.
} \label{fig:my_label2} \end{figure} \section{\texorpdfstring{$f(R)$}{Lg} theories} In this case the action is independent of the scalar field, and in the Jordan frame is \begin{equation} \mathcal{S}=\int d^4x \sqrt{-g}\left[\frac{1}{2}f(R)-2\Lambda+\mathcal{L}_m\right]\, , \end{equation} with the effective gravitational constant given by \begin{equation} \tilde{G}_{eff}=\frac{1}{ 8\pi F}\frac{1+4\frac{k^2}{a^2}\frac{F_{,R}}{F}}{1+3\frac{k^2}{a^2}\frac{F_{,R}}{F}}\, . \end{equation} On sub-horizon scales ($\frac{k^2}{a^2}\frac{F_{,R}}{F}\gg 1$) it reduces to \cite{0705.1032} \begin{equation} \tilde{G}_{eff}=G_{eff}=\frac{1}{6\pi F}\, , \end{equation} and the turn around radius is given by \begin{equation} r_{TAR}=\sqrt[3]{\frac{ m}{2\pi \Lambda F}}\, , \end{equation} which corresponds to this expression for the GSM \begin{equation} m_{gs}=2\pi\Lambda F r_{obs}^3\, . \end{equation} \rojo{Observational data imply $F<(0.0486\pm 0.0149)G^{-1}$ for A655, $F<(0.0516\pm 0.0008)G^{-1}$ for A1413, and $F<0.0051\substack{+0.0009\\ -0.0171}$ for NGC5353/4. It can noted that GR is not incompatible with observations, since the tightest constraint on $F$, corresponding to NGC5353/4 is $2.61 \,\sigma$ away from
# Connection coefficients in Earth orbit 1. Sep 17, 2015 ### TerryW I've run into a couple of problems with exercise 12.6 in MTW (attached). At this stage, all I'm asking is for someone to give me answers to the following questions: 1. What are the Rα'β'γ'δ' in the primed co-ordinate system? and 2. What are the non-zero connection coefficients in the primed co-ordinate system? I started with an Earth bound co-ordinate system fixed so that the x-axis runs from the centre of the earth through the prime meridian at the equator and worked to get the co-ordinate transformations between (x' , y') and (x , y) and used -2GM/R^2 for R1010 and GM/R^2 for R2020. The results for R1'0'1'0' and R2'0'2'0' didn't turn out as I think they should. For my connection coefficients I ended up with Γ1'0'0' = +ω^2x' , Γ2'0'0' = -2ω^2y' Γ1'0'2' = -ω Γ2'0'1' = +ω Which didn't give a good result for the geodesic equation for y'. I used the connection coefficients to calculate R1'0'1'0' and R2'0'2'0' and ended up with 2ω^2 and -ω^2 respectively which also doesn't look quite right. Can anyone put me on the right track here? Regards TerryW #### Attached Files: • Geodesic Deviation above the Earth.pdf 2. Sep 17, 2015 ### Staff: Mentor Moved to homework since this is a textbook problem. 3. Sep 17, 2015 ### Staff: Mentor What transformations did you come up with? Where did you get these from? They don't look right. Remember that the Riemann tensor describes tidal gravity, not "acceleration due to gravity". (Also, I'm not sure the problem is asking you to take the Earth's gravity into account. See below.) What did you start with? If you started with the connection coefficients in the unprimed frame being all zero, are you aware that this is only true if we ignore the Earth's gravity? That may indeed be what the problem is asking you to do (I'm not sure), but if so, the Riemann tensor components will also all be zero in the unprimed frame; all of the actual interesting stuff comes from the coordinate transformations. 4. Sep 18, 2015 ### TerryW I've written out my transformations and attached them, along with a page from MTW containing the Riemann tensor I used (with the Z and X axes interchanged). I must say I have tried hard to find examples of Riemann tensor components and Connection Coefficients for this situation on the Internet with no success at all. Regards Terry W #### Attached Files: • My transformation and Riemann components.pdf 5. Sep 18, 2015 ### Staff: Mentor Ok, you had a typo in the OP, you had $R^2$ where it should be $R^3$ in the denominators of the Riemann tensor coefficients, that was what confused me about those. With the typo corrected, those are the right Riemann tensor components in the vicinity of the Earth, if we are including Earth's gravity. But if we are including Earth's gravity, then things get a lot more complicated because space is no longer Euclidean. Your coordinate transformation appears to assume that space is Euclidean, which would mean we are ignoring Earth's gravity and working in an SR inertial frame centered on an idealized "Earth" that has no mass (so "Earth" is really just an identifier for a region of space). If that is the case, then the Riemann tensor components in the unprimed frame are all zero. If it's not, then your coordinate transformation needs to start from Schwarzschild coordinates, not ordinary Cartesian coordinates, in the unprimed frame.
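For reference, a short symbolic check of the easier route described here (flat space, no Earth mass; a sketch with SymPy, not the thread's own workings): build the rotating-frame metric from the transformation and read off the connection coefficients.

```python
import sympy as sp

t, xp, yp, w = sp.symbols('t x_p y_p omega', real=True)

# non-rotating Cartesian coordinates expressed in the rotating (primed) ones
x = xp*sp.cos(w*t) - yp*sp.sin(w*t)
y = xp*sp.sin(w*t) + yp*sp.cos(w*t)

prim   = [t, xp, yp]                 # t' = t
unprim = [t, x, y]
eta    = sp.diag(-1, 1, 1)           # flat 2+1 metric, c = 1

# pull the flat metric back to the rotating coordinates
J = sp.Matrix([[sp.diff(f, q) for q in prim] for f in unprim])
g = sp.simplify(J.T * eta * J)
ginv = g.inv()

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the rotating-frame metric."""
    return sp.simplify(sum(sp.Rational(1, 2) * ginv[a, d] *
                           (sp.diff(g[d, b], prim[c]) + sp.diff(g[d, c], prim[b])
                            - sp.diff(g[b, c], prim[d])) for d in range(3)))

print(Gamma(1, 0, 0))   # -omega**2*x_p   (centrifugal term)
print(Gamma(2, 0, 0))   # -omega**2*y_p
print(Gamma(1, 0, 2))   # -omega          (Coriolis terms)
print(Gamma(2, 0, 1))   #  omega
```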
Even if we assume we are ignoring Earth's gravity (which is the easier way to proceed), your coordinate transformation doesn't look right. First, since both coordinate systems are supposed to be Cartesian (x, y, z), you don't need to go through the intermediate step of translating into radius and angular coordinates; that just introduces confusion. You should be able to write down, at a given instant of time, a direct transformation between the x, y, z of the unprimed (non-rotating) coordinates and the x', y', z' of the primed (rotating) coordinates. Second, note that I said "a given instant of time". The primed coordinates are attached to an object moving in a circle, so the coordinate transformation should have factors in it describing that motion--such as the angular velocity $\omega$. No such factor appears in what you wrote down. 6. Sep 18, 2015 ### TerryW Peter, Thanks for your detailed reply. It prompted me to take a fresh look at the problem and I've spotted something I'd overlooked earlier. Some of the work in this chapter of MTW is looking at Galilean frames which have the particular characteristic that Γj00 = ∂φ/∂xj, all other Γαβγ = 0 and Rj0k0 = Γj00,k. This then leads to Γ100,1 = R1010 and Γ200,2 = R2020. As φ = -GM/R (I need to check out the minus sign, but MTW says that φ = - U (the Newtonian potential) so I think it is OK), then with φ = -GM/(x2+y2)½ then R1010 and R2020 are the same, so the transformation is simple, leading to R1'0'1'0' and R2'0'2'0' being the same as R1010 and R2020 respectively. ( A simple transformation as the problem says!) (I realise now that the Riemann components I'd been using earlier were incorrect because they represent the components in a local Lorentz frame which isn't the same as my frame fixed at the centre of the earth.) I feel confident about my transformations at this stage. As you say, they represent a given instance of time, but once you differentiate wrt to time, then d/dt sinθ = cosθdθ/dt, where dθ/dt = ω. All I need to do now is go through my workings on the results for Γα'β'γ' to see if I can square them up with the values for R1'0'1'0' and R2'0'2'0'. I'm off on holiday tomorrow so I won't be able to do too much over the next ten days, but I'll drop a post to you when I've reworked everything - hopefully clearing up the issues. Regards TerryW 7. Oct 3, 2015 ### TerryW Hi Peter, I've made progress while on holiday and I'm a step nearer a consistent set of answers to this problem. I noticed that I made a transcription error in my 'Transformations' attachment so I've attached a corrected version. I had a further look at the transformations for R1010 and R2020 and found that they were a bit more complicated, but when I worked through them, the results I got for R1'0'1'0' and R2'0'2'0' were -2GM/R3 and GM/R3 respectively, ie they have the same form as the Riemann tensor components derived in Chapter 1 (which I included in my original attachment). I'm pretty sure that these are correct because they represent the accelerations one would observe between two test particles floating inside an orbiting space station, which is in effect, the same situation as this problem - I also found a paragraph in Chapter 1 which is pretty conclusive on the matter. I then tried to derive the connection coefficients Γ1'0'0' and Γ2'0'0' using formulae given in the text and which I have derived myself in an earlier exercise. I was hoping the result would give Γ1'0'0' = R1'0'1'0'x and Γ2'0'0' = R2'0'2'0'y, but my results are wide of the mark. 
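And a matching check of the Galilean-frame relation used above, $R^j{}_{0k0} = \Gamma^j{}_{00,k} = \partial^2\phi/\partial x^j \partial x^k$ (again just a sketch, not TerryW's attached workings):

```python
import sympy as sp

G, M, R = sp.symbols('G M R', positive=True)
x, y, z = sp.symbols('x y z', real=True)

phi = -G*M/sp.sqrt(x**2 + y**2 + z**2)     # Newtonian potential, phi = -GM/r
coords = (x, y, z)

# Galilean-frame approximation: R^j_{0k0} = Gamma^j_{00,k} = phi_{,jk}
tidal = sp.Matrix(3, 3, lambda j, k: sp.diff(phi, coords[j], coords[k]))

# evaluate on the x-axis at distance R from the centre
print(sp.simplify(tidal.subs({x: R, y: 0, z: 0})))
# gives diag(-2*G*M/R**3, G*M/R**3, G*M/R**3), matching the values quoted above
```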
My workings run to several pages but I would really appreciate it if you could take a look at what I have done to see if you can see where it is all going wrong. Would you be willing to do this? Regards TerryW 8. Oct 3, 2015 ### Staff: Mentor I can, but it won't be for several days. I don't currently have access to my copy of MTW, and I would like to review it while I'm looking at what you've done. 9. Oct 3, 2015 ### TerryW Thanks for this. It will take me a couple of days to write up my workings into a presentable form anyway. By the way, although I do have my original copy of MTW (bought
\rho_0 \sin \alpha \, ( \Gamma_{013456} + \Gamma_{13456\underline{10}} ) + 2 \sin^2 \alpha \, \Gamma_{01346\underline{10}} \cr & \hspace{2.5cm} + \sin 2 \alpha \sqrt{1 + \sinh^2 \rho_0 \sec^2 \alpha} \, ( \Gamma_{01356\underline{10}} + \Gamma_{02346\underline{10}} ) \bigg) \end{align} We substitute this $\gamma_{\phi_0\alpha\beta\phi_1\phi_2\phi_3}$ in the $\kappa-$symmetry equation and take each product of six $\Gamma$-matrices to the right through the exponential factors in the killing spinor denoted by $M$ in \eqref{adskss2}. After doing the $\Gamma$-matrix algebra, the l.h.s. in the $\kappa$-symmetry equation becomes \begin{align} \Gamma_{\kappa} \cdot M \cdot \, \epsilon_0 =& \, M \cdot \Gamma_{0235689} \cdot \epsilon_0 \, + \, M \sin \beta \tan \alpha \tanh^2 \rho_0 \, e^{\Gamma_{14} \phi_1} e^{\Gamma_{36} \phi_3} \left( \Gamma_{0125689} + \Gamma_{02456789\underline{10}} \right) \cdot \epsilon_0 \cr & - M \cos \beta \tan \alpha \tanh^2 \rho_0 \, e^{\Gamma_{14} \phi_1} e^{\Gamma_{25} \phi_2} \left( \Gamma_{0135689} + \Gamma_{03456789\underline{10}} \right) \cdot \epsilon_0 \cr & - M \tanh \rho_0 \cosh^{-1} \rho_0 \sqrt{1 + \sinh^2 \rho_0 \sec^2 \alpha} \, e^{- \Gamma_0 \gamma \phi_0} e^{\Gamma_{14} \phi_1} \left( \Gamma_{0123567\underline{10}} + \Gamma_{023456} \right) \cdot \epsilon_0 \end{align} The $\kappa-$symmetry equation is satisfied and half of the eleven-dimensional supersymmetries are preserved by the M5 world-volume, when the following projection is applied on the constant spinor $\epsilon_0$ \begin{align} \left( 1 - \Gamma_{147\underline{10}} \right) \epsilon_0 = 0 \,. \end{align} The remaining two solutions in this kind can be obtained by doing the appropriate $SU(3)$ transformations on the complex coordinate $\Phi_1$. \vspace{.3cm} \subsubsection{Solution: $\zeta_1 \Phi_2 = l \sinh \rho_0 \, e^{i \, \xi^{(0)}}$} \label{halfBPSM5no2} This is an another example of non-compact worldvolume solution which end in the AdS boundary into a 4-dimensional submanifold. This will also be a holographic dual of a codimension-2 defect in the 6d $(0,2)$ theory. In terms of the real coordinates the embedding equations are \begin{align} \theta = \frac{\pi}2 \qquad \chi = 0 \qquad \sinh \rho \, \sin \alpha \cos \beta = \sinh \rho_0 \qquad 2 \phi_2 + \xi_1 = \xi^{(0)} \end{align} \noindent The induced metric again is of topology $AdS_5 \times S^1$ and the scalar curvature of the above metric is $- \frac{5 \, \cosh^{-2} \rho_0}{l^2}$. With the determinant: $$\det h = - (2 l )^{12} \sinh^8 \rho_0 \cosh^4 \rho_0 \cot^2 \alpha \csc^4 \alpha \sec^8 \beta \tan^2 \beta.$$ The above M5 brane solution satisfies the $\kappa-$symmetry constraint \begin{align} \Gamma_{\kappa} \epsilon = \epsilon \,, \end{align} and half of the eleven-dimensional supersymmetries are preserved by the world-volume, when the following projection is applied on the constant spinor $\epsilon_0$ \begin{align} \left( 1 - \Gamma_{257\underline{10}} \right) \epsilon_0 = 0 \,. \end{align} \subsubsection{Solution: $\zeta_1 \Phi_3 = l \sinh \rho_0 \, e^{i \, \xi^{(0)}}$} \label{halfBPSM5no3} This is the fourth half-BPS solution in the first set of solutions. 
In terms of the real coordinates the embedding equations are \begin{align} \theta = \frac{\pi}2 \qquad \chi = 0 \qquad \sinh \rho \, \sin \alpha \sin \beta = \sinh \rho_0 \qquad 2 \phi_3 + \xi_1 = \xi^{(0)} \end{align} The above solution will also preserve half the supersymmetries if the following projections on $\epsilon_0$ are imposed \begin{align} ( 1 - \Gamma_{367\underline{10}} ) \epsilon_0 = 0 \,. \end{align} \subsection*{Solutions of the $\textbf{II}^{nd}$ kind} We now consider the solutions which wrap the maximal circle on $S^4$ parametrized by the coordinate $\xi_2$. \vspace{.6cm} \subsubsection{Solution: $\zeta_2 \Phi_0 = l \sinh \rho_0 \, e^{i \, \xi^{(0)}}$} This solution describes a dual-giant graviton with worldvolume wrapping the $S^5$ sphere in the AdS directions. In terms of the real coordinates the embedding equations are \begin{align} \theta = \frac{\pi}2 \qquad \chi = \frac{\pi}2 \qquad \rho = {\rm constant} \qquad 2 \phi_0 + \xi_2 = \xi^{(0)} \end{align} The above solution will also preserve half of the supersymmetries if the following projections on $\epsilon_0$ are imposed \begin{align} ( 1 + \Gamma_{07\underline{10}} ) \epsilon_0 = 0 \,. \end{align} \subsubsection{Solution: $\zeta_2 \Phi_1 = l \sinh \rho_0 \, e^{i \, \xi^{(0)}}$} This solution describes a non-compact worldvolume brane, holographically dual to a codimension-2 defect in the 6d theory. In terms of the real coordinates the embedding equations are \begin{align} \theta = \frac{\pi}2 \qquad \chi = \frac{\pi}2 \qquad \sinh \rho \cos \alpha = \sinh \rho_0 \qquad 2 \phi_1 + \xi_2 = \xi^{(0)} \end{align} The above solution will also preserve half the supersymmetries if the following projections on $\epsilon_0$ are imposed \begin{align} ( 1 + \Gamma_{1489} ) \epsilon_0 = 0 \,. \end{align} Similar to the first set of solutions, there are two more solutions that can be obtained by doing the appropriate $SU(3)$ rotations on the coordinate $\Phi_1$. \subsubsection{Solution: $\zeta_2 \Phi_2 = l \sinh \rho_0 \, e^{i \, \xi^{(0)}}$} In terms of the real coordinates the embedding equations are \begin{align} \theta = \frac{\pi}2 \qquad \chi = \frac{\pi}2 \qquad \sinh \rho \sin \alpha \cos \beta = \sinh \rho_0 \qquad 2 \phi_2 + \xi_2 = \xi^{(0)} \end{align} The above solution will also preserve half the supersymmetries if the following projections on $\epsilon_0$ are imposed \begin{align} ( 1 + \Gamma_{2589} ) \epsilon_0 = 0 \,. \end{align} \subsubsection{Solution: $\zeta_2 \Phi_3 = l \sinh \rho_0 \, e^{i \, \xi^{(0)}}$} In terms of the real coordinates the embedding equations are \begin{align} \theta = \frac{\pi}2 \qquad \chi = \frac{\pi}2 \qquad \sinh \rho \sin \alpha \sin \beta = \sinh \rho_0 \qquad 2 \phi_3 + \xi_2 = \xi^{(0)} \end{align} The above solution will also preserve half the supersymmetries if the following projections on $\epsilon_0$ are imposed \begin{align} ( 1 + \Gamma_{3689} ) \epsilon_0 = 0 \,. \end{align} \subsection{Some $\frac14$-BPS solutions}\label{quarterBPS} From the analysis we have seen so far, we can consider combining two of the solutions above. This time the parametrization of the 1-dimensional curve that is wrapped on the $S^4$ can be understood by using both of the coordinates $\xi_1$ and $\xi_2$. \\ \noindent The complete ansatz for an M5-brane world-volume made from the set of solutions of the Euler-Lagrange equation is \begin{align} \theta = \frac{\pi}2 \qquad \sinh \rho \cos \alpha = \sinh \rho_0 \qquad 2 \phi_1 + \xi_1 = 0 \quad 2 \phi_1 + \xi_2 = 0 \,.
\end{align} along with the $S^4$ coordinate $\chi$ fixed to an arbitrary value (other than $0$ or $\frac{\pi}2$). In terms of the special complex variables we have defined in \eqref{zetavsZ}, this ansatz takes the form: $\zeta_1 \Phi_1 = c_1$ and $\zeta_2 \Phi_1 = c_2$ with $c_1$, $c_2$ some arbitrary constants. This choice of ansatz indicates that the curve that is wrapped on the $S^4$ by the probe M5 solution is parametrized by the Hopf fibre direction coordinate in which the frame component $e^9$ vanishes and $e^{\underline{10}}$ becomes $l\, \left( \cos^2 \chi \, d\xi_1 + \sin^2 \chi \, d\xi_2 \right) = l \, d\xi_1 \,( \text{or} \,\, l \, d\xi_2 )$.\\ \noindent The induced metric on the world-volume remains the same as in \eqref{inducedmetrichalfBPS1} and the product of the six $\gamma$ matrices to be used in the kappa-symmetry equation is also the same as in \eqref{gammakappaHBPS1}. After pushing all the six-product $\Gamma_{ab\ldots f}$ matrices in \eqref{gammakappaHBPS1} through the matrix factor $M$ in the Killing spinor in \eqref{adskss2}, the kappa-symmetry constraint $\Gamma_{\kappa} \epsilon = \epsilon$ can be satisfied as in the previous 8 half-BPS solutions we have seen so far. But this time we need to impose two independent projection conditions in order to accomplish this. This means that the worldvolume now preserves a quarter of the 11-dimensional spacetime supersymmetries, and the answer in terms of the projection conditions is the following \\ \begin{align} ( 1 - \Gamma_{147\underline{10}} ) \epsilon_0 \, = \, 0 \qquad \qquad ( 1 - \Gamma_{789\underline{10}} ) \epsilon_0 \, = \, 0 \,. \end{align} \\ \noindent Likewise, there are other solutions which can be considered with the ansatz $\zeta_1 \Phi_i = c_1$ and $\zeta_2 \Phi_i = c_2$ with $i = 0, 2,$ and $3$. Each of these world-volume solutions is BPS and preserves a quarter of the 11-d supersymmetries, with the second projection condition being the same and the first condition being altered according to the illustrations given in the previous subsections \ref{halfBPSM5no0}, \ref{halfBPSM5no2} and \ref{halfBPSM5no3}, respectively. The world-volume metric and the six-product $\gamma_{\tau \sigma_1 \ldots \sigma_5}$ of the world-volume $\gamma$-matrices remain unchanged as given for the respective cases in \ref{halfBPSM5no0}, \ref{halfBPSM5no2} and \ref{halfBPSM5no3}. For convenience, we write the projection conditions for each individual $\frac 14$-BPS solution separately \\ \begin{align} \zeta_1 \Phi_0 = c_1 \,\,;\,\, \zeta_2 \Phi_0 = c_2\,\quad:& \qquad ( 1 - \Gamma_{089} ) \epsilon_0 \, = \, 0 \quad \quad ( 1 - \Gamma_{789\underline{10}} ) \epsilon_0 \, = \, 0
and a Matrix embedded PC exchanges information between the Cohda, MicroAutoBox, and the ego vehicle controller area network (CAN bus). The Matrix also runs a state machine which manages the role of each vehicle in the platoon, and is discussed further in Section \ref{statemanager}. A diagram of the hardware setup is shown in Figure \ref{fig:hardware}. \textcolor{black}{An important hardware consideration for platooning is that of communication latencies}. In \cite{smith2019balancing} we discussed how including a time stamp in transmitted messages enables each vehicle to account for V2V communication delays. The idea is to use the time stamp to estimate the delay $d$ in time-steps (with sampling time $\Delta t = 0.1$s), and then to shift the velocity forecast used for MPC by $d$ steps, where we assume the transmitting vehicle will maintain a constant velocity beyond its planned trajectory. For the experimental work presented in this paper, however, we assume there are no communication delays between vehicles, which is done for two reasons. The first reason is that we have observed that communication latencies are typically small enough to be ignored for our application. The second reason is that estimating $d$ accurately is challenging in practice. Since the clocks on the test vehicle computers are not synchronized, one must estimate the clock skew between vehicles, which could potentially be time-varying, in order to accurately estimate delays. \subsection{Closed track experiments} \label{CPG} Preliminary vehicle platooning experiments were conducted on a closed test track at the Hyundai-KIA Motors California Proving Grounds in California City, CA (see Figure \ref{fig:ioniqs}). For all of the tests the leader vehicle does velocity tracking of a predetermined velocity trajectory (meaning $v_L^{des}$ in \eqref{leaderObj1} becomes time-dependent), and the follower vehicles do distance tracking relative to the leader vehicle. The predetermined velocity trajectories used for tracking were either from real velocity data collected during previous experiments, or artificial velocity data generated by our simulation tool. In Figure \ref{fig:CPGtest} we show experimental results from a test using artificial velocity data which has a step function-like trajectory. For these experiments we used a larger admissible range of the wheel torque for the follower vehicles, as seen in the bottom plot of Figure \ref{fig:CPGtest}. \textcolor{black}{In particular, we note that the torque plotted is the \textit{desired} torque, i.e. the output of the MPC algorithm, as opposed to the \textit{measured} torque (estimated by the vehicle). However, the inter-vehicle distances and vehicle velocities are both from on-board measurements (the position data is then obtained offline by integrating the velocity data). We can see that as the platoon accelerates and decelerates, the followers accurately track the desired distance of 6m to the front vehicle - all tracking errors stay below about 1m throughout the experiment}. We note, however, that there is slightly larger tracking error (as well as larger variation of the wheel torque command) for the second follower in this experiment. We can mainly attribute this to state estimation error since GPS was used to estimate the distance $s_i(t)$ for all experiments at the California Proving Grounds, as discussed in Section \ref{stateEstimation}. 
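As a side note, the forecast-shifting scheme from \cite{smith2019balancing} summarized at the start of this section (and deliberately not used in the experiments reported here) can be sketched in a few lines. The sketch below is ours, the names are not from the actual codebase, and it omits the clock-skew issue discussed above:
\begin{verbatim}
# Sketch: shift a received velocity forecast by the estimated V2V delay.
def shift_forecast(forecast, t_sent, t_now, dt=0.1):
    """forecast: planned velocities [v_0, v_1, ...] from the transmitter;
    t_sent: time stamp attached to the message; t_now: local time (s);
    dt: MPC sampling time (s)."""
    d = max(0, int(round((t_now - t_sent) / dt)))  # delay in time-steps
    shifted = forecast[d:]                         # drop steps already past
    pad = forecast[-1] if forecast else 0.0        # constant-velocity tail
    return shifted + [pad] * d

# A 5-step forecast delayed by about 0.2 s (two steps):
print(shift_forecast([10.0, 10.5, 11.0, 11.5, 12.0], 100.0, 100.21))
# -> [11.0, 11.5, 12.0, 12.0, 12.0]
\end{verbatim}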
A video of the testing is available online at \url{https://youtu.be/U-O9iUZElR8}, which includes several test runs with varying levels of the \textit{trust horizon $F$} (discussed in Remark \ref{trustRemark}). We note that in test runs with a small trust horizon, for example $F = 10$ (half of the velocity forecast is trusted) or $F = 0$ (none of the velocity forecast is trusted, meaning the vehicles effectively do not use V2V communication), large gaps appear between the platooning vehicles while they are accelerating. This behavior is expected, since using the full velocity forecast relaxes the constraints on following distance so that the follower vehicles can get closer to the vehicle ahead. In the test run shown in Figure \ref{fig:CPGtest} we used $F = 15$, demonstrating that we are able to get accurate tracking performance when using a large portion of the velocity forecast (elsewhere in the paper we use $F = N_p = 20$). \textcolor{black}{Similar to Section \ref{throughput}, we estimate throughput for the test run shown in Figure \ref{fig:CPGtest} by treating the platoon as if it begins stopped at an intersection - our estimate is shown in Table \ref{tab:throughputAnalysis1}}. \textcolor{black}{Furthermore, in Table \ref{tab:throughputAnalysis2} we show a baseline level of throughput computed using data from a test run with $F = 0$. As expected, significantly higher throughput is achieved by utilizing the velocity forecast}. \subsection{Public Road Demonstration} \label{arcadia} To demonstrate vehicle platooning in an urban environment with a moderate level of traffic, we conducted further experiments in Arcadia, CA. Our testing area is a 2.45 km long stretch of roadway on Live Oak Ave between S Santa Anita Ave and Peck Rd, and has eight consecutive intersections which are instrumented to send out SPaT and MAP messages for our vehicle platoon to receive. All tests in Arcadia were completed with a 3-vehicle platoon using the same MPC parameters as shown in Table \ref{modelParameters}, \textcolor{black}{with the exception that $v_L^{\text{des}} = 14$ m/s was used here}. Footage of our testing is available online: \url{https://youtu.be/xPYR_xP3FuY}. It captures a few instances where the platoon stops at the stop bar for a red light with no vehicles queued ahead of it. When the light turns green the platoon reacts immediately and moves through the intersection more quickly and compactly than the human-driven vehicles near it, further demonstrating the potential for throughput improvement (see Figure \ref{fig:overhead}). \begin{figure} \centering \includegraphics[width = \columnwidth]{images/overhead_drone_shot.png} \caption{Overhead view of the platoon crossing an intersection in Arcadia, CA.} \label{fig:overhead} \end{figure} \section{Conclusion and Future Work} \label{conclusion} In this paper we presented the design and evaluation of a vehicle platooning system that can operate in an urban corridor with intersections and other traffic participants. The primary motivation for advancing platooning to an urban setting is to improve traffic efficiency by increasing throughput at intersections, which create bottlenecks for traffic flow. \textcolor{black}{We evaluated the performance of our vehicle platooning architecture by estimating the level of throughput that would be achieved at the intersection, calculated by measuring the time instants at which each vehicle crosses the intersection}. 
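As a rough illustration of this calculation (our own simplified version, intended only to convey the idea; the exact procedure behind Tables \ref{tab:throughputAnalysis1} and \ref{tab:throughputAnalysis2} may differ), the recorded crossing times can be converted into a vehicles-per-hour figure as follows:
\begin{verbatim}
# Rough intersection-throughput estimate from stop-bar crossing times.
# Illustrative only; function and variable names are ours.
def estimated_throughput(crossing_times_s):
    """crossing_times_s: times (s, measured from the start of the green
    phase) at which vehicles 1..n cross the stop bar. Returns an
    extrapolated rate in vehicles per hour of green time."""
    n = len(crossing_times_s)
    return 3600.0 * n / crossing_times_s[-1]

# Example: a 3-vehicle platoon clearing the stop bar at 2.0 s, 2.8 s, 3.6 s
# extrapolates to 3600 * 3 / 3.6 = 3000 vehicles per hour of green.
print(estimated_throughput([2.0, 2.8, 3.6]))
\end{verbatim}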
An important challenge we encountered while testing on public roads is that of safely disengaging the platooning system and passing control back to the safety driver when necessary. To do so, we designed our system so that if any driver taps the brake pedal when the platoon is active, the controller for every platooning vehicle disengages immediately (via the finite state machine) and all drivers are notified immediately via a sound. We note, however, that this design can be problematic in certain scenarios. For example, suppose the platoon is approaching an intersection and begins braking when the driver in the leader vehicle, out of caution, disengages the platoon. This requires the drivers in the follower vehicles to react immediately, as their vehicles will suddenly stop braking when the controllers disengage. In the future we hope to address this issue by creating a safety system that ensures the vehicles start transitioning to a safe state immediately when the `plan' is cancelled, providing the safety driver more time to react. One potential approach, for example, is to have the platooning system transition to an ACC state of operation immediately after disengagement. The ACC system would then remain active and maintain a safe distance to the front vehicle until the driver takes over. \textcolor{black}{Another future research direction relates to the procedure for setting cost weights in the MPC objective functions, which were manually tuned here. Indeed, in order to converge on acceptable values for the tuning parameters affecting vehicle drivability, such as the time headway constraint or penalty on vehicle jerk, multiple trial runs on a closed test track were necessary. To reduce development time, it would be interesting to see how a learning-based approach could potentially expedite this procedure. Furthermore, we note
synthesizing a larger molecule XY out of two parts X and Y while ATP is broken down to ADP and the phosphate ion Pi. Thus, they have the same net effect as this other pair of reactions: $\begin{array}{ccc} \mathrm{X} + \mathrm{Y} &\longleftrightarrow & \mathrm{XY} \;\;\;\quad \quad \qquad (3) \\ \mathrm{ATP} &\longleftrightarrow & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} \qquad (4) \end{array}$ The first reaction here is just the synthesis of XY from X and Y. The second is a deliberately simplified version of ATP hydrolysis. The first involves an increase in energy, while the second releases energy. But in the schema used in biology, these processes are ‘coupled’ so that ATP can only break down to ADP + Pi if X and Y combine to form XY. As we shall see, this coupling crucially relies on a conserved quantity: the total number of Y molecules plus the total number of Pi ions is left unchanged by reactions (1) and (2). This fact is not a fundamental law of physics, nor even a general law of chemistry (such as conservation of phosphorus atoms). It is an emergent conservation law that holds approximately in special situations. Its approximate validity relies on the fact that the cell has enzymes that make reactions (1) and (2) occur more rapidly than reactions that violate this law, such as (3) and (4). In the series to come, we’ll start by providing the tiny amount of chemistry and thermodynamics needed to understand what’s going on. Then we’ll raise the question “what is coupling?” Then we’ll study the reactions required for coupling ATP hydrolysis to the synthesis of XY from components X and Y, and explain why these reactions are not yet enough for coupling. Then we’ll show that coupling occurs in a ‘quasiequilibrium’ state where reactions (1) and (2), assumed much faster than the rest, have reached equilibrium, while the rest are neglected. And then we’ll explain the role of emergent conservation laws!

The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola, Biochemical coupling through emergent conservation laws.

The blog series: Part 1 – Introduction. Part 2 – Review of reaction networks and equilibrium thermodynamics. Part 3 – What is coupling? Part 4 – Interactions. Part 5 – Coupling in quasiequilibrium states. Part 6 – Emergent conservation laws. Part 7 – The urea cycle. Part 8 – The citric acid cycle.

Effective Thermodynamics for a Marginal Observer

8 May, 2018

guest post by Matteo Polettini

Suppose you receive an email from someone who claims “here is the project of a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. But he may be well-intentioned. You opt for not being rude, roll up your sleeves, and put your hands into the dirt, holding the Second Law as a lodestar. Keep in mind that there are two fundamental sources of error: either he is not considering certain input currents (“hey, what about that tiny hidden cable entering your machine from the electrical power line?!”, “uh, ah, that’s just to power the “ON” LED”, “mmmhh, you sure?”), or else he is not measuring the energy input correctly (“hey, why are you using a Geiger counter to measure input voltages?!”, “well, sir, I ran out of voltmeters…”). In other words, the observer might only have partial information about the setup, either in quantity or quality.
Because he has been marginalized by society (most crackpots believe they are misunderstood geniuses) we will call such an observer “marginal,” which incidentally is also the word that mathematicians use when they focus on the probability of a subset of stochastic variables. In fact, our modern understanding of thermodynamics as embodied in statistical mechanics and stochastic processes is founded (and funded) on ignorance: we never really have “complete” information. If we actually did, all energy would look alike, it would not come in “more refined” and “less refined” forms, there would not be differentials of order/disorder (using Paul Valéry’s beautiful words), and that would end thermodynamic reasoning, the energy problem, and generous research grants altogether. Even worse, within this statistical approach we might be missing chunks of information because some parts of the system are invisible to us. But then, what warrants that we are doing things right, and he (our correspondent) is the crackpot? Couldn’t it be the other way around? Here I would like to present some recent ideas I’ve been working on together with some collaborators on how to deal with incomplete information about the sources of dissipation of a thermodynamic system. I will do this in a quite theoretical manner, but somehow I will mimic the guidelines suggested above for debunking crackpots. My three buzzwords will be: marginal, effective, and operational.

“Complete” thermodynamics: an out-of-the-box view

The laws of thermodynamics that I address are:

• The good ol’ Second Law (2nd)

• The Fluctuation-Dissipation Relation (FDR), and the Reciprocal Relation (RR) close to equilibrium.

• The more recent Fluctuation Relation (FR)1 and its corollary the Integral Fluctuation Relation (IFR), which have been discussed on this blog in a remarkable post by Matteo Smerlak.

The list above is all in the “area of the second law”. How about the other laws? Well, thermodynamics has for long been a phenomenological science, a patchwork. So-called stochastic thermodynamics is trying to put some order in it by systematically grounding thermodynamic claims in (mostly Markov) stochastic processes. But it’s not an easy task, because the different laws of thermodynamics live in somewhat different conceptual planes. And it’s not even clear if they are theorems, prescriptions, or habits (a bit like in jurisprudence2). Within stochastic thermodynamics, the Zeroth Law is so easy nobody cares to formulate it (I do, so stay tuned…). The Third Law: no idea, let me know. As regards the First Law (or, better, “laws”, as many as there are conserved quantities across the system/environment interface…), we will assume that all related symmetries have been exploited from the outset to boil down the description to a minimum. This minimum is as follows. We identify a system that is well separated from its environment. The system evolves in time; the environment is so large that its state does not evolve within the timescales of the system3. When tracing out the environment from the description, an uncertainty falls upon the system’s evolution. We assume the system’s dynamics to be described by a stochastic Markovian process. How exactly the system evolves and what is the relationship between system and environment will be described in more detail below. Here let us take an “out of the box” view. We resolve the environment into several reservoirs labeled by index $\alpha$. Each of these reservoirs is “at equilibrium” on its own (whatever that means4).
Now, the idea is that each reservoir tries to impose “its own equilibrium” on the system, and that their competition leads to a flow of currents across the system/environment interface. Each time an amount of the reservoir’s resource crosses the interface, a “thermodynamic cost” has to be paid or gained (be it a chemical potential difference for a molecule to go through a membrane, or a temperature gradient for photons to be emitted/absorbed, etc.). The fundamental quantities of stochastic thermodynamic modeling thus are:

• On the “-dynamic” side: the time-integrated currents $\Phi^t_\alpha$, independent among themselves5. Currents are stochastic variables distributed with joint probability density $P(\{\Phi_\alpha\}_\alpha)$.

• On the “thermo-” side: the so-called thermodynamic forces or “affinities”6 $\mathcal{A}_\alpha$ (collectively denoted $\mathcal{A}$). These are tunable parameters that characterize reservoir-to-reservoir gradients, and they are not stochastic. For convenience, we conventionally take them all positive.

Dissipation is quantified by the entropy production:

$\sum \mathcal{A}_\alpha \Phi^t_\alpha$

We are finally in a position to state the main results. Be warned that in the following expressions the exact treatment of time and its scaling would require a lot of specifications, but keep in mind that all these relations hold true in the long-time limit, and that
## Tuples Are Powerful

In this post I lay out the unjustifiable reasons why Gorgonia lacks tuple types. Along the way we revisit the idea of constructing integer types from natural numbers using only tuples and the most basic functionalities. I then close this blog post with further thoughts about computation in general and what that holds for Gorgonia's future. Over Chinese New Year celebrations, a friend asked (again) about the curious lack of a particular feature in Gorgonia, the deep-learning package for Go: tuples, which led to this tweet (that no one else found funny :( ) The feature that was missing is one that I’ve vehemently objected to in the past. So vehemently did I object to this that by the first public release of Gorgonia, there was only one reference that it ever existed (by the time I released Gorgonia to the public, I had been working on 3 versions of the same idea). [Read More]

## Term Rewriting Chinese Relatives

### Learn Chinese AND Functional Programming At the Same Time

I recently attended QFPL’s excellent Haskell course. Tony Morris was a little DRY (it's a joke: Tony kept mentioning Don't Repeat Yourself and being lazy, but was nonetheless an excellent presenter). The course shook my confidence in my existing ability to reason in Haskell for a bit, but it was for the better - I had some fundamentals that were broken and Tony explained some things in a way that fixed it... for now - I have no doubt some basics will be lost to the ether in the next few months. So for the rest of the week I was in a bit of an equational-reasoning mode. Then my dad sent me a cute link to a calculator that calculates vocatives for Chinese relatives. Given that English is my first language (hence not my default mode of thinking here), this kicked me off into a chain of thoughts about languages and symbols (you’d find a high amount of correlation between my switching modes of thinking and blog posts - the last time this happened, I wrote about yes and no). One of the difficult things that many people report with programming languages is the decoupling of syntax and semantics. I’ve often wondered if we might be better off with a syntax that is based off symbols (rather like APL) - the initial hurdle might be higher, but once that's done, syntax and semantics are completely decoupled. Then we’d not have flame wars on syntax, rather a more interesting flame war on semantics and pragmatics. Another line of thinking I had was the hypothetical development of computing and logics in a parallel universe where Chinese was the dominant linguistic paradigm - it’s one that I’ve had since I visited China for the first time. Combined, these trains of thought led to this blog post. So let’s learn some Chinese while learning some (really restricted) functional programming! Bear in mind it’s a very rough, unrigorous version. [Read More]

## Go For Data Science

This may come as a surprise to many people, but I do a large portion of my data science work in Go. I recently gave a talk on why I use Go for data science. The slides are here, and I’d also like to expand on a few more things after the jump: [Read More]

## Data Empathy, Data Sympathy

Today’s blog post will be a little on the light side as I explore the various things that come up in my experience working as a data scientist. I’d like to consider myself to have a fairly solid understanding of statistics (I would think it's accurate to say that I may be slightly above average in statistical understanding compared to the rest of the population).
A very large part of my work can be classified as stakeholder management - and this means interacting with other people who may not have as strong a statistical foundation as I have. I’m not very good at it, in the sense that often people think I am hostile when in fact all I am doing is questioning assumptions (I get the feeling people don't like it, but you can't get around questioning assumptions). Since the early days of my work, there’s been a feeling that I’ve not been able to put into words when I dealt with stakeholders. I think I finally have the words to express said feelings. Specifically it was the transference of tacit knowledge that bugged me quite a bit. Consider an example where the stakeholder is someone who has been in the field for quite some time. They don’t necessarily have the statistical know-how when it comes to dealing with data, much less the rigour that comes with statistical thinking. More often than not, decisions are driven by gut-feel based on what the data tells them. I call these sorts of processes data-inspired (as opposed to data-driven decision making). These gut-feel judgments about data can be correct or wrong. And the stakeholders learn from it, and it becomes experience - or what economists call tacit knowledge. The bulk of the work is of course transitioning an organization from being data-inspired to becoming actually data-driven. [Read More]

## Sapir-Whorf on Programming Languages

### Or: How I Got Blindsided By Syntax

In my previous blog post I briefly mentioned the Sapir-Whorf hypothesis, and how it blind-sided me to an obvious design. In this blog post I’ll retell yet another story of how Sapir-Whorf blind-sided me yet again - just moments ago, actually. I’m getting sick of this shit, yo.

Sapir-Whorf, Briefly

Briefly, the Sapir-Whorf hypothesis states that the language you speak influences the way you think. The proper term for it is “linguistic relativity”. [Read More]

## Tensor Refactor: A Go Experience Report

### May Contain Some Thoughts on Generics in Go

I recently finished a major refactor of tensor, which is a package for generic multidimensional arrays in Go. In this blog post I will recount the refactoring process, and why certain decisions were made. Furthermore, I will also share some thoughts with regards to generics in Go while trying not to sound like a complete prat. There have been major refactors done to the tensor subpackage in Gorgonia - a Go library for deep learning purposes (think of it as TensorFlow or PyTorch for Golang). It’s part of a list of fairly major changes to the library as it matures. While I’ve used it for a number of production-ready projects, an informal survey found that the library was still a little difficult to use (plus, it’s not used by any famous papers, so people are generally more resistant to learning it than, say, TensorFlow or PyTorch). Along the way in the process of refactoring this library, there was plenty of hairy stuff (like creating channels of negative length), and I learned a lot more about building generic data structures than I needed to. [Read More]

## 21 Bits Ought to Be Enough for (Everyday) English

### Or, Bitpacking Shennanigans

I was working on some additional work on lingo, and preparing it to be moved to go-nlp. One of the things I was working on improving is the corpus package. The goal of package corpus is to provide a map of word to ID and vice versa.
Along the way package lingo also exposes a Corpus interface, as there may be other data structures which showcase corpus-like behaviour (things like word embeddings come to mind). When optimizing and possibly refactoring the package(s), I like to take stock of all the things the corpus package and the Corpus interface are useful for, as this would clearly affect some of the type signatures. This practice usually involves me reaching back into the annals of history, looking at the programs and utilities I had written, and consolidating them. One of the things that I would eventually have a use for again is
\section{Introduction} Quantum critical behavior has been induced in many heavy fermion compounds by reducing the critical temperature of a low temperature phase transition to zero via techniques such as pressure, alloying or an applied magnetic field \cite{rf1,rf2}. The critical behavior of some of these compounds has been understood in the framework of the Wilson renormalization group approach, when the quantum mechanical fluctuations as well as the thermal fluctuations of the order parameter are taken into account \cite{rf3,rf4}. For others, however, where the fluctuations appear to be predominantly local, the extended Wilson approach does not satisfactorily explain the experimental results. There is as yet no fully comprehensive theory of this type of critical behavior, so it is a very active area of research, both experimental and theoretical \cite{rf5,rf6,rf7}. In some heavy fermion compounds, a sudden jump from a small-volume to a large-volume Fermi surface may have occurred at the field induced transition \cite{rf9,rf10,rf11}. As the Luttinger sum rule relates the density of electrons in partially filled bands to the volume of the Fermi surface, this change in volume indicates a sudden change in the degree of localization of the f-electrons. This Fermi surface volume change may also be viewed as supporting the Kondo collapse scenario where the behavior at the quantum critical point has been interpreted as arising from a sudden disappearance of the states at the Fermi level associated with a Kondo resonance or renormalized f-band \cite{rf7,rf12}. To investigate this more fully, we first of all review the steps in the derivation of the Luttinger theorem to assess the implications of a system having electrons in partially filled bands and Fermi surfaces with different volumes. We then relate this behavior to that of a local model with a field induced quantum critical point which has a Kondo regime with renormalized f-states at the Fermi level. We consider a system of non-interacting electrons in eigenstates specified by the set of quantum numbers denoted by $\alpha$ with single particle energies $E_{\alpha}$. When two-body interaction terms are included the single-particle Green's function $G_{\alpha}(\omega)$ can be written in the form, \begin{equation} G_{\alpha}(\omega) = \frac{1}{\omega-\epsilon_{\alpha}-\Sigma_{\alpha}(\omega)}, \end{equation} where $\epsilon_{\alpha}=E_{\alpha}-\mu$ is the single particle excitation energy relative to the chemical potential $\mu$ and $\Sigma_{\alpha}(\omega)$ is the proper self-energy term. The expectation value for the total number of electrons in the system, $N$, is then given by \begin{equation} N=-\frac{1}{\pi}\sum_{\alpha} \int_{-\infty}^{0} \lim_{\delta\to 0+} \left[ {\rm Im}G_{\alpha}(\omega^{+}) \right] d\omega,\label{Neq} \end{equation} where $\omega^{+}=\omega+i\delta \ (\delta>0)$ and we have taken $\mu=0$.
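As a brief aside on the step carried out next (a standard manipulation, spelled out here for completeness using only the definitions above): since $G^{-1}_{\alpha}(\omega^{+})=\omega^{+}-\epsilon_{\alpha}-\Sigma_{\alpha}(\omega^{+})$, one has
\begin{equation}
\left(1-\Sigma^{\prime}_{\alpha}(\omega^{+})\right)G_{\alpha}(\omega^{+})
=\frac{\partial}{\partial\omega}\ln\!\left[\omega^{+}-\epsilon_{\alpha}-\Sigma_{\alpha}(\omega^{+})\right],
\end{equation}
so this part of the integrand integrates to a boundary term,
\begin{equation}
-\frac{1}{\pi}\int_{-\infty}^{0}{\rm Im}\left[\left(1-\Sigma^{\prime}_{\alpha}(\omega^{+})\right)G_{\alpha}(\omega^{+})\right]d\omega
=-\frac{1}{\pi}\left[{\rm Im}\ln\left(\omega^{+}-\epsilon_{\alpha}-\Sigma_{\alpha}(\omega^{+})\right)\right]_{\omega=-\infty}^{\omega=0},
\end{equation}
and, using ${\rm Im}\ln(\omega+i\delta)\to\pi$ as $\omega\to-\infty$, this boundary term produces the $\frac{1}{2}-\frac{1}{\pi}\arctan(\cdot)$ combination appearing in the first term of the sum rule below.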
On replacing $G_{\alpha}(\omega^{+})$ in the integrand by $\left(1-\Sigma^{\prime}_{\alpha}(\omega^{+})\right)G_{\alpha}(\omega^{+})+\Sigma^{\prime}_{\alpha}(\omega^{+})G_{\alpha}(\omega^{+})$, where the prime indicates a derivative with respect to $\omega$, the first integral can be performed to give \begin{equation} N=\sum_{\alpha} \left[ \frac{1}{2} - \frac{1}{\pi} \arctan \left( \frac {\epsilon_{\alpha}+\Sigma^{R}_{\alpha}(0^{+})} {\delta+\Sigma^{I}_{\alpha}(0^{+})} \right) \right]_{\delta\to 0^{+}} + \sum_{\alpha}I_{\alpha},\label{Neqmod} \end{equation} where \begin{equation} I_{\alpha} = -\frac{1}{\pi} \int_{-\infty}^{0} {\rm Im} \left[ \frac {\partial\Sigma_{\alpha}(\omega^{+})} {\partial\omega} G_{\alpha}(\omega^{+}) \right]_{\delta\to 0^{+}} d\omega, \end{equation} and $\Sigma^{R}_{\alpha}(\omega^{+})$ and $\Sigma^{I}_{\alpha}(\omega^{+})$ are the real and imaginary parts of $\Sigma_{\alpha}(\omega)$ respectively. Note that in deriving the general Eqn. (\ref{Neq}) no assumptions of translational invariance or periodicity have been made, so it applies to systems with impurities, including the Friedel sum rule for the single impurity Anderson model if it is expressed in terms of the diagonalized one-electron states. It also takes into account any partially filled localized or atomic states. Luttinger \cite{rf13,rf13a} showed, within perturbation theory, that the imaginary part of the self-energy at the Fermi level vanishes, i.e. $\left[\Sigma_{\alpha}^{I}(0^{+})\right]_{\delta\to 0^{+}}=0$, due to phase space restrictions on scattering processes. If this condition holds and $\Sigma_{\alpha}^{R}(0)$ is finite, which we denote as condition (i), then we can simplify Eqn. (\ref{Neqmod}) and rewrite it in the form, \begin{equation} N=\sum_{\alpha} \left[ 1-\theta\left( \epsilon_{\alpha} + \Sigma_{\alpha}^{R} (0) \right) \right] + \sum_{\alpha}I_{\alpha}.\label{Neqmod1} \end{equation} We will refer to this equation as a generalized Luttinger-Friedel sum rule (GLFSR). Condition (i) is required to be able to define a Fermi surface. If the one-electron states correspond to Bloch states in a lattice with periodic boundary conditions so that the index $\alpha$ denotes a wave-vector ${\bf k}$, a band index $n$ and a spin index $\sigma$, then if (i) holds, a Fermi surface can be defined as the locus of points ${\bf k}_{\rm F}$ which satisfies \begin{equation} \epsilon_{n,{\bf k}_{\rm F}} + \Sigma^{R}_{n,{\bf k}_{\rm F}}(0) = 0. \end{equation} In this case Eqn. (\ref{Neqmod1}) becomes \begin{equation} N= \sum_{{\bf k},\sigma} \left[ 1 - \theta\left( \epsilon_{{\bf k},\sigma} + \Sigma^{R}_{{\bf k},\sigma}(0) \right) \right] + \sum_{{\bf k},\sigma} I_{{\bf k},\sigma}. \end{equation} The Luttinger sum rule relating the total number of electrons in partially filled bands to the sum of the volumes of the spin up and spin down Fermi surfaces follows if the Luttinger integral $\sum_{{\bf k},\sigma}I_{{\bf k},\sigma}=0$. If there is a change in the volume of the Fermi surface at a transition, yet no change in the total number of electrons in the states contributing to the sum rule, it follows that the term $\sum_{{\bf k},\sigma}I_{{\bf k},\sigma}$ cannot be zero through the transition. We will exploit the consequences of this observation for a particular model later in this paper. The relation in Eqn.
(\ref{Neqmod1}) can be given an interpretation in terms of quasiparticles if $\Sigma^{R}_{\alpha}(\omega)$ has a finite derivative with respect to $\omega$ at $\omega=0$ and the derivative $\partial\Sigma^{I}_{\alpha}(\omega)/\partial\omega$ is zero when evaluated at $\omega=0$, which we denote as condition (ii). The excitation energy of a quasiparticle $\tilde{\epsilon}_{\alpha}$ can be defined as $\tilde{\epsilon}_{\alpha}=z_{\alpha}\left(\epsilon_{\alpha}+\Sigma^{R}_{\alpha}(0)\right)$, where $z_{\alpha} (<1)$ is the quasiparticle weight given by \begin{equation} z^{-1}_{\alpha} = \left[ 1 - \frac {\partial\Sigma^{R}_{\alpha}(\omega)} {\partial\omega} \right]_{\omega=0}. \end{equation} If conditions (i) and (ii) are satisfied one can define a total quasiparticle density of states $\tilde{\rho}(\omega)$ via \begin{equation} \tilde{\rho}(\omega) = \sum_{\alpha} \delta\left( \omega-\tilde{\epsilon}_{\alpha} \right). \end{equation} Note that the quasiparticle density of states is different from the quasiparticle contribution to the spectral density of $\sum_{\alpha}G_{\alpha}(\omega)$. In terms of the quasiparticles Eqn. (\ref{Neqmod1}) takes the form, \begin{equation} N= \sum_{\alpha} \left[ 1 - \theta(\tilde{\epsilon}_{\alpha}) \right] + \sum_{\alpha} I_{\alpha}, \end{equation} which sums over the number of occupied free quasiparticle states. Equivalently the first term of the right hand side can be expressed as an integral over the free quasiparticle density of states $\tilde{\rho}(\omega)$ up to the Fermi level, \begin{equation} N=\int_{-\infty}^{0} \tilde{\rho}(\omega)d\omega + \sum_{\alpha}I_{\alpha}. \end{equation} The existence of long-lived quasiparticle excitations in the neighborhood of the Fermi level is one of the basic assumptions in the phenomenological Fermi liquid theory of Landau, and the microscopic theory provides a criterion for such excitations to exist. For a Fermi liquid a further condition is usually invoked, that the one-electron excitations of the interacting system are adiabatically connected to those of the non-interacting system, which we denote as condition (iii). The proof of the Luttinger sum rule, that $\sum_{\alpha}I_{\alpha}=0$, as given by Luttinger and Ward \cite{rf14} requires this third condition (iii) to hold, and adiabatic continuity would appear to be a general requirement in an alternative non-perturbative derivation for a Fermi liquid \cite{rf15,rf15a}. However, it is possible for quasiparticle excitations to exist at the Fermi level in a phase in which condition (iii) does not hold, and we have given an example in earlier work \cite{rf15,rf15a}. Here we consider a local system which, though not directly applicable as a model of heavy fermion compounds, has a field induced quantum critical point and has a sum rule that mirrors the change in volume of the Fermi surface as may have been observed in some heavy fermion compounds. \begin{figure}[b] \includegraphics[width=0.4\textwidth]{fig_1} \caption{(Color online) A plot of $h_{c}/J$, where $h_{c}$ is the critical field to induce a transition, as a function of the interaction parameter $J$ for a particle-hole symmetric model with $U/\pi\Delta = 5, J_{c} = 2.8272\times 10^{-5}$ and for an asymmetric model with $\epsilon_{f}/\pi\Delta = -0.659, U/\pi\Delta = 0.5, J_{c} = 5.4220\times 10^{-3}$ (dashed line).} \end{figure} \section{Model} The model we consider is a version of the two impurity Anderson model in the presence of a magnetic field.
The Hamiltonian for this model takes the form ${\mathcal H}=\sum_{i=1,2}{\mathcal H}_{i}+{\mathcal H}_{12}$, where ${\mathcal H}_{i}$ corresponds to an individual Anderson impurity model with channel index $i$, \begin{eqnarray} {\mathcal H}_{i} &=& \sum_{\sigma} \epsilon_{f,i,\sigma} f^{\dagger}_{i,\sigma}f_{i,\sigma} + \sum_{k,\sigma} \epsilon_{k,i} c^{\dagger}_{k,i,\sigma}c_{k,i,\sigma}\nonumber\\ & &\mbox{} + \sum_{k,\sigma} \left( V_{k,i}f^{\dagger}_{i,\sigma}c_{k,i,\sigma}+{\rm h.c.} \right) + U_{i}n_{f,i,\uparrow}n_{f,i,\downarrow}, \end{eqnarray} where $f^{\dagger}_{i,\sigma}$ and $f_{i,\sigma}$ are creation and annihilation operators for an electron at the impurity site in channel $i$ and spin component $\sigma=\uparrow, \ \downarrow$, with energy level $\epsilon_{f,i,\sigma}=\epsilon_{f,i}-h\sigma$, where $h = g\mu_{\rm B}H/2$, $H$ is a local magnetic field, $g$ is the g-factor and $\mu_{\rm B}$ the Bohr magneton. The creation and annihilation operators $c^{\dagger}_{k,i,\sigma}$, $c_{k,i,\sigma}$ are for partial wave conduction electrons with energy $\epsilon_{k,i}$ in channel $i$, each with a bandwidth $2D$ with $D = 1$. With the assumption of a flat wide conduction band the hybridization factor, $\Delta_{i}(\omega)=\pi\sum_{k}|V_{k,i}|^{2}\delta(\omega-\epsilon_{k,i})$, can be taken as a constant, independent of $\omega$. For the inter-impurity interaction Hamiltonian ${\mathcal H}_{12}$ we take into account an antiferromagnetic exchange term $J$ and a direct interaction $U_{12}$, \begin{equation} {\mathcal H}_{12} = 2J{\bf S}_{f,1}\cdot{\bf S}_{f,2} + U_{12} \sum_{\sigma,\sigma^{\prime}} n_{f,1,\sigma}n_{f,2,\sigma^{\prime}}. \end{equation} \begin{figure}[b] \includegraphics[width=0.4\textwidth]{fig_2} \caption{(Color online) A plot of the quasiparticle weight factors, $z_{\uparrow} = \tilde{\Delta}_{\uparrow}/\Delta$ and $z_{\downarrow} =\tilde{\Delta}_{\downarrow}/\Delta$ as a function of the magnetic field $h/h_{c}$ in the region of the transition for the parameter set $\epsilon_{f}/\pi\Delta = -0.659, U/\pi\Delta = 0.5, J_{c} = 5.4420\times 10^{-3}, J = 5.8482\times10^{-3}$, where $h_{c}/\pi\Delta = 0.20795159$.} \end{figure} The Kondo version of the two impurity model has a long history and was originally put forward to study the competition between the Kondo effect and the intersite RKKY interaction in heavy fermion systems. It was in this context that it was shown to have a $T = 0$ phase transition \cite{rf16,rf16a,rf16b,rf16c,rf16d,rf16e}. Studies based on the two impurity Anderson model in the absence of a magnetic field have revealed a discontinuous change in spectral density
choices of the values for $N$ and $\alpha$ as the mixed-integer PDE-constrained optimization problem, thereby yielding $250$ instances of this problem class. We present the setup of \cref{alg:slip} in \S\ref{sec:comp_slip_setup}. \subsection{Integer Control of a Steady Heat Equation}\label{sec:cex_2nd_elliptic} We consider the class of IOCPs \begin{gather}\label{eq:iocp_steady} \begin{aligned} \min_{u,x} \quad & \frac{1}{2} \vert \vert u - \bar v \vert \vert_{L^2(-1,1)}^2 + \alpha \TV(x) \\ \text{s.t.} \quad & -\varepsilon(t) \frac{d^2 u}{dt^2}(t)=f(t)+x(t) \text{ for a.a.\ } t \in (-1,1),\\ \quad & u(t)=0 \text{ for } t \in \{-1,1\}, \\ \quad & x(t) \in \Xi = \{-2, \dots, 23\} \subset \mathbb{Z} \text{ for a.a.\ } t \in (-1,1), \end{aligned}\tag{SH} \end{gather} which is based on \cite[page 112\,ff.]{kouri2012} with the choices $\varepsilon(t)=0.1 \chi_{(-1,0.05)}(t) +10 \chi_{[0.05,1)}(t)$, $f(t)=e^{-(t+0.4)^2}$, and $\bar v(t) = 1$ for all $t \in [-1,1]$. We rewrite the problem in the equivalent reduced form \begin{align*} \min_{x} \quad & \frac{1}{2} \| S x +u_f-\bar v \|_{L^2(-1,1)}^2 + \alpha \TV(x)\\ \text{s.t.} \quad & x(t) \in \Xi = \{-2, \dots, 23\} \subset \mathbb{Z} \text{ for a.a.\ } t \in (-1,1), \end{align*} where the function $S : L^2(-1,1) \to L^2(-1,1)$ denotes the linear solution map of the boundary value problem that constrains \eqref{eq:iocp_steady} for the choice $f = 0$ and $u_f \in L^2(-1,1)$ denotes the solution of the boundary value problem that constrains \eqref{eq:iocp_steady} for the choice $x = 0$. With this reformulation and the choice $F(x) \coloneqq \frac{1}{2} \vert \vert u - \bar v \vert \vert_{L^2(-1,1)}^2$ for $x \in L^2(-1,1)$, we obtain the problem formulation \eqref{eq:p}, which gives rise to \cref{alg:slip} and the corresponding subproblems. We run \cref{alg:slip} for all combinations of the parameter values $\alpha \in \{10^{-3}, \dots, 10^{-7}\}$ and uniform discretizations of the domain $(-1,1)$ into $N \in \{ 512, \ldots, 8192 \}$ intervals. The number of intervals coincides with the problem size constant $N$ in the resulting IPs of the form \eqref{eq:ip}. Because we have a uniform discretization, we obtain $\gamma_i = 1$ for all $i \in [N]$. On each of these 25 discretized instances of \eqref{eq:iocp_steady}, we run our implementation of \cref{alg:slip} with three different initial iterates $x^0$, thereby giving a total of 75 runs of \cref{alg:slip}. Regarding the initial iterates $x^0$, we make the following choices: \begin{enumerate} \item we compute a solution of the continuous relaxation, where $\Xi$ is replaced by $\conv \Xi = [-2,23]$ and $\alpha$ is set to zero, with SciPy's (see \cite{2020SciPy-NMeth}) implementation of limited memory BFGS with bound constraints (see \cite{liu1989limited}) and round it to the nearest element in $\Xi$ on every interval, \item we set $x^0(t) = 0$ for all $t \in (-1,1)$, and \item we compute the arithmetic mean of the two previous choices and round it to the nearest element in $\Xi$ on every interval. \end{enumerate} For each of these instances/runs, we compute the discretization of the least squares term, the PDE, and the corresponding derivative of $F$ with the help of the open source package Firedrake (see \cite{Rathgeber2016}). Note that the derivative is computed in Firedrake with the help of so-called adjoint calculus in a \emph{first-discretize, then-optimize} manner (see \cite{hinze2008optimization}).
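For concreteness, the three initializations listed above can be sketched as follows (a sketch only, with our own naming; the actual computation uses SciPy's L-BFGS-B and the Firedrake discretization as described):
\begin{verbatim}
import numpy as np

def round_to_admissible(x, xi_min=-2, xi_max=23):
    # Round to the nearest integer in Xi = {xi_min, ..., xi_max} on every interval.
    return np.clip(np.rint(x), xi_min, xi_max).astype(int)

def initial_iterates(x_relaxed):
    """x_relaxed: solution of the continuous relaxation, one value per interval."""
    x0_rounded = round_to_admissible(x_relaxed)                   # choice 1
    x0_zero = np.zeros_like(x0_rounded)                           # choice 2
    x0_mean = round_to_admissible(0.5 * (x0_rounded + x0_zero))   # choice 3
    return x0_rounded, x0_zero, x0_mean

# Example on a coarse grid of 8 intervals:
print(initial_iterates(np.array([-2.7, 0.2, 3.6, 7.1, 12.9, 18.4, 22.2, 25.0])))
\end{verbatim}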
\subsection{Generic Signal Reconstruction Problem}\label{sec:cex_signal} We consider the class of IOCPs \begin{gather}\label{eq:iocp_signal} \begin{aligned} \min_x \quad & \frac{1}{2} \vert \vert Kx - f \vert \vert_{L^2(0,1)}^2 + \alpha \TV(x) \\ \text{s.t.} \quad & x(t) \in \Xi = \{-5, \dots, 5 \} \ \text{ for a.a.\ } t \in (0,1), \end{aligned}\tag{SR} \end{gather} which is already in the form of \eqref{eq:p} with the choice $F(x) \coloneqq \frac{1}{2} \vert \vert Kx - f \vert \vert_{L^2(0,1)}^2$ for $x \in L^2(0,1)$. In the problem formulation \eqref{eq:iocp_signal} the term $Kx \coloneqq (k * x)(t) = \int_0^1 k(t-\tau) x(\tau) \dd{\tau} = \int_0^t k(t-\tau) x(\tau) \dd{\tau}$ for $t \in [0,1]$ denotes the convolution of the input control $x$ and a convolution kernel $k$. We choose the convolution kernel $k$ as a linear combination of $200$ Gaussian kernels $k(t) = \chi_{[0,\infty)}(t) \sum_{i=1}^{200} b_i \frac{1}{\sqrt{2 \pi} \sigma_i} \exp\left(\frac{-(t-\mu_i)^2}{2 \sigma_i^2}\right)$ in our numerical experiments. The coefficients $b_i$, $\mu_i$, and $\sigma_i$ in the Gaussian kernels are sampled from random distributions; specifically, the $b_i$ are drawn from a uniform distribution of values in $[0,1)$, the $\mu_i$ are drawn from a uniform distribution of values in $[-2,3)$ and the $\sigma_i$ are drawn from an exponential distribution with rate parameter value one. Furthermore, $f$ is defined as $f(t) \coloneqq 5 \sin(4 \pi t)+10$ for $t \in [0,1]$. We sample ten kernels $k$ and run \cref{alg:slip} for all combinations of the parameter values $\alpha \in \{10^{-3}, \dots, 10^{-7}\}$ and discretizations of the domain $(0,1)$ into $N \in \{ 512, \ldots, 8192 \}$ intervals. The number of intervals coincides with the problem size constant $N$ in the resulting IPs of the form \eqref{eq:ip}. We choose the initial control iterate $x^0(t) = 0$ for all $t \in [0,1]$ for all runs, thereby giving a total of 250 runs of \cref{alg:slip}. For each of these instances/runs, we compute a uniform discretization of the least squares term, the operator $K$, and the corresponding derivative of $F$ with the help of a discretization of $[0,1]$ into $8192$ intervals and approximate the integrals over them with Legendre--Gauss quadrature rules of fifth order. For choices $N < 8192$ we apply a broadcasting operation of the controls to neighboring intervals to obtain the corresponding evaluations of $F$ and coefficients in \eqref{eq:ip} with respect to a smaller number of intervals for the control function ansatz. We again obtain $\gamma_i = 1$ for all $i \in [N]$ due to the uniform discretization of the domain $[0,1]$. \subsection{Computational Setup of \cref{alg:slip}}\label{sec:comp_slip_setup} For all runs of \cref{alg:slip} we use the reset trust-region radius $\Delta^0 = \frac{1}{8}N$ and the step acceptance ratio $\rho = 0.1$. Regarding the solution of the generated subproblems, we have implemented both \ref{itm:astar} and \ref{itm:top} in C++. For the solution approach \ref{itm:scip} we employ the SCIP Optimization Suite 7.0.3 with the underlying LP solver SoPlex \cite{GamrathEtal2020ZR} to solve the IP formulation \eqref{eq:ip}. For all IOCP instances with $N \in \{512, \ldots, 2048 \}$ we prescribe a time limit of 240 seconds for the IP solver but note that almost all instances require only between a fraction of a second and a single-digit number of seconds of computing time for global optimality, so that the time limit is only reached in a few cases.
For higher values of $N$ the running times of the subproblems are generally too long to be able to solve all instances with a meaningful time limit for the solution approach \ref{itm:scip}. In order to assess the run times for \ref{itm:scip} in this case as well, we draw $50$ instances of \eqref{eq:ip} from the pool of all generated subproblems both for $N = 4096$ and $N = 8192$ and solve them with a time limit of one hour each. Moreover, we do the same with the $50$ instances of \eqref{eq:ip} for which the solution approach \ref{itm:astar} has the longest run times. We note that a run of \cref{alg:slip} may return different results depending on which solution approach is used for the trust-region subproblems even if all trust-region subproblems are solved optimally, because the minimizers need not be unique and different algorithms may find different minimizers. In order to be able to compare the performance of the algorithms properly, we record the trust-region subproblems that are generated when using \ref{itm:astar} as subproblem solver and pass the resulting collection of instances of \eqref{eq:ip} to the solution approaches \ref{itm:top} and \ref{itm:scip}. A laptop computer with an Intel(R) Core i7(TM) CPU with eight cores clocked at 2.5\,GHz and 64 GB RAM serves as the computing platform for all of our experiments. \section{Results}\label{sec:numresults} We present the computational results for both IOCPs. We provide run times for each of the three algorithmic solution approaches \ref{itm:astar}, \ref{itm:top}, and \ref{itm:scip}. The 75 runs of \cref{alg:slip} on instances of \eqref{eq:iocp_steady}, see \S\ref{sec:cex_2nd_elliptic}, generate 30440 instances of \eqref{eq:ip} in total. The 250 runs of \cref{alg:slip} on instances of \eqref{eq:iocp_signal}, see \S\ref{sec:cex_signal}, generate 655361 instances of \eqref{eq:ip} in total. A detailed tabulation including a breakdown with respect to the values of $\alpha$ and $N$ can be found in \cref{tbl:number_of_generated_ips} in \S\ref{sec:tabulated_data}. We analyze the recorded run times of all solution approaches with respect to the value $N$ in \S\ref{sec:run_times_wrt_N}. The run times of \ref{itm:astar} and \ref{itm:top} turn out to be generally much lower than those of \ref{itm:scip} and we analyze their run times in more detail with respect to the product of $N$ and $\Delta$ and the number of nodes in the graph in
it is assumed that they are distinct from one another. If X is finite, and A is any subset of X, then X\A is finite, so A is in the topology.

These notes cover almost every topic required for MSc mathematics (handwritten notes of Topology by Mr. Tahir Mehmood). Note: a topology can be both discrete and indiscrete only when the set X has at most one element.

One example of a topology on any set X is the topology T = P(X), the power set of X (all subsets of X are in T; all subsets are declared to be open). The power set P(X) of a non-empty set X is called the discrete topology on X, and the space (X, P(X)) is called the discrete topological space, or simply a discrete space. Any set can be given the discrete topology, in which every subset is open: the points are isolated from each other. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set … At the other extreme is the topology T2 = {∅, X}, called the trivial topology on X.

Why is the discrete topology called "discrete"? We can also get to this topology from a metric, where we define d(x₁, x₂) = 0 if x₁ = x₂ and d(x₁, x₂) = 1 if x₁ ≠ x₂ (the discrete metric). Next, we shall show that the metric of the space induces a topology on the space.

Topology on a set. Definition: assume you have a set X. A topology on X is a subset of the power set of X that contains the empty set and X, and is closed under union and finite intersection. Definition A.2: a set A ⊂ X in a topological space (X, τ) is called closed if its complement is open. Let x ∈ X. Then a neighborhood of x, N_x, is any set containing B(x, ε) for some ε > 0.

Now we shall show that the power set of a non-empty set X is a topology on X: (ii) the intersection of a finite number of subsets of X, being a subset of X, belongs to τ; (iii) ∅ and X, being subsets of X, belong to τ.

Examples. What are the differences between the following topologies? {∅, X} (the trivial topology); {∅, X, {a}} with a ∈ {0, 1, 2} — there are 3 of these. Example 1.1.9: one can start with an arbitrary set, but in order to better illustrate, take the set of the first three primes: {2, 3, 5}. Example: For every non-empty set X, the

Comparing topologies: the discrete topology is finer than the indiscrete topology defined on the same non-empty set, and the indiscrete topology is coarser than any other topology defined on the same non-empty set. If T₁ is a finer topology on X than T₂, then the identity map on X is necessarily continuous when viewed as a function from (X, T₁) to (X, T₂). Remark 2.7: note that the co-countable topology is finer than the co-finite topology. Exercise: show that for any topological space X the following are equivalent: (a) X has the discrete topology; …

Further fragments from the notes and the surrounding discussion: uniform spaces are closely related to topological spaces since one may go back and forth between topological and uniform spaces because uniform spaces are …; the strong topology on a finite-dimensional probabilistic normed (PN) space is the discrete topology on the set R^n; now suppose that K has the discrete topology; thus the first-countable normal space R_5 in Example II.1 is not metrizable, because it is not fully normal; constructing a topology for a family of discrete subsets. Taking the power set as a topology can also be done in topos theory, but relies on an impredicative use of power sets. (Formal topology arose out of a desire to work predicatively.)

Q: So were I to show that a set Y with the discrete topology were path-connected, I'd have to show a continuous mapping from [0, 1] with the Euclidean topology to Y joining any two points (with the end points having a and b as their image)?

Q: Let X be a set; the discrete topology T induced from the discrete metric is P(X), the power set of X. I know T ⊂ P(X), but how do we know T = P(X)? (I'm aware that if there is a set A, then 2^A would be the power set of A, but this is obviously different; I'm not familiar with this notation and I can't find the answer in my textbook or in Google.)

A: … This proves that P(X) ⊆ T, and you already have T ⊆ P(X), hence T = P(X).
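To make the claim that the power set of a non-empty set is a topology concrete, here is a small self-contained check on the three-primes example {2, 3, 5} (illustrative only; for a finite collection of sets, closure under pairwise unions and intersections is equivalent to the full axioms):

```python
from itertools import chain, combinations

def power_set(X):
    """All subsets of X, each represented as a frozenset."""
    xs = list(X)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))}

def is_topology(X, T):
    """Check: contains the empty set and X, and is closed under (pairwise)
    unions and intersections -- enough for a finite collection of open sets."""
    X = frozenset(X)
    if frozenset() not in T or X not in T:
        return False
    return all(A | B in T and A & B in T for A in T for B in T)

X = {2, 3, 5}
discrete = power_set(X)                      # the discrete topology P(X): 8 open sets
indiscrete = {frozenset(), frozenset(X)}     # the indiscrete (trivial) topology
print(is_topology(X, discrete), is_topology(X, indiscrete))          # True True
print(is_topology(X, {frozenset(), frozenset({2}), frozenset(X)}))   # True: like {∅, X, {a}}
```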
meaning the motion of both disks, at different altitudes); it's not the same as an orbit in a single spatial plane, which is what your results apply to. I am aware of that. That is why I said While the orbiting particle is not the exact analogue of a spinning (disc) at constant altitude,..... in my last post. It was meant to be just a clue that under certain circumstances angular momentum is conserved in GR. Now I will try a slightly different example that may be closer to what you require. Refering to the above diagram, consider a particle orbiting at constant ##\varphi## in a small circle around the North pole. For a small enough circle, this will approximate a small flat disc.While describing this circle, r and ##\varphi## remain constant. This implies rotational symmetry for a rotation in the ##\theta## direction. Noether's conservation theorem tells us that angular momentum is conserved if there is rotational symmetry. Therefore, (if I have interpreted Noether correctly) the angular momentum of the disc (or ring) at constant altitude should remain constant over time. Due to the spherical symmetry of the Schwarzschild metric this circle at constant r, does not have to be around the North pole. We need an actual analytical description of the motion of the rotating particles that explicitly *shows* that their period is constant as seen by the observer at the center of the disk. (That's one objective that the analysis I'm doing in the "spin-off" thread is ultimately aimed at.) I could do an analysis of pervect's coordinate system for a linearly accelerating 2+1D disc that would show that the period remains constant as measured by the observer at the centre of the disc, but I am pretty sure he assumed that when constructing the transformation system. Anyway, it seems that the idea that Born rigidity failure between two stacked thin discs is due to sheer stress caused by differential speeds of the two discs, depends on the failure of conservation of angular momentum in an accelerating reference frame. I think we need to start a separate thread on that conservation law. PAllen Anyway, it seems that the idea that Born rigidity failure between two stacked thin discs is due to sheer stress caused by differential speeds of the two discs, depends on the failure of conservation of angular momentum in an accelerating reference frame. I think we need to start a separate thread on that conservation law. Actually, it follows from conservation of angular momentum. Proper angular momentum is equivalent to what is measured in an instantly colocated, comoving, inertial frame, in SR. This remains constant for each disc. For the top and bottom disk, undergoing orthogonal Rindler motion, each disc perceive's the other as having its inertia and angular speed modified by Rindler g00, conserving angular momentum but producing shear (due to the difference in angular speed). PeterDonis Mentor Any metric or transformation I came up with that included linear acceleration along with rotation would assume that the proper angular momentum is conserved, but you require that I prove that first. I'm not looking for a metric or transformation; I'm looking for a description of a congruence of worldlines describing a linearly accelerated rotating disk (or a stack of them) that has a zero expansion tensor. If that congruence turns out to have each thin disk at a constant ##z## having a constant angular velocity, fine. But you can't just assume that's the case, and also assume that the congruence has zero expansion. 
You have to prove that both of those things can simultaneously be true. (Interestingly, I note that pervect assumes that proper angular velocity is constant in his spin off thread.) But the congruence of worldlines that pervect described using that assumption does not have a zero expansion tensor; at least, computations so far indicate that it doesn't. It was meant to be just a clue that under certain circumstances angular momentum is conserved in GR. Nobody is arguing about that; certainly I'm not. Now I will try a slightly different example that may be closer to what you require. Once again, you don't have to prove that angular momentum can be conserved. That's not the problem. The problem is finding a congruence describing a linearly accelerated, rotating disk, that has zero expansion. That is the critical question that we do not have an answer for. Your arguments here about Noether's theorem and angular momentum conservation are fine, but they don't address that critical question. See further comments below. I could do an analysis of pervect's coordinate system for a linearly accelerating 2+1D disc that would show that the period remains constant as measured by the observer at the centre of the disc, but I am pretty sure he assumed that when constructing the transformation system. And that isn't relevant to the critical question; see above. Anyway, it seems that the idea that Born rigidity failure between two stacked thin discs is due to sheer stress caused by differential speeds of the two discs, depends on the failure of conservation of angular momentum in an accelerating reference frame. No, it depends on the nonexistence of a congruence describing the discs' motion that has zero expansion. I don't see any way of addressing that question with arguments about angular momentum conservation. First, angular momentum could be perfectly well conserved in an object that is tearing itself apart due to shear stress, as long as the pieces that were flying apart were doing so along appropriate trajectories. (PAllen's post in response to yours gives an example of how the nonzero shear could arise.) Second, angular momentum conservation might not hold even in an object that *was* moving on a congruence having zero expansion, if the object was accelerated. If the force producing the acceleration is not applied in a way that produces zero torque, then the object's angular momentum will change, even if the overall motion has zero expansion. Actually, it follows from conservation of angular momentum. Proper angular momentum is equivalent to what is measured in an instantly colocated, comoving, inertial frame, in SR. This remains constant for each disc. For the top and bottom disk, undergoing orthogonal Rindler motion, each disc perceive's the other as having its inertia and angular speed modified by Rindler g00, conserving angular momentum but producing shear (due to the difference in angular speed). If the top observer measures the proper angular velocity of his disc to be w and the bottom observer also measures the speed of his disc to also be w, then the top observer will indeed perceive the the bottom disc to be turning slower by a factor of g00 and the bottom observer will consider the the top disc to be turning faster by a factor of g00. Now if we perform a "one time operation" of physically speeding the rear disc up by a factor of g00, they will both consider the discs to turning at the same constant rate for the rest of time and their is no sheer. 
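For reference, the "Rindler g00" factor invoked in these posts is just the ratio of proper-time rates at two heights of a uniformly accelerated frame. A minimal statement, quoted here only for orientation (units with c = 1 and proper acceleration g for the fiducial observer at z = 0; this is a standard result, not a derivation of the claims above):

$$ds^2 = -(1+gz)^2\,dt^2 + dx^2 + dy^2 + dz^2, \qquad \frac{d\tau_{\rm upper}}{d\tau_{\rm lower}} = \sqrt{\frac{g_{00}(z_{\rm upper})}{g_{00}(z_{\rm lower})}} = \frac{1+g z_{\rm upper}}{1+g z_{\rm lower}},$$

so clocks (and, per the argument above, rotation rates) on the lower disc run slow relative to the upper disc by exactly this factor.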
This is the same as speeding up the rear clock by the appropriate factor so that the clock frequencies tick at the same rate according to both observers. Last edited: But pervect showed that if the observers are moving relative to the plane, and the plane is accelerating, then those observers see the plane as curved, not plane. So their motion does make a difference. I have had another look at the transformations given by Pervect and there appears to be a number of issues with them. For certain inputs it is possible to get the inertial coordinates progressively going further back in time, while the time is going forward in the rocket. The vertical height of the rocket relative to height in the elevator changes over time. In the inertial coordinates the rocket is banana
Bayes', GaussianNB), ... SearchSpace('Decision Tree', DecisionTreeClassifier) ... ] >>> config = Config( ... local_dir = 'miraiml_local', ... problem_type = 'classification', ... score_function = roc_auc_score, ... search_spaces = search_spaces, ... ensemble_id = 'Ensemble' ... ) >>> def on_improvement(status): ... print('Scores:', status.scores) >>> engine = Engine(config, on_improvement=on_improvement) """ def __init__(self, config, on_improvement=None): self.__validate__(config, on_improvement) self.config = config self.on_improvement = on_improvement self.train_predictions_df = None self.test_predictions_df = None self.__is_running__ = False self.must_interrupt = False self.mirai_seeker = None self.models_dir = config.local_dir + 'models/' self.train_data = None self.ensembler = None self.n_cycles = 0 self.last_improvement_timestamp = None @staticmethod def __validate__(config, on_improvement): """ Validates the constructor parameters. """ if not isinstance(config, Config): raise TypeError('miraiml.Engine\'s constructor requires an object ' + 'of miraiml.Config') if on_improvement is not None and not callable(on_improvement): raise TypeError('on_improvement must be None or a function') def is_running(self): """ Tells whether the engine is running or not. :rtype: bool :returns: ``True`` if the engine is running and ``False`` otherwise. """ return self.__is_running__ def interrupt(self): """ Makes the engine stop on the first opportunity. .. note:: This method is **not** asynchronous. It will wait until the engine stops. """ self.must_interrupt = True if self.ensembler is not None: self.ensembler.interrupt() while self.__is_running__: time.sleep(.1) self.must_interrupt = False def load_train_data(self, train_data, target_column, restart=False): """ Interrupts the engine and loads the train dataset. All of its columns must be either instances of ``str`` or ``int``. .. warning:: Loading new training data will **always** trigger the loss of history for optimization. :type train_data: pandas.DataFrame :param train_data: The training data. :type target_column: str or int :param target_column: The target column identifier. :type restart: bool, optional, default=False :param restart: Whether to restart the engine after updating data or not. :raises: ``TypeError`` if ``train_data`` is not an instance of ``pandas.DataFrame``. :raises: ``ValueError`` if ``target_column`` is not a column of ``train_data`` or if some column name is of a prohibited type. """ self.__validate_train_data__(train_data, target_column) self.columns_renaming_map = {} self.columns_renaming_unmap = {} for column in train_data.columns: column_renamed = str(column) self.columns_renaming_map[column] = column_renamed self.columns_renaming_unmap[column_renamed] = column self.target_column = target_column train_data = train_data.rename(columns=self.columns_renaming_map) self.interrupt() self.train_data = train_data.drop(columns=target_column) self.train_target = train_data[target_column] self.all_features = list(self.train_data.columns) if self.mirai_seeker is not None: self.mirai_seeker.reset() if restart: self.restart() @staticmethod def __validate_train_data__(train_data, target_column): """ Validates the train data. 
""" if not isinstance(train_data, pd.DataFrame): raise TypeError('Training data must be an object of pandas.DataFrame') train_columns = train_data.columns if target_column not in train_columns: raise ValueError('target_column must be a column of train_data') for column in train_columns: if not isinstance(column, str) and not isinstance(column, int): raise ValueError('All columns names must be either str or int') def load_test_data(self, test_data, restart=False): """ Interrupts the engine and loads the test dataset. All of its columns must be columns in the train data. The test dataset is the one for which we don't have the values for the target column. This method should be used to load data in production. .. warning:: This method can only be called after :func:`miraiml.Engine.load_train_data` :type test_data: pandas.DataFrame, optional, default=None :param test_data: The testing data. Use the default value if you don't need to make predictions for data with unknown labels. :type restart: bool, optional, default=False :param restart: Whether to restart the engine after loading data or not. :raises: ``RuntimeError`` if this method is called before loading the train data. :raises: ``ValueError`` if the column names are not consistent. """ if self.train_data is None: raise RuntimeError('This method cannot be called before load_train_data') self.__validate_test_data__(test_data) self.test_data = test_data.rename(columns=self.columns_renaming_map) if restart: self.restart() def __validate_test_data__(self, test_data): """ Validates the test data. """ for column in self.columns_renaming_map: if column != self.target_column and column not in test_data.columns: raise ValueError( 'Column {} is not a column in the train data'.format(column) ) def clean_test_data(self, restart=False): """ Cleans the test data from the buffer. .. note:: Keep in mind that if you don't intend to make predictions for unlabeled data, the engine will run faster with a clean test data buffer. :type restart: bool, optional, default=False :param restart: Whether to restart the engine after cleaning test data or not. """ self.interrupt() self.test_data = None if restart: self.restart() def shuffle_train_data(self, restart=False): """ Interrupts the engine and shuffles the training data. :type restart: bool, optional, default=False :param restart: Whether to restart the engine after shuffling data or not. :raises: ``RuntimeError`` if the engine has no data loaded. .. note:: It's a good practice to shuffle the training data periodically to avoid overfitting on a particular folding pattern. """ if self.train_data is None: raise RuntimeError('No data to shuffle') self.interrupt() seed = int(time.time()) self.train_data = self.train_data.sample(frac=1, random_state=seed) self.train_target = self.train_target.sample(frac=1, random_state=seed) if restart: self.restart() def reconfigure(self, config, restart=False): """ Interrupts the engine and loads a new configuration. .. warning:: Reconfiguring the engine will **always** trigger the loss of history for optimization. :type config: miraiml.Config :param config: The configurations for the behavior of the engine. :type restart: bool, optional, default=False :param restart: Whether to restart the engine after reconfiguring it or not. """ self.interrupt() self.config = config if self.mirai_seeker is not None: self.mirai_seeker.reset() if restart: self.restart() def restart(self): """ Interrupts the engine and starts again from last checkpoint (if any). 
It is also used to start the engine for the first time. :raises: ``RuntimeError`` if no data is loaded. """ if self.train_data is None: raise RuntimeError('No data to train') self.interrupt() def starter(): try: self.__main_loop__() except Exception: self.__is_running__ = False raise Thread(target=starter).start() def __improvement_trigger__(self): """ Called when an improvement happens. """ self.last_improvement_timestamp = time.time() if self.on_improvement is not None: self.on_improvement(self.request_status()) def __update_best__(self, score, id): """ Updates the best id of the engine. """ if self.best_score is None or score > self.best_score: self.best_score = score self.best_id = id def __check_stagnation__(self): """ Checks whether the engine has reached stagnation or not. If so, the engine is interrupted. """ if self.config.stagnation >= 0: diff_in_seconds = time.time() - self.last_improvement_timestamp if diff_in_seconds/60 > self.config.stagnation: self.interrupt() def __main_loop__(self): """ Main optimization loop. """ self.__is_running__ = True if not os.path.exists(self.models_dir): os.makedirs(self.models_dir) self.base_models = {} self.train_predictions_df = pd.DataFrame() self.test_predictions_df = pd.DataFrame() self.scores = {} self.best_score = None self.best_id = None self.ensembler = None self.mirai_seeker = MiraiSeeker( self.config.search_spaces, self.all_features, self.config ) self.n_cycles = 0 self.last_improvement_timestamp = time.time() start = time.time() for search_space in self.config.search_spaces: if self.must_interrupt: break id = search_space.id base_model_path = self.models_dir + id base_model_class = search_space.model_class if os.path.exists(base_model_path): base_model = load_base_model(base_model_class, base_model_path) parameters = base_model.parameters parameters_values = search_space.parameters_values for key, value in zip(parameters.keys(), parameters.values()): if key not in parameters_values: warnings.warn( 'Parameter ' + str(key) + ', set with value ' + str(value) + ', from checkpoint is not on the ' + 'provided search space for the id ' + str(id), RuntimeWarning ) else: if value not in parameters_values[key]: warnings.warn( 'Value ' + str(value) + ' for parameter ' + str(key) + ' from checkpoint is not on the provided ' + 'search space for the id ' + str(id), RuntimeWarning ) else: base_model = self.mirai_seeker.seek(search_space.id) dump_base_model(base_model, base_model_path) self.base_models[id] = base_model train_predictions, test_predictions, score = base_model.predict( self.train_data, self.train_target, self.test_data, self.config) self.train_predictions_df[id] = train_predictions self.test_predictions_df[id] = test_predictions self.scores[id] = score self.__update_best__(self.scores[id], id) total_cycles_duration = time.time() - start will_ensemble = len(self.base_models) > 1 and\ self.config.ensemble_id is not None if will_ensemble: self.ensembler = Ensembler( list(self.base_models), self.train_target, self.train_predictions_df, self.test_predictions_df, self.scores, self.config ) ensemble_id = self.config.ensemble_id if self.ensembler.optimize(total_cycles_duration): self.__update_best__(self.scores[ensemble_id], ensemble_id) self.__improvement_trigger__() self.n_cycles = 1 while not self.must_interrupt: gc.collect() start = time.time() for search_space in self.config.search_spaces: self.__check_stagnation__() if self.must_interrupt: break id = search_space.id base_model = self.mirai_seeker.seek(id) train_predictions, 
test_predictions, score = base_model.predict( self.train_data, self.train_target, self.test_data, self.config) self.mirai_seeker.register_base_model(id, base_model, score) if score > self.scores[id] or ( score == self.scores[id] and len(base_model.features) < len(self.base_models[id].features) ): self.scores[id] = score self.train_predictions_df[id] = train_predictions self.test_predictions_df[id] = test_predictions self.__update_best__(score, id) if will_ensemble: self.ensembler.update() self.__update_best__(self.scores[ensemble_id], ensemble_id) self.__improvement_trigger__() dump_base_model(base_model, self.models_dir + id) else: del train_predictions, test_predictions total_cycles_duration += time.time() - start self.n_cycles += 1 if will_ensemble: if self.ensembler.optimize(total_cycles_duration/self.n_cycles): self.__update_best__(self.scores[ensemble_id], ensemble_id) self.__improvement_trigger__() self.__is_running__ = False def request_status(self): """ Queries the current status of the engine. :rtype: miraiml.Status :returns: The current
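Tying the public methods above together, here is a minimal end-to-end usage sketch adapted from the constructor docstring; the CSV paths, the 'target' column name and the choice of models are placeholders, not part of the library, and the import path is inferred from the docstrings.

import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from miraiml import SearchSpace, Config, Engine  # import path assumed from the docstrings

# Hypothetical labeled training data and unlabeled test data.
train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')

search_spaces = [
    SearchSpace('Gaussian NB', GaussianNB),
    SearchSpace('Decision Tree', DecisionTreeClassifier),
]
config = Config(
    local_dir='miraiml_local',
    problem_type='classification',
    score_function=roc_auc_score,
    search_spaces=search_spaces,
    ensemble_id='Ensemble',
)

engine = Engine(config, on_improvement=lambda status: print('Scores:', status.scores))
engine.load_train_data(train_df, 'target')  # always resets the optimization history
engine.load_test_data(test_df)
engine.restart()                            # starts the background optimization loop

# ... later, from the same process:
status = engine.request_status()
engine.interrupt()                          # blocks until the loop has actually stopped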
\section{Introduction} \label{sec:intro} \IEEEPARstart{V}{p9} has been developed by Google \cite{vp9} as an alternative to mainstream video codecs such as H.264/AVC \cite{h264} and High Efficiency Video Coding (HEVC) \cite{hevc} standards. VP9 is supported in many web browsers and on Android devices, and is used by online video streaming service providers such as Netflix and YouTube. As compared to both its predecessor, VP8 \cite{vp8} and H.264/AVC video codecs, VP9 allows larger prediction blocks, up to size $64 \times 64$, which results in a significant improvement in coding efficiency. In VP9, sizes of prediction blocks are decided by a recursive splitting of non-overlapping spatial units of size $64 \times 64$, called superblocks. This recursive partition takes place at four hierarchical levels, possibly down to $4 \times 4$ blocks, through a search over the possible partitions at each level, guided by a rate-distortion optimization (RDO) process. The Coding tree units (CTUs) in HEVC, which are analogous to VP9's superblocks, have the same default maximum size of $64 \times 64$ and minimum size of $8 \times 8$, which can be further split into smaller partitions ($4 \times 4$ in the intra-prediction case). However, while HEVC intra-prediction only supports partitioning a block into four square quadrants, VP9 intra-prediction also allows rectangular splits. Thus, there are four partition choices at each of the four levels of the VP9 partition tree for each block at that level: no split, horizontal split, vertical split and four-quadrant split. This results in a combinatorial complexity of the partition search space since the square partitions can be split further. A diagram of the recursive partition structure of VP9 is shown in the Fig. \ref{fig:partition_types}. \begin{figure} \centering \includegraphics[width=8cm]{partition_types} \caption{\small{{Recursive partition of VP9 superblock at four levels showing the possible types of partition at each level.}}} \label{fig:partition_types} \end{figure} Although the large search space of partitions in VP9 is instrumental to achieve its rate-distortion (RD) performance, it causes the RDO based search to incur more computational overhead as compared to VP8 or H.264/AVC, making the encoding process slower. Newer video codecs, such as AV1 \cite{av1} and future Versatile Video Coding (VVC) \cite{vvc}, allow for prediction units of sizes from $128\times128$ to $4 \times 4$, giving rise to even deeper partition trees. As ultra high-definition (UHD) videos become more popular, the need for faster encoding algorithms will only escalate. One important way to approach this issue is to reduce the computational complexity of the RDO-based partition search in video coding. While state of the art performances in visual data processing technologies, such as computer vision, rely heavily on deep learning, mainstream video coding and compression technology remains dominated by traditional block-based hybrid codecs. However, deep learning based image compression techniques such as \cite{RNNimagecomp, GANcomp, variationalcomp} are also being actively explored, and have shown promise. A few of such image based deep learning techniques have also been extended to video compression with promising results \cite{deepcoder, adversarialvideo, imageinterpolation}. 
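To get a feel for the combinatorics described above, the partition trees can be counted directly under the stated rule that only square sub-blocks are split further (the none, horizontal and vertical choices are terminal). This is a rough illustrative sketch, not part of the encoder:

def count_partition_trees(block_size, smallest_square=8):
    """Count distinct partition trees for a square block of the given size.

    At each level there are four choices: none, horizontal, vertical, or a
    four-quadrant split; only the four-quadrant split recurses, and the
    resulting 4x4 blocks at the bottom are never split further.
    """
    if block_size == smallest_square:
        return 4  # none, horizontal, vertical, or a final split into 4x4 blocks
    return 3 + count_partition_trees(block_size // 2, smallest_square) ** 4

for size in (8, 16, 32, 64):
    print(size, count_partition_trees(size))
# Gives 4, 259, roughly 4.5e9 and roughly 4.1e38 trees per 64x64 superblock,
# which is why an exhaustive RDO search over partitions is so expensive.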
A second category of work uses deep learning to enhance specific aspects of video coding, such as block prediction \cite{biprediction, interpred}, motion compensation \cite{fractionalinterpolation, motioncomp}, in-loop filtering \cite{inloop, residualinloop, multiscaleinloop}, and rate control \cite{ratecontrol}, with the objective of improving the efficiency of specific coding tools. Given these developments, there is a possibility of a future paradigm shift in the domain of video coding towards deep learning based techniques. Our present work is motivated by the success of deep learning-based techniques such as \cite{HEVCintra, inter&intra}, on the current and highly practical task of predicting the HEVC partition quad-trees. In this paper, we take a step in this direction by developing a method of predicting VP9 intra-mode superblock partitions in a novel bottom-up way, by employing a hierarchical fully convolutional network (H-FCN). Unlike previous methods of HEVC partition prediction, which recursively split blocks starting with the largest prediction units, our method predicts block merges recursively, starting with the smallest possible prediction units, which are $4 \times 4$ blocks in VP9. By taking a bottom-up approach optimized using an H-FCN model we are able to achieve better performance with a much smaller network than \cite{inter&intra}. By integrating the trained model with the reference VP9 encoder, we are able to substantially speed up intra-mode encoding at a reasonably low RD cost, as measured by the Bj{\o}ntegaard delta bitrate (BD-rate) \cite{bjontegaard}, which quantifies differences in bitrate at a fixed encoding quality level relative to another reference encode. Our method also surpasses the higher speed levels of VP9 in terms of speedup, while maintaining a lower BD-rate as we show in the experimental results. The main steps of our work presented in this paper can be summarized as follows: \begin{enumerate} \item We created a large and diverse database of VP9 intra encoded superblocks and their corresponding partition trees using video content from the Netflix library. \item We developed a fast H-FCN model that efficiently predicts VP9 intra-mode superblock partition trees using a bottom-up approach. \item We integrated the trained H-FCN model with the VP9 encoder to demonstrably speed up intra-mode encoding. \end{enumerate} The source code of our model implementation, including the modifications made to the reference VP9 decoder and encoder for creating the database of superblock partitions, and using the trained H-FCN model for faster intra encoding, respectively, is available online at \cite{repo}. The rest of the paper is organized as follows. In Section \ref{sec:relatedwork}, we briefly review earlier works relevant to the current task. Section \ref{sec:database} describes the VP9 partition database that we created to drive our deep learning approach. Section \ref{sec:method} elaborates the proposed method. Experimental results are presented in Section \ref{sec:results}. Finally, we draw conclusions and provide directions for future work in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:relatedwork} The earliest machine learning based methods that were designed to infer the block partition structures of coded videos from pixel data relied heavily on feature design. 
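For orientation, the BD-rate metric defined above is commonly computed by fitting log-bitrate as a cubic polynomial of quality (e.g. PSNR) for the two encoders and averaging the gap over the overlapping quality interval. The sketch below is one common implementation of the Bjøntegaard calculation, shown only for reference; it is not claimed to be the exact script used in the experiments.

import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Approximate Bjontegaard delta bitrate (percent) between two RD curves.

    Each curve is given as matched arrays of bitrates and PSNR values
    (typically four rate points per curve). A positive result means the test
    encoder needs more bitrate than the reference at equal quality.
    """
    log_ref, log_test = np.log10(rate_ref), np.log10(rate_test)
    # Fit log-bitrate as a cubic polynomial of PSNR for each curve.
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)
    # Integrate both fits over the overlapping quality interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    # Average log-bitrate gap, converted back to a percentage difference.
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0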
A decision tree based approach was used to predict HEVC partition quadtrees for intra frames from features derived from the first and second order block moments in \cite{icipML}, reportedly achieving a 28\% reduction in computational complexity along with 0.6\% increase in BD-rate. Using a support vector machine (SVM) classifier on features derived from measurements of the variance, color and gradient distributions of blocks, a 36.8\% complexity reduction was gained against a 3\% increase in BD-rate in \cite{screencontent}, on the screen content coding extension of HEVC in the intra-mode. With the advent of deep learning techniques in recent years, significant further breakthroughs were achieved \cite{intraCNN, inter&intra, HEVCintra}. In \cite{HEVCintra}, a parallel convolutional neural network (CNN) architecture was employed to reduce HEVC intra encoding time by 61.1\% at the expense of a 2.67\% increase in BD-rate. Three separate CNN models were used to learn the three-level intra-mode partition structure of HEVC in \cite{intraCNN}, obtaining an average savings of 62.2\% of encoding time against a BD-rate increase of 2.12\%. This approach was extended in \cite{inter&intra} to reduce the encoding time of both intra and inter modes using a combination of a CNN with a long short-term memory (LSTM) architecture. This approach reduced the average intra-mode encoding time by 56.9-66.5\% against an increase of 2.25\% in BD-rate, while in the inter mode, a 43.8\%-62.9\% average reduction was obtained versus an increase of 1.50\% in BD-rate. However, there has been little work reported on the related problem of reducing the computational complexity of RDO based superblock partition decisions in VP9, and even less work employing machine learning techniques. A multi-level SVM based early termination scheme for VP9 block partitioning was adopted in \cite{mlvp9}, which reduced encoding time by 20-25\% against less than a 0.03\% increase in BD-rate in the inter mode. Although superblock partition decisions using RDO consume bulk of the compute expense of intra-mode encoding in VP9, to the best of our knowledge, there has been no prior work on predicting the complete partition trees of VP9 superblocks. The problem of VP9
# [truncated machine-generated expression: a long sum of terms built from r[0]..r[3],
#  np.exp(1j * omega * r[i] * t), the matrix entries Mxx, Mxt, Mtx, Mtt and the
#  vectors nx, nt; both ends of the expression are cut off]
clustering, in the range indicated by the vertical blue lines. The black vertical line gives the scale $2\varepsilon$. } \label{fig-fits-two-xis_n-2_n-1} \end{figure*} \begin{figure*} \resizebox{!}{5.5cm}{\includegraphics[]{n0_K0,50_CF_coo_040.eps}} \resizebox{!}{5.5cm}{\includegraphics[]{n0_K1_CF_coo_040.eps}} \\ \resizebox{!}{5.5cm}{\includegraphics[]{n0_K1,50_CF_coo_040.eps}}\\ \resizebox{!}{5.5cm}{\includegraphics[]{n2_K0,23_CF_coo_045.eps}} \resizebox{!}{5.5cm}{\includegraphics[]{n2_K0,45_CF_coo_045.eps}}\\ \resizebox{!}{5.5cm}{\includegraphics[]{n2_K0,70_CF_coo_060.eps}} \resizebox{!}{5.5cm}{\includegraphics[]{n2_K1_CF_coo_050.eps}} \caption{Plots for $\xi$ at the latest simulation time in each indicated model, along with lines indicating our best ``extrapolated fit" and the prediction of stable clustering.} \label{fig-fits-two-xis_n0_n2} \end{figure*} \subsection{Direct tests for finite box size effects} \label{Direct tests for finite box size effects} As we have discussed above, the restriction of our measurements of the correlation function and power spectrum to the scales in which these quantities are self-similar should ensure that our results are unaffected by effects arising from the finite size of our simulation box. Indeed, for our models with $n=-1$ and $n=-2$ in which we expect such effects to be most important, our tests for self-similarity have identified clearly the presence of such effects at larger scales. It is interesting to test also more directly for finite size effects by using simulations which are identical other than for the simulation box size, and in particular to verify that the correlation function and power spectrum are indeed unaffected by the box size in the range of scale indicated by the test of self-similarity. To perform such a test we have resimulated the model with $n=-1$ and $\kappa=1$ (i.e. EdS) with $N=256^3$ particles (starting from the same amplitude of initial fluctuations as given in Table \ref{Table1}). For one simulation we have used the same smoothing ($\varepsilon=0.01$) as in our fiducial simulations, and for the other a larger smoothing ($\varepsilon=0.03$). Shown in Fig. \ref{fig_compare_N=256} are the measured $\xi(r)$ and $\Delta^2(k)$ in these two simulations at the time $t_s=4$ alongside those for the simulation with $N=64^3$ used in our analysis above, at the same time.The comparison between the two simulations with $\varepsilon=0.01$ shows that there are indeed significant finite size effects at scales which are large but well inside the box size at this time. Indeed, in excellent agreement with what we have inferred from our analysis using self-similarity of the smaller simulation, these effects extend at this time (the latest time used for our analysis above) down to scales in which $\xi \sim 10^1$ -- $10^2$. It is interesting to note also that the peculiar ``bump" feature at $\xi \sim 1$ in the $64^3$ simulation clearly disappears in the larger box simulation, and is indeed unphysical as one would anticipate (in a scale-free model). The comparison of the two simulations with $N=256^3$ with different smoothing is shown as it confirms further the conclusions of the analysis in Sect. \ref{Effects of force smoothing}: we see that the correlation functions for the two simulations agree very well down to close to the larger $\varepsilon$, while in $\Delta^2(k)$ the effect of the smoothing in decreasing the power is clearly visible already at $k \approx 0.1 (\pi/\varepsilon)$. 
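For orientation, in the canonical $\kappa=1$ (EdS) case the self-similar scaling and the stable-clustering exponent referred to here take the familiar forms (standard results quoted for reference; the paper's generalization to arbitrary $\kappa$ is not reproduced):
\[
\xi(r,t) = \Xi\!\left(\frac{r}{R_s(t)}\right),
\qquad
R_s(t) \propto a^{2/(3+n)} \propto t^{\,4/[3(3+n)]},
\qquad
\gamma_{sc}(n,\kappa{=}1) = \frac{3(3+n)}{5+n},
\]
which gives $\gamma_{sc} \simeq 1$, $1.5$, $1.8$ and $2.14$ for $n=-2,-1,0$ and $2$ respectively.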
The exponent $\gamma$ obtained fitting either simulation in the self-similar region gives a value very consistent with that obtained from the $64^3$ simulations. \begin{figure*} \centering\includegraphics[scale=0.7]{size_compare.eps} \caption{ Comparison of $\xi(r)$ (left) and $\Delta^2(k)$ (right) in three simulations of the model $n=-1$ and $\kappa = 1$, at the time $t_s=4$, and starting from an amplitude of initial fluctuations given in Table \ref{Table1}. One pair of simulations differ only in particle number (i.e. box size), while the pair at $N=256^2$ differ only in force smoothing. The $x$-axis in both plots has been rescaled to the initial grid spacing $\Lambda$, and the black vertical lines indicate the values of $\varepsilon$ on the plots of $\xi$, and those of $(0.1)\pi/\varepsilon$ on the plots of $\Delta^2(k)$.} \label{fig_compare_N=256} \end{figure*} \section{Discussion and conclusion} In this final section we first give a brief summary of our most important results and conclusions. We then compare our results to some previous studies in the literature (of the $\kappa=1$ case). We conclude with a brief discussion of the possible implications of our study, in particular in relation to the issue of ``universality" of gravitational clustering in cosmological models, and then trace finally some directions for future research. \subsection{Summary of main results} \begin{itemize} \item We have pointed out the interest in studying structure formation in a family of EdS models, noting that, like the canonical case, they are expected to display, for power law initial conditions, the property of self-similarity. The latter provides a powerful tool to distinguish the physically relevant results of a numerical simulation: where the observed clustering is not self-similar, it is necessarily dependent on the length and time scales which are introduced by the finite simulation. This family of models allows us to investigate in such a context not just, as in the usual EdS model, the dependence on initial conditions of clustering, but also the dependence on cosmology. \item We have derived in these models both the theoretical predictions for the self-similar behaviour, and those for the exponents $\gamma_{sc} (n, \kappa)$ characterising the non-linear regime in the additional hypothesis of stable clustering. \item We have explained that the (theoretical) exponent $\gamma_{sc} (n, \kappa)$ has a very simple physical meaning: it controls the relative size of virialized structures compared to their initial relative comoving size. Larger $\gamma_{sc} (n, \kappa)$ corresponds thus to a greater ``shrinking" of substructures contained in a given structure. It follows that we expect $\gamma_{sc} (n, \kappa)$ to be a good control parameter for the validity of stable clustering, and also for the range in which it can be observed in a finite simulation (if it is a good approximation). \item We have reported the results of an extensive suite of $N$ body simulations, performed using an appropriate modification of the GADGET2 code which we have implemented and tested. To control the code's relative accuracy in different simulations we have defined a suitable generalized variant of the usual test based on the Layzer-Irvine equation. \item We have analysed in detail the two point correlation properties in real and reciprocal space of the evolved distributions for models covering a broad range in the $(n, \kappa)$ parameter space. 
Our chosen range in $n$ extends to the region $n>1$ which has not been studied previously even in the standard ($\kappa=1$) case. Our results show that all our models display a clear tendency to establish self-similar evolution over a range of comoving scale which grows monotonically in time. Further, we find that in all cases the self-similar region of the fully non-linear part of the correlation function (and power spectrum) are well fit by a simple power law from with an exponent in good agreement with the prediction of stable clustering. As anticipated theoretically, the robustness and range of validity of these numerical results turns out itself to be a function of $\gamma_{sc}$: as it decreases below $\gamma_{sc} \approx 1.5$ the range of strongly non-linear self-similar clustering diminishes considerably, so much so that at $\gamma_{sc} \approx 1$ the prediction of stable clustering can only be marginally tested. Given these limitations we are not able to detect any deviation from the stable clustering prediction in this class of models, and conclude that significantly larger simulations would be needed to do so. \end{itemize} \subsection{Comparison with other studies} \subsubsection{Three dimensions} Previous studies have considered the usual EdS model for different values of $n$ in the range $-3<n \leq 1$, focussing both on the validity of self-similarity and stable clustering. Concerning self-similarity our results are in agreement with all previous studies for this range, but help to clarify two questions. Firstly they show that for the usual EdS model self-similarity indeed extends to $n>1$. We have studied only the case $n=2$, but we can be extremely confident, given the observed trend with $n$, numerical results in the one dimensional case (\cite{joyce+sicard_2011, benhaiem+joyce+sicard_2013}), and the theoretical considerations concerning $\gamma_{sc}$, that this result extends up to $n=4$. Secondly, our results throw light on the difficulty, documented and discussed in the previous literature, of observing self-similarity in numerical simulations in the range $-3 < n < -1$, which even led to the suggestion that it might not apply for this case because of infra-red convergence of possibly relevant physical quantities (\cite{jain+bertschinger_1997, jain+bertschinger_1998}). Our study shows very clearly that the difficulty in observing self-similarity is not due to any intrinsic problem related to infrared properties of these spectra. Indeed we have no difficulty observing self-similarity over an extensive range even for a model with $n=-2$ when the cosmology is modified. The difficulty of observing self-similarity is due essentially just to the very limited
of four sites in the top row of sites, and holes in the remaining sites. Then, $3$ Givens rotations were used to create a uniform superposition of the hole in various states in the top row. Finally, $4$ Givens rotations were used to create a uniform superposition between top and bottom rows. This was done for both spins, giving $2 \times 7 = 14$ rotations. We then annealed to $U\neq 0$ while reducing the vertical coupling to unit strength. The results are shown in Fig.~\ref{fig:ffanneal}. As seen, increasing the annealing time increases the success probability, until it converges to $1$. The annealing times required to achieve substantial (say, $90\%$) overlap with the ground state are not significantly different from the preparation approach discussed next based on ``joining" plaquettes. Note that for this particular model, we know the ground state energy with very high accuracy from running an exact diagonalization routine so we know whether or not we succeed in projecting onto it after phase estimation. The system size is small enough that calculating ground state energies is easy to accomplish on a classical computer using Lanczos methods. Of course, on larger systems requiring a quantum computer to simulate, we will not have access to the exact energy. In this case, it will be necessary to do multiple simulations with different mean-field starting states as described at the start of this section; then, if the annealing is sufficiently slow and sufficiently many samples are taken, the minimum over these simulations will give a good estimate of the ground state energy. Once we have determined the optimum mean-field starting state and annealing path, we can then use this path to re-create the ground state to determine correlation functions. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{annealff} \end{center} \caption{Success probability as a function of total annealing time for an annealing path in the Hubbard model that begins at $U=0$ and vertical hopping equal to $2$ and ends at $U=2.5t$ (blue) or $U=4t$ (red) and all hoppings equal to $1$. The initial state is a Slater determinant state (of free fermion single-particle states). Annealing is linear along the path. The $x$-axis represents the total time for the anneal. The $y$-axis represents the probability that the phase estimation routine returns an energy that is within $10^{-3}$ of the ground state energy. This probability is obtained by sampling runs of the phase estimation circuit, giving rise to some noise in the curves.} \label{fig:ffanneal} \end{figure} \subsection{Preparing of $d$-wave resonating valence bond states from plaquettes} \label{sec:plaquettes} \subsubsection{ $d$-wave resonating valence bond states} An alternative approach to adiabatic state preparation has previously been proposed in the context of ultracold quantum gases in optical lattices. In this approach, $d$-wave resonating valence bond (RVB) states are prepared.\cite{Trebst2005} RVB states are spin liquid states of a Mott insulator, in which the spins on nearby sites are paired into short-range singlets. 
RVB states were first introduced as proposed wave functions for spin liquid ground states of Mott insulators\cite{Anderson:1973eo}, and then conjectured to describe the insulating parent compounds of the cuprate superconductors.\cite{Anderson:1987ii} In more modern language, they are described as $\mathbb{Z}_2$ topologically-ordered spin liquid states.\cite{Wen91c} The idea was that, upon doping, the singlet pairs would begin to move and form a condensate of Cooper pairs. Although the insulating parent compounds of the cuprates are antiferromagnetically-ordered, it is nevertheless possible that the cuprate superconductors are close to an RVB spin liquid ground state and are best understood as doped spin liquids. The RVB scenario has been confirmed for $t$-$J$ and Hubbard models of coupled plaquettes \cite{fye,Altman:2002iu} and ladders (consisting of two coupled chains) \cite{Tsunetsugu:94,Troyer:96} and remains a promising candidates for the ground state of the Hubbard model and high-temperature superconductors.\cite{RVB:04} Instead of starting from mean-field Hamiltonians, we build up the ground state wave function from small spatial motifs, in this case four-site plaquettes, which are the smallest lattice units on which electrons pair in the Hubbard model.\cite{fye} Following the same approach as originally proposed for ultracold quantum gases, we prepare four-site plaquettes in their ground state, each filled with either two or four electrons. \cite{Trebst2005} These plaquettes then get coupled, either straight to a two-dimensional square lattice, or first to quasi-one dimensional ladders, which are subsequently coupled to form a square lattice. \subsubsection{Preparing the ground states of four-site plaquettes} \begin{figure} \begin{center} \includegraphics[width=3cm]{plaquette} \end{center} \caption{Labeling of the sites in a plaquettes} \label{fig:plaquette} \end{figure} The first step is preparing the ground state of the Hubbard model on 4-sites plaquettes filled with either two or four electrons. We start from very simple product states and adiabatically evolve them into the ground states of the plaquettes. To prepare the ground states of plaquettes with four electrons we start from a simple product state $c^\dag_{0,\uparrow}c^\dag_{0,\downarrow}c^\dag_{1,\uparrow}c^\dag_{1,\downarrow}|0\rangle$, using the labeling shown in Fig. \ref{fig:plaquette}. Our initial Hamiltonian, of which this state is the ground state has no hopping ($t_{ij}=0$), but already includes the Hubbard repulsion on all sites. To make the state with doubly occupied state the ground state we add large on-site potentials $\epsilon_2 = \epsilon_3 = 2U+3t$ on the empty sites. To prepare the ground state of the plaquette we then \begin{enumerate} \item start with $t_{ij}=0$, $U_i=U$ and a large potential on the empty sites (here $\epsilon_2 = \epsilon_3 = \epsilon$) so that the initial state is the ground state, \item ramp up the hoppings $t_{ij}$ from 0 to $t$ during a time $T_1$, \item ramp the potentials $\epsilon_{i}$ down to 0 during a time $T_2$. \end{enumerate} Time scales $T_1=T_2\approx10 t^{-1}$ and $\epsilon=U+4t$ are sufficient to achieve high fidelity. As we discuss below, it can be better to prepare the state quickly and then project into the ground state through a quantum phase estimation than to aim for an extremely high fidelity in the adiabatic state preparation. 
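As a concrete illustration of the ramping schedule above, the time-dependent couplings can be written as simple linear ramps. The sketch below uses the quoted values $T_1 = T_2 \approx 10\,t^{-1}$ and $\epsilon = U + 4t$; the function name and the dictionary of couplings are placeholders and not the authors' code.

def plaquette_prep_schedule(tau, t_hop=1.0, U=1.0, T1=10.0, T2=10.0):
    """Couplings at time tau (in units of 1/t_hop) for preparing a plaquette ground state.

    Step 1 (0 <= tau < T1):      ramp all hoppings t_ij from 0 up to t_hop,
                                 keeping epsilon = U + 4*t_hop on the initially empty sites.
    Step 2 (T1 <= tau < T1+T2):  ramp the on-site potentials epsilon down to 0.
    The Hubbard repulsion U is on at full strength throughout.
    """
    eps_max = U + 4.0 * t_hop
    if tau < T1:
        return {'t_ij': t_hop * tau / T1, 'epsilon': eps_max, 'U': U}
    elif tau < T1 + T2:
        return {'t_ij': t_hop, 'epsilon': eps_max * (1.0 - (tau - T1) / T2), 'U': U}
    return {'t_ij': t_hop, 'epsilon': 0.0, 'U': U}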
To prepare a plaquette with two electrons we start from the product state $c^\dag_{0,\uparrow}c^\dag_{0,\downarrow}|0\rangle$ and use the same schedule, except that we choose the non-zero potentials on three sites of the plaquette ($\epsilon_1= \epsilon_2 = \epsilon_3 = \epsilon$). \subsubsection{Coupling of plaquettes} \begin{figure} \begin{center} \includegraphics[width=6cm]{plaquettes} \end{center} \caption{Coupling of two plaquettes} \label{fig:plaquettes} \end{figure} After preparing an array of decoupled plaquettes, some of them with four electrons and some with two electrons depending on the desired doping, we adiabatically turn on the coupling between the plaquettes. As a test case we use that of Ref. \onlinecite{Trebst2005} and couple two plaquettes (see Fig. \ref{fig:plaquettes}), one of them prepared with two electrons and one with four electrons. Naively coupling plaquettes by adiabatically turning on the intra-plaquette couplings $t_{24}$ and $t_{35}$ will not give the desired result, since the Hamiltonian is reflection symmetric but an initial state with two electrons on one plaquette and four on the other breaks this symmetry. No matter how slowly the anneal is done, the probability of preparing the ground state will not converge to $1$. We thus need to explicitly break the reflection symmetry by either first ramping up a small chemical potential on a subset of the sites or -- more easily -- not completely ramping down the non-zero $\epsilon_i$ values when preparing plaquette ground states. Consistent with Ref. \onlinecite{Trebst2005} we find that a time $T_3\approx50t^{-1}$ is sufficient to prepare the ground state with high fidelity. See Fig.~\ref{fig:annealjoin} for success probabilities depending on times $T_1$ to ramp up hopping in a plaquette, $T_2$ to ramp down $\epsilon$ in a plaquette, and $T_3$ to join the plaquettes. The specific couplings in these figures are an initial potential $8t$ on the empty sites in the plaquette with four electrons, chemical potential $8t$ on the empty sites in the plaquette with two electrons and chemical potential $t$ on the occupied site in the plaquette with two electrons. See Fig.~\ref{fig:gapsjoin} for a plot of the spectral gaps for an example annealing schedule. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{annealjoin} \end{center} \caption{Probability of preparing ground state as a function of times $T_1$, $T_2$ ,$T_3$ in units of $t^{-1}$, following the annealing schedule described above. During time $T_3$, a uniform potential $\epsilon$ was ramped from $t$ to $0$ on the plaquette with two electrons to break the symmetry between plaquettes.} \label{fig:annealjoin} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8.5cm]{gaps} \end{center} \caption{Energy levels during annealing. $y$-axis is energy. $x$ axis is time. From time $0$ to
\section{Introduction} Breast cancer is the most commonly diagnosed form of cancer in women and the second-leading cause of cancer-related death after lung cancer~\cite{b1}. Statistics from the American Cancer Society indicate that approximately 232,670 (29\% of all cancer cases) American women will be diagnosed with breast cancer, and an estimated 40,000 (15\% of all cancer cases) women will die of it in 2014 ~\cite{b2}. In other words, 637 American women will be diagnosed with breast cancer, and 109 women will die of it every day. Similar statistics were also found in Canada, where approximately 23,800 (26\%) women were diagnosed with breast cancer, and 5,000 (14\%) died from it in 2013~\cite{b3}. Under this circumstance, detection and diagnosis of breast cancer has already drawn a great deal of attention from the medical world. Studies show that early detection, diagnosis and therapy is particularly important to prolong lives and treat cancers~\cite{b4}. If breast cancer is found early, the five-year survival rate of patients in stage 1 could reach 90\% with effective treatment. Medical imaging technology is one of the main methods for breast cancer detection. Commonly used medical imaging technologies include X-ray mammography, Computer Tomography (CT), ultrasound and Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Single-Photon Emission Computed Tomography (SPECT). Among these technologies, mammography achieves the best results in early detection of asymptomatic breast cancer and is one of the least expensive ones. For this reason, it has become the principal method of breast cancer detection in clinical practice, and one of the most effective ways for general breast cancer survey. Masses and calcifications (including marcocalcifications and microcalcifications) are the most common and basic symptoms of breast cancer. Masses in mammography can be recognized as a local, high-contrast area, but the value of contrast is not unique. It changes when imaging conditions, sizes and backgrounds change. The X-ray absorption rates of masses are very close to dense glandular tissue in breast and other dense tissues. In addition, the boundaries of masses are always mixed with background structures. Therefore, microcalcification detection remains one of the most popular topics in medical image processing research~\cite{b5}. Modern equipment has improved the technical aspects of mammography. However, there still exist a relatively high false positive rate and a low specificity, due to fundamental physical limitations such as unobvious lesions, as well as controllable factors like radiologists’ inexperience in reading mammograms. This latter issue has been addressed using double reading, where two radiologists make their own judgments independently based on the same mammogram, and then combine and discuss both opinions. However, the solution is expensive and highly relying on radiologists' experience. To improve the accuracy of reading mammograms, interest in Computer-Aided Diagnosis (CADx) solutions has emerged~\cite{b4}. In this paper, we focus on breast cancer detection using a computer-aided automatic mammogram analysis system to improve the accuracy of diagnosis. 
In proposed analysis system, entropy-based method was implemented for feature selection as well as different pattern recognition methods, including the Back-propagation (BP) Network, the Linear Discriminant Analysis (LDA), the Naive Bayes (NB) Classifier, and voting schemes were employed for image classification. The rest of the paper is organized as follows. Section~\ref{related work} presents background regarding breast cancer detection and relevant work. Section~\ref{system design} proposes the automatic mammogram images analysis system. Subsequently, the experimental results are presented and analyzed in Section~\ref{experiments}. Finally, Section~\ref{conclusions} draws the conclusion and discusses lines of future work. \section{RELATED WORK} \label{related work} Calcification detection and classification has been an important research target of the automatic mammogram analysis systems. A wide variety of approaches have so far been developed to improve the detection performance. However, it's still challenging for the following reasons: 1) microcalcifications occur in various sizes, shapes, and distributions; 2) microcalcifications have low contrast in the region of interest (ROI); 3) dense tissue and/or skin thickness make suspicious lesion areas difficult to detect (especially in young women); 4) the dense tissue is easily misunderstood as microcalcification, which results in high false positive rates among most existing algorithms. Feature selection is crucial for detection and classification. A number of efforts have been devoted to employing numerous different features in the application of mammogram analysis. For example, the geometrical and statistical features in~\cite{b17,b18} are used for classification. But these methods depend on the image translation, scaling and rotation heavily, which results in misclassification. Consequently, these approaches can be improved by using a robust factor, such as Fourier transform, which is certainly translation invariant~\cite{b19}. In the data time-frequency analysis, the Fourier transform is traditionally applied, which is a global transform between the time and frequency domain. Therefore, the Fourier transform cannot express the local properties of signals in the time and frequency domains simultaneously. However, these local properties are the key characteristics of non-stationary signals in some circumstances. To analyze and process non-stationary signals, wavelet transforms is one of the methods to retain important temporal information in the frequency domain or partial frequency information in the time domain. In this work, both the Fourier transform and wavelet transforms are applied at the data transform stage to extract the features. However, not all of the extracted features benefit classification performance. For a feature to be useful in classification, it should be closely and uniquely associated with a certain class~\cite{b13}. Ideally, the feature will correlate with the desired class independent of the presence of other classes. If these conditions are met, the feature reduction (selection) problem can be addressed by measuring the correlation with that class then establishing a pass threshold. The pass threshold eliminates features that correlate poorly. There are two common approaches used to measure the correlation between two random variables, in this case between feature and class~\cite{b14}. The first is the linear correlation, where the variation in a feature value is compared to the variation in class value. 
The second approach, and the one adopted for our study, is Information Gain, a concept based on the reduction of entropy in the dataset. A target range for the number of features was determined from the work of Lei and Huan~\cite{b15}, who proposed a fast correlation-based filter approach and an efficient way of analyzing feature redundancy. Their feature selection algorithm was implemented and evaluated through extensive experiments against other related feature selection algorithms on ten different datasets, whose numbers of features ranged from 57 to 650 and whose sample sizes ranged from 32 to 9338. At the end of the experiments, they recorded the running time and the number of features selected by each algorithm. The results showed that the average number of selected features was 15 across the five compared feature selection algorithms, and that the selected features led to classification accuracies of around 89\%. In this research, we therefore chose an information gain threshold that leaves around 15 features. Entropy is a measure of the uncertainty of a random variable~\cite{b15}.

\section{System Design}
\label{system design}
The proposed automatic mammogram analysis system comprises three consecutive stages: the image processing stage, the feature selection stage, and the image classification stage. Fig.~\ref{fig:framework} visualizes the framework of the proposed system, with each component detailed in the following sections.

\begin{figure}[!htbp]
\vspace{-0.2cm}
\centerline{\includegraphics[width = 8cm]{fig/framwork.png}}
\vspace{-0.2cm}
\caption{Framework of automatic mammogram analysis system.}
\label{fig:framework}
\vspace{-0.4cm}
\end{figure}

\subsection{Image Processing}
\label{sys:img process}
In the first, image processing, stage, a set of scalar features is extracted from each original image. This stage consists of two steps: image pre-processing and data transform (including wavelet and Fourier transforms). In the image pre-processing step, the original digitized mammogram image is flipped, de-noised, and scaled to a common maximum value. In the data transform step, the normalized images are decomposed by three wavelet transforms with different bases (Daubechies db2, Daubechies db4, and Biorthogonal bior6.8) and by the Fourier transform separately. Multiple levels of decomposition were
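To make the feature selection and classification stages described in this section concrete, the following is a minimal, self-contained Python sketch. It is an illustration only, not the implementation used in this paper: the hand-rolled information-gain ranking, the cut-off of roughly 15 retained features, and the scikit-learn estimators (an MLP standing in for the BP network, plus LDA, Naive Bayes, and a hard-voting ensemble) are our assumptions, and all variable names and the synthetic data are hypothetical.

\begin{verbatim}
# Minimal sketch (not the authors' code): entropy-based feature ranking
# followed by a simple voting classifier. X (n_samples x n_features) and
# y (labels) are assumed to hold already-extracted wavelet/Fourier features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier   # stands in for the BP network
from sklearn.ensemble import VotingClassifier

def entropy(labels):
    """Shannon entropy H(Y) of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, bins=10):
    """IG(Y; X) = H(Y) - H(Y|X) for one feature, discretized into bins."""
    binned = np.digitize(feature, np.histogram_bin_edges(feature, bins=bins))
    h_cond = 0.0
    for b in np.unique(binned):
        mask = binned == b
        h_cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - h_cond

def select_features(X, y, n_keep=15):
    """Rank features by information gain and keep the top n_keep (about 15)."""
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1][:n_keep]

def build_voting_classifier():
    """Majority vote over LDA, Naive Bayes and an MLP (BP-style network)."""
    return VotingClassifier(
        estimators=[("lda", LinearDiscriminantAnalysis()),
                    ("nb", GaussianNB()),
                    ("bp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))],
        voting="hard")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 60))            # hypothetical feature matrix
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical labels
    keep = select_features(X, y)
    clf = build_voting_classifier().fit(X[:, keep], y)
    print("training accuracy:", clf.score(X[:, keep], y))
\end{verbatim}

Hard majority voting is shown for simplicity; weighted or soft voting schemes can be substituted without changing the rest of the pipeline.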
# Aleph number explained

In mathematics, particularly in set theory, the aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets that can be well-ordered. They were introduced by the mathematician Georg Cantor[1] and are named after the symbol he used to denote them, the Hebrew letter aleph ( \aleph ).[2]

The cardinality of the natural numbers is \aleph_0 (read aleph-nought or aleph-zero; the term aleph-null is also sometimes used), the next larger cardinality of a well-orderable set is aleph-one \aleph_1 , then \aleph_2 and so on. Continuing in this manner, it is possible to define a cardinal number \aleph_\alpha for every ordinal number \alpha , as described below.

The concept and notation are due to Georg Cantor,[3] who defined the notion of cardinality and realized that infinite sets can have different cardinalities.

The aleph numbers differ from the infinity ( \infty ) commonly found in algebra and calculus, in that the alephs measure the sizes of sets, while infinity is commonly defined either as an extreme limit of the real number line (applied to a function or sequence that "diverges to infinity" or "increases without bound"), or as an extreme point of the extended real number line.

## Aleph-nought

\aleph_0 (aleph-nought, also aleph-zero or aleph-null) is the cardinality of the set of all natural numbers, and is an infinite cardinal. The set of all finite ordinals, called \omega or \omega_0 (where \omega is the lowercase Greek letter omega), has cardinality \aleph_0 . A set has cardinality \aleph_0 if and only if it is countably infinite, that is, there is a bijection (one-to-one correspondence) between it and the natural numbers. Examples of such sets include the infinite ordinals \omega , \omega+1 , \omega \cdot 2 , \omega^2 , \omega^\omega and \varepsilon_0 , all of which are countably infinite.[4] For example, the sequence (with order type \omega \cdot 2 ) of all positive odd integers followed by all positive even integers \{1,3,5,7,9,...,2,4,6,8,10,...\} is an ordering of the set (with cardinality \aleph_0 ) of positive integers.

If the axiom of countable choice (a weaker version of the axiom of choice) holds, then \aleph_0 is smaller than any other infinite cardinal.

## Aleph-one

\aleph_1 is the cardinality of the set of all countable ordinal numbers, called \omega_1 or sometimes \Omega . This \omega_1 is itself an ordinal number larger than all countable ones, so it is an uncountable set. Therefore, \aleph_1 is distinct from \aleph_0 . The definition of \aleph_1 implies (in ZF, Zermelo–Fraenkel set theory without the axiom of choice) that no cardinal number is between \aleph_0 and \aleph_1 . If the axiom of choice is used, it can be further proved that the class of cardinal numbers is totally ordered, and thus \aleph_1 is the second-smallest infinite cardinal number.

Using the axiom of choice, one can show one of the most useful properties of the set \omega_1 : any countable subset of \omega_1 has an upper bound in \omega_1 . (This follows from the fact that the union of a countable number of countable sets is itself countable – one of the most common applications of the axiom of choice.) This fact is analogous to the situation in \aleph_0 : every finite set of natural numbers has a maximum which is also a natural number, and finite unions of finite sets are finite.

\omega_1 is actually a useful concept, if somewhat exotic-sounding.
An example application is "closing" with respect to countable operations; e.g., trying to explicitly describe the σ-algebra generated by an arbitrary collection of subsets (see e.g. Borel hierarchy). This is harder than most explicit descriptions of "generation" in algebra (vector spaces, groups, etc.) because in those cases we only have to close with respect to finite operations – sums, products, and the like. The process involves defining, for each countable ordinal, via transfinite induction, a set by "throwing in" all possible countable unions and complements, and taking the union of all that over all of \omega_1 .

## Continuum hypothesis

See main article: Continuum hypothesis. See also: Beth number.

The cardinality of the set of real numbers (cardinality of the continuum) is 2^{\aleph_0} . It cannot be determined from ZFC (Zermelo–Fraenkel set theory augmented with the axiom of choice) where this number fits exactly in the aleph number hierarchy, but it follows from ZFC that the continuum hypothesis, CH, is equivalent to the identity 2^{\aleph_0} = \aleph_1 .[5]

The CH states that there is no set whose cardinality is strictly between that of the integers and the real numbers.[6] CH is independent of ZFC: it can be neither proven nor disproven within the context of that axiom system (provided that ZFC is consistent). That CH is consistent with ZFC was demonstrated by Kurt Gödel in 1940, when he showed that its negation is not a theorem of ZFC. That it is independent of ZFC was demonstrated by Paul Cohen in 1963, when he showed conversely that the CH itself is not a theorem of ZFC – by the (then-novel) method of forcing.[5]

## Aleph-omega

Aleph-omega is

\aleph_\omega = \sup\{\aleph_n : n \in \omega\} = \sup\{\aleph_n : n \in \{0,1,2,...\}\}

where the smallest infinite ordinal is denoted \omega . That is, the cardinal number \aleph_\omega is the least upper bound of \{\aleph_n : n \in \{0,1,2,...\}\} .

\aleph_\omega is the first uncountable cardinal number that can be demonstrated within Zermelo–Fraenkel set theory not to be equal to the cardinality of the set of all real numbers; for any positive integer n we can consistently assume that 2^{\aleph_0} = \aleph_n , and moreover it is possible to assume 2^{\aleph_0} is as large as we like. We are only forced to avoid setting it to certain special cardinals with cofinality \aleph_0 , meaning there is an unbounded function from \aleph_0 to it (see Easton's theorem).

## Aleph-α for general α

To define \aleph_\alpha for an arbitrary ordinal number \alpha , we must define the successor cardinal operation, which assigns to any cardinal number \rho the next larger well-ordered cardinal \rho^+ (if the axiom of choice holds, this is the next larger cardinal).

We can then define the aleph numbers as follows:

\aleph_0 = \omega
\aleph_{\alpha+1} = \aleph_\alpha^+
and, for \lambda an infinite limit ordinal,
\aleph_\lambda = \bigcup_{\beta < \lambda} \aleph_\beta .

The α-th infinite initial ordinal is written \omega_\alpha . Its cardinality is written \aleph_\alpha . In ZFC, the aleph function \aleph is a bijection from the ordinals to the infinite cardinals.

## Fixed points of omega

For any ordinal α we have \alpha \leq \omega_\alpha . In many cases \omega_\alpha is strictly greater than \alpha ; for example, this holds for any successor ordinal \alpha . There are, however, some limit ordinals which are fixed points of the omega function, because of the fixed-point lemma for normal functions. The first such is the limit of the sequence \omega, \omega_\omega, \omega_{\omega_\omega}, \ldots .
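A quick check, added here for illustration and not part of the original text: write a_0 = \omega and a_{n+1} = \omega_{a_n} , and let \kappa = \sup_n a_n be the limit of this sequence. Because the map \alpha \mapsto \omega_\alpha is normal (strictly increasing and continuous at limit ordinals),

\omega_\kappa = \sup_{\beta < \kappa} \omega_\beta = \sup_n \omega_{a_n} = \sup_n a_{n+1} = \kappa ,

so \kappa is indeed a fixed point of the omega function.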
Any weakly inaccessible cardinal is also a fixed point of the aleph function.[7] This can be shown in ZFC as follows. Suppose \kappa = \aleph_\lambda is a weakly inaccessible cardinal. If \lambda were a successor ordinal, then \aleph_\lambda would be a successor cardinal and hence not weakly inaccessible. If \lambda were a limit ordinal less than \kappa , then its cofinality (and thus the cofinality of \aleph_\lambda ) would be less than \kappa , and so \kappa would not be regular and thus not weakly inaccessible. Thus \lambda \geq \kappa and consequently \lambda = \kappa , which makes it a fixed point.

## Role of axiom of choice

The cardinality of any infinite ordinal number is an aleph number. Every aleph is the cardinality of some ordinal. The least of these is its initial ordinal. Any set whose cardinality is an aleph is equinumerous with an ordinal and is thus well-orderable. Each finite set is well-orderable, but does not have an aleph as its cardinality.

The assumption that the cardinality of each infinite set is an aleph number is equivalent over ZF to the existence of a well-ordering of every set, which in turn is equivalent to the axiom of choice. ZFC set theory, which includes the axiom of choice, implies that every infinite set has an aleph number as its cardinality (i.e. is equinumerous with its initial ordinal), and thus the initial ordinals of the aleph numbers serve as a class of representatives for all possible infinite cardinal numbers.

When cardinality is studied in ZF without the axiom of choice, it is no longer possible to prove that each infinite set has some aleph number as its cardinality; the sets whose cardinality is an aleph number are exactly the infinite sets that can be well-ordered. The method of Scott's trick is sometimes used as an alternative way to construct representatives for cardinal numbers in the setting of ZF. For example, one can define the cardinal of a set S to be the set of sets with the same cardinality as S of minimum possible rank. This has the property that if and
# Using SetContext to efficiently solve NLPs

One of the many complications that arise when solving non-linear optimization problems (NLPs) is that the presence of extrinsic indexes may introduce computational inefficiencies when array results within your model are indexed by the extrinsic dimensions, even though any single optimization needs only a single slice of the result. What happens is that while one NLP is being solved, array results are computed and propagated within the internals of your model, where only a single slice of the result is actually used by the NLP being solved. Even though it may have required substantial computation to obtain, the remainder of the array ends up being thrown away without being used.

The «SetContext» parameter of DefineOptimization provides a way to avoid this inefficiency. The inefficiency only arises when you are solving NLPs, so if your problem is linear or quadratic, there is no need to make use of «SetContext». Also, it only arises when extrinsic indexes are present -- in other words, when DefineOptimization found that extra indexes were present in the objective or constraints and thus array-abstracted over these extra (extrinsic) indexes, returning an array of optimization problems. Thus, if you are dealing with a single NLP, once again there is no need to worry about this.

One situation where it does arise naturally is the case where you are performing an optimization that depends on uncertain quantities, such that in the uncertainty mode you end up with a Monte Carlo sample of NLPs. If you don't utilize «SetContext» appropriately, the time-complexity of your model will scale quadratically with sample size rather than linearly.

## Inefficiency Example

Consider the following optimization based on the model depicted above:

DefineOptimization(decision: Emissions_reduction, Maximize: Total_cost)

For a given Emissions_reduction, the computed mid value for Total_cost is indexed by Location. Location becomes an extrinsic index, i.e., DefineOptimization is array-abstracted across the Location index and returns an array of optimizations. With a separate NLP for each location, the solution finds the optimal emissions reduction for each region separately.

Now, consider what computation actually takes place as each NLP is being solved. First, the NLP corresponding to @Location = 1 is solved. The way an NLP is solved is that the solver engine proposes a candidate solution, in this case a possible value for Emissions_reduction. This value is "plugged in" as the value for Emissions_reduction, and Total_cost is then evaluated. During that evaluation, Population_exposed, Excess_deaths and Total_cost are each computed, and each result is indexed by Location. At this point, Total_cost[@Location = 1] is extracted and the total cost for the remaining regions is ignored (since the NLP for @Location = 1 is being solved). The search continues, plugging a new value into Emissions_reduction and re-evaluating Total_cost; each time, all the variables between Emissions_reduction and Total_cost are re-evaluated, and each time Excess_deaths and Total_cost needlessly compute their values for all locations, when in reality only the @Location = 1 value ends up being used.

### Resolving inefficiency with SetContext

The «SetContext» parameter of DefineOptimization provides the mechanism that allows us to avoid the inefficiency described above. In your call to DefineOptimization, you may optionally list a set of context variables.
When the solution process for each NLP begins, each context array is first computed and then sliced along all the extrinsic indexes, extracting the particular slice required by the NLP. In this example we observed that Excess_deaths and Total_cost contain the extrinsic index Location, but this is due to the influence of Population_exposed, which is not downstream of the decision node. Setting context on an array limits its computation to the slice required by the optimization. This will also reduce the dimensionality of dependent nodes. Setting context on Population_exposed will eliminate the Location index from Excess_deaths and Total_cost without altering the iteration dynamics.

DefineOptimization(decision: Emissions_reduction, Maximize: Total_cost, SetContext: Population_exposed)

The section Selecting Context Variables below discusses which variables are appropriate or are bad choices for «SetContext».

### More inefficiency with the Monte Carlo

Next, suppose we solve the optimization from an uncertainty view. Here, for each random scenario of the uncertain variables, we'll solve a separate optimization. Hence, Run joins Location as an extrinsic index -- we have an array of optimizations, indexed by Location and Run. When these are solved, each NLP in this 2-D array is solved separately. So, the NLP at [@Location = 1, @Run = 1] is solved first, with the optimizer engine re-evaluating the model at several candidate Emissions_reduction values until it is satisfied that it has converged to the optimal reduction level. Then the NLP at [@Location = 1, @Run = 2] is solved in the same fashion, etc.

In this model, every variable, with the exception of our decision, Emissions_reduction, and the index Location, has uncertainty. So when one particular NLP is actively being solved, a Monte Carlo sample is being computed for each variable in the model, resulting finally in a 2-D Monte Carlo sample for Total_cost. From this 2-D array, the single cell corresponding to the coordinate of the active NLP is extracted and used, and the remaining values are wasted.

In an ideal world, we would expect the solution time to scale linearly with the number of locations and linearly with the sample size. With the inefficiencies just described, the complexity of solving each individual NLP scales linearly with the number of locations and linearly with the sample size, but since the number of NLPs also scales linearly with these, our net solution time scales quadratically with each. Hence, if you double the number of locations, your model takes 4 times longer to find the optimal solution. If you increase your sample size from 100 to 1000, it suddenly takes 100 times longer (not 10 times longer) to solve.

### Applying SetContext to uncertain variables

To avoid inefficiencies from the Run index, we could list additional context variables:

DefineOptimization(decision: Emissions_reduction, Maximize: Total_cost,
    SetContext: Base_concentration, Threshold, Health_damage_factor, Population_exposed, Control_cost_factor, value_of_a_life)

Compare the context variables selected here with the diagram. Do any of these depend directly or indirectly on the decision variable? Can you explain why this set of context variables was selected?

## Selecting Context Variables

Which variable or variables should you select to use as a context variable? Here are some guidelines.

### No downstream operations on extrinsic indexes

When you select X to be a context variable, it is essential that no variable depending on X operates over an extrinsic index.
The presence of such a variable would alter the logic of your model and produce incorrect results! For example, if Y is defined as Sum(X, Location), where Location is extrinsic, then you should not select X as a context variable.

Suppose there are two extrinsic indexes, I and J, and you know there are no downstream variables operating on I, but you can't say the same about J. You'd like to select X as a context variable for only the I index. What can you do? In this case, you can accomplish this with a slight modification to your model, introducing one new variable and modifying the definition of X slightly:

Variable I_ctx := I
Variable X := («origDefinition»)[I = I_ctx]

Then you list I_ctx in the «SetContext» parameter.

### Not downstream from a decision variable

Ideally, a context variable should not be downstream of a decision variable (i.e., it should not be influenced, directly or indirectly, by a decision variable's value). Variables downstream from decision variables are invalidated every time a new search point is plugged in, and thus must be recomputed every time. By selecting variables that are not downstream (as was done in the above example), the sliced value gets cached once for each NLP solved.

Context variables that are downstream from decisions are handled somewhat differently by «SetContext». Instead of replacing the value with the appropriate slice, the parsed definition of the variable is temporarily surrounded by a Slice operation. So, for example, if Control_cost is specified as a decision variable,
2C), but not the increase in cellular ATP caused by the mitochondrial stimulation (Figure 2D). In primary cultured cortical neurons [35], 810 nm laser produced a biphasic dose response in ATP production (Figure 3A) and MMP (Figure 3B), with a maximum at 3 J/cm2. At a high dose (30 J/cm2) the MMP was actually lowered below baseline. Interestingly, the dose-response curve between fluence (J/cm2) and ROS production showed two different maxima (Figure 3C). One of these maxima occurred at 3 J/cm2, where the MMP showed its maximum increase. The second maximum in ROS production occurred at 30 J/cm2, where the MMP had been reduced below baseline. At an intermediate fluence (10 J/cm2), a dose at which the MMP was approximately back to baseline, there was little ROS generation. These data are very good examples of the "biphasic dose response" or "Arndt-Schulz curve" which is often discussed in the PBM literature [7,8].

Thus, it appears that ROS can be generated within mitochondria when the MMP is increased above normal values and also when it is decreased below normal values. It remains to be seen whether these two kinds of PBM-generated ROS are identical or not. One intriguing possibility is that whether the ROS generated by PBM is beneficial or detrimental may depend on the rate at which it is generated. If superoxide is generated in mitochondria at a rate that allows superoxide dismutase (SOD) to detoxify it to hydrogen peroxide, then the uncharged H2O2 can diffuse out of the mitochondria to activate beneficial signaling pathways, whereas if superoxide is generated at a rate or at levels beyond the ability of SOD to deal with it, then the charged superoxide may build up inside mitochondria and damage them.

### 3.2. PBM reduces ROS in oxidatively stressed cells and tissues

Notwithstanding the ability of PBM to produce a burst of ROS in normal cells, it is well accepted that PBM, when used as a treatment for tissue injury or muscle damage, is able to reduce markers of oxidative stress [36,37,38]. How can these apparently contradictory findings be reconciled? A study attempted to answer this question [39]. Primary cultured cortical neurons were treated with one of three different interventions, all of which were chosen from literature methods of artificially inducing oxidative stress in cell culture. The first was cobalt chloride (CoCl2), which is used as a mimetic for hypoxia and works by a Fenton reaction producing hydroxyl radicals [40]. The second was direct treatment with hydrogen peroxide. The third was treatment with the mitochondrial complex I inhibitor, rotenone [41]. All three of these treatments increased the intracellular mitochondrial ROS as judged by Cell-Rox Red (Figure 4A), and at the same time lowered the MMP as measured by tetramethyl-rhodamine methyl ester (TMRM) (Figure 4B). PBM (3 J/cm2 of 810 nm laser) raised the MMP back towards baseline while simultaneously reducing the generation of ROS in oxidatively stressed cells (while slightly increasing ROS in normal cells). In control cells (no oxidative stress), PBM increased MMP above baseline and still produced a modest increase in ROS.

Since most laboratory studies of PBM as a therapy have looked at various animal models of disease or injury, it is not surprising that most workers have measured reductions in tissue markers of oxidative stress (TBARS) after PBM [36,42]. There have been many studies looking at muscles.
In humans, especially in athletes, high-level exercise produces effects in muscles characterized by delayed-onset muscle soreness, markers of muscle damage (creatine kinase), inflammation, and oxidative stress. One cellular study by Macedo et al [43] used muscle cells isolated from muscular dystrophy mice (mdx LA 24) and found that 5 J/cm2 of 830 nm light increased the expression levels of myosin heavy chain and intracellular [Ca2+]i. PBM decreased H2O2 production and 4-HNE levels and also GSH levels and GR and SOD activities. The mdx cells showed a significant increase in TNF-α and NFκB levels, which were reduced by PBM.

While it is highly likely that the effects of PBM in modulating ROS are involved in the anti-inflammatory effects of PBM, it would be dangerous to conclude that that is the only explanation. Other signaling pathways (nitric oxide, cyclic AMP, calcium) are also likely to be involved in the reduction of inflammation. As mentioned above, we found [34] that PBM (3 J/cm2 of 810 nm laser) activated NF-kB in embryonic fibroblasts isolated from mice that had been genetically engineered to express firefly luciferase under control of an NF-kB promoter. Although NF-kB is well known to function as a pro-inflammatory transcription factor, it is also well known that, in clinical practice and in laboratory animal studies, PBM has a profound anti-inflammatory effect in vivo. This gives rise to another apparent contradiction that must be satisfactorily resolved.

### 4.2. PBM reduces levels of pro-inflammatory cytokines in activated inflammatory cells

Part of the answer to the apparent contradiction highlighted above was addressed in a subsequent paper [44]. We isolated primary bone marrow-derived dendritic cells (DCs) from the mouse femur and cultured them with GM-CSF. When these cells were activated with the classical toll-like receptor (TLR) agonists LPS (TLR4) and CpG oligodeoxynucleotide (TLR9), they showed upregulation of cell-surface markers of activation and maturation such as MHC class II, CD86, and CD11c, as measured by flow cytometry. Moreover, IL-12 was secreted by CpG-stimulated DCs. PBM (0.3 or 3 J/cm2 of 810 nm laser) reduced all the markers of activation and also the IL-12 secretion (Figure 5).

Yamaura et al [45] tested PBM (810 nm, 5 or 25 J/cm2) on synoviocytes isolated from rheumatoid arthritis patients. They applied PBM before or after the addition of tumor necrosis factor-α (TNF-α). The mRNA and protein levels of TNF-α, interleukin (IL)-1β, and IL-8 were reduced (especially by 25 J/cm2). Hwang et al [46] incubated human annulus fibrosus cells with conditioned medium obtained from macrophages (THP-1 cells) containing the proinflammatory cytokines IL-1β, IL-6, IL-8, and TNF-α. They compared 405, 532, and 650 nm light at doses up to 1.6 J/cm2 and found that all wavelengths reduced IL-8 expression, and that 405 nm also reduced IL-6. The "Super-Lizer" is a Japanese device that emits linearly polarized infrared light. Imaoka et al [47] tested it against a rat model of rheumatoid arthritis involving immunizing the rats with bovine type II collagen, after which they develop autoimmune inflammation in multiple joints. They found reductions in IL-20 expression in histological sections taken from the PBM-treated joints, and also in human rheumatoid fibroblast-like synoviocytes (MH7A) stimulated with IL-1β. Lim et al [48] studied human gingival fibroblasts (HGF) treated with lipopolysaccharide (LPS) isolated from Porphyromonas gingivalis.
They used PBM mediated by a 635 nm LED and irradiated the cells + LPS either directly or indirectly (transferring medium from PBM-treated cells to other cells with LPS). Both direct and indirect protocols showed reductions in inflammatory markers (cyclooxygenase-2 (COX2), prostaglandin E2 (PGE2), granulocyte colony-stimulating factor (GCSF), regulated on activation normal T-cell expressed and secreted (RANTES), and CXCL11). In the indirect irradiation group, phosphorylation of C-Raf and Erk1/2 increased. In another study [49] the same group used a similar system (direct PBM on HGF + LPS) and showed that 635 nm PBM reduced IL-6, IL-8, and p38 phosphorylation, and increased JNK phosphorylation. They explained the activation of JNK by the growth-promoting effects of PBM. Sakurai et al reported [50] similar findings using HGF treated with Campylobacter rectus LPS and PBM (830 nm, up to 6.3 J/cm2) to reduce levels of COX2 and PGE2. In another study [51] the same group showed a reduction in IL-1β in the same system.

### 4.3. Effects of PBM on macrophage phenotype

Another very interesting property of PBM is its ability to change the phenotype of activated cells of the monocyte or macrophage lineage. These cells can display two very different
""" The MIT License (MIT) Copyright (c) 2015 Guillermo Romero Franco (AKA Gato) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ import Queue import threading import weakref import numpy as N import math try: import Image, ImageDraw except: from PIL import Image, ImageDraw from mathtools import * from libs.sortedcontainers.sorteddict import SortedList from collections import namedtuple class Traveller: def __init__(self, route_waypoints, map, speed=0.1, intro_fn=None, outro_fn=None, cruise_z_offset = 0): self.cruise_z_offset = (0,0,cruise_z_offset) if cruise_z_offset else 0.0 self._route = iter(route_waypoints) self._map = map self._intro_fn = intro_fn # takes two parameters. Should return the last one as the last element self._outro_fn = outro_fn # takes two parameters. Should return the first one as the first element self._speed = speed self._waypoint_generator = self._getWaypoints() self._pos = self._next_pos = self._waypoint_generator.next() self._done = False # doesn't affect the intro/outro, but affects the parameters passed to those functions def _getNextRoutePoint(self): p = self._route.next() return N.array((p[0],p[1],self._map(p[0], p[1], False)), dtype="f") + self.cruise_z_offset def _getWaypoints(self): if self._intro_fn: p0 = self._getNextRoutePoint() p1 = self._getNextRoutePoint() for pt in self._intro_fn(p0,p1): p0 = pt yield pt else: p0 = self._getNextRoutePoint() yield p0 p1 = self._getNextRoutePoint() while True: try: p2 = self._getNextRoutePoint() except StopIteration: if self._outro_fn: for pt in self._outro_fn(p0,p1): yield pt else: yield p1 raise StopIteration yield p1 p0 = p1 p1 = p2 def advance(self, dt): if self._done: return None to_travel = dt * self._speed while True: v = self._next_pos - self._pos d = math.sqrt(N.dot(v,v)) if to_travel > d: self._pos = self._next_pos try: self._next_pos = self._waypoint_generator.next() except StopIteration: self._done = True return self._pos to_travel -= d else: self._pos = self._pos + v * to_travel / d self._pos[2] = max(self._pos[2], self._map(self._pos[0], self._pos[1],False)) return self._pos class Area: def __init__(self, x1,y1,x2,y2,extra=None): self.x1 = x1 self.x2 = x2 self.y1 = y1 self.y2 = y2 self.extra = extra self._adj = None def isFreeArea(self): try: int(self.extra) return True except: return False def rect(self): # returns the area in range 0..1 return (self.x1,self.y1,self.x2,self.y2) def setFreeAreaId(self, id): self.extra = id def getFreeAreaId(self): return int(self.extra) def setChildren(self, ch): self.extra = ch def getChildren(self): assert not 
self.isFreeArea() return self.extra def __str__(self): if self.extra is None: return "A[%s]"%(self.rect(),) if self.isFreeArea(): return "A[%s ID:%s]"%(self.rect(),self.extra) else: return "A[%s #C:%s]"%(self.rect(),len(self.extra)) def distanceTo(self, other): dx = (other.x1+other.x2) - (self.x1+self.x2) dy = (other.y1+other.y2) - (self.y1+self.y2) return math.sqrt(dx*dx+dy*dy)*0.5 def hasPoint(self, point): return (point[0]>=self.x1 and point[0]<=self.x2 and point[1]>=self.y1 and point[1]<=self.y2) def setAdjacencies(self, adj): self._adj = adj def getAdjacencies(self): return self._adj def getCenter(self): return (self.x1+self.x2)*0.5, (self.y1+self.y2)*0.5 class Router: def __init__(self, map, walkable_position, max_angle=45, smoothing=3, levels=6): self._map = map self._raster = self.getBlockingMap(max_angle, smoothing) self._free_areas = [] self._root_area = None self.processAll(walkable_position, levels) def getBlockingMap(self, max_angle=30.0, threshold=3): normals = self._map.getNormals() assert(normals is not None) min_z = math.cos(max_angle / 180.0 * math.pi) n = normals[:,:,2] n = N.piecewise(n, [n<min_z,n>=min_z], [1,0]) n1 = N.roll(n, 1,0) n2 = N.roll(n, -1,0) n3 = N.roll(n, 1,1) n4 = N.roll(n, 1,1) n5 = N.roll(n1, 1,1) n6 = N.roll(n1, -1,1) n7 = N.roll(n2, 1,1) n8 = N.roll(n2, -1,1) n = n+n1+n2+n3+n4+n5+n6+n7+n8 return N.piecewise(n, [n<threshold,n>=threshold], [0,1]) def processAll(self, walkable_position=None, levels=6): #saveMap(self._raster,"blocking.png") if walkable_position is not None: self.filterUnreachable(walkable_position) self.computeAreas(levels) self.buildGraph() #self.saveAreasImage("areas.png") self._raster = None # free the map def getContainingArea(self, map_pos): return self._getContainingArea(map_pos, self._root_area) def getAdjacencies(self, area_id): return self._free_areas[area_id].getAdjacencies() def printTouchingAreas(self, free_area_id): print "For region: ",self._free_areas[free_area_id] ta = self.getTouchingAreas(self._root_area, self._free_areas[free_area_id]) for t in ta: print " %s through %s"%(self._free_areas[t[0]], t[1]) def saveAreasImage(self, filename, hilights=None, waypoints=None): img = Image.new("RGB", (self._map_w, self._map_h)) draw = ImageDraw.Draw(img) for r in self._free_areas: draw.rectangle(r.rect(),outline=(64,0,0)) if hilights: points=[] for hl,portal in hilights: r = self.getArea(hl) points.append(r.getCenter()) draw.rectangle(r.rect(),fill=(0,0,128),outline=(128,0,0)) draw.line(points, fill=(255,255,0)) for hl,portal in hilights: if portal: x0,y0,x1,y1 = map(int,portal) draw.line(((x0,y0),(x1,y1)),fill=(0,255,0)) if waypoints: waypoints =[(w[0],w[1]) for w in waypoints] draw.point(waypoints,fill=(255,0,255)) x,y = waypoints[0] draw.ellipse((x-1,y-1,x+1,y+1), fill=(128,128,128)) img.save(filename) def printTree(self, area, indent=0): print " "*indent, area try: children = area.getChildren() except: return for c in children: self.printTree(c, indent+2) def getArea(self, area_id): return self._free_areas[area_id] #-- helper methods def filterUnreachable(self, reachable): # we'll use the flood fill in Image img = Image.fromarray(N.uint8(self._raster) * 64).copy() ImageDraw.floodfill(img,map(int,reachable[:2]), 128) n = N.array(img.getdata()).reshape(img.size[0], img.size[1]) self._raster = N.piecewise(n, [n==128,n!=128], [0,1]) #img.save("filled.png") #saveMap(self._raster, "reachable.png") def computeAreas(self, depth=8): self._free_areas = [] m = 2**depth h,w = self._raster.shape self._map_w = w self._map_h = h 
self._root_area = Area(0.0,0.0,w,h) self._split(self._root_area, depth) def buildGraph(self): self._adj = {} for area in self._free_areas: ta = self.getTouchingAreas(self._root_area, area) # this is rather expensive. So we cache it area.setAdjacencies(ta) # returns a list of (free_area_id, portal) def getTouchingAreas(self, node, area): if self.intersects(area, node): if node.isFreeArea(): portal = self.getPortal(area, node) if portal is not None: return [(node.getFreeAreaId(), portal)] else: touching = [] for child in node.getChildren(): touching += self.getTouchingAreas(child, area) return touching return [] def intersects(self, a1, a2): if a1.x1 > a2.x2: return False if a2.x1 > a1.x2: return False if a1.y1 > a2.y2: return False if a2.y1 > a1.y2: return False return True def touches(self, a1, a2): code = 0 if a1.x1 == a2.x2: code |= 1 if a2.x1 == a1.x2: code |= 1+4 if a1.y1 == a2.y2: code |= 2+8 if a2.y1 == a1.y2: code |= 2+16 if (code & 3) == 3: return False # diagonal adjacent return code >> 2 # areas must be touching. Otherwise it's undefined def getPortal(self, a1, a2): # vertical portal x0=x1=y0=y1=0 if a1.x1 == a2.x2: x0 = x1 = a1.x1 y0 = max(a1.y1,a2.y1) y1 = min(a1.y2,a2.y2) elif a2.x1 == a1.x2: x0 = x1 = a2.x1 y0 = max(a1.y1,a2.y1) y1 = min(a1.y2,a2.y2) # horizontal portal elif a1.y1 == a2.y2: y0 = y1 = a1.y1 x0 = max(a1.x1,a2.x1) x1 = min(a1.x2,a2.x2) elif a2.y1 == a1.y2: y0 = y1 = a2.y1 x0 = max(a1.x1,a2.x1) x1 = min(a1.x2,a2.x2) if x0==x1 and y0==y1: return None return x0,y0,x1,y1 #-- internal methods def _getContainingArea(self, pos, root_node): if root_node.hasPoint(pos): try: root_node.getFreeAreaId() return root_node except: for c in root_node.getChildren(): r = self._getContainingArea(pos, c) if r is not None: return r return None def _split(self, area, levels_to_go): obs = self._obstructed(area) if obs == 0: # not obstructed area.setFreeAreaId(len(self._free_areas)) # number of leaf self._free_areas.append(area) return True if levels_to_go==0 or obs == 2: # no more levels or fully obstructed return False x0,y0,x1,y1 = area.rect() xm = (x0+x1)/2 ym = (y0+y1)/2 levels_to_go -= 1 sub_areas = [ Area(x0,y0,xm,ym), Area(xm,y0,x1,ym), Area(x0,ym,xm,y1), Area(xm,ym,x1,y1) ] children = None for a in sub_areas: h = self._split(a, levels_to_go) if h: try: children.append(a) except: children = [a] area.setChildren(children) return bool(children) def _obstructed(self, area): x0,y0,x1,y1 = area.rect() x0 = int(x0) x1 = int(x1) y0 = int(y0) y1 = int(y1) o = N.any(self._raster[y0:y1,x0:x1]) if o: if N.all(self._raster[y0:y1,x0:x1]): return 2 return 1 return 0 def computeDistances(self, src, dest_list): # uses Dijkstra pm = self origin_area= self.getContainingArea(src) target_areas = [self.getContainingArea(dest) for dest in dest_list] target_area_ids = set([a.getFreeAreaId() if a is not None else -1 for a in target_areas ]) if origin_area is None or not any(target_areas): return [None]*len(dest_list) search_front_queue = SortedList() search_front_queue.add((0,origin_area.getFreeAreaId())) search_front_p = {} target_distances = {} frozen = set() loops = 0 while search_front_queue: loops += 1 d,a = search_front_queue.pop(0) if a in frozen: # ignore duplicates continue # target found if a == target_area_ids: target_distances[a] = d target_area_ids.remove(a) if len(target_area_ids) == 0: break frozen.add(a) area = pm.getArea(a) for adj, portal in area.getAdjacencies(): if adj in frozen: # don't try to even
# Halloween Golf: The 2spooky4me Challenge!

A current internet meme is to type 2spooky4me, with a second person typing 3spooky5me, following the (n)spooky(n+2)me pattern. Your mission is to implement this pattern in your chosen language.

You should write a program or function that takes a value n (from standard input, as a function argument, or closest alternative), and outputs the string (n)spooky(n+2)me (without the parentheses; to standard output, as a return value for a function, or closest alternative). Your solution should work for all inputs, from 1 up to 2 below your language's maximum representable integer value (2^32-3 for C on a 32-bit machine, for example).

Example implementation in Python:

    def spooky(n):
        return "%dspooky%dme" % (n, n+2)

    spooky(2) -> "2spooky4me"

This is code-golf, so standard loopholes are forbidden, and the shortest answer in bytes wins!

The Stack Snippet at the bottom of this post generates the leaderboard from the answers a) as a list of shortest solution per language and b) as an overall leaderboard.

## Language Name, N bytes

where N is the size of your submission. If you improve your score, you can keep old scores in the headline, by striking them through. For instance:

## Ruby, <s>104</s> <s>101</s> 96 bytes

If you want to include multiple numbers in your header (e.g. because your score is the sum of two files or you want to list interpreter flag penalties separately), make sure that the actual score is the last number in the header:

## Perl, 43 + 2 (-p flag) = 45 bytes

You can also make the language name a link which will then show up in the snippet:

## [><>](http://esolangs.org/wiki/Fish), 121 bytes

• For bonus points: Input %dspooky%dme, validate and return next in series. – clapp Oct 31 '15 at 6:13
• True, but Dennis would still win – clapp Oct 31 '15 at 6:19
• Who is Dennis? :O – NuWin Feb 15 '16 at 20:39
• @NuWin Dennis is the way. Dennis is the light. – Alex A. Feb 15 '16 at 23:02
• @NuWin Dennis is love, Dennis is life – user63571 Jan 25 '17 at 19:40

# gs2, 15 bytes

I outgolfed Dennis!

CP437: spooky•me♣╨V↕0B

Hex dump: 73 70 6f 6f 6b 79 07 6d 65 05 d0 56 12 30 42

At the start of the program, STDIN is pushed (e.g. the string "3") and stored in variable A. The first ten bytes of the program push two strings, "spooky" and "me", to the stack. Then:

• d0 pushes variable A.
• 56 parses it as a number.
• 12 30 increments it by two.
• 42 swaps the top two elements on the stack, leaving "3" "spooky" 5 "me".

The final stack is printed as 3spooky5me.

• Holy hell, that's short. +1 – Addison Crump Oct 31 '15 at 23:47
• GJ, but Dennis might come over here and outmod you to preserve his reputation. – TheDoctor Oct 31 '15 at 23:48
• What no how how did you do that – a spaghetto Nov 1 '15 at 2:06
• I just realized... does gs2 stand for "golf script 2"? – mbomb007 Nov 3 '15 at 14:56
• By the way, I've added GS2 to my family of online interpreters, Try it online! – Dennis Dec 11 '15 at 22:57

# GS2, 17 bytes

56 40 27 27 04 73 70 6F 6F 6B 79 05 42 04 6D 65 05

I CAN'T OUTGOLF DENNIS HELP

• relevant – Downgoat Oct 31 '15 at 3:24
• Man, it's so weird that we currently have 4 languages tied for first and 3 languages tied for second :P – ETHproductions Oct 31 '15 at 20:15

# Stuck, 17 bytes

i_2+"spooky";"me"

EDIT: GUESS YOU COULD SAY I'M STUCK AT 17 BYTES

• Hey cool, someone actually using Stuck :D – Kade Nov 1 '15 at 0:31
• @Shebang I actually really like Stuck. Although it would be nice if it had some better methods for manipulating arrays. – a spaghetto Nov 1 '15 at 2:07

# GolfScript, 17 bytes

~.2+"spooky"\"me"

Try it online on Web GolfScript.

### How it works

~        # Evaluate the input.
.2+      # Push a copy and add 2.
"spooky" # Push that string.
\        # Swap it with the computed sum.
"me"     # Push that string.

# Chef, 414 bytes

S.

Ingredients.
g i
2 g t
115 l s
112 l p
111 l o
107 l k
121 l y
109 l m
101 l e

Method.
Take i from refrigerator.Put e into mixing bowl.Put m into mixing bowl.Put i into mixing bowl.Add t.Put y into mixing bowl.Put k into mixing bowl.Put o into mixing bowl.Put o into mixing bowl.Put p into mixing bowl.Put s into mixing bowl.Put i into mixing bowl.Pour contents of mixing bowl into the baking dish.

Serves 1.

A recipe for disaster. Do not try this at home.

• Mm. That's some spooky tastes you got there. – Addison Crump Nov 8 '15 at 15:36

# CJam, 18 bytes

ri_2+"spooky"\"me"

Try it online.

# Pyth - 17 bytes

s[Q"spooky"hhQ"me

# TeaScript, 18 bytes

x+spooky${x+2}me

Unfortunately this string can't be compressed so this is basically as short as it will get

• Welcome to the 18th byte! :P – a spaghetto Oct 31 '15 at 2:36

# Pip, 18 bytes

Looks like I'm in the second tier of golfing languages here. :^P

[a"spooky"a+2"me"]

Takes the number as a command-line argument and puts the appropriate elements in an array, which is joined together and autoprinted at the end of the program.

# dc, 20 bytes

?dn[spooky]P2+n[me]P

# Japt, 17 16 bytes

U+"spooky{U+2}me

Japt (Javascript shortened) is a language of my invention. It is newer than this challenge; thus, this answer is non-competing. Unlike my other seven unpublished languages, this one has an actual interpreter that is currently being developed and is already partially working. I wanted to post this because I like how it's the same length as all the existing first-place second-place answers. Here's how it works:

U+"spooky{U+2}me"
                   implicit: [U,V,W,X,Y,Z] = eval(input)
U+                 input +
  "spooky    me"   this string
         {U+2}     with input+2 inserted here
                   implicit: output last expression

And there you have it. The spec for all functionality used here was finalized on Oct 29th; nothing was changed to make this answer any shorter. Here's the interpreter, as promised.

• I'm sure this was pre-shoco, but I think you could've done {U}2me instead of {U+2}me :P – Oliver May 1 '18 at 18:27

# Gol><>, 21 bytes

I:n"emykoops"6Ro{2+nH

I guess I'm... tied with Perl?

Try it online.

I:n          Input n, output n
"emykoops"   Push chars
6Ro          Output top 6 chars (spooky)
{2+n         Output n+2
H            Output stack and halt (me)

# Vitsy, 21 Bytes

Note: the Z command was made after this challenge began, but was not made for this challenge.

VVN"ykoops"ZV2+N"em"Z

V            Grab the top item of the stack (the input) and make it a global variable.
V            Call it up - push the global variable to the top of the stack.
N            Output it as a number.
"ykoops"     Push 'spooky' to the stack.
Z            Output it all.
V2+N         Call the global variable again, add two, then output as num.
"em"Z        Push 'me' to the stack and output it all.

More spoopy variation using multiple stacks (27 Bytes):

&"ykoops"&"em"?DN?Z??2+N??Z

&            Make a new stack and move to it.
"ykoops"     Push 'spooky' to the current stack.
&"em"        Do the last two things with 'me'.
?            Move over a stack.
DN           Output the input.
?Z           Move over a stack (the one with 'spooky') and print it.
??           Move back to the original stack.
2+N          Add 2 to the input and output it as a number.
??Z          Move to the stack with 'me' in it and print it.

Try it online!

• Just wondering, why are strings inverted? – Cyoce Feb 23 '16 at 2:32
• It's pushing chars to the stack one by one. – Soham Chowdhury Mar 23 '16 at 8:45

# 05AB1E, 14 10 bytes

DÌs’ÿæªÿme

Try it online.

## Explanation

DÌs’ÿæªÿme
D             get input n and duplicate it
 Ì            increment by 2
  s           Swap. Stack is now [n+2, n].
   ’ÿæªÿme    Compressed string that expands to "ÿspookyÿme".
              The first ÿ is then replaced by n and the second by n+2.
& \sin\theta\\ 0 & -\sin\theta & \cos\theta\end{pmatrix}. \end{equation} Such a unitary gate can introduce the complex phases $e^{i\theta}$ and $e^{-i\theta}$ for $\lambda_2^*$ and $\lambda_3^*$, respectively. Therefore, in the following steps, we only need to consider the case where all eigenvalues are real.

\textit{Step 3. }If $T^*$ contains negative real eigenvalues, we consider the following three unitary transformations with affine map representations ${\mathcal E}_X(T_X,t_X)$, ${\mathcal E}_Y(T_Y,t_Y)$, and ${\mathcal E}_Z(T_Z,t_Z)$ with $t_X=t_Y=t_Z=\bm 0$ and
\begin{equation}\label{suppeq:thm1_step3}\tag{A13}
T_X=\begin{pmatrix} -1 & 0 & 0\\ 0 & 1 & 0\\0 & 0 & 1 \end{pmatrix},\quad T_Y=\begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\0 & 0 & 1 \end{pmatrix},\quad T_Z=\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\0 & 0 & -1 \end{pmatrix}.
\end{equation}
Since their products are still unitary transformations, we need only one unitary transformation to turn the positive eigenvalues obtained in step $1$ into negative values.

\textit{Step 4. }In this step we apply a unitary transformation to transfer the current basis to the $\{v_1^*,v_2^*,v_3^*\}$ basis.

In the above four steps, the unitary gates in steps $2$ to $4$ can be combined into one unitary gate, and according to Ref.~\cite{Nielsen2010Quantum} this unitary gate can be approximated within error $\delta$ by a sequence of elementary gates chosen from a universal gate set. Therefore, the total error cannot exceed the sum of the errors in all steps, which is $7\delta$. By fixing $\delta=\frac{\epsilon}{7}$, we can decompose an arbitrary quantum channel into a sequence of unitary gates and elementary channels chosen from the $14$ elementary channels ${\mathcal E}_1,...,{\mathcal E}_{14}$ constructed in Eqs.~\eqref{suppeq:thm1_step1_1}-\eqref{suppeq:thm1_step1_6}.

Following the four steps above, we can also bound the length of the sequence for the compilation. In steps $2$-$4$ we need one unitary gate, while in step $1$ the length of the table does not exceed $\log_{(1-\delta)}\delta+1$. In practice, $\delta=\frac{\epsilon}{7}$ is usually a small number close to $0$, so we can use the approximation $\log_{(1-\delta)}\delta=\frac{\ln(\delta)}{\ln(1-\delta)}\approx\frac{1}{\delta}\ln(\frac{1}{\delta})=O(\frac{1}{\epsilon}\log(\frac1\epsilon))$. This indicates that the length of the entire sequence is $O(\frac{1}{\epsilon}\log(\frac1\epsilon))$. This completes the proof of part $(2)$ of Theorem 1 in the main text.

We can extend Theorem 1 to the multi-qubit case. As mentioned in the main text, for a $d$-dimensional quantum state $\rho\in{\mathcal O}({\mathcal H}_S)$, a canonical and orthonormal basis \cite{Wang2015Algorithmic,Bruning2012Parametrizations} $\{O_\alpha\},O_\alpha\in{\mathcal O}({\mathcal H}_S)$ satisfies (i) $O_0=I$, (ii) $\text{tr}(O_\alpha)=0,\forall\alpha\neq0$, and (iii) $\text{tr}(O_\alpha^\dagger O_\beta)=\delta_{\alpha\beta}$. An arbitrary density operator $\rho$ can be written as $\rho=\frac 1d(I+\sum_{\alpha=1}^{d^2-1}p_\alpha Q_\alpha)$, where $Q_\alpha=\sqrt{d(d-1)}O_\alpha$. The parameters $\{p_\alpha\}$ form the polarization vector $\bm{p}=(p_1,...,p_{d^2-1})$ of a $(d^2-1)$-dimensional ball, with $||p||_2=1$ representing pure states and $||p||_2<1$ representing mixed states.
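As a concrete check (added here for illustration and not part of the original derivation), the single-qubit case $d=2$ recovers the familiar Bloch-ball picture: choosing $O_\alpha=\sigma_\alpha/\sqrt 2$ for the Pauli matrices gives $Q_\alpha=\sqrt{d(d-1)}\,O_\alpha=\sigma_\alpha$, so that
\begin{equation*}
\rho=\frac12\Big(I+\sum_{\alpha=1}^{3}p_\alpha\sigma_\alpha\Big),\qquad
\text{tr}(\rho^2)=\frac12\big(1+||\bm p||_2^2\big),
\end{equation*}
and $||\bm p||_2=1$ indeed corresponds to pure states while $||\bm p||_2<1$ corresponds to mixed states, consistent with the general statement above.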
Since a quantum state $\rho$ can be represented as a vector within a ball, a quantum channel ${\mathcal E}:{\mathcal O}({\mathcal H}_S)\to{\mathcal O}({\mathcal H}_S)$ can be written as an affine map represented by a distortion matrix $T\in{\mathbb R}^{(d^2-1)\times(d^2-1)}$ and a center shift $t\in{\mathbb R}^{d^2-1}$,
\begin{subequations}
\begin{equation}\label{suppeq:coro1_map1}\tag{A14}
{\mathcal E}\to\mathcal T=\begin{pmatrix}1 & 0\\t & T \end{pmatrix},\quad\mathcal{T}_{\alpha\beta}=\frac1d\text{tr}[Q_\alpha{\mathcal E}(Q_\beta)],
\end{equation}
\begin{equation}\label{suppeq:coro1_map2}\tag{A15}
\bm p\to T\bm p+t.
\end{equation}
\end{subequations}
Noticing that the above affine map is similar to the single-qubit case mentioned in the main text, we can extend part (1) of Theorem 1 straightforwardly to the multi-qubit scenario.

For the second part, we assume the target channel to be ${\mathcal E}^*(T^*,t^*)$ and that $T^*$ has eigenvalues $\lambda_1^*,...,\lambda_{d^2-1}^*$. Without loss of generality, we can rank the $\lambda_i^*$ in decreasing order of magnitude such that $|\lambda_1^*|\geq...\geq|\lambda_{d^2-1}^*|$. As in the proof for the single-qubit channel, $k_i$ is defined to be $k_i=\lceil\min\{\log_{(1-\delta)}|\lambda_i^*|,\log_{(1-\delta)}\delta\}\rceil,i=1,...,d^2-1$. If we directly use the constructive approach in the proof of Theorem 1, we will create $2^{d^2}-2$ elementary channels ${\mathcal E}_1(T_1,t_1),...,{\mathcal E}_{2^{d^2}-2}(T_{2^{d^2}-2},t_{2^{d^2}-2})$. The first $2^{d^2-1}$ channels ${\mathcal E}_1(T_1,t_1),...,{\mathcal E}_{2^{d^2-1}}(T_{2^{d^2-1}},t_{2^{d^2-1}})$ have the same distortion matrix $T_1=...=T_{2^{d^2-1}}=\text{diag}\{1-\delta,...,1-\delta\}$, and their center shifts go through the $2^{d^2-1}$ cases in which each element of the center shift can be $0$ or $\delta$. The next $2^{d^2-2}$ elementary channels that follow are ${\mathcal E}_{2^{d^2-1}+1}(T_{2^{d^2-1}+1},t_{2^{d^2-1}+1}),...,{\mathcal E}_{2^{d^2-1}+2^{d^2-2}}(T_{2^{d^2-1}+2^{d^2-2}},t_{2^{d^2-1}+2^{d^2-2}})$. They share the same distortion matrix $\text{diag}\{1, 1-\delta,...,1-\delta\}$, and their center shifts go through all cases that have the first element equal to $0$ and the other elements either $0$ or $\delta$. The remaining channels are constructed similarly, until the last two channels ${\mathcal E}_{2^{d^2}-3}(T_{2^{d^2}-3},t_{2^{d^2}-3}),{\mathcal E}_{2^{d^2}-2}(T_{2^{d^2}-2},t_{2^{d^2}-2})$, which have the distortion matrix $\text{diag}\{1,...,1,1-\delta\}$ and center shifts $(0,...,0)$ and $(0,0,...,0,\delta)$. Similar to the previous proof, we hold strings $s_i,i=1,...,d^2-1$, with the $j$-th element $s_{i,j}=0,1$ representing whether the $i$-th element of the center shift for the $j$-th channel of the sequence is $0$ or $\delta$. Under this construction, the total error can be bounded above by $(2(d^2-1)+1)\delta$. Therefore, we fix $\delta=\frac{\epsilon}{2d^2-1}$ to guarantee that the distance between our approximation and the target channel is no more than $\epsilon$. The length of the sequence is still bounded by $O(\frac1\epsilon\log(\frac1\epsilon))$. However, notice that the above construction requires $O(2^{d^2})$ elementary channels, which is doubly exponential in the number of qubits $n$. Here, we propose another construction that only requires $O(d^2)$ elementary channels. We still exploit the $4$-step compiling process in the previous section.
Steps $2$ to $4$ remain the same as in the previous construction, implementing the complex and negative eigenvalues of the distortion matrix and performing the orthonormal basis transformation. In step $1$, we simply use ${\mathcal E}_1(T_1,t_1),...,{\mathcal E}_{2(d^2-1)}(T_{2(d^2-1)},t_{2(d^2-1)})$ with $T_{2i-1}=T_{2i}=\text{diag}\{1,...,1,1-\delta,1,...,1\}$, where the $i$-th diagonal element is $1-\delta$, $t_{2i-1}=(0,...,0)$, and $t_{2i}=(0,...,0,\delta,0,...,0)$ with the $i$-th element being $\delta$. Under this construction, step $1$ can be decomposed into sub-steps compiling ${\mathcal E}_{\text{step i}}(T_{\text{step i}},t_{\text{step i}})$ with $T_{\text{step i}}=\text{diag}\{1,...,1,|\lambda_i^*|,1, ...,1\}$ and $t_{\text{step i}}=(0,...,0,t_i^*,0,..,0)$, using ${\mathcal E}_{2i-1}$ and ${\mathcal E}_{2i}$ separately, where $|\lambda_i^*|$ and $t_i^*$ are the $i$-th diagonal element of the distortion matrix and the $i$-th element of the center shift, respectively. In the $i$-th sub-step, we keep a length-$k_i$ $0$-$1$ string $s_i$ and decompose ${\mathcal E}_{\text{step i}}$ into a sequence containing $k_i$ elementary channels. If the $j$-th element $s_{i,j}$ of $s_i$ is $0$, we add ${\mathcal E}_{2i-1}$ to the sequence; otherwise we add ${\mathcal E}_{2i}$. Therefore, the $i$-th element of the approximation of the center shift is $\sum_{j=1}^{k_i}s_{i,j}\delta(1-\delta)^{j-1}$, and such sums form a $\delta$-net in the range $[0,1-|\lambda_i^*|]$ since $||\lambda_i^*|-(1-\delta)^{k_i}|<\delta,i=1,...,d^2-1$. Therefore, in each sub-step we can approximate $|\lambda_i^*|$ and $t_i^*$ within distance $\delta$. The total distance in this step cannot exceed $2(d^2-1)\delta$, which is the same as in the previous construction. We can still fix $\delta=\frac{\epsilon}{2d^2-1}$. It is worthwhile to mention that there exists a trade-off between the above two constructions: though the second construction only requires $O(d^2)$ elementary channels, the output sequence has a total length $O(d^2\frac1\epsilon\log(\frac1\epsilon))$, which is exponential in the number of qubits $n$.

\section*{B. Proof for Theorem 2}
In this section, we provide the detailed proof of Theorem 2 in the main text. To derive a lower bound, we exploit the volume method \cite{Kitaev2002Classical,Harrow2002Efficient}, based on the constraint that the whole space of $SU(d)$ should be covered by the $\epsilon$-balls centered at the gates that can be implemented by an elementary gate sequence. We start with the case in which the subgroup $G=\langle g_1,...,g_n\rangle$ generated by $\{g_1,...,g_n\}$ is finite. We denote the size of $G$ by $|G|=K$. Consider compilation with fewer than $t$ $g^*$ gates and an unlimited number of gates chosen from $G$. If no $g^*$ gate is used, we can only compile the $K$ gates in $G$. When we use at least one $g^*$ gate, any sequence containing $0<k\leq t$ $g^*$ gates can always be written as $g=(g_{i_{11}}g^*g_{i_{12}})...(g_{i_{k1}}g^*g_{i_{k2}})$, where $g_{i_{11}},g_{i_{12}},...,g_{i_{k1}},g_{i_{k2}}$ are chosen from $G$. Consider the subset $G^*=\{g_sg^*g_t|g_s,g_t\in G\}$, which can generate a dense subgroup of $SU(d)$. Any sequence that contains $k$ $g^*$ gates can be regarded as $k$ gates in $G^*$; therefore, the number of $g^*$ gates can be regarded as the number of gates from $G^*$. As the size of $G^*$ is $|G^*|=K^2$, the number of gate sequences containing no more than $t$ $g^*$ gates can be bounded above by $|G^*|+|G^*|^2+...+|G^*|^{t}\leq O(K^{2t+2})$.
Therefore, the total number of gates we can accurately compile with a gate sequence with fewer than $t$ $g^*$ gates is bounded above by $O(K^{2t+2})$. We exploit the normalized Haar measure on $SU(d)$ space such that the volume of $SU(d)$ is one \cite{Harrow2002Efficient}.
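To indicate how such a volume argument is typically completed (the following is our own sketch and is not part of the excerpt above; the precise constants and the exact statement of Theorem 2 may differ): $SU(d)$ is a compact manifold of dimension $d^2-1$, so under the normalized Haar measure an $\epsilon$-ball has volume at most $C_d\,\epsilon^{d^2-1}$ for some constant $C_d$ depending only on $d$. Covering all of $SU(d)$ with the $\epsilon$-balls centered at the $O(K^{2t+2})$ compilable gates then requires
\begin{equation*}
O(K^{2t+2})\cdot C_d\,\epsilon^{d^2-1}\;\geq\;1,
\qquad\text{i.e.}\qquad
t\;=\;\Omega\!\left(\frac{(d^2-1)\log(1/\epsilon)}{\log K}\right),
\end{equation*}
which is the type of lower bound on the number of $g^*$ gates that the volume method yields.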
current directory. You can use wildcards in the source names, but only the first file matching will be copied. Copying REL files should work, but isn't tested well. Mixing REL and non-REL files in an append operation isn't supported.

### CD

CD (change directory) is used to select the current directory and to mount or unmount disk images. Subdirectory access is compatible to the syntax used by the CMD drives, although drive/partition numbers are completely ignored.

Quick syntax overview:

  Command        Remarks
  CD:_           changes into the parent dir (_ is the left arrow on the PET)
  CD_            ditto
  CD:foo         changes into foo
  CD/foo         ditto
  CD//foo        changes into \foo
  CD/foo/:bar    changes into foo\bar
  CD/foo/bar     ditto

You can use wildcards anywhere in the path. To change into a disk image, the image file must be the last component in the path, either after a slash or a colon character. To mount/unmount image files, change into them as if they were a directory and use CD:_ (left arrow on the PET) to leave. Please note that image files are detected by file extension and file size and there is no reliable way to see if a file is a valid image file.

### CP, C<Shift-P>

This changes the current partition, see "Partitions" below for details.

### D

Direct sector access. This is a command group introduced by NODISKEMU. Some Commodore drives use D for disk duplication between two drives in the same unit; an attempt to use that command with NODISKEMU should result in an error message.

D has three subcommands: DI (Info), DR (Read) and DW (Write). Each of those commands requires a buffer to be opened (similar to U1/U2), but due to the larger sector size of the storage devices used by NODISKEMU it needs to be a large buffer of size 2 (512 bytes) or larger. The exception is the DI command with page set to 0, whose result will always fit into a standard 256 byte buffer. If you try to use one of the commands with a buffer that is too small, a new error message is returned: "78,BUFFER TOO SMALL,00,00".

In the following paragraphs the secondary address that was used to open the buffer is called "bufchan".

### DI

In BASIC notation the command format is

  "DI"+chr$(bufchan)+chr$(device)+chr$(page)

device is the number of the physical device to be queried, page the information page to be retrieved. Currently the only page implemented is page 0, which will return the following data structure:

  1 byte : Number of valid bytes in this structure
           This includes this byte and is meant to provide backwards compatibility
           if this structure is extended at a later time. New fields will always be
           added to the end so old programs can still read the fields they know about.
  1 byte : Highest diskinfo page supported
           Always 0 for now, will increase if more information pages are added
           (planned: Complete ATA IDENTIFY output for IDE and CSD for SD)
  1 byte : Disk type
           This field identifies the device type, currently implemented values are:
             0  IDE
             2  SD
             3  (reserved)
  1 byte : Sector size divided by 256
           This field holds the sector size of the storage device divided by 256.
  4 bytes: Number of sectors on the device
           A little-endian (same byte order as the 6502) value of the number of
           sectors on the storage device. If there is ever a need to increase the
           reported capacity beyond 2TB (for 512 byte sectors) this field will
           return 0 and a 64-bit value will be added to this diskinfo page.

If you want to determine if there is a device that responds to a given number, read info page 0 for it.
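As an illustration only (this snippet is not part of the NODISKEMU documentation): assuming the eight bytes of info page 0 described above have already been read out of the buffer on the host side, they could be decoded with a few lines of Python. The sample reply bytes and all names below are made up.

import struct

# Hypothetical raw DI page-0 reply, laid out as documented above:
# valid bytes, highest page, disk type, sector size / 256, 32-bit little-endian sector count.
raw = bytes([8, 0, 2, 2, 0x00, 0x00, 0x20, 0x00])

valid, highest_page, disk_type, secsize_div256, sectors = struct.unpack("<BBBBI", raw)

DISK_TYPES = {0: "IDE", 2: "SD", 3: "(reserved)"}
print("valid bytes :", valid)
print("highest page:", highest_page)
print("disk type   :", DISK_TYPES.get(disk_type, "unknown"))
print("sector size :", secsize_div256 * 256, "bytes")
print("sectors     :", sectors)
print("capacity    :", sectors * secsize_div256 * 256, "bytes")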
If there is no device present that corresponds to the number, you will see a DRIVE NOT READY error on the error channel and the "number of valid bytes" entry in the structure will be 0. Do not assume that device numbers are stable between releases and do not assume that they are continuous either. To scan for all present devices you should query at least 0-7 for now, but this limit may increase in later releases.

### DR/DW

In BASIC notation the command format would be

    "DR"+chr$(bufchan)+chr$(device)
        +chr$(sector AND 255)
        +chr$((sector/256) AND 255)
        +chr$((sector/65536) AND 255)

### R

Renames a file. Notice that the new name comes first!

    R:NEWNAME=OLDNAME

Renaming files should work the same as it does on CMD drives, although the errors flagged for invalid characters in the name may differ.

### RD

RD (remove directory) can only remove subdirectories of the current directory.

    RD:foo

deletes foo.

### S

Scratches (deletes) a file.

    S:DELETEME,METOO

deletes DELETEME and METOO. Name matching is fully supported, directories are ignored. Scratching of multiple files separated by , is also supported with no limit to the number of files except for the maximum command line length (usually 100 to 120 characters).

### T-R and T-W

If your hardware features RTC support the commands T-R (time read) and T-W (time write) are available. If the RTC isn't present, both commands return 30,SYNTAX ERROR,00,00; if the RTC is present but not set correctly, T-R will return 31,SYNTAX ERROR,00,00.

Both commands expect a fourth character that specifies the time format to be used. T-W expects that the new time follows that character with no space or other characters in between. For the A, B and D formats, the expected input format is exactly the same as returned by T-R with the same format character; for the I format the day of week is ignored and calculated based on the date instead.

The possible formats are:

- "A"SCII: "SUN. 01/20/08 01:23:45 PM"+CHR$(13)
  The day-of-week string can be any of "SUN.", "MON.", "TUES", "WED.", "THUR", "FRI.", "SAT.". The year field is modulo 100.
- "B"CD or "D"ecimal: Both these formats use 9 bytes to specify the time. For BCD everything is BCD-encoded, for Decimal the numbers are sent/parsed as-is.

  | Byte | Value |
  |------|-------|
  | 0 | Day of the week (0 for Sunday) |
  | 1 | Year (modulo 100 for BCD; -1900 for Decimal, i.e. 108 for 2008) |
  | 2 | Month (1-based) |
  | 3 | Day (1-based) |
  | 4 | Hour (1-12) |
  | 5 | Minute (0-59) |
  | 6 | Second (0-59) |
  | 7 | AM/PM flag (0 is AM, everything else is PM) |
  | 8 | CHR$(13) |

When the time is set, a year less than 80 is interpreted as 20xx.

### U1/U2

Block reading and writing is fully supported while a disk image is mounted.

    U1 channel drive track sector   reads a block
    U2 channel drive track sector   writes a block

### UI/UJ

Soft/Hard reset - UI just sets the "73,..." message on the error channel, UJ closes all active buffers but doesn't reset the current directory, mounted image, swap list or anything else.

### U<Shift-J>

Real hard reset - this command causes a restart of the AVR processor (skipping the bootloader if installed). <Shift-J> is character code 202.

## X: Extended commands

### XEnum

Sets the "file extension mode". This setting controls if files on FAT are written with an x00 header and extension or not. Possible values for num are:

- 0: Never write x00 format files.
- 1: Write x00 format files for SEQ/USR/REL, but not for PRG.
- 2: Always write x00 format files.
- 3: Use SEQ/USR/REL file extensions, no x00 header.
- 4: Same as 3, but also for PRG.

If you set mode 3 or 4, extension hiding is automatically enabled.
This setting can be saved in the EEPROM using XW, the default value is 0. For compatibility with existing programs that write D64 files, PRG files that have D64, D41, D71, D81, D80, D82 or DNP as an extension will always be written without an x00 header and without any additional PRG file extension. ### XE+ / XE- Enable/disable extension hiding. If enabled, files in FAT with a PRG/SEQ/USR/REL extension will have their extension removed and the file type changed to the type specified by the file extension - e.g. APPLICATION.PRG will become a PRG file named "APPLICATION", "README.SEQ" will become a SEQ file named "README". This flag can be saved in
\section{Introduction} The nonlinear Dirac (NLD) equation has had a long history in particle physics as a model field theory to describe the low energy behavior of the weak interactions starting with Fermi's theory of beta decay \cite{Fermi:1938}. This theory was recast by Feynman and Gell-Mann \cite{Feynman:1958} as a nonlinear quantum field theory with vector and axial vector 4-Fermi interactions. At the classical level, the Dirac equation was generalized to include local self-interactions by Ivanenko \cite{Ivanenko:1938}. Classical versions of the nonlinear Dirac (NLD) equation with different self-interaction terms and in different spatial dimensions have found many applications as models for various physical systems such as a model for extended particles \cite{Finkelstein:1951, finkelstein:1956,heisenberg:1957}, as a way of describing nonlinear optics \cite{barashenkov:1998}, optical realizations of relativistic quantum mechanics \cite{longhi:2010,dreisow:2010,tran:2014}, in honeycomb optical lattices hosting Bose-Einstein condensates \cite{haddad:2009}, among others. At the classical level, the various NLD equations allow for localized solutions with finite energy and charge \cite{ranada:1984}. This aspect of the NLD equation has led to its use as a model of extended objects in quantum field theory \cite{weyl:1950}. For the (1+1) dimensional NLD equation (i.e., one space dimension plus one time dimension), analytical solitary wave solutions have been obtained for the quadratic nonlinearity \cite{lee:1975,chang:1975}, for fractional nonlinearity \cite{mathieuprd:1985} as well as for general nonlinearity \cite{stubbe:1986,cooper:2010,xu:2013}. These results are summarized by Mathieu \cite{mathieu:1985}. The interaction dynamics of these solitary waves has been investigated in a series of works \cite{alvarez:1981,shao:2005,shao:2006,shao:2008,xu:2013,tran:2014} where quite insightful nonlinear phenomena have been found. One interesting question is how these solitary waves behave when they are placed in trapping potentials. Whether these solitary waves get trapped by these potentials or escape freely depends on competition between the effective size of the trapping potential and the soliton width. The effect of this competition in a spatially periodic parametric force proportional to $\cos(K x)$ has been studied in several systems having soliton solutions: the sine-Gordon equation \cite{scharf:1992,sanchez:1992,cuenda:2005}, self interacting $\varphi^4$ theory, and in nonlinear Schr\"odinger (NLS) equations \cite{sanchez:1994,scharf:1993}. The length-scale competition appears when the soliton width is comparable with the period $\lambda=2 \pi/K$. In this situation, the dynamics of the soliton is near a transition from bound to unbound behavior and is very sensitive to the specific details of the nonlinear dynamics. In this paper we will use the method of collective coordinates in a variational formulation which includes dissipation to study approximately the response of exact solitary wave solutions of the NLD equations with arbitrary nonlinearity parameter $\kappa$ to forcing that is proportional to $\cos(K x)$. The effect of the dissipation on the dynamics of the soliton will also be studied. The use of collective coordinates to study solitary waves has a long history. 
In conservative systems, variational methods were used to study the effect of small perturbations on the solitary waves found in the Korteweg-de Vries (KdV) equation, the modified KdV equation and the NLS equation \cite{bondeson:1979}. A similar approach was used to obtain the approximate time evolution of two coupled NLS equations \cite{kishvar:1990}, starting with trial wave functions based on the exact solution of the uncoupled problem. From a different perspective, Cooper and collaborators were interested in using robust post-gaussian trial wave functions which were {\em not} in the class of exact solutions to understand how well one could approximate exact solutions using this class of functions and to also understand what properties of soliton dynamics did not rely on knowing the exact solution \cite{cooper:1992, cooper:1993a,cooper:1993b,cooper:1993c,cooper:1994}. More recently it has been realized that if one chooses a robust enough trial wave function one can obtain a good estimate of the stability of the solitary wave in external environments by studying the linear stability of the reduced set of collective coordinates. Specifically, this method presumes that the influence of the parametric force on the soliton can be captured by assuming that some shape parameters of the wave function become functions of time. An example of this is found in \cite{cooper:2017} and references therein. For the solitary waves of the NLD equation, the question of length scale competition for the NLD equation with parametric driving of the type proportional to $\cos(K x)$ was studied in \cite{quintero2:2019} using a recently developed five collective coordinates (5CC) approximation \cite{quintero:2019}. In that paper, which only considered nonlinearity parameter $\kappa=1$ (the interaction term in the Lagrangian density being $(g^2/2) (\bPsi\Psi)^{2}$) it was shown that the behavior of the collective coordinates agreed quite well with numerical simulations of the various moments. It was also found that at $K=0$, the center of the soliton moved (apart from rapid small oscillations) like a free particle. Once $K$ was turned on, as long as the solitary wave was moving, when $2 \pi/K > l_s$ the solitary wave gets trapped and the collective coordinates of the solitary wave oscillate with two frequencies, the faster one being around $2 \omega_0$, where $\omega_0$ is the initial frequency of the soliton solution. When $2 \pi/K \gg l_s$, the effect of the driving term which acts as a trapping potential goes to zero and the solitary wave again moves freely with small rapid oscillations given approximately by $2 \omega_0$. In the intermediate regime the solitary wave was subject to instabilities at late times that are found in numerical simulations. This effect was not captured by the 5CC approximation \cite{quintero2:2019}. In this paper we consider a generalization of the problem studied in \cite{quintero2:2019}, where the nonlinearity term in the Lagrangian is taken to an arbitrary power $\kappa$. This power controls the width and shape of the solitary wave: as one increases $\kappa$ one decreases the width of the solitary wave. Exact solutions of the NLD equation at arbitrary $\kappa$ and their stability properties were studied in \cite{cooper:2010}. We will use the form of these solutions in the 5CC approximation to study the transition region between trapped and unconfined motion in the presence of parametric driving. 
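For orientation, the interaction term referred to above can be written for general $\kappa$ as below. This prefactor is an assumption on my part, chosen so that it reduces to the $(g^2/2) (\bPsi\Psi)^{2}$ term quoted above at $\kappa=1$ and consistent with the conventions of \cite{cooper:2010}; it should be read as a plausible normalization rather than a definition taken from this section:
\begin{equation}
\mathcal{L}_{\rm int} = \frac{g^{2}}{\kappa+1}\left(\bPsi \Psi\right)^{\kappa+1},
\end{equation}
so that increasing $\kappa$ at fixed $\omega_0$ sharpens the self-interaction and, as described above, narrows the solitary wave.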
Because the shape dependence is a function of $\kappa$, we expect and indeed find that at fixed value of $K$, and the same initial conditions, one can go from the ``free particle" regime to the trapped regime by increasing $\kappa$. When the dissipation is included the soliton loses energy until it disappears. As we increase $\kappa$ for fixed $\omega_0$, the solitary wave gets more ``spike like" so that it looks closer to a point particle. As a result of this the domain where the solitary wave is trapped gets enhanced. We map out this domain using the collective coordinate approach. We also compare the motion of the solitary wave at $\kappa=1/2$ and $ \kappa=2$ with numerical simulation of the NLD equations and get qualitative agreement. The transition regime curves would have been impossible to determine from numerical simulations for both numerical reasons and time feasibility constraints. The paper is organized as follows. In Sec. \ref{sec2} both the parametric force and the damping are introduced in the NLD equation and dynamical equations for the charge, the momentum and the energy are derived. We also show that the equations of motion can be obtained from a Lagrangian when we include a dissipation function \cite{Rayleigh}. In Sec. \ref{sec3} an ansatz with five collective coordinates is used as an approximate solution of the parametrically driven NLD equation and the equations of motion for the collective coordinates are obtained. Then, in Sec. \ref{sec4} we present numerical solutions of the collective coordinates equations and discuss the transition of behavior as a function of $\kappa$ for fixed $K$ as well as the transition of behavior from oscillatory to free for $\kappa=1/2$, $\kappa=1$ and $\kappa=2$ as we increase $K$. Finally, the main results of the work and the conclusions are summarized in Sec. \ref{sec5}. \section{Damped and parametrically driven NLD equation} \label{sec2} The parametrically driven nonlinear Dirac equation was recently investigated in Ref. \cite{quintero2:2019}. Here
\section{Introduction} The behavior of granular materials is of great technological interest \cite{jaeger96} and its investigation has a history of more than two hundred years. When granular materials are put in a rotating drum, avalanches are observed along the surface of the granular bulk~\cite{evesque88,rajchenbach90}. In industrial processes, such devices are mostly used for mixing different kinds of particles. However, it is also well known that particles of different sizes tend to segregate in the radial and axial directions~\cite{donald62,% bridgwater76,dasgupta91,nakagawa94,clement95,cantelaube95,hill94,hill95,% hill97,nakagawa97b,hill97b}. Recently, the particle dynamics of granular materials in a rotating drum has been described by using quasi two-dimensional systems, tracking individual grains via cameras and computer programs~\cite{cantelaube95}. Extensive numerical studies have also reproduced and predicted many of the experimental findings~\cite{walton93,ristow94,baumann94,baumann95,buchholtz95,ristow96,% dury97}. The segregation and mixing process depends on many parameters, such as size~\cite{bridgwater76,dury97}, shape~\cite{buchholtz95}, mass~\cite{ristow94}, frictional forces, angular velocity~\cite{dury97}, filling level of the drum~\cite{metcalfe95}, etc. The angle of repose of the material also depends on the parameters and it was argued that either the dynamic or static angle difference of the materials in the drum influence the axial segregation process~\cite{donald62,dasgupta91,hill94,hill95}. In this article, we investigate experimentally the dependence of the dynamic angle of repose on the rotation speed of a half-filled drum for particles of different material properties. It is found that the angle is up to 5 degrees higher at the end caps of the drum due to boundary friction. Using a three-dimensional discrete element code, we are able to quantify this boundary effect and discuss its dependence on gravity, particle size and density. \section{Experimental Results} \label{experiment} An acrylic cylinder of diameter 6.9 cm and length 49 cm was placed horizontally on two sets of roller supports and was rotated by a well-regulated electronic motor. The material used was mustard seeds which are relatively round of average diameter about 2.5 mm, and have a coefficient of restitution, estimated from a set of impact experiments, of about 0.75~\cite{nakagawa93}. A set of experiments were conducted to measure the angle of repose in different flow regimes. For a small rotation speed, $\Omega$, intermittent flow led to a different angle before and after each avalanche occurred, called the starting (maximum) and stopping (minimum) angle, respectively. For a larger rotation speed these intermittent avalanches became a continuous flat surface and thus enabled to define one angle of repose defined as the {\em dynamic angle of repose} as shown in Fig.~\ref{fig: sketch}a. When $\Omega$ increases, the flat surface deforms with increasing rotation speeds and develops a so-called S-shape surface for higher rotation speeds, shown in Fig.~\ref{fig: sketch}c. The deformation mostly starts from the lower boundary inwards and can be well approximated by two straight lines with different slopes close to this transition, sketched in Fig.~\ref{fig: sketch}b. For all measurements in this regime, we took the slope of the line to the right which corresponds to the line with the higher slope. 
\ifx\grdraft\undefined \else \vspace{-2ex} \begin{figure}[htb] \begin{center} \epsfig{file=bound_4a.eps,width=0.5\textwidth} \end{center} \caption{(a) Flat surface for low rotation speeds, (b) deformed surface for medium rotation speeds with two straight lines added as approximation and (c) fully developed S-shaped surface for higher rotation speeds.} \label{fig: sketch} \end{figure} \fi The average maximum and minimum angles of repose for the intermittent avalanches were found to be about 36 and 32 degrees, respectively, see Fig.~\ref{fig: mustard_all}. There seems to be a rather sharp transition from intermittent to continuous avalanches, which happens around $\Omega = 4$ rpm. For $\Omega$ greater than 4 rpm where the avalanches are continuous, the mustard seed data indicate a linear dependence of the dynamic angle of repose on the rotation speed which differs from the quadratic dependence found by Rajchenbach~\cite{rajchenbach90}. \ifx\grdraft\undefined \else \vspace{-4ex} \begin{figure}[htb] \begin{center} \input bound_0 \end{center} \caption{Experimentally measured starting ($\circ$) and stopping ($\star$) angle and dynamic angle of repose ($\bullet$) for mustard seeds.} \label{fig: mustard_all} \end{figure} \fi We also investigated the dynamic angle of repose for different particle diameters and materials in the continuous regime in more detail using a 27 cm long acrylic cylinder of diameter $2R=6.9$ cm. For a given rotation speed, $\Omega$, the dynamic angle of repose was measured four times at one of the acrylic end caps and the average value with an error bar corresponding to a confidence interval of 2$\sigma$, where $\sigma$ is the standard deviation of the data points, was then calculated. First we used mustard seeds of two different diameters, namely 1.7 mm (black) and 2.5 mm (yellow), with a density of 1.3 g/cm$^3$. The latter were the same that were used to produce Fig.~\ref{fig: mustard_all}. We varied the rotation speed, $\Omega$, from 5 rpm to 40 rpm and took the higher angle in the S-shaped regime which exists for higher rotation rates, see Fig.~\ref{fig: sketch}b. Both data sets are shown in Fig.~\ref{fig: mustard} for black ($\bullet$) and yellow ($\circ$) seeds. The figure also illustrates the transition to the S-shaped regime which occurs at the change of slope, e.g.\ at around 11 rpm for the smaller seeds and around 16 rpm for the larger seeds. One also notes that the dynamic angle of repose is much higher for the larger particles in the low frequency regime. For values of $\Omega > 15$ rpm in the S-shaped regime, the difference in the dynamic angle of repose for the two different types of mustard seeds decreases with increasing $\Omega$, and both curves cross around 30 rpm giving a slightly higher angle for the smaller seeds with the highest rotation speeds studied. We applied the same measurements to two sets of glass beads having a density of 2.6 g/cm$^3$. The smaller beads had a diameter of 1.5 mm with no measurable size distribution, whereas the larger beads had a diameter range of 3.0 $\pm$ 0.2 mm. Both data sets are shown in Fig.~\ref{fig: glass} for small ($\bullet$) and large ($\circ$) beads. It can be seen from this figure that the transition to the S-shaped regime occurs at around 16 rpm for the smaller beads and around 24 rpm for the larger beads. In general, we found that the small particles exhibit the S-shaped surface at lower values of $\Omega$ than the large particles. 
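Since the text above contrasts a linear dependence of the dynamic angle of repose on $\Omega$ with the quadratic dependence reported by Rajchenbach, the following is a minimal analysis sketch of how the two functional forms can be compared on such data. The arrays are placeholders, not the measured values from this experiment, and numpy/scipy are assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: rotation speed in rpm, dynamic angle of repose in degrees.
# These are illustrative numbers only, NOT the measured values.
omega = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
theta = np.array([33.0, 34.1, 35.3, 36.4, 37.6, 38.7, 39.9, 41.0])

def linear(om, theta0, a):
    return theta0 + a * om

def quadratic(om, theta0, b):
    return theta0 + b * om**2

for name, model in [("linear", linear), ("quadratic", quadratic)]:
    popt, _ = curve_fit(model, omega, theta)
    resid = theta - model(omega, *popt)
    rms = np.sqrt(np.mean(resid**2))
    print(f"{name:9s} fit: params = {popt}, rms residual = {rms:.3f} deg")
```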
The angles of repose are, in general, lower for the glass beads compared to the mustard seeds which we attribute to the fact that the mustard seeds are not as round as the glass beads and rotations of the mustard seeds are therefore more suppressed. The coefficient of friction is also higher for mustard seeds. \ifx\grdraft\undefined \else \vspace{-4ex} \begin{figure}[htb] \begin{center} \input bound_2 \end{center} \caption{Dynamic angle of repose for black ($\bullet$) and yellow ($\circ$) mustard seeds.} \label{fig: mustard} \end{figure} \fi \ifx\grdraft\undefined \else \vspace{-4ex} \begin{figure}[htb] \begin{center} \input bound_1 \end{center} \caption{Dynamic angle of repose for small ($\bullet$) and large ($\circ$) glass beads.} \label{fig: glass} \end{figure} \fi There are two striking differences when comparing Figs.~\ref{fig: glass} (glass spheres) and \ref{fig: mustard} (mustard seeds). For rotation speeds, $\Omega$, lower than 15 rpm, the small and large glass beads have the same dynamic angle of repose which agrees with the findings in~\cite{zik94}, whereas the dynamic angle of repose is significantly higher (3 to 4 degrees) for the larger mustard seeds compared to the smaller ones. For rotation speeds, $\Omega$, higher than 15 rpm, the smaller glass beads show a higher dynamic angle of repose than the larger glass beads, and this angle difference increases with increasing rotation speed. For mustard seeds, the difference in the dynamic angle of repose between the smaller and the larger particles decreases with increasing $\Omega$ and the smaller seeds only show a higher angle for the highest rotation speeds studied. Both Fig.~\ref{fig: mustard} and Fig.~\ref{fig: glass} seem to indicate that the increase in the dynamic angle of repose with rotation speed, $\Omega$, in the S-shaped regime is larger for the smaller particles. All the above angle of repose were measured by looking through one of the acrylic end caps. In order to study the boundary effect of these end caps on the dynamic angle of repose, we performed Magnetic Resonance Imaging (MRI) measurements. This technique of studying non-invasively the flow properties of granular materials was first used by Nakagawa et al.~\cite{nakagawa93} and is in addition explained in more detail in ref.~\cite{nakagawa94}. We used the large mustard seeds, which had an average diameter of 2.5 mm. The dynamic angle of repose was measured based on the concentration data which was averaged in a thin cross-sectional slice in the middle of the cylinder far away from the end caps. It is shown in Fig.~\ref{fig: mri} as function of
## SCORM

Hello, I'm running Moodle 2.7 and my students are complaining about constant SCORM alerts ("The SCORM player has detected ..."). Is there a quick and easy way to disable this function? Usually there is nothing they can do about their connection. This alert comes and goes constantly, distracting them. Any tips would be very much appreciated. Tim

Those alerts are there for a reason. If they have flakey internet connectivity then it is likely that the SCORM content might display in their browser but any progress/grade they achieve may not be saved/passed back to Moodle - even if the SCORM package displays a successful score or completion. In previous versions of Moodle this failed silently in the background with no warning provided to the user. If you have users with flakey internet connectivity you should not use SCORM, or you should advise your students to use a computer that has a reliable internet connection when using SCORM packages.

I also have many suspicions about those alerts, and I tend to believe they report a lot of false positives. While I was testing SCORM packages on an iPad, they happened to pop up continuously on the iPad, while I did not get anything on a workstation at the same time. This was in an office with a good connection and with perfectly operating Wifi...

If your server performs badly it could also cause the reports - there's a 2 second timeout value associated with those requests - it uses javascript to check for the file '/lib/yui/build/moodle-core-checknet/assets/checknet.txt' - if it can't access that file it assumes that the network is unavailable. I can't see how that would give false positive reports - the fact you are seeing the errors suggests that network connectivity or server performance is an issue. If the network is definitely ok, you need to check the performance of your server - a 2 second delay when requesting a text file suggests that your server is not coping well. SCORM needs network connectivity - intermittent network issues or bad server performance can result in grades or completion tracking silently failing.

I have exactly the same issue. Desktop behaves fine, while an iPad on the same network gets this message constantly every few seconds, making it unusable. I'd like to know how this can be disabled...

Definitely looks like something worth investigating... I have created a bug report here: https://tracker.moodle.org/browse/MDL-45987

Hi All, it would be nice if you could provide more debug info about such failures on an iPad: I do not have such a device so I cannot be of any immediate help. You should start looking at your web server logs to find regular hits to checknet.txt in the case of a desktop, and we should add something like alerts - not sure if console logs are appropriate here - to identify the nature of the failure in the case of an iPad. The related YUI module is located in lib/yui/src/checknet/js/checknet.js. While I understand that this issue is more than annoying, IMHO we should try to find the culprit on the iPad instead of giving the administrator the possibility to disable such an important network check (Ref.: MDL-28261).
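For anyone who wants to check whether their server can serve the checknet file within the 2-second window mentioned above, here is a minimal Python sketch. It is not Moodle code; the site URL is a placeholder, and it only mimics the kind of request the checknet module makes.

```python
# Minimal sketch (not Moodle code): probe the same static file that
# moodle-core-checknet requests, using the 2-second timeout described above.
import sys
import urllib.error
import urllib.request

MOODLE_BASE = "https://moodle.example.org"  # placeholder site URL
CHECK_PATH = "/lib/yui/build/moodle-core-checknet/assets/checknet.txt"

def probe(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the checknet file is reachable within `timeout` seconds."""
    try:
        with urllib.request.urlopen(base_url + CHECK_PATH, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError, OSError):
        return False

if __name__ == "__main__":
    ok = probe(MOODLE_BASE)
    print("checknet file reachable" if ok
          else "slow or unreachable - SCORM warnings likely")
    sys.exit(0 if ok else 1)
```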
In the mean time, if you want to "disable" it you can comment out a few lines in mod/scorm/player.php:

    diff --git a/mod/scorm/player.php b/mod/scorm/player.php
    index f9f0129..a4f5b01 100644
    --- a/mod/scorm/player.php
    +++ b/mod/scorm/player.php
    @@ -261,8 +261,10 @@ if (!empty($forcejs)) {
     }
     
     // Add the checknet system to keep checking for a connection.
    +/*
     $PAGE->requires->string_for_js('networkdropped', 'mod_scorm');
     $PAGE->requires->yui_module('moodle-core-checknet', 'M.core.checknet.init', array(array(
         'message' => array('networkdropped', 'mod_scorm'),
     )));
    +*/
     echo $OUTPUT->footer();

HTH, Matteo

Hi Matteo, Thanks for the help, HOWEVER I now accidentally removed line 260 of the player.php file and the SCORM file is now not working! Could anyone help me by sending me the original mod/scorm/player.php line 260? Thanks, Alexis

What version are you on Alexis? The files are available on the Moodle github. 2.7 latest: https://github.com/moodle/moodle/blob/MOODLE_27_STABLE/mod/scorm/player.php

Hi Alexis, Read it in the version.php file (e.g.: $release = '2.8.5+ (Build: 20150319)';) or in the Notifications page. HTH, Matteo

Hi Matteo/Matthew, Thanks so much for the help - it was the latest (2.8) version and I managed to re-fix the code now. Appreciate the help guys!!!

Using version 2.8.1-0, is there any down-side to modifying checknet.js to increase the timeout to 4 seconds instead of the current 2? The file is lib\yui\build\moodle-core-checknet\moodle-core-checknet-min.js and the value to modify is timeout:{value:2e3}, which becomes timeout:{value:4e3},

Hi Dan, I was following the instructions with respect to MDL-28261. My moodle version is 2.6.3. I couldn't find the moodle-core-checknet folder, so I copied the checknet.php and module.js files into the scorm folder, but I constantly started getting the message "Moodle's SCORM player has determined that your Internet connection is unreliable or has been dropped. If you continue in this SCORM activity, your progress may not be saved. You should exit the activity now, and return when you have a dependable Internet connection." whenever the scorm module is opened. So I removed those files and replaced module.js with the original copy, but I still get the message. Can you please suggest how to disable this message, or is there any other way to simply display an alert message when the user loses the session between the SCORM package and the LMS? Thanks Usha

Yes, this is especially evident when they are using iPads. This rarely happens on PCs even though they are on the same wifi. We cannot build a separate version of courseware without SCORM for iPads only. It would be nice to have an option for turning the alerts off even if they are silently failing - we do get "0" for grades, but we can track their activity and still tell how they answered and what options they chose. iPad comfort is becoming essential.

Just to follow up on this issue. At the moment these warnings are here to stay - the warnings occur when the communication between the SCORM package and Moodle is failing, and this means that grades and tracking information from the user will be lost. I have not been able to reproduce this on my own iPad when connecting to my local machine or when connecting to an externally hosted site.
I suspect the reason you are seeing this more frequently is due to one of the following:

1. Wifi access is flaky.
2. The connection between your wireless network and your moodle server is flaky or slow and the iPad times out when trying to communicate.
3. An iPad-specific bug related to network connections dropping and re-connecting.

SCORM requires a reliable internet connection with the Moodle server to function correctly. We MUST warn users if tracking data is going to be lost - we are always getting reports like "my students tell me that they sat the SCORM but it didn't record the score - what happened?" - My preference would be to halt access to the SCORM completely when this warning occurs, so if anything we may make it more difficult to continue accessing the SCORM rather than removing the warning. Of course - if someone manages to trace/diagnose this issue as being a bug with Moodle code and provides a patch to improve it I'm more than happy to take another look.

Dan, I can say with confidence this is not just a Moodle issue with SCORM sessions being lost. We have had it reported by our clients using commercial LMS systems as well, and WiFi connections seem to be the worst. The strange part is that though the SCORM session is lost the learner can continue through the course, which means they still have a connection to the server. This seems to point to a lower tolerance to connection problems in the SCORM players than in a typical browser connection. However, this new feature
was having a hard time proving [puzzle 83](https://forum.azimuthproject.org/discussion/2098/lecture-27-chapter-2-adjoints-of-monoidal-monotones#latest) and I think it's because the unit conditions are the other way around for the lax and oplax monotones, respectively:

> - For the lax monotone we should require: $$I_Y \le f(I_X) .$$
> - For the oplax monotone we should require: $$f(I_X) \le_Y I_Y .$$

> And a nitpick: one of these equations was using $$1$$ rather than $$I$$ to denote the unit; I've noticed this notation is also mixed at the end of the [next lecture](https://forum.azimuthproject.org/discussion/2098/lecture-27-chapter-2-adjoints-of-monoidal-monotones#latest).

You're right on all of these! I'm fixing these mistakes now. It's important to get those inequalities pointing the right way. Thanks!

21. (edited May 2018)

Anindya wrote:

> I was wondering if we needed both rules – "f preserves ⊗" and "f preserves I" – in the definition of a monoid homomorphism.

That's a good question; for group homomorphisms we don't need the second condition. But for monoid homomorphisms we do. Jonathan and you have given some nice examples from logic; here's a typical example from analysis:

Let $$X$$ be the set of functions $$f : [0,1] \to \mathbb{R}$$. Make this into a monoid with pointwise multiplication of functions as its multiplication and the constant function $$1$$ as the unit. Let $$F : X \to X$$ be the map that multiplies any function $$f$$ by the characteristic function of the interval $$[0,1/2]$$. Then

$$F(fg) = F(f) F(g)$$

for all $$f,g \in X$$ but $$F(1) \ne 1 .$$

We can generalize this as follows. Suppose $$X$$ is any monoid and $$p \in X$$ is a **central idempotent**: an element that commutes with everything in $$X$$ and has $$p^2 = p$$. (In the previous example, $$p$$ is the characteristic function of the interval $$[0,1/2]$$.) Let $$F : X \to X$$ be the map that multiplies any element of $$X$$ by $$p$$. Then

$$F(fg) = p f g = p^2 f g = p f p g = F(f) F(g)$$

but $$F(1) = p \ne 1$$ unless $$p = 1$$.

**Puzzle.** Suppose $$X$$ is any monoid and $$F : X \to X$$ is any map with $$F(fg) = F(f) F(g)$$ for all $$f,g \in X$$. Is $$F(1)$$ a central idempotent?
22. (edited May 2018)

Re this puzzle:

> Suppose $$X$$ is any monoid and $$F : X \to X$$ is any map with $$F(fg) = F(f) F(g)$$ for all $$f,g \in X$$. Is $$F(1)$$ a central idempotent?

It's taken me a bit of fiddling to come up with a counterexample to that one, but I think I've got one...

First let's note that $$F(1)F(1) = F(1.1) = F(1)$$, so $$F(1)$$ is certainly idempotent. And $$F(1)F(g) = F(1.g) = F(g.1) = F(g)F(1)$$, so $$F(1)$$ commutes with anything in the image of $$F$$. But $$F(1)$$ need not be central (ie commute with anything in $$X$$), as the following example shows.

Let $$2 = \{0, 1\}$$ be a two-element set, and let $$X$$ be the set of maps $$2\rightarrow 2$$. So $$X$$ has four elements:

— the identity map
— the map switching round $$0$$ and $$1$$
— the constant map $$c_0$$ sending everything to $$0$$
— the constant map $$c_1$$ sending everything to $$1$$

$$X$$ is a monoid under composition, but it is not commutative: $$c_0\circ c_1 \neq c_1 \circ c_0$$

Now let $$F : X \rightarrow X$$ be the constant map sending everything in $$X$$ to $$c_0$$. Then we certainly have $$F(x\circ y) = F(x)\circ F(y)$$, since both sides are always $$c_0$$. But as noted above, $$c_0$$ is not central because it does not commute with $$c_1$$.

23.

(actually just noticed we can delete the "switch" map from X above to get an even smaller counterexample – a monoid with just three elements. since any two-element monoid is commutative, this is the smallest counterexample. more generally, consider the "words on an alphabet" example where we identify any two words that begin with the same letter.)

24.

Good, Anindya!
Right, saying that a map from a monoid to itself $$F: X \to X$$ preserves multiplication quickly implies that $$F(1)$$ is an idempotent that commutes with everything _in the range_ of $$F$$, but no more... so on the principle that "you don't get something for nothing", there should be examples where $$F(1)$$ doesn't commute with everything in $$X$$. Then the challenge is to find a counterexample... a challenge you met.
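As a quick sanity check on the counterexample above, here is a small Python sketch (not from the thread itself) that enumerates the monoid of maps $$2 \to 2$$ under composition and verifies that $$F$$ preserves composition while $$F(\mathrm{id}) = c_0$$ fails to be central.

```python
from itertools import product

# Represent a map f : {0,1} -> {0,1} as the tuple (f(0), f(1)).
X = [tuple(t) for t in product((0, 1), repeat=2)]  # all four maps
identity = (0, 1)
c0 = (0, 0)   # constant map sending everything to 0
c1 = (1, 1)   # constant map sending everything to 1

def compose(f, g):
    """(f o g)(x) = f(g(x))"""
    return (f[g[0]], f[g[1]])

def F(f):
    """The map sending every element of X to c0."""
    return c0

# F preserves multiplication: F(f o g) == F(f) o F(g) for all f, g in X.
assert all(F(compose(f, g)) == compose(F(f), F(g)) for f in X for g in X)

# F(identity) = c0 is idempotent ...
assert compose(c0, c0) == c0
# ... but not central: it fails to commute with c1.
assert compose(c0, c1) != compose(c1, c0)

print("F preserves composition, yet F(id) = c0 is not central.")
```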
'-', label=r"$\Lambda$CDM (nl)") fig_suptitle(fig, suptitle) ax.set_xlabel(r"$k [h{\rm Mpc}^{-1}]$") ax.set_ylabel(r"d$\ln P(k)/$d$\ln k$]") # LEGEND manipulation # legend_manipulation(ax, a_sim_info.info_tr()) legend_manipulation(ax, "") # close & save figure close_fig(out_dir + out_file, fig, save=save, show=show, use_z_eff=use_z_eff) def plot_corr_func_universal(r, xi, r_lin, xi_lin, r_nl, xi_nl, lab, suptitle, ylabel, figtext, out_dir, file_name, save, show, r2, extra_data=None, use_z_eff=False, chi=False): if lab == 'init': z_out = lab elif 'chi' in lab: z_out = "z" else: z_out = 'z' + lab[4:] fig = plt.figure(figsize=fig_size) if extra_data is None: extra_data = [] # check for r2 multiplier mlt = mlt_lin = mlt_nl = 1 if r2: mlt = r*r if xi_lin is not None: mlt_lin = r_lin*r_lin if xi_nl is not None: mlt_nl = r_nl*r_nl ylabel = r"$r^2" + ylabel + r"(r)$" file_name = out_dir + '%s_r2_%s' % (file_name, z_out) plt.xscale("linear") plt.yscale("linear") for data in extra_data: data["mlt"] = data["r"]*data["r"] else: ylabel = r'$' + ylabel + r"(r)$" file_name = out_dir + '%s_%s' % (file_name, z_out) plt.xscale("log") plt.yscale("log") # plot all -- sim, lin, non-lin plt.plot(r, xi*mlt, 'o', label=lab) for data in extra_data: plt.plot(data["r"], data["xi"]*data["mlt"], 'o', label=data["lab"]) if xi_lin is not None: plt.plot(r_lin, xi_lin*mlt_lin, '-', label=r"$\Lambda$CDM (lin)") if xi_nl is not None: plt.plot(r_nl, xi_nl*mlt_nl, '-', label=r"$\Lambda$CDM (nl)") # adjust figure, labels fig_suptitle(fig, suptitle) plt.xlabel(r"$r [h^{-1}{\rm Mpc}]$") plt.ylabel(ylabel) ext_legen = {'mlt_col' : 0.75, 'ncol' : 4, 'half_page' : True} fig_leg = legend_manipulation(figtext="", ext_legen=ext_legen) # save & show (in jupyter) close_fig(file_name, fig, save=save, show=show, use_z_eff=use_z_eff, ext_legen=fig_leg) def plot_corr_func_ratio(r, xi, r_lin, xi_lin, r_nl, xi_nl, lab, suptitle, ylabel, figtext, out_dir, file_name, save, show, extra_data, peak_loc=None, use_z_eff=False, chi=False): # names z_out = lab if lab == 'init' else 'z' + lab[4:] ylabel = r'$' + ylabel + r"(r)/" + ylabel + r"_{\rm nl}(r)$" file_name = out_dir + '%s_ratio_%s' % (file_name, z_out) # check same lengths, validity of xi_n; if np.array_equal(r, r_nl): xi_an = xi_nl suptitle += r" $\Lambda$CDM (nl)" elif np.array_equal(r, r_lin): xi_an = xi_lin suptitle += r" $\Lambda$CDM (lin)" else: raise ValueError("Invalid values of radiues.") # figure fig = plt.figure(figsize=fig_size) ax = plt.gca() ax.yaxis.grid(True) ymin = 0.5 ymax = 1.5 ax.set_ylim(ymin,ymax) # ax.set_yscale('symlog', linthreshy=1e-3, linscaley=6) # plot ratio, linear if use_z_eff: sim = use_z_eff['sim'] xi_an = power.corr_func(sim, z=use_z_eff['z'], non_lin=True)[1] xi_lin = power.corr_func(sim, z=use_z_eff['z'], non_lin=False)[1] ax.plot(r, xi_lin/xi_an, 'k-', label=r" $\Lambda$CDM (lin)") ax.plot(r, xi/xi_an, 'o', label=lab) # plot other data (if available) if extra_data is not None: for data in extra_data: xi_an = power.corr_func(sim, z=data['z_eff'], non_lin=True)[1] ax.plot(data['r'], data['xi']/xi_an, 'o', label=data['lab']) # plot BAO peak location (if available) if peak_loc is not None: ax.axvline(x=peak_loc, ls='--', color='k') # adjust figure, labels fig_suptitle(fig, suptitle) plt.xlabel(r"$r [h^{-1}{\rm Mpc}]$") plt.ylabel(ylabel) # if chi: # ext_legen = {'mlt_col' : 0.5, 'ncol' : 4} # else: # ext_legen = {'mlt_col' : 0.5, 'ncol' : 4} fig_leg = legend_manipulation(figtext="", ext_legen=True) # save & show (in jupyter) close_fig(file_name, fig, 
save=save, show=show, use_z_eff=use_z_eff, ext_legen=fig_leg) def plot_corr_func_single(corr_data, lab, a_sim_info, corr_data_lin=None, corr_data_nl=None, out_dir='auto', save=True, show=False, use_z_eff=False, is_sigma=False, only_r2=True, pt_ratio=False, extra_data=None, peak_loc=None): if out_dir == 'auto': out_dir = a_sim_info.res_dir if is_sigma: suptitle = "Amplitude of density fluctuation" file_name = "sigma" ylabel = r"\sigma^2" else: suptitle = "Correlation function" file_name = "corr_func" ylabel = r"\xi" if 'CHI' in a_sim_info.app: file_name = a_sim_info.app.lower() + '_' + file_name figtext = a_sim_info.info_tr() # modify labels if we are plotting multiple data if extra_data is not None: figtext = figtext.replace(a_sim_info.app + ": ", "") suptitle += ", " + lab lab = a_sim_info.app if 'CHI' in lab: lab = r'$\chi$' # get data r, xi = corr_data r_lin, xi_lin = corr_data_lin if corr_data_lin is not None else (None, None) r_nl, xi_nl = corr_data_nl if corr_data_nl is not None else (None, None) # first plot, xi(r) if not only_r2: plot_corr_func_universal( r, xi, r_lin, xi_lin, r_nl, xi_nl, lab, suptitle, ylabel, figtext, out_dir, file_name, save, show, False, extra_data, use_z_eff) # second plot, r*r*xi(r) plot_corr_func_universal( r, xi, r_lin, xi_lin, r_nl, xi_nl, lab, suptitle, ylabel, figtext, out_dir, file_name, save, show, True, extra_data, use_z_eff) # third plot, xi(r)/xi_lin/nl if pt_ratio :plot_corr_func_ratio( r, xi, r_lin, xi_lin, r_nl, xi_nl, lab, suptitle, ylabel, figtext, out_dir, file_name, save, show, extra_data, peak_loc, use_z_eff) # correlation function stacked data, linear and emu corr. func in files def plot_corr_func(corr_data_all, zs, a_sim_info, out_dir='auto', save=True, show=False, use_z_eff=False, is_sigma=False, only_r2=True, extra_data=None, peak_loc=None): for lab, corr_par, corr_lin, corr_nl in iter_data(zs, [corr_data_all['par'], corr_data_all['lin'], corr_data_all['nl']]): plot_corr_func_single( corr_par, lab, a_sim_info, corr_data_lin=corr_lin, corr_data_nl=corr_nl, out_dir=out_dir, save=save, show=show, is_sigma=is_sigma, only_r2=only_r2, extra_data=extra_data, peak_loc=peak_loc, use_z_eff=use_z_eff) def plot_peak_uni(a_sim_info, ax, bao_type, idx, use_z_eff=False, ls=None, get_last_col=False, fp_comp=False, single=False, yrange=None, zs_cut=6, normalize=False): # load all available data (GSL integration could have failed) peak_data = [x for x in a_sim_info.data["corr_func"]["par_peak"] if x["z"] != "init" and x["z"] < zs_cut] zs = [x["z"] for x in peak_data if x["z"] < zs_cut] a = [1./(1+z) for z in zs] # location / amplitude / width of the BAO peak, label data = np.array([x["popt"][idx] for x in peak_data if x["z"] < zs_cut]) data_err = np.array([x["perr"][idx] for x in peak_data if x["z"] < zs_cut]) label = a_sim_info.app if 'CHI' in label: label = get_chi_label(a_sim_info, single=single) # get last used color color = ax.get_lines()[-1].get_color() if get_last_col else None # comparison to the non-linear prediction at z_eff if use_z_eff: corr = [power.corr_func(use_z_eff['sim'], z=z, non_lin=True) for z in use_z_eff['z'] if z < zs_cut] data_nl = np.array([power.get_bao_peak(x)["popt"][idx] for x in corr]) # comparison to FP elif fp_comp: peak_data_nl = [x for x in fp_comp.data["corr_func"]["par_peak"] if x["z"] != "init" and x["z"] < zs_cut] data_nl = np.array([x["popt"][idx] for x in peak_data_nl]) # comparison to the non-linear prediction else: peak_data_nl = [x for x in a_sim_info.data["corr_func"]["nl_peak"] if x["z"] != "init" and x["z"] 
< zs_cut] data_nl = np.array([x["popt"][idx] for x in peak_data_nl]) if yrange is not None: yrange = yrange.get(bao_type, None) ax.set_ylim(*yrange) # ax.set_yscale('symlog', linthreshy=0.01, linscaley=0.5) if not single: label += ' (%s)' % bao_type # normalization if normalize: data -= data[0] - data_nl[0] # plot simulation peak ax.errorbar(zs, data / data_nl - 1, yerr=data_err / data_nl, ls=ls, label=label, color=color) def plot_peak_loc(a_sim_info, ax, use_z_eff=False, get_last_col=False, fp_comp=False, single=False, yrange=None, zs_cut=6): """ plot peak location to the given axis """ plot_peak_uni(a_sim_info, ax, "loc", 1, use_z_eff=use_z_eff, ls='-', get_last_col=get_last_col, fp_comp=fp_comp, single=single, yrange=yrange, zs_cut=zs_cut) ax.set_ylabel(r"$r_0/r_{0,\rm nl}-1$") def plot_peak_amp(a_sim_info, ax, use_z_eff=False, get_last_col=False, fp_comp=False, single=False, yrange=None, zs_cut=6): """ plot peak amplitude to the given axis """ plot_peak_uni(a_sim_info, ax, "amp", 0, use_z_eff=use_z_eff, ls=':', get_last_col=get_last_col, fp_comp=fp_comp, single=single, yrange=yrange, zs_cut=zs_cut) ax.set_ylabel(r"$A/A_{\rm nl}-1$") def plot_peak_width(a_sim_info, ax, use_z_eff=False, get_last_col=False, fp_comp=False, single=False, yrange=None, zs_cut=6): """ plot peak amplitude to the given axis """ plot_peak_uni(a_sim_info, ax, "width", 2, use_z_eff=use_z_eff, ls='--', get_last_col=get_last_col, fp_comp=fp_comp, single=single, yrange=yrange, zs_cut=zs_cut) ax.set_ylabel(r"$\sigma/\sigma_{\rm nl}-1$") def plot_corr_peak(sim_infos, out_dir='auto', save=True, show=False, use_z_eff=False, plot_loc=True, plot_amp=True, plot_width=True, fp_comp=False, single=False, chi=False, yrange=None, zs_cut=4, vline=None): # output if out_dir == 'auto': if len(sim_infos) == 1: out_dir = sim_infos[0].res_dir else: out_dir = report_dir out_file = "corr_peak" if fp_comp: out_file += '_fp_chi' if plot_loc: out_file += "_loc" if plot_amp: out_file += "_amp" if plot_width: out_file += "_width" # figure fig = plt.figure(figsize=fig_size) ax = plt.gca() ax.yaxis.grid(True) for i, a_sim_info in enumerate(sim_infos): # use effective redshift z_eff = use_z_eff[i] if use_z_eff else False # peak location if plot_loc: plot_peak_loc(a_sim_info, ax, use_z_eff=z_eff, fp_comp=fp_comp, single=single, yrange=yrange, zs_cut=zs_cut) # peak amplitude if plot_amp: get_last_col = plot_loc plot_peak_amp(a_sim_info, ax, use_z_eff=z_eff, get_last_col=get_last_col, fp_comp=fp_comp, single=single, yrange=yrange, zs_cut=zs_cut) # peak width if plot_width: get_last_col = plot_loc or plot_amp plot_peak_width(a_sim_info, ax, use_z_eff=z_eff, get_last_col=get_last_col, fp_comp=fp_comp, single=single, yrange=yrange, zs_cut=zs_cut) # linear prediction if single: sim = a_sim_info.sim zs_cut = max([x['z'] for x in a_sim_info.data["corr_func"]["par_peak"] if x["z"] != "init" and x["z"] < zs_cut]) zs = np.linspace(0, zs_cut, num=30) if plot_loc: idx = 1 elif plot_amp: idx = 0 elif plot_width: idx = 2 corr = [power.corr_func(sim, z=z, non_lin=False) for z in zs] corr_nl = [power.corr_func(sim, z=z, non_lin=True) for z in zs] data_lin = np.array([power.get_bao_peak(x)["popt"][idx] for x in corr]) data_nl = np.array([power.get_bao_peak(x)["popt"][idx] for x in corr_nl]) ax.plot(zs, data_lin/data_nl - 1, 'k-', label=r"$\Lambda$CDM (lin)") # plot from high redshift to 0 ax.invert_xaxis() # add vline if vline is not None: ax.axvline(x=vline, ls='--', c='k') # labels plt.xlabel(r"$z$") fig_suptitle(fig, "Relative BAO peak location and amplitude") 
# LEGEND manipulation
section. This quantity describes the normalized interaction rate over a solid angle area of the target, and varies with the energy and momentum of the measured particles. The nucleon's structure can be probed by measuring the cross sections which result from scattering a beam of electrons off a fixed proton target ($ep$ scattering). Depending on the energy of the electron beam, a number of different particles may result from this as the proton's constituents may be knocked out and form new hadrons, but here we are concerned with inclusive electron scattering, in which only the scattered electron is measured, ignoring other products which may result.
\begin{figure}[htb]
\centering
\begin{fmffile}{feyngraph}
\begin{fmfgraph*}(320,160)
\fmfleft{ip,il}
\fmfright{x1,x2,K,x3,x4,o2,o3,o4,ol}
\fmfset{arrow_len}{10}
\fmf{fermion}{il,vl,ol}
\marrow{ea}{ up }{top}{$k^\mu$}{il,vl}
\marrow{eb}{ up }{top}{$k'^\mu$}{vl,ol}
\fmflabel{\large $e^-$}{il}
\fmflabel{\large $e^-$}{ol}
\fmf{photon,tension=1,label=$\gamma^*$}{vl,vp}
\marrow{ec}{ right }{ lrt }{$q^\mu$}{vl,vp}
\fmf{phantom}{ip,vp,x1}
\fmffreeze
\fmf{phantom}{ip,vp}
\fmfi{fermion}{vpath (__ip,__vp) scaled 1.01 shifted (-1.8, 6)}
\marrowz{ed}{ up }{top}{$p^\mu$}{ip,vp}
\fmfi{fermion}{vpath (__ip,__vp) scaled 1.01}
\fmfi{fermion}{vpath (__ip,__vp) scaled 1.01 shifted ( 1.8,-6)}
\fmfshift{15 left}{x1,x2,x3,x4}
\fmf{phantom}{vp,x1}
\fmf{phantom}{vp,x2}
\fmf{phantom}{vp,x3}
\fmf{phantom}{vp,x4}
\fmfi{fermion}{vpath (__vp,__x1) scaled 1.02 shifted ( 0.0,-6.0)}
\fmfi{fermion}{vpath (__vp,__x2) scaled 1.00 shifted ( 0.0,-2.0)}
\fmfi{fermion}{vpath (__vp,__x3) scaled 0.98 shifted ( 0.0, 2.0)}
\fmfi{fermion}{vpath (__vp,__x4) scaled 0.96 shifted ( 0.0, 6.0)}
\fmfv{l.d=0,l.a=0,l={\myrbrace{96}{0}$\text{\large X}$}}{K}
\fmflabel{\large $p$}{ip}
\fmfblob{50}{vp}
\end{fmfgraph*}
\end{fmffile}
\caption{$ep$ Scattering interaction}
\label{fig:epscattering}
\end{figure}
A Feynman diagram for the electron-proton scattering interaction is shown in Figure~\ref{fig:epscattering}. In this process, an electron enters with four-momentum (in natural units) $k^\mu$ = (E,\textbf{k}), and exchanges a virtual photon with a proton in the target, which has initial four-momentum $p^\mu$ = ($\epsilon$,\textbf{p}). This virtual photon carries a four-momentum of $q^\mu$ = ($\nu$,\textbf{q}), where $\nu$ is the energy transfer from the electron to the struck proton, and \textbf{q} is the momentum exchanged in the interaction. The electron then scatters at an angle $\theta$ from its initial path, with a four-momentum of $k'^\mu$ = (E$'$,\textbf{k$'$}), and some unknown products X are produced by the proton~\cite{Povh}. Since the electron is comparatively well understood, and we are only measuring the scattered electrons in inclusive scattering, we define everything in the rest frame of the proton where $\epsilon$ = $M_p$, and in terms of the initial and final state of the electron. The energy transfer $\nu$ is then defined simply as:
\begin{equation}
\label{eqn:nudef}
\nu = E - E'
\end{equation}
with E and E$'$ the initial and final energies of the electron, respectively. We can also use these quantities in combination with the angle at which the scattered electron is measured to define the momentum transfer $Q^2$ of the interaction:
\begin{equation}
\label{eqn:q2def}
Q^2 = -q^\mu q_\mu = 4 E E' \sin^2\left(\frac{\theta}{2}\right)
\end{equation}
where the electron mass has been neglected. This is an especially important kinematic variable because it defines the distance scale which the scattering interaction is probing.
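As a purely illustrative numerical example of these definitions (the beam values below are hypothetical, not measured values from this experiment), take $E = 2.2$ GeV, $E' = 2.0$ GeV and $\theta = 6^\circ$. Equations~\ref{eqn:nudef} and~\ref{eqn:q2def} then give
\begin{align}
\nu &= E - E' = 0.2\ \text{GeV}, \\
Q^2 &= 4 E E' \sin^2(\theta/2) \approx 0.048\ \text{GeV}^2,
\end{align}
corresponding to a probed distance scale of order $\hbar c/\sqrt{Q^2} \approx 0.9$ fm, comparable to the size of the proton.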
The de Broglie wavelength of a particle is considered to be~\cite{DeBroglie} \begin{equation} \label{eqn:debroglie} \lambda = \frac{h}{\textbf{q}} \end{equation} where h is the Planck constant. So it is plain to see that the distance scale associated with our interaction is inversely proportional to the momentum transfer. A high $Q^2$ probes the small-distance region of the proton where we are looking at the nucleon's individual constituents, while a low $Q^2$ conversely probes the large distance region and reveals information about how the proton's constituents work together to give the proton its observed properties. If we wish to test the aforementioned theories of QCD such as chiral perturbation theory, and to understand how the proton obtains its spin from the quarks and gluons that make it up, the latter is our kinematic region of interest~\cite{KarlThesis}. We must also define the invariant mass of the hadron, a frame-independent kinematic variable which quantifies the total mass-energy of the proton and its unknown products after scattering: \begin{equation} \label{eqn:invariantmass} W^2 = (p^\mu + q^\mu)^2 = M_p^2 + 2 M_p \nu - Q^2 \end{equation} Wherein $M_p$ $\approx 938.3$ MeV is the mass of the proton. Using the kinematic variables already defined, we can also define the \textit{Bjorken-x}, a quantity which can be identified with the fraction of the proton's momentum carried by the struck quark: \begin{equation} \label{eqn:bjorkenx} x_{bj} = \frac{Q^2}{2 M_p \nu} \end{equation} This quantity can also be considered to track the elasticity of the interaction, as discussed in the next section. Finally, we define the fractional energy loss of the electron simply as: \begin{equation} \label{eqn:fracloss} y = \frac{\nu}{E} \end{equation} These invariant quantities will be used to map out the response of the nucleon over the tested kinematic range. \section{Interaction Rates \& Cross Sections} To define the measurable cross section mentioned in the previous section, we will first need to discuss the interaction rate of the $ep$ scattering process. The interaction rate $W_{fi}$ describes the number of transitions from the initial state of $e + p$ to the final state of $e' + X$ per unit time. Fermi's Golden Rule states~\cite{HalzenMartin}: \begin{equation} \label{eqn:goldenrule1} W_{fi} = 2 \pi |V_{fi}|^2 \delta(k_f - k_i) \end{equation} where $V_{fi}$ is the interaction potential and the delta function enforces energy and momentum conservation. By adopting a covariant normalization, we can rewrite per unit volume in four dimensions for a generic scattering interaction A+B $\rightarrow$ C+D~\cite{HalzenMartin}: \begin{equation} \label{eqn:goldenrule2} W_{fi} = \bigg (\frac{2 \pi}{V}\bigg)^4 |\mathcal{T}|^2 \delta^{(4)} \big ([p_C + p_D] - [p_A + p_B] \big) \end{equation} Here, $\mathcal{T}$ is the probability amplitude of the interaction process, V is the volume over which the interactions take place, and $p_{A,B,C,D}$ are the four-momenta of the particles involved in scattering. The probability amplitude will be a combination of the electron part, which is easy to compute using the Feynman rules because Quantum Electrodynamics (QED) is well understood, and the hadronic part, which comprises both the initial proton state and the final unobserved products X~\cite{Srednicki}. Using the Feynman rules for spinor electrodynamics, we have one incoming and outgoing electron, one internal photon, and two vertices. 
Though the exact vertex on the hadronic side of the virtual photon is not known, we know it must be a QED vertex, and so add the associated vertex factor. We then find: \begin{equation} \label{eqn:spinorfeynman} \mathcal{T}_{Lepton} = i e^2 \gamma^\mu \gamma^\nu \frac{g^{\mu \nu}}{\textbf{q}^2 - i \epsilon} \overline{u}_{s'}(\textbf{k$'$}) u_{s}(\textbf{k}) \end{equation} Where $\gamma$ is the gamma-matrix, $u_s$ and $\overline{u}_s$ are the spinors, and e is the fundamental charge. Because we don't know the exact structure of the hadronic part of the process, we must instead write it generally as a transition amplitude from the initial state of a proton with momentum P and spin S to a final unknown state X~\cite{Zielinski:2017gwp}: \begin{equation} \label{eqn:hadronamplitude} \mathcal{T}_{Hadron} = <P,S|J^\mu|X> \end{equation} Plugging into Equation~\ref{eqn:goldenrule2} and simplifying by making use of gamma matrix identities to contract the metric with two of our gamma matrices we obtain: \begin{equation} \begin{split} \label{eqn:goldenrule3} W_{fi} = \bigg (\frac{2 \pi}{V}\bigg)^4 \bigg (\frac{e^4}{Q^4}\bigg) \overline{u}_{s'}(\textbf{k$'$}) u_{s}(\textbf{k}) \gamma^\mu \overline{u}_{s}(\textbf{k}) u_{s'}(\textbf{k$'$}) \gamma^\nu <P,S|J^\mu|X>\\ <X|J^\nu|P,S>\delta^{(4)} \big ([p + p_x] - [k + k'] \big) \end{split} \end{equation} We can define the cross section which can be experimentally measured in terms of the interaction rate~\cite{HalzenMartin}: \begin{equation} \label{eqn:xs} d\sigma = \frac{W_{fi}}{\phi}N_f \end{equation} Here $\phi$ is the initial flux of particles, and $N_f$ is the number of available final states. We will later associate these quantities with the number of incident electrons and the density of scattering centers in the target, respectively, to form an experimental cross section~\cite{Povh}. For now, the initial flux will be a product of the number of beam particles passing through unit area per unit time, $\frac{2E_e}{V}$ and the number of target particles per unit volume, $\frac{2E_p}{V}$~\cite{HalzenMartin}. Since the electron mass $m_e$ is much smaller than $M_p$, we neglect it, and in our lab frame, the protons are at rest, so we can simplify the initial flux as: \begin{equation} \label{eqn:initflux} \phi = \frac{4 E M_p}{V^2} \end{equation} The number of available final states for a given particle A in a volume V is limited by quantum theory to be~\cite{HalzenMartin}: \begin{equation} \label{eqn:nfgeneric} N_f = \frac{V d^3 p_A}{(2 \pi)^3 2 E_A} \end{equation} Our final states are the scattered electron with momentum $k'$ energy $E'$ and some unknown products with momenta $p_x$ and energy $E_x$. We will need to sum over all possible X states to obtain the full cross section, so combining equations~\ref{eqn:nfgeneric},~\ref{eqn:initflux} and~\ref{eqn:xs} we obtain the differential cross section: \begin{equation} \begin{split} \label{eqn:xs_full} d\sigma = \frac{2 \pi}{8 E E' M_p} \bigg (\frac{e^4}{Q^4}\bigg) d^3 k' \overline{u}_{s'}(\textbf{k$'$}) u_{s}(\textbf{k}) \gamma^\mu \overline{u}_{s}(\textbf{k}) u_{s'}(\textbf{k$'$}) \gamma^\nu \times \\ \displaystyle\sum_{X}\frac{d^3 p_x}{(2 \pi)^3 2 E_x}<P,S|J^\mu|X><X|J^\nu|P,S>\delta^{(4)} \big ([p + p_x] - [k
# Pendulum Counterclockwise Meaning

where θ is the angle of the pendulum as measured counterclockwise from the vertical, ζ is the damping ratio, and u is the externally applied torque.

Pendulums are one of the oldest tools of dowsing, meaning the use of something to detect the presence of water, minerals, or anything not visible to the eye; another form of dowsing uses a divining rod to search for water, metals or minerals underground. Pendulums are said to work from involuntary muscle movements, allowing you to bypass your conscious mind and go directly to the unconscious or Higher Self, the source of boundless knowledge. To begin using your pendulum, rest your elbow on the table, hold the thread or chain by each end with the weight suspended, and hold your pendulum so that it can swing freely, keeping it as still and stable as possible. Decide what each direction will mean; people who use a pendulum regularly get used to it doing one thing for a yes answer and a different thing for a no answer. If your pendulum swings in a clockwise direction, that means "yes"; a counterclockwise (circular left) direction means "no". If you received answers and the pendulum didn't just remain still, you have a winner. Do not hurry the process; this is an important step, and of course you have to learn how to use the pendulum properly if you want accurate and reliable results. Imagine that you now want to "see" the pendulum move in the opposite direction, counterclockwise. This may mean that the information provided corresponds to two different stages or situations.

Example: if the pendulum moves in an anti-clockwise direction above the throat chakra, your lungs, larynx, alimentary canal, thyroid, or parathyroid might be out of balance; if the pendulum moves in an elliptical swing, there may be a right- or left-side imbalance of energy flowing in the body.

How to set a regulator clock: mechanical clocks typically require some manual input to maintain their accurate timekeeping processes, and it is the escapement that restores the energy lost during the swing and keeps the pendulum from stopping.

A simple pendulum swings in a smooth motion (see figure 1a) and, for small angles, is often modeled as simple harmonic motion; the period T of a pendulum is given in terms of its length, l, by T = 2π√(l/g), where g is the acceleration due to gravity (a constant). A short crankshaft, by contrast, exhibits periodic but non-harmonic motion, and in our first experiment with an unforced ideal pendulum we want to confirm our intuitive idea of how it behaves. (Here we are observing a rotational Doppler shift.) Moreover, the mean duration of unidirectional inverted-pendulum sway reduced only slightly, remaining around 1 s. A Foucault pendulum, a simple pendulum suspended from a long wire and set into motion along a meridian, always rotates clockwise in the Northern Hemisphere and counter-clockwise in the Southern Hemisphere; the rate of rotation depends on the latitude. The pendulum swings back and forth and, over time, slowly seems to rotate over its arena or "pit".
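Two of the quantitative statements quoted above, the small-angle period T = 2π√(l/g) and the latitude dependence of a Foucault pendulum's rotation, can be illustrated with a few lines of Python. This sketch is not from the page itself; the 15.04°/hour factor is the standard sidereal rate, and the example length and latitude are arbitrary.

```python
import math

G = 9.81  # m/s^2

def pendulum_period(length_m: float) -> float:
    """Small-angle period of a simple pendulum, T = 2*pi*sqrt(l/g)."""
    return 2.0 * math.pi * math.sqrt(length_m / G)

def foucault_rotation_deg_per_hour(latitude_deg: float) -> float:
    """Apparent rotation rate of a Foucault pendulum's swing plane:
    about 15.04 deg/hour * sin(latitude); clockwise in the Northern Hemisphere."""
    return 15.04 * math.sin(math.radians(latitude_deg))

if __name__ == "__main__":
    print(f"T for a 1 m pendulum: {pendulum_period(1.0):.2f} s")
    # roughly the latitude of Paris
    print(f"Rotation at 48.85 deg N: {foucault_rotation_deg_per_hour(48.85):.1f} deg/hour")
    print(f"Rotation at the equator: {foucault_rotation_deg_per_hour(0):.1f} deg/hour")
```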
Clocks use a spirally wound torsion spring (a form of helical torsion spring in which the coils are wrapped around each other). The animation shows only three components: the anchor and pendulum are in grey and the escape wheel is in yellow.

How to use a pendulum to answer yes/no questions: basic pendulums for dowsing can be any pendant which is attached to a chain, necklace, or leather string, and any such combination will give a radiesthetic (dowsing) reaction in the hand of a "sensitive" person. Hold the pendulum using your thumb and forefinger, about three quarters of the way up the chain, and place it about four to six inches above each of your key chakras. Next, ask your pendulum some questions in order to determine what means "yes," "no," and "maybe"; if you'd like to also ask what your "maybe" answer looks like, you can go ahead and ask that, too. You'll need another person to practice with, and if one way doesn't work, try another one. You must determine the direction your pendulum will take for Yes and for No: counter-clockwise generally indicates a "no" or a negative answer, and for the "no" answer the pendulum moves in a counterclockwise manner or swings left and right. A counterclockwise rotation indicates the second answer is correct. The pendulum should swing in the opposite direction (side to side or counterclockwise, for example); it should move in a counter-clockwise direction. The pendulum is a tool that bridges the gap between the logical left brain and the intuitive right brain.

Using the pendulum on the chakras: for this example of a reading, let's say that the pendulum spinning clockwise means yes/open and spinning counterclockwise means no/closed. A pendulum moving clockwise over a specific chakra means that the energy center is well-balanced, while an unbalanced movement of the spin can indicate problems. A counterclockwise circle means you have found a portal that serves as an exit point for spirits to leave the physical plane and return to their own world. As for the meaning of the spiral's direction, the spiral is the symbol of the 5th Element of the Pentacle, Spirit, also referred to as "Akasha", meaning "coming into being". Stones are symbols of "permanence, stability, reliability and strength", and a cairn is also a symbol of impermanence, as these stones may not stay balanced forever; thus the universal balance of opposites is present.

On the physics side: I read in my book that the period of the pendulum starting from an angle of $\theta(0)=\theta_0$ is exactly … In an undamped pendulum example, the pendulum is approximated as a point mass m. The arc length that the pendulum travels is ℓθ, where ℓ is the length of the arm of the pendulum and θ is the angle from the vertical direction; a value of θ = 0 corresponds to the pendulum hanging straight down, and a value of θ = π/2 ≈ 1.5 would have the pendulum pointing in the direction we might think of as 3 o'clock. The nature of this motion is such that just one variable, the angle, is enough to fully describe it. Figure 19c provides a "small θ" free-body diagram whose terms correspond to a counter-clockwise rotation for θ. The two arms of the upper pendulum are fabricated from 1/4" thick aluminum and the lower pendulum from 1/2" thick aluminum.

Foucault pendulum: a pendulum with a long wire that can swing in any direction; the change in the swing plane demonstrates the earth's rotation. When a Foucault pendulum is suspended at the equator, the plane of oscillation remains fixed relative to Earth. Foucault used a 62-lb (28-kg) iron ball
local iteration time for group $i$, i.e., a lower bound:
\begin{align} \label{t_bound1} \tau_{i,l}^u\geq c_i,~\text{a.s.},~\forall l,u. \end{align}
Then, one gets a {\it maximum} number of local iterations:
\begin{align} \label{t_bound2} t_{i}^{u} \leq t_{i}^{\max}\triangleq\ceil*{\frac{S}{c_i}},~\text{a.s.},~\forall u. \end{align}
Based on the above bound, one can get the following global convergence guarantee.
\begin{corollary} [\textit{Global Convergence Guarantee}] \label{corollary} For a given $\{t_i^{\max}\}$, setting $\alpha =\min\{\frac{1}{\sqrt{\mathcal{U}}},\frac{1}{L}\}$, the delay sensitive HFL algorithm achieves $ \frac{1}{\mathcal{U}} \sum_{u=1}^{\mathcal{U}} \mathbb{E}\left\|\nabla f\left(x^{u}\right)\right\|^2\leq\mathcal{O}(\frac{1}{\sqrt{\mathcal{U}}})$.
\begin{proof} See Appendix \ref{appE}. \end{proof}
\end{corollary}
Therefore, for a finite sync time $S$, as the training time $T$ increases, the number of global communication rounds $\mathcal{U}$ also increases, and hence Corollary~\ref{corollary} shows that the gradient converges to $0$ sublinearly.

\section{Experiments}\label{experiments}
In this section, we present some simulation results for the proposed delay sensitive HFL algorithm to verify the findings from the theoretical analysis.

\noindent \textbf{Datasets and Model.} We consider an image classification supervised learning task on the CIFAR-10 dataset \cite{krizhevsky2009learning}. A convolutional neural network (CNN) is adopted with two 5x5 convolution layers, two 2x2 max pooling layers, two fully connected layers with 120 and 84 units, respectively, ReLU activations, a final softmax output layer, and a cross-entropy loss.

\noindent\textbf{Federated Learning Setting.} Unless otherwise stated, we have 30 clients randomly distributed across 2 groups. The groups have similar data statistics. We consider shifted exponential delays \cite{shiftedexp1}: $\tau_{i,l}^u\sim\exp(c_i,10)$ and $\tau_g^u\sim\exp(c_g,10)$.

\noindent \textbf{Discussion.} In Fig.~\ref{1}, we show the evolution of both groups' accuracies and the global accuracy across time. The zoomed-in version in Fig.~\ref{fig:1b} shows the high (SGD) variance in the performance of the two groups, especially during the earlier phase of training. Then, as more averaging with the GPS takes place, the variance is reduced.

\begin{figure*}[htp]
\centering
\subfloat[Performance over the overall training time budget]{%
\includegraphics[width=0.5\linewidth]{global_group_accuracy.pdf}%
\label{fig:1a}%
}%
\hfill%
\subfloat[Performance during the beginning of training]{%
\includegraphics[width=0.5\linewidth]{global_group_accuracy_zoomed.pdf}%
\label{fig:1b}%
}%
\caption{HFL system with 10 clients per group with $c_1=c_2=1$, $c_g=5$ and $S=5$.}
\label{1}
\end{figure*}

\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{cooperative_isolated_new2.pdf}
\caption{Significance of group cooperation under non-i.i.d. data.}
\label{3}
\end{figure}

In Fig.~\ref{3}, the significance of collaborative learning is emphasized. We run three experiments, one for each group in an isolated fashion, and one under the HFL setting. First, while we do not conduct our theoretical analysis under heterogeneous data distribution, we consider a non-i.i.d. data distribution among the two groups in this setting, and we see that our proposed algorithm still \textit{converges}. Second, it is clear that the performance of the group with fewer clients deteriorates under a heterogeneous data distribution when it learns in isolation.
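As a side illustration of the delay model and the local-iteration bound used above, the following minimal Python sketch (not the authors' code) draws shifted-exponential iteration times and counts how many local iterations fit within a sync time $S$; interpreting $\exp(c_i,10)$ as a shift of $c_i$ with rate $10$ is an assumption made here for the sketch.
\begin{verbatim}
import math
import random

def local_iterations(S, c_i, rate=10.0, rng=random):
    """Count local iterations completed before the sync time S expires,
    with each iteration taking a shifted-exponential time >= c_i."""
    elapsed, iters = 0.0, 0
    while True:
        tau = c_i + rng.expovariate(rate)  # shifted exponential: tau >= c_i a.s.
        if elapsed + tau > S:
            return iters
        elapsed += tau
        iters += 1

if __name__ == "__main__":
    random.seed(0)
    S, c_i = 5.0, 1.0
    t_max = math.ceil(S / c_i)  # the almost-sure bound t_i^max = ceil(S / c_i)
    counts = [local_iterations(S, c_i) for _ in range(10_000)]
    print(f"t_i^max = {t_max}, observed max = {max(counts)}, "
          f"mean = {sum(counts) / len(counts):.2f}")
\end{verbatim}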
However, aided by HFL, its performance improves while the other group's performance is not severely decreased, which promotes {\it fairness} among the groups.

\begin{figure*}[htp]
\centering
\subfloat[$c_1=1 \text{ and} \:\: c_2=7$]{%
\includegraphics[width=0.5\linewidth]{n_user_association_1_7.pdf}%
\label{fig:UA_a}%
}%
\hfill%
\subfloat[$c_1=7 \text{ and} \:\: c_2=1$]{%
\includegraphics[width=0.5\linewidth]{n_user_association_7_1.pdf}%
\label{fig:UA_b}%
}%
\caption{Impact of the groups' shift parameters $c_1$ and $c_2$ on the group-client association under $S=8$ and $c_g=10$.}
\label{userassociation}
\end{figure*}

In Fig.~\ref{userassociation}, the effect of the groups' shift parameters $c_1$ and $c_2$ on determining the optimal group-client association is investigated. The results show that it is not always optimal to cluster the clients evenly among the groups. In Fig.~\ref{fig:UA_b}, for instance, we see that assigning fewer clients to a group with a relatively smaller shift parameter performs better than an equal assignment of clients among both groups; this observation is reversed in Fig.~\ref{fig:UA_a}, in which a larger number of clients is assigned to the relatively slower LPS.

\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{n_Global_Shift_Parameter_Imapct.pdf}
\caption{The effect of the global shift parameter $c_g$ under $S=10$.}
\label{5}
\end{figure}

In Fig.~\ref{5}, the impact of the global shift parameter $c_g$ on the global accuracy is shown. As the global shift delay parameter increases, the performance gets worse. This is mainly because the number of global communication rounds with the GPS, $\mathcal{U}$, is reduced, which hinders the clients from getting the benefit of accessing other clients' learning models.

\begin{figure*}[htp]
\centering
\subfloat[$c_g=10$]{%
\includegraphics[width=0.5\linewidth]{n_S_Parameter_choice_under_C_g=10.pdf}%
\label{fig:5a}%
}%
\hfill%
\subfloat[$c_g=30$]{%
\includegraphics[width=0.5\linewidth]{n_S_Parameter_choice_under_C_g=30.pdf}%
\label{fig:5b}%
}%
\caption{Impact of the global shift parameter $c_g$ on choosing the sync time $S$.}
\label{6}
\end{figure*}

In Fig.~\ref{6}, we show the impact of the sync time $S$ on the performance by varying the GPS shift parameter $c_g$. We see that for $c_g=10$, $S=0$ outperforms $S=20$. Note that $S=0$ corresponds to a centralized (non-hierarchical) system. When the shift parameter is increased to $c_g=30$, however, the situation is different. Although $S=5$ is the optimal choice in both figures, if the system has an additional constraint on communicating with the GPS, then $S=20$ becomes a better choice, especially since not much accuracy is sacrificed. It is also worth noticing that the training time budget $T$ plays a significant role in choosing $S$; in Fig.~\ref{fig:5b}, $S=0$ (always communicate with the GPS) outperforms $S=20$ as long as $T \leq 500$, and the opposite is true afterwards. This means that in some scenarios the hierarchical setting may not be optimal (which is different from the findings in \cite{Hfl_kh}); for instance, if the system has a hard time constraint on learning, it may prefer to communicate with the GPS more frequently to get the advantage of learning from the models resulting from different data.

\section{Conclusion and Outlook}\label{conclusion}
A delay sensitive HFL algorithm has been proposed, in which the effects of wall-clock times and delays on the overall accuracy of FL are investigated.
A sync time $S$ governs how many local iterations are allowed at LPSs before forwarding to the GPS, and a system time $T$ constrains the overall training period. Our theoretical and simulation findings reveal that the optimal $S$ depends on different factors such as the delays at the LPSs and the GPS, the number of clients per group, and the value of $T$. Multiple insights are drawn on the performance of HFL in time-restricted settings. \noindent\textbf{Future Investigation.} Guided by our understanding from the convergence bounds and the simulation results, we observe that it is better to make the parameter $S$ \textit{variable} especially during the first global communication rounds. For instance, instead of fixing $S=5$, we allow $S$ to increase gradually with each round from $1$ to $5$, and then fix it at $5$ for the remaining rounds. Our reasoning behind this is that the clients' models need to be {\it directed} towards global optimum, and not their local optima. Since this direction is done through the GPS, it is reasonable to communicate with it more frequently at the beginning of learning to push the local models towards the optimum direction. To investigate this setting, we train a logistic regression model over the MNIST dataset, and distribute it in a non-iid fashion over 500 clients per group. As shown in Fig.~\ref{svariable}, the variable $S$ approach achieves a higher accuracy than the fixed one, with the effect more pronounced as $S$ increases. \begin{figure*}[htp] \centering \subfloat[$S=5$]{% \includegraphics[width=0.33\linewidth]{n_SV_1_5.pdf}% \label{fig:a}% }% \hfill% \subfloat[$S=10$]{% \includegraphics[width=0.33\linewidth]{n_SV_1_10.pdf}% \label{fig:b}% }% \hfill% \subfloat[$S=20$]{\includegraphics[width=0.33\linewidth]{n_SV_1_20.pdf}% \label{fig:c}% } \caption{Comparison between variable and fixed $S$ with respect to the global learning accuracy.} \label{svariable} \end{figure*} \appendices \section{Preliminaries} We will rely on the following relationships throughout our proofs, and will be using them without explicit reference: For any $x, y \in \mathbb{R}^n$, we have: \begin{align} \langle x,y\rangle \leq \frac{1}{2}\left\|x\right\|^2+ \frac{1}{2}\left\|y\right\|^2. \end{align} By Jensen's inequality, for $x_{i} \in \mathbb{R}^n$, $i \in \{1,2,3,\dots,N\}$, we have \begin{align} \left\|\frac{1}{N}\sum_{i=1}^{N}x_i\right\|^2 \leq \frac{1}{N}\sum_{i=1}^{N}\left\|x_i\right\|^2, \end{align} which implies \begin{align} \left\|\sum_{i=1}^{N}x_i\right\|^2 \leq N\sum_{i=1}^{N}\left\|x_i\right\|^2. \end{align} \section{Proof of Lemma~\ref{lemma_1}}\label{appB} Conditioning on the number of local updates of group $i$ up to and including global round $u$, ${\bm t}_i^{u}$, we evaluate the expected difference between the aggregated global model and the latest local model at group $i$, by the end of global round $u$. 
Based on \eqref{eq_local-update} and \eqref{global_update}, the following holds: \begin{align} \mathbb{E}_{|\bm t_i^u}&\left\| x^{u+1}-x_{i}^{u,t_{i}^{u}}\right\|^2 \nonumber \\ =&\alpha^2 \mathbb{E}_{|\bm t_i^u}\left\| \frac{1}{|\mathcal{N}_i|} \sum_{k \in \mathcal{N}_i}\sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)- \frac{1}{\sum_{i \in \mathcal{N}_g}|\mathcal{N}_i|} \sum_{i \in \mathcal{N}_g} \frac{1}{t_{i}^{u}}\sum_{k \in \mathcal{N}_i} \sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)\right\|^2 \nonumber\\ \leq& 2\alpha^2 \mathbb{E}_{|\bm t_i^u}\!\!\left(\left\| \frac{1}{|\mathcal{N}_i|} \sum_{k \in \mathcal{N}_i} \sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)\right\|^2\!\! + \left\|\frac{1}{\sum_{i \in \mathcal{N}_g} |\mathcal{N}_{i}|} \sum_{i \in \mathcal{N}_g}\frac{1}{t_{i}^{u}}\sum_{k \in \mathcal{N}_{i}} \sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)\right\|^2 \right) \nonumber\\ \leq& 2 \alpha^2 \mathbb{E}_{|\bm t_i^u}\!\!\left( \!\!\frac{1}{|\mathcal{N}_{i}|^2} \left\| \sum_{k \in \mathcal{N}_i} \sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)\right\|^2\!\! \!+\!\frac{|\mathcal{N}_g|}{\left(\sum_{i \in \mathcal{N}_g}|\mathcal{N}_i|\right)^2}\!\sum_{i \in \mathcal{N}_g}\! \!\frac{1}{(t_{i}^{u})^2}\left\| \sum_{k \in \mathcal{N}_i}\! \sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)\right\|^2 \! \right) \nonumber\\ =&2 \alpha^2 \left( \frac{1}{|\mathcal{N}_{i}|^2} + \frac{|\mathcal{N}_g|}{\left(\sum_{i \in \mathcal{N}_g}|\mathcal{N}_i|\right)^2(t_i^{u})^2}\right) \mathbb{E}_{|\bm t_i^u} \left\| \sum_{k \in \mathcal{N}_i } \sum_{l=0}^{t_{i}^{u}-1}\Tilde{g}_{i,k}\left(x_{i}^{u,l}\right)\right\|^2 \nonumber \\ &+2 \alpha^2 \frac{|\mathcal{N}_g|}{\left(\sum_{i
# Ponytail A woman's ponytail from the side A woman's ponytail from the back A ponytail is a hairstyle in which some, most or all of the hair on the head is pulled away from the face, gathered and secured at the back of the head with a hair tie, clip, or other similar device and allowed to hang freely from that point. It gets its name from its resemblance to the tail of a pony. Ponytails are most commonly gathered at the middle of the back of the head, or the base of the neck. Depending on fashions, they may also be worn at the side of the head (which is sometimes considered formal) which is worn over one ear, or on the very top of the head (allowing the hair to fall down the back or one side of the head). If the hair is divided so that it hangs in two sections they are called "ponytails" or pigtails (or bunches), if left loose, and pigtails, plaits or braids if plaited. Unbraided ponytails worn above each ear are sometimes called dog-ears. ## Ponytails on women and girls Detail from an 18th-century engraving showing a girl (left) with a ponytail Women (as opposed to girls) complying with European fashion of the Georgian period and to the 20th century rarely were seen outside of the boudoir with their hair in such an informal style as a ponytail. Today, both women and girls commonly wear their hair in ponytails in informal and office settings or when exercising; they are likely to choose more elaborate styles (such as braids and those involving accessories) for formal occasions. It is a practical choice as it keeps hair out of the eyes. It will keep the hair off the neck as well. The ponytail is also popular with school-aged girls, partly because flowing hair is often associated with youth and because of its simplicity; a young girl is likely to be able to retie her own hair after a sports class, for example. A ponytail can also be a fashion statement; sometimes meaning sporty, other times a low pony tail sends signals of a chic personality. ## As a man's hairstyle Man's white-haired ponytail on a black background. ### Historical In Europe in the second half of the 18th century, most men wore their hair long and tied back with a ribbon into what we would now describe as a ponytail,[1] although it was sometimes gathered into a silk bag rather than allowed to hang freely. At that time, it was commonly known by the French word for "tail", queue. It continued as the mandatory hairstyle for men in all European armies until the early 19th century, after most civilians had stopped wearing queues. The British Army was the first to dispense with it, and by the end of the Napoleonic Wars most armies had changed their regulations to make short hair compulsory. In Asia, the queue was a specifically male hairstyle worn by the Manchu people from central Manchuria, and later imposed on the Han Chinese during the Qing dynasty. From 1645 until 1910 Chinese men wore this waist-length pigtail. ### Recent History In the 1970s, many men wore their hair long and in ponytails. This look was popularized by 1970s-era rock musicians. In the late 1980s, a short ponytail was seen as an impudent, edgy look for men who wanted to individualize, but keep their hair flat and functional (see mullet). Steven Seagal's ponytail in Marked for Death is an example. Men who wear their hair long, or sometimes in mullets, frequently tie it back into a ponytail, but avoid the top- or side-of-the-head variants,[citation needed] although these variants can be used for practical reasons for keeping it off the neck. 
## Scientific studies Linda George with a ponytail. The first equation of state for hair was developed by C. F. van Wyk in 1946.[2] Scientists in the UK have formulated a mathematical model that predicts the shape of a ponytail given the length and random curvature (or curliness) of a sample of individual hairs. The Ponytail Shape Equation provides an understanding of how a ponytail is swelled by the outward pressure which arises from interactions between the component hairs.[3] The researchers developed a general continuum theory for a bundle of hairs, treating each hair as an elastic filament with random intrinsic curvature. From this they created a differential equation for the shape of the bundle relating the elasticity, gravity, and orientational disorder and extracted a simple equation of state to relate the swelling pressure to the measured random curvatures of individual hairs.[4][5] The equation itself is a fourth order non linear differential equation.[4] The Rapunzel number is a ratio used in this equation to calculate the effects of gravity on hair relative to its length.[4] This number determines whether a ponytail looks like a fan or whether it arcs over and becomes nearly vertical at the bottom. A short ponytail of springy hair with a low Rapunzel number, fans outward. A long ponytail with a high Rapunzel number, hangs down, as the pull of gravity overwhelms the springiness. It is now also known why jogger's ponytails swing side to side.[6] An up and down motion is too unstable: a ponytail cannot sway forward and backward because the jogger's head is in the way. Any slight jostling causes the up and down movement to become a side to side sway. The research on the shape of the ponytail won the authors the Ig Nobel for Physics in 2012.[7] ### Ponytail equation The ponytail equation is[4] ${\displaystyle l^{3}R_{ssss}-(L-s)R_{ss}+R_{s}-\pi (R)=0}$ where ${\displaystyle \pi (R)={\frac {4l^{3}P}{A\rho R}}}$ In these equations l is the length at which gravity bends the hair ${\displaystyle l=({\frac {A}{\lambda g}})^{\frac {1}{3}}}$ where g is the acceleration due to gravity, A is the bending modulus and λ is the density of the hair. A is defined to be ${\displaystyle A={\frac {E\pi d^{4}}{64}}}$ E is a constant equal to 4 gigaPascal and d is the average diameter of the hair. L is the length of the switch of hair in the ponytail, R is the ponytail radius, s is the arc length from the clamp on the ponytail, P is the pressure due to the clamp and ρ is the hair density ${\displaystyle \rho ={\frac {N}{\pi R^{2}}}}$ where N is the number of hairs in the ponytail. Rs is the partial derivative of R with respect to s. The Rapunzel number (Ra) is the ratio ${\displaystyle Ra={\frac {L}{l}}}$ The numerical values used in this equation were the average density of human hair (1.3 grams per cubic centimeter), a bending modulus ( A ) of 8 x 10−9 Newton meter2, a linear mass density of hair ( λ ) of 65 micrograms per centimeter and the major diameter of the hair (human hair is elliptical in cross section) of 79 +/- 16 micrometers. N was taken to be 100,000 - the average numbers of hairs on a human head. The constant E was that of nylon which is similar to hair in its bend and twist moduli. The length L was taken to be 25 centimeters. These values give a length at which gravity bends a hair of l = 5. 
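The constants quoted above can be checked directly. The following short Python sketch (not part of the article) plugs them into the definitions of A, l and the Rapunzel number; it assumes SI units and g = 9.81 m/s², and it reproduces l ≈ 5 cm and Ra ≈ 5 for a 25 cm ponytail.

```python
import math

# Constants as quoted in the text above.
E = 4e9             # Pa, elastic modulus (nylon-like)
d = 79e-6           # m, average (major) hair diameter
lam = 65e-9 / 1e-2  # kg/m, linear mass density (65 micrograms per centimeter)
g = 9.81            # m/s^2
L = 0.25            # m, length of the switch of hair

A = E * math.pi * d**4 / 64     # bending modulus, ~8e-9 N*m^2
l = (A / (lam * g)) ** (1 / 3)  # length at which gravity bends the hair
Ra = L / l                      # Rapunzel number

print(f"A  = {A:.2e} N*m^2")
print(f"l  = {l * 100:.1f} cm")
print(f"Ra = {Ra:.1f}")
```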
The authors found empirically that by ignoring the second and fourth order derivatives of R with respect to s in the equation allowed for an exact solution to the equation that produced an excellent fit to the observed ponytails. The authors also found that the spread of the ponytail around the anteroposterior axis (front-back) was symmetrical and made an angle of ~17 degrees with this axis. This angle was approximately constant in all the lengths tested. ## Health issues It is common for those who wear tight ponytails to experience traction alopecia, a form of hair loss. Sometimes it can cause a headache.[8]:761[9]:645 ## References 1. ^ Sherrow, Victoria (2006). Encyclopedia of Hair: A Cultural History. Greenwood Publishing Group. p. 310. 2. ^
          if {[info exists ::test_year_clr_v4] && \
              [string length $::test_year_clr_v4] > 0} then {
            #
            # NOTE: Use the specified test year for the CLR v4.0.
            #
            return $::test_year_clr_v4
          } else {
            #
            # NOTE: Use the default test year for the CLR v4.0.
            #
            return 2010; # TODO: Good "fallback" default?
          }
        } else {
          if {[info exists ::test_year_clr_v2] && \
              [string length $::test_year_clr_v2] > 0} then {
            #
            # NOTE: Use the specified test year for the CLR v2.0.
            #
            return $::test_year_clr_v2
          } else {
            #
            # NOTE: Use the default test year for the CLR v2.0.
            #
            return 2008; # TODO: Good "fallback" default?
          }
        }
      }
    }

    #
    # NOTE: This procedure is only used when adding shimmed test constraints.
    #
    proc getBuildClrVersion {} {
      if {[info exists ::test_clr] && [string length $::test_clr] > 0} then {
paths of $B$ in an obvious way. This map $f^*$ preserves not only path lengths, but also weight sequences if $f$ does. For fibrations we can say more; using the lifting property, one can prove by induction that: \begin{theorem} \label{thm:bij} If $f: G \to B$ is an epimorphic fibration between weighted graphs, then for every two nodes $j \in N_G$ and $k \in N_B$ the map $f^*$ is a bijection between $\cup_{i \in f^{-1}(k)} G^*(i,j)$ and $B^*(k,f(j))$. \end{theorem} Now, for every $t\geq 0$, $G^t$ is the matrix whose $ij$ entry contains a summation of contributions, one for each path $\pi \in G^*(i,j)$, and the contribution is given by the product of the arc weights found along the way; hence, by Theorem~\ref{thm:bij}, under convergence assumptions we have that for all $\beta$ and all $i \in N_G$ and $k \in N_B$ \[ \sum_{i \in f^{-1}(k)}\left(\sum_{t \geq 0} \beta^t G^t\right)_{ij} = \left(\sum_{t \geq 0} \beta^t B^t\right)_{kf(j)}, \] or equivalently \[ \sum_{i \in f^{-1}(k)}\left((1-\beta G)^{-1}\right)_{ij} = \left((1-\beta B)^{-1}\right)_{kf(j)}. \] Now, for every vector\footnote{All vectors in this paper are row vectors.} $\bm u$ of size $n_B$, define its \emph{lifting along $f$} as the vector $\bm u^f$ of size $n_G$ given by \[ \left(u^f\right)_i=u_{f(i)}. \] For every $j$, we have \begin{multline*} \left(\bm u^f (1-\beta G)^{-1}\right)_j =\sum_{i\in N_G} u^f_i \left((1-\beta G)^{-1}\right)_{ij} =\\ =\sum_{k\in N_B}\sum_{i \in f^{-1}(k)} u_{f(i)} \left((1-\beta G)^{-1}\right)_{ij} =\sum_{k\in N_B}u_k\left(\sum_{i \in f^{-1}(k)} \left((1-\beta G)^{-1}\right)\right)_{ij} =\\ =\sum_{k\in N_B}u_k\left((1-\beta B)^{-1}\right)_{kf(j)} =\left(\bm u (1-\beta B)^{-1}\right)_{f(j)} \end{multline*} which can be more compactly written as \begin{equation} \label{eqn:resumedsp} \bm u^f (1-\beta G)^{-1}=\left(\bm u (1-\beta B)^{-1}\right)^f. \end{equation} Equation (\ref{eqn:resumedsp}) essentially states that if we want to compute the damped spectral ranking of the weighted graph $G$, for a preference vector that is constant along the fibers of an epimorphic fibration $f: G \to B$, and thus of the form $\bm u^f$, we can compute the damped spectral ranking of the weighted base $B$ using $\bm u$ as preference vector, and then lift along $f$ the result. For example, in the case of Katz's index a simple fibration between graphs is sufficient, as in that case there are no weights to deal with. \paragraph{Implications for PageRank.} PageRank~\cite{PBMPCR} can be defined as \[ (1-\alpha)\bm v\sum_{i=0}^\infty \alpha^i\bar G^i = (1-\alpha) \bm v(1-\alpha \bar G)^{-1}, \] where $\alpha\in[0\..1)$ is the damping factor, $\bm v$ is a non-negative preference vector with unit $\ell_1$-norm, and $\bar G$ is the row-normalized version\footnote{Here we are assuming that $G$ has no \emph{dangling nodes} (i.e., nodes with outdegree $0$). If dangling nodes are present, you can still use this definition (null rows are left untouched in $\bar G$), but then to obtain PageRank you need to normalize the resulting vector~\cite{BSVPFD,DCGRFPCSLS}. So all our discussion can also be applied to graphs with dangling nodes, up to $\ell_1$-normalization.} of $G$. Then $\bar G$ is just the (adjacency matrix of the) weighted version of $G$ defined by letting $w(a)=1/d^+_G(s_G(a))$. 
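As a quick numerical sanity check of equation (\ref{eqn:resumedsp}), the following short script (not from this paper) builds a toy epimorphic weight-preserving fibration, namely a two-fold covering of a two-node base, and verifies the identity with numpy; the graphs, the preference vector, and the value of $\beta$ are made up purely for illustration.
\begin{verbatim}
import numpy as np

# Toy check: G (4 nodes) covers, hence fibres onto, the base B (2 nodes);
# fibres are {a0, a1} -> a and {b0, b1} -> b.  Weights are the PageRank-style
# row normalization 1/outdegree, so the fibration is weight-preserving.

beta = 0.7

# G over nodes [a0, a1, b0, b1] (rows = sources)
G = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0]])

# Base B over nodes [a, b], with the induced weights
B = np.array([[0.5, 0.5],
              [1.0, 0.0]])

fiber = [0, 0, 1, 1]        # f(a0) = f(a1) = a, f(b0) = f(b1) = b
u = np.array([0.2, 0.8])    # any preference vector on the base
u_lift = u[fiber]           # its lifting u^f, constant along fibres

lhs = u_lift @ np.linalg.inv(np.eye(4) - beta * G)       # ranking on G
rhs = (u @ np.linalg.inv(np.eye(2) - beta * B))[fiber]   # ranking on B, lifted

assert np.allclose(lhs, rhs)
print(lhs)  # approximately [1.877 1.877 1.457 1.457]
\end{verbatim}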
Hence, if you have a weighted graph $B$, an epimorphic weight-preserving fibration $f: \bar G \to B$, and a vector $\bm u$ of size $n_B$ such that $\bm u^f$ has unit $\ell_1$-norm, you can deduce from (\ref{eqn:resumedsp}) that \begin{equation} \label{eqn:resumepr} (1-\alpha) \bm u^f (1-\alpha \bar G)^{-1}=\left(\bm (1-\alpha) \bm u (1-\alpha B)^{-1}\right)^f. \end{equation} On the left-hand side you have the actual PageRank of $G$ for a preference vector that is fiberwise constant; on the right-hand side you have a spectral ranking of $B$ for the projected preference vector. Note that $B$ is not row-stochastic, and $\bm u$ has not unit $\ell_1$-norm, so technically the right-hand side of equation (\ref{eqn:resumepr}) is not PageRank anymore, but it is still a damped spectral ranking. \section{PageRank} Armed with the results of the previous section, we attack the case of PageRank, which is the most interesting. The first observation is that \begin{theorem} Given an undirected graph $G$ there is a value of $\alpha$ for which PageRank is strictly rank monotone on $G$. The same is true for score monotonicity, except when $G$ is formed by a star graph and by one or more additional isolated vertices. \end{theorem} \begin{proof} We know that for $\alpha\to 1$, PageRank tends to Seeley's index~\cite{BSVPFDF}. Since Seeley's index is strictly rank monotone, for each non-adjacent pair of vertices $x$ and $y$ there is a value $\alpha_{xy}$ such that for $\alpha\geq\alpha_{xy}$ adding the edge $x\scalebox{0.5}[1.0]{-} y$ is strictly rank monotone. The proof is completed by taking $\alpha$ larger than all $\alpha_{xy}$'s. The result for score monotonicity is similar. \ifmmode\mbox{ }\fi\rule[-.05em]{.3em}{.7em}\setcounter{noqed}{0} \end{proof} On the other hand, we will now show that \emph{for every possible value of the damping factor $\alpha$} there is a graph on which PageRank violates rank and score monotonicity. The basic intuition of our proof is that when you connect a high-degree node $x$ with a low-degree node $y$, $y$ will pass to $x$ a much greater fraction of its score than in the opposite direction. This phenomenon is caused by the stochastic normalization of the adjacency matrix: the arc from $x$ to $y$ will have a low coefficient, due to the high degree of $x$, whereas the arc from $y$ to $x$ will have a high coefficient, due to the low degree of $y$. We are interested in a parametric example, so that we can tune it for different values of $\alpha$. At the same time, we want to make the example analytic, and avoid resorting to numerical computations, as that approach would make it impossible to prove a result valid for every $\alpha$---we would just, for example, prove it for a set of samples in the unit interval. \begin{figure} \centering \begin{tabular}{cc} \raisebox{.5cm}{$G_k$\qquad}&\includegraphics{albiro-201-mps.eps}\\ \raisebox{.5cm}{$B_k$\qquad}&\includegraphics{albiro-202-mps.eps} \end{tabular} \caption{\label{fig:pr}The parametric counterexample graph for PageRank. The two $k$-cliques are represented here as $5$-cliques for simplicity. Arc labels represent multiplicity; weights are induced by the uniform distribution on the upper graph.} \end{figure} We thus resort to fibrations, using equation (\ref{eqn:resumepr}). In Figure~\ref{fig:pr} we show a parametric graph $G_k$ comprising two $k$-cliques (in the figure, $k=5$). Below, we show the graph $B_k$ onto which $G_k$ can be fibred by mapping nodes following their labels. 
The dashed edge is the addition that we will study: the fibration exists whether the edge exists or not (in both graphs). While $G_k$ has $2k+4$ vertices, $B_k$ has $9$ vertices, independently of $k$, and thus its PageRank can be computed analytically as rational functions of $\alpha$ whose coefficients are rational functions in $k$ (as the number of arcs of each $B_k$ is different). The adjacency matrix of $B_k$ without the dashed arc, considering multiplicities, is\footnote{Note that in the published version of this paper~\cite{BFVSRMUN} the denominators of the second row are $k-1$, mistakenly, instead of $k+1$.} \[ \left(\begin{matrix} \frac{k-2}{k-1}& \frac{k-1}{k-1}& 0& 0& 0& 0& 0& 0& 0\\ \frac1{k+1}& 0& \frac1{k+1}& 0& \frac1{k+1}& 0& 0& 0& 0\\ 0& \frac12& 0& \frac12& 0& 0& 0& 0& 0\\ 0& 0& \frac12& 0& 0& 0& \frac12& 0& 0\\ 0& \frac12& 0& 0& 0& \frac12& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0 &\fcolorbox{gray}{gray}{0}& 0\\ 0& 0& 0& \frac1k& 0& 0& 0& \frac1k& \frac1k\\ 0& 0& 0& 0& 0& \fcolorbox{gray}{gray}{0}& \frac1{k-1}& 0& \frac1{k-1}\\ 0& 0& 0& 0& 0& 0& \frac{k-2}{k-1}& \frac{k-2}{k-1}& \frac{k-3}{k-1}\\ \end{matrix}\right) \] After adding the edge between $5$ and $7$ we must modify the matrix by setting the two grayed entries to one and fix normalization accordingly. We will denote with $\operatorname{pre}_\alpha(x)$ the rational function returning the PageRank of node $x$ with damping factor $\alpha$ before the addition of the dashed arc, and with $\operatorname{post}_\alpha(x)$ the rational function returning the PageRank of node $x$ with damping factor $\alpha$ after the addition of the dashed arc. We use the Sage computational engine~\cite{Sage} to perform all computations, as the resulting rational functions are quite formidable.\footnote{The Sage worksheet can be found at \url{https://vigna.di.unimi.it/pagerank.ipynb}.} We start by considering node $5$: evaluating $\operatorname{post}_\alpha(5)-\operatorname{pre}_\alpha(5)$ in $\alpha=2/3$ we obtain a negative value for all $k\geq 12$, showing there is always a value of $\alpha$ for which node $5$ violates weak score monotonicity, as long as $k\geq 12$. To strengthen our results, we are now going to show that for \emph{every} $\alpha$ there is a $k$ such that weak score monotonicity is violated. We use Sturm polynomials~\cite{RaSATP} to compute the number of sign changes of the numerator $p(\alpha)$ of $\operatorname{post}_\alpha(5)-\operatorname{pre}_\alpha(5)$ for $\alpha\in[0\..1]$, as the denominator cannot have zeros. Sage reports that there are two sign changes for $k\geq 12$, which means that $p(\alpha)$ is initially positive; then, somewhere before $2/3$ it becomes negative; and finally
j\,|\, \exists p_i'' \in F.\, p_i \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} p_i'' \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} p_i'\}| \le |\{i \le j\,|\, \exists q_i'' \in F.\, q_i \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i'' \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i'\}|$. We say that an initial $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping trace on $w$ is {\em $i$-good} iff it does not jump within the first $i$ steps. We show, by induction on $i$, the following property (P): For every $i$ and every infinite $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping initial trace $\pi = p_0 \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} p_0' \goesto {\sigma_0} {p_{1}} \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} p_{1}' \goesto {\sigma_{1}} \cdots$ on $w$ there exists an initial $i$-good trace $\pi^i = q_0 \goesto {\sigma_0} q_1 \goesto {\sigma_1} \cdots \goesto {\sigma_i} q_i \cdots$ on $w$ s.t. $\mathcal C^c_i(\pi, \pi^i)$ and the suffixes of the traces are identical, i.e., $q_i = p_i$ and $\suffix \pi i = \suffix {\pi^i} i$. For the case base $i=0$ we take $\pi^0 = \pi$. Now we consider the induction step. By induction hypothesis we get an initial $i$-good trace $\pi^i$ s.t. $\mathcal C^c_i(\pi, \pi^i)$ and $q_i = p_i$ and $\suffix \pi i = \suffix {\pi^i} i$. If $\pi^i$ is $(i+1)$-good then we can take $\pi^{i+1} = \pi^{i}$. Otherwise, $\pi^i$ contains a step $q_i \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i' \goesto {\sigma_i} {q_{i+1}}$. First we consider the case where there exists a $q_i'' \in F$ s.t. $q_i \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i'' \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i'$. (Note that the $i$-th step in $\pi^i$ can count as accepting in $\mathcal C^c$ because $q_i'' \in F$, even if $q_i$ and $q_i'$ are not accepting.) By def. of $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$ there exists an initial trace $\pi''$ on a prefix of $w$ that ends in $q_i''$ and visits accepting states at least as often as the non-jumping prefix of $\pi^i$ that ends in $q_i$. Again by definition of $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$ there exists an initial trace $\pi'$ on a prefix of $w$ that ends in $q_i'$ and visits accepting states at least as often as $\pi''$. Thus $\pi'$ visits accepting states at least as often as the {\em jumping} prefix of $\pi^i$ that ends in $q_i'$ (by the definition of $\mathcal C^c$). By composing the traces we get $\pi^{i+1} = \pi' (q_i' \goesto {\sigma_i} {q_{i+1}}) \suffix {\pi^i} {i+1}$. Thus $\pi^{i+1}$ is an $(i+1)$-good initial trace on $w$ and $\suffix \pi {i+1} = \suffix {\pi^i} {i+1} = \suffix {\pi^{i+1}} {i+1}$ and $\mathcal C^c_{i+1}(\pi^i, \pi^{i+1})$ and $\mathcal C^c_{i+1}(\pi, \pi^{i+1})$. The other case where there is no $q_i'' \in F$ s.t. $q_i \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i'' \xincl{\mathsf{bw}\textrm{-}\mathsf{c}} q_i'$ is similar, but simpler. Let $\pi$ be an initial $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping fair trace on $w$. By property (P) and K\"onig's Lemma there exists an infinite initial non-jumping fair trace $\pi'$ on $w$. Thus $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$ is jumping-safe. % \ignore{ Now, we show that $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping $k$-lookahead fair simulation is GFI. Consider two B\"uchi automata $\mathcal A$ and $\mathcal B$ s.t. for every $p \in I_\mathcal A$ there exists $q \in I_\mathcal B$ s.t. $q$ can $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping $k$-lookahead fair simulate $p$. If $w \in \lang{\mathcal A}$ then there is an infinite fair trace on $w$ from some $p \in I_\mathcal A$. 
It follows that there is an infinite fair $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping trace on $w$ from some $q \in I_\mathcal B$. Since $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$ is jumping-safe, there exists an infinite fair initial trace on $w$ in $\mathcal B$, i.e., an infinite fair trace on $w$ from some $q' \in I_\mathcal B$. Thus, $w \in \lang{\mathcal B}$ as required. } \end{proof} \noindent As a direct consequence, $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping $k$-lookahead fair simulation is GFI. Since $\xincl{\mathsf{bw}\textrm{-}\mathsf{c}}$ is difficult to compute, we approximate it by a corresponding lookahead-simulation $\sqsubseteq^{k\textrm{-}\mathsf{bw}\textrm{-}\mathsf{c}}$ which, in the same spirit, counts and compares the number of visits to accepting states in every round of the $k$-lookahead backward simulation game. Let $\preceq^{k\textrm{-}\mathsf{bw}\textrm{-}\mathsf{c}}$ be the transitive closure of $\sqsubseteq^{k\textrm{-}\mathsf{bw}\textrm{-}\mathsf{c}}$. \begin{corollary} $\preceq^{k\textrm{-}\mathsf{bw}\textrm{-}\mathsf{c}}$-jumping $k$-lookahead fair sim. is GFI. \end{corollary} \section{Lookahead Simulations}\label{sec:lookahead} While trace inclusions are theoretically appealing as GFQ/GFI preorders coarser than simulations, it is not feasible to use them in practice, because they are too hard to compute (even their membership problem is PSPACE-complete). As a first attempt at achieving a better trade-off between complexity and size we recall \emph{multipebble simulations} \cite{etessami:hierarchy02}, which are obtained by providing Duplicator with several pebbles, instead of one. However, computing multipebble simulations is not feasible in practice either, on automata of nontrivial size. Therefore, we explore yet another way of obtaining good under-approximations of trace inclusion: We introduce \emph{lookahead simulations}, which are obtained by providing Duplicator with a limited amount of information about Spoiler's future moves. While lookahead itself is a classic concept (e.g., in parsing) it can be defined in several different ways in the context of adversarial games like in simulation. We compare different variants for computational efficiency and approximation quality. \xparagraph{$k$-pebble simulation.} Simulation preorder can be generalized by allowing Duplicator to control several pebbles instead of just one. In $k$-pebble simulation, $k>0$, Duplicator's position is a set of at most $k$ states (while Spoiler still controls exactly 1 state), which allows Duplicator to `hedge her bets' in the simulation game. The direct, delayed, fair and backward winning conditions can be generalized to the multipebble framework \cite{etessami:hierarchy02}. For $x\in\{\mathrm{di, de, f, bw}\}$ and $k > 0$, $k$-pebble $x$-simulation is coarser than $x$-simulation and it implies $x$-containment; by increasing $k$, one can control the quality of the approximation to trace inclusion. Direct, delayed, fair and backward $k$-pebble simulations are not transitive in general, but their transitive closures are GFI preorders; the direct, delayed and backward variants are also GFQ. However, computing $k$-pebble simulations is infeasible, even for modest values for $k$. In fact, for a BA with $n$ states, computing $k$-pebble simulation requires solving a game of size $n \cdot n^k$. Even in the simplest case of $k=2$ this means at least cubic space, which is not practical for large $n$. 
For this reason, we consider a different way to extend Duplicator's power, i.e., by using \emph{lookahead} on the moves of Spoiler. \xparagraph{$k$-step simulation.} We generalize simulation by having the players select sequences of transitions of length $k > 0$ instead of single transitions: This gives Duplicator more information, and thus yields a larger simulation relation. In general, $k$-step simulation and $k$-pebble simulation are incomparable, but $k$-step simulation is strictly contained in $n$-pebble simulation. However, the rigid use of lookahead in big-steps causes at least two issues: \begin{inparaenum}[1)] % \item For a BA with $n$ states, we need to store only $n^2$ configurations $(p,q)$ (which is much less than $k$-pebble simulation). However, in {\em every round} we have to explore up-to $d^k$ different moves for each player (where $d$ is the maximal out-degree of the automaton). In practice (e.g., $d=4$, $k=12$) this is still too large. % \item Duplicator's lookahead varies between $1$ and $k$, depending where she is in her response to Spoiler's long move. Thus, Duplicator might lack lookahead where it is most needed, while having a large lookahead in other situations where it is not useful. In the next notion, we attempt at ameliorating this. \end{inparaenum} \xparagraph{$k$-continuous simulation.} Duplicator is continuously kept informed about Spoiler's next $k$ moves, i.e., she always has lookahead $k$. Formally, a configuration of the simulation game consists in a pair $(\rho_i, q_i)$, where $\rho_i$ is the sequence of the next $k-1$ moves from $p_i$ that Spoiler has already committed to. In every round of the game, Spoiler reveals another move $k$ steps in the future, and then makes the first of her announced $k$ moves, to which Duplicator responds as usual. A pair of states $(p,q)$ is in $k$-continuous simulation if Duplicator can win this game from every configuration $(\rho,q)$, where $\rho$ is a sequence of $k-1$ moves from $p$. ($k=1$ is ordinary simulation.) $k$-continuous simulation is strictly contained in $n$-pebble simulation (but incomparable with $k$-pebble simulation), and larger than $k$-step simulation. While this is arguably the strongest way of giving lookahead to Duplicator, it requires storing $n^2 \cdot d^{k-1}$ configurations, which is infeasible for nontrivial $n$ and $k$ (e.g., $n=10000$, $d=4$, $k=12$). \xparagraph{$k$-lookahead simulation.} We introduce $k$-lookahead simulation as an optimal compromise between $k$-step and $k$-continuous simulation. Intuitively, we put the lookahead under Duplicator's control, who can choose at each round how much lookahead she needs (up to $k$). Formally, configurations are pairs $(p_i, q_i)$ of states. In every round of the game, Spoiler chooses a sequence of $k$ consecutive transitions $p_i \goesto {\sigma_i} {p_{i+1}} \goesto {\sigma_{i+1}} \cdots \goesto {\sigma_{i+k-1}} p_{i+k}$. Duplicator then chooses a number $1 \leq m \leq k$ and responds with a matching sequence of $m$ transitions $q_i \goesto {\sigma_i} {q_{i+1}} \goesto {\sigma_{i+1}} \cdots \goesto {\sigma_{i+m-1}} q_{i+m}$. The remaining $k-m$ moves of Spoiler are forgotten, and the next round of the game starts at $(p_{i+m}, q_{i+m})$. In this way, the players build
B$ so that $H$ is given by \eqref{eq:HF1}. The condition $[H] \in H^3(M,\mathbb{Z})$ implies that the closed $\mathfrak{t}^*$-valued two-form $\hat c$ represents an integral cohomology class in $H^2(B,\mathfrak{t}^*)$. Consider the dual torus $\hat T^k$ with Lie algebra $\mathfrak{t}^*$ and let $\hat M$ be a principal torus bundle over $B$ with Chern class $[\hat c]$. Let $\hat \theta$ be a connection on $\hat p \colon \hat M \to B$ such that $d \hat \theta = \hat c$, and define $$ \hat H = \hat{p}^*h - \langle c \wedge \hat \theta \rangle $$ where $c = d\theta$. We leave as an \textbf{exercise} to check that $d \hat H = 0$ and that $(M,[H])$ and $(\hat M,[\hat H])$ are T-dual, taking $\overline{B} = - \langle \theta \wedge \hat \theta \rangle$ in \eqref{eq:relationTdual}. \end{proof} We are ready to present two basic examples of topological T-duality, for a positive dimensional base (cf. Example \ref{ex:torus}). The first illustrates the topology change on the manifold $M$ under T-duality. \begin{ex} \label{e:TdualHopf1} Consider the Hopf fibration $p\colon M = S^3 \to B = S^2$ with $H = 0$. Choose a connection $\theta$ such that $d \theta = \omega_{S^2}$ is the standard symplectic structure on $S^2$ (suitably normalized). We claim that $(M,0)$ is T-dual to $(S^2 \times S^1,[\hat H])$, where $\hat H = - \hat \theta \wedge \omega_{S^2}$, where $\hat \theta$ is the line element on $S^1$. To see this, take $ \overline{B} = - q^*\theta \wedge \hat q^*\hat \theta $ and calculate $$ d \overline{B} = - q^*d\theta \wedge \hat q^*\hat \theta = - q^*p^* \omega_{S^2} \wedge \hat q^*\hat \theta = \hat q^* \hat H. $$ \end{ex} The next example shows that T-duality is sensitive to the choice of cohomology class of the three-form $H$, as is implicit in the proof of Proposition \ref{prop:BHM}. \begin{ex} \label{e:TdualHopf2} Consider again the Hopf fibration in the previous example, but regarding $M$ as the compact Lie group $M = SU(2)$. Consider generators for the Lie algebra $\mathfrak{su}(2) = \langle e_1,e_2,e_3 \rangle$ as in Example \ref{ex:GRicciflatHopf} and the Cartan three-form $$ H = - e^{123}. $$ We claim that $(M,[H])$ is self T-dual. Notice that $e_1$ generates the circle action on the fibers of $p \colon M \to S^2$ and hence $e^1$ is a connection on $M$ with $de^1 = e^{23} = p^*\omega_{S^2}$. Consider $$ \overline{B} = - q^* e^1 \wedge \hat q^* \hat e^1 $$ where $\hat e^1$ is a left-invariant one-form dual to the generator of the circle in $\hat M = SU(2)$. Then, $$ d \overline{B} = - q^*p^*\omega_{S^2}\wedge \hat q^* \hat e^1 + q^* e^1 \wedge \hat q^* \hat p^*\omega_{S^2} = \hat p^* \hat H - p^*H, $$ and hence $\overline{B}$ satisfies the conditions in \ref{def:toptdual}. More generally, following the previous argument one can prove that any semi-simple Lie group equipped with the cohomology class of its Cartan 3-form is self T-dual (see \cite[Example 2.4]{GualtieriTdual}). \end{ex} \section{T-duality and Courant algebroids} Having described the most basic features about T-duality in the previous section, we now recall the relevant implications of the T-duality relation for the theory of generalized geometry \cite{GualtieriTdual}. For this, let us first interpret the data in Definition \ref{def:toptdual} in the language of Courant algebroids. 
\begin{defn} An \emph{equivariant Courant algebroid} is given by a triple $(E,M,T^k)$, where $T^k$ is a $k$-dimensional torus acting freely and properly on a smooth manifold $M$, so that $M$ is a principal $T^k$-bundle over a base $M/T^k = B$, and $E$ is an exact Courant algebroid over $M$ with a $T^k$-action, lifting the $T^k$-action on $M$ and preserving the Courant algebroid structure. \end{defn} Effectively, given an equivariant Courant algebroid $(E,M,T^k)$, since $T^k$ is compact we can choose an equivariant isotropic splitting $\sigma \colon T \to E$ and hence the previous definition reduces to having the twisted Courant algebroid $(T \oplus T^*,\IP{,},[,]_H)$ over $M$ with a $T^k$-action on $T \oplus T^*$ and a $T^k$-invariant three-form $H \in \Gamma(\Lambda^3T^*M)^{T^k}$. Thus, any triple $(E,M,T^k)$ determines a $T^k$-invariant \v Severa class $$ \tau = [E] \in H^3(M,\mathbb{R})^{T^k}, $$ leading to a pair $(M,\tau)$ as in Definition \ref{def:toptdual}. Conversely, given a pair $(M,\tau)$ as in Definition \ref{def:toptdual} we can consider the equivariant Courant algebroid $(T \oplus T^*,\IP{,},[,]_H)$ over $M$, endowed with the canonical $T^k$-action on $T \oplus T^*$, for a choice of $T^k$-invariant element $H \in \tau$. Our goal here is to explain a theorem due to Cavalcanti and Gualtieri \cite{GualtieriTdual}, which states that for any T-dual pairs one can associate an isomorphism of equivariant Courant algebroids, upon reduction to the base $B$. The preliminaries for this result are contained in \S \ref{s:GCGmanifold}, where the Dorfman bracket on $E$ is expressed as a \emph{derived bracket} using the twisted de Rham differential \eqref{eq:twisteddiff}. In the situation of our interest, assume that $(T \oplus T^*,\IP{,},[,]_H)$ is equivariant over a $T^k$-bundle $p \colon M \to B$. Consider the twisted de Rham complex of $T^k$-invariant differential forms $(\Gamma(\mathbb{S})^{T^k},d_H)$ given by the $\mathbb{Z}_2$-graded vector space $$ \Gamma(\mathbb{S})^{T^k} = \Gamma(\Lambda^{\mbox{\tiny{even}}}T^*M)^{T^k} \oplus \Gamma(\Lambda^{\mbox{\tiny{odd}}}T^*M)^{T^k} $$ with differential $d_H$ in \eqref{eq:twisteddiff}. The following result is due to Bouwknegt, Evslin, and Mathai \cite{BEM}. \begin{thm}[\cite{BEM}]\label{thm:BEM} Let $(M, \tau)$ and $(\hat M, \hat \tau)$ be T-dual. Let $H\in \tau$ and $\hat H \in \hat \tau$ be representatives as in Lemma \ref{lem:Hmodel} with $\hat q^* \hat H - q^* H = d \overline{B}$. Then there is an isomorphism of differential complexes \begin{gather*} \begin{split} \tau : (\Gamma(\mathbb{S})^{T^k},d_H) \to (\Gamma(\hat{\mathbb{S}})^{T^k},d_{\hat H}), \qquad \tau(\rho) = \int_{T^k} e^{\overline{B}} \wedge q^*\rho, \end{split} \end{gather*} where the integration is along the fibers of $\hat q \colon \overline{M} \to \hat M$. \end{thm} \begin{proof} First, let us make more explicit the definition of $\tau$. Notice that $e^{\overline{B}} \wedge q^*\rho$ is linear in $\rho$, and hence it suffices to define the fiber-wise integral above for a degree-$m$ element $\rho \in \Gamma(\Lambda^mT^* M)^{T^k}$. For this choice of $\rho$, consider a connection $\theta$ on $M$ and using \eqref{eq:difformssplit} we decompose $$ \rho = \sum_{j=0}^m \rho_j \in \bigoplus_{j=0}^{m}\Gamma(\Lambda^j T^*M \otimes \wedge^{m-j}\mathfrak{t}^{*})^{T^{k}}. 
$$ Then, $\tau$ is defined taking the component of $e^{\overline{B}} \wedge q^*\rho $ which has degree $k$ along the fiber of $\hat p$ and integrating, that is, $$ \tau(\rho) = \sum_{j = 0}^m \frac{1}{(k-m+j)!}\int_{T^k} \overline{B}^{k-m+j} \wedge q^*\rho_j. $$ To write this expression, we have used that $H$ and $\hat H $ are as in Lemma \ref{lem:Hmodel}, and therefore $\overline{B}$ belongs to $\mathcal{F}^1$ in the filtration \eqref{eq:filtration} for $p \colon M \to B$. We prove next that $\tau$ is a morphism of differential complexes: \begin{align*} \mathrm{d}_{\hat H}(\tau(\rho)) & =\int_{T^{k}} \mathrm{d}_{\hat H}(e^{\overline{B}}\wedge \rho)\\ & = \int_{T^{k}} (\hat H - H)\wedge e^{\overline{B}} \wedge \rho + e^{\overline{B}}\wedge \mathrm{d}\rho - \hat H\wedge e^{\overline{B}}\wedge \rho\\ & =\int_{T^{k}} e^{\overline{B}}\wedge \mathrm{d}\rho - H\wedge e^{\overline{B}}\wedge \rho\\ & = \tau (\mathrm{d}_{H}\rho) \end{align*} where, for simplicity, we have omitted pull-backs in the formula above. It remains to show that $\tau$ so defined is an isomorphism. For this, one can work on a local coordinate patch $U$ on the base manifold $B$, so that $\overline{M}_{|U} = U \times T^k \times \hat T^k$, and take a global frame for $T^*T^k$ and $T^*\hat T^k$. We leave the details as an \textbf{exercise}. \end{proof} We are ready for the main result of this section, which states that T-duality induces an isomorphism of equivariant Courant algebroids, upon reduction to the base. Let $(E,M,T^k)$ be an equivariant Courant algebroid with base $B$. Consider the vector bundle $$ E/T^k \to B, $$ whose sheaf of sections is given by the invariant sections of $E$, that is, $\Gamma(E/T^k) = \Gamma(E)^{T^k}$. We can endow $E/T^k$ with a natural Courant algebroid $(E/T^k,\IP{,},[,],\pi_{E/T^k})$ as in Definition \ref{d:CA}, with anchor $\pi_{E/T^k} \colon E/T^k \to TB$ and pairing and Dorfman bracket given by the restriction of the neutral pairing and the Dorfman bracket on $E$ to $\Gamma(E)^{T^k}$. Observe that $E/T^k$ has the same rank as $E$, and hence $E/T^k$ is not an exact Courant algebroid over $B$ (see Definition \ref{d:CAex}). We will call $(E/T^k,\IP{,},[,],\pi_{E/T^k})$ the \emph{simple reduction} of $E$ by $T^k$. \begin{thm}[\cite{GualtieriTdual}]\label{th:Tduality} Let $(E,M,T^k)$ and $(\hat E, \hat M, \hat T^k)$ be equivariant exact Courant algebroids over the same base $M/T^k = B = \hat M/\hat T^k$. Assume that $(M,[E])$ is T-dual to $(\hat M,[\hat E])$. Then, there exists a Courant algebroid isomorphism $$ \psi \colon E/T^k \to \hat E/\hat T^k $$ which
### vepifanov's blog By vepifanov, 22 months ago, translation, , Intel Code Challenge Final Round will take place on Saturday, 8 October 15:05 MSK. All users of the Codeforces can participate in it as in an usual round. The round will be rated and both divisions can participate. Pay attention to the time of the beginning of the round. There will be 7 problems with statements in english and russian languages and 3 hours to solve them. Scoring distribution will be announced closer to the start of the round. The problems were prepared by me — Vladislav Epifanov (vepifanov). I want to say thanks to Alexey Shmelev (ashmelev), Alexander Fetisov (alexfetisov) and Vladislav Isenbaev (winger) for testing the problems, coordinator of the Codeforces Gleb Evstropov (GlebsHP) for his help with the contest preparation, and Mike Mirzayanov (MikeMirzayanov) for the Codeforces and Polygon systems. UPD. Scoring distribution: 500-1000-1500-1500-2500-2500-2500. Since the round will be 3 hours long, the number of points for each task will decrease slower than in usual round. UPD. 2 Due to some issues on one of sites for official participation in Intel Code Challenge we shifted the beginning of the round for 10 minutes. Sorry for the inconvenience. UPD. 3 Final rating changes will be available after removing unfair participants. Editorial will be published on Sunday, 9 October. Editorial UPD. 6 Photos from the onsite event. ### Nizhny Novgorod ### Saint Petersburg #### Winner of the Intel Code Challenge — Evgeny Kapun (eatmore) • • +316 • » 22 months ago, # |   +98 Hope better English statements:D • » » 22 months ago, # ^ |   +29 Hope I can keep my purple!Thanks! • » » » 22 months ago, # ^ |   +18 I wish for you ! • » » » » 22 months ago, # ^ |   +3 Thank you for you wish. • » » » 22 months ago, # ^ |   +10 I get it. • » » » » 22 months ago, # ^ |   -21 NO... wrong English... say... "I did it"... By the way congratulations • » » » » » 22 months ago, # ^ |   0 wow • » » 22 months ago, # ^ |   +89 Hope better English comments • » » » 22 months ago, # ^ |   -21 her English comment is not correct . But everyone has understood her expression . I will not downvote her for her bad english . • » » 22 months ago, # ^ |   0 yeah not like the last round please!! » 22 months ago, # | ← Rev. 6 →   -45 how can (all users of the codeforces) participate in round (with div_1 and div_2 participants and 7 problems and unusual time of the beginning) as in an usual round? » 22 months ago, # |   +35 please provide better problem statement in english. » 22 months ago, # | ← Rev. 2 →   -10 Hope the scoring distribution can be announced a little earlier。:D » 22 months ago, # |   +83 Will the problems be harder than the problems from the intel elimination round ? • » » 22 months ago, # ^ |   +157 Problemset is designed to satisfy participants of all skill levels. » 22 months ago, # |   -16 Hope I solved the problem better! » 22 months ago, # |   0 Hope it will be an awesome contest with some awesome problems. » 22 months ago, # |   +3 i have a doubt ... As div1 contestants are participating along with div2 contestants...will this affect ratings for ppl in div2 differently as compared to when div1 ppl participate out of the competition ... Thanks in advance :) :) :) • » » 22 months ago, # ^ | ← Rev. 6 →   +8 you can make virtual participation later. the problemset is good for all contestants so every one would compite in this kind of rounds , do not worry about rating just enjoy and Good luck for all .. sorry for bad english ! 
• » » » 22 months ago, # ^ |   0 I m participating but just wanted to know about rating system... • » » » » 22 months ago, # ^ |   +8 Don't worry man Everybody will satisfied after the contest.. you will get what you deserve. I think it will be better of solving problem rather than thinking about rating changing process :) » 22 months ago, # |   +47 Why does every comment in the form "Hope [noun/pronoun] verb ..." get downvoted? I don't understand... • » » 22 months ago, # ^ |   +57 Does anyone ever understand the reason for downvotes on codeforces? • » » » 22 months ago, # ^ | ← Rev. 3 →   +59 Red coders get Upvote By the way • » » 22 months ago, # ^ |   +127 Hope this comment won't get downvoted :) • » » » 22 months ago, # ^ | ← Rev. 2 →   +30 Red Coder _ / \ _. Comment upvoted without even reading • » » » 22 months ago, # ^ |   +29 The "Red coders get Upvote" theory has been proven. » 22 months ago, # |   +8 Good luck for all (: » 22 months ago, # |   +7 Hope to become cyan finally, I need +22 good luck and have a good contest ... • » » 22 months ago, # ^ |   +5 hoping same just need +31 • » » » 22 months ago, # ^ |   -15 Advice from little bro- Try solving from reverse. like , if you solve a,b,c thn solve today c,b,a :D » 22 months ago, # |   +28 Timings are getting bad for me. Will we have a contest on usual time any day soon? • » » 22 months ago, # ^ |   0 me too :( » 22 months ago, # | ← Rev. 3 →   -8 glad to know that all divisions coder will be participant. thanks a lot for the rated contest :) hope so, it will be a great contest as elimination round :) all the best to all :) » 22 months ago, # |   +13 Will the problems be significantly harder than the first round?? • » » 22 months ago, # ^ |   0 I think not so. • » » » 22 months ago, # ^ |   +3 And you were wrong. :PAwesome problemset! • » » » » 22 months ago, # ^ | ← Rev. 2 →   0 I can't solve problem C during the contest and I got the idea 10 hours later after referring to JIBANCANYANG's code ;)Very interesting problems though. • » » » » » 22 months ago, # ^ |   0 A simple but good idea. » 22 months ago, # | ← Rev. 2 →   +5 hope same kind of problem but of course with better statement. last round was awesome with the problem statement..... __ » 22 months ago, # |   -26 I hope to raise my rating » 22 months ago, # |   -134 give me some downvotes » 22 months ago, # | ← Rev. 2 →   -41 Good Opportunity to gain something new... :) • » » 22 months ago, # ^ |   -15 why downvote his comment just because he is Asian.Or he is telling about a useful competition that dosent concern you.Or people just love to downvote if someone is not red. Now downvote me. » 22 months ago, # |   +21 This is my last codeforces round before flying to Dalian,wish to learn new skills!Thank you,vepifanov! • »
mean 1: outputs a two-component row vector [P, a], where P is the default output and a is an element expressed on a root of the polynomial P, whose minimal polynomial is equal to x. 4: gives all polynomials of minimal T2 norm (of the two polynomials P(x) and P(-x), only one is given). 16: possibly use a suborder of the maximal order. The primes dividing the index of the order chosen are larger than primelimit or divide integers stored in the addprimes table. In that case it may happen that the output polynomial does not have minimal T2 norm. The library syntax is polredabs0(x, flag).

3.6.109 polredord(x): finds polynomials with reasonably small coefficients and of the same degree as that of x defining suborders of the order defined by x. One of the polynomials always defines Q (hence is equal to (x - 1)^n, where n is the degree), and another always defines the same order as x if x is irreducible. The library syntax is ordred(x).

3.6.110 poltschirnhaus(x): applies a random Tschirnhausen transformation to the polynomial x, which is assumed to be non-constant and separable, so as to obtain a new equation for the étale algebra defined by x. This is for instance useful when computing resolvents, hence is used by the polgalois function. The library syntax is tschirnhaus(x).

3.6.111 rnfalgtobasis(rnf, x): expresses x on the relative integral basis. Here, rnf is a relative number field extension L/K as output by rnfinit, and x an element of L in absolute form, i.e. expressed as a polynomial or polmod with polmod coefficients, not on the relative integral basis. The library syntax is rnfalgtobasis(rnf, x).

3.6.112 rnfbasis(bnf, L): gives either a true bnf-basis of Z_L if it exists, or an n + 1-element generating set of L if not, where n is the rank of L over bnf. Here, bnf is as output by bnfinit; L is either a polynomial with coefficients in bnf defining a relative extension of bnf, or a pseudo-basis of such an extension, as output by rnfpseudobasis. The library syntax is rnfbasis(bnf, x).

3.6.113 rnfbasistoalg(rnf, x): computes the representation of x as a polmod with polmod coefficients. Here, rnf is a relative number field extension L/K as output by rnfinit, and x an element of L expressed on the relative integral basis. The library syntax is rnfbasistoalg(rnf, x).

3.6.114 rnfcharpoly(nf, T, a, {v = x}): characteristic polynomial of a over nf, where a belongs to the algebra defined by T over nf, i.e. nf[X]/(T). Returns a polynomial in variable v (x by default). The library syntax is rnfcharpoly(nf, T, a, v), where v is a variable number.

3.6.115 rnfconductor(bnf, pol, {flag = 0}): given bnf as output by bnfinit, and pol a relative polynomial defining an Abelian extension, computes the class field theory conductor of this Abelian extension. The result is a 3-component vector [conductor, rayclgp, subgroup], where conductor is the conductor of the extension given as a 2-component row vector [f0, f], rayclgp is the full ray class group corresponding to the conductor given as a 3-component vector [h,cyc,gen] as usual for a group, and subgroup is a matrix in HNF defining the subgroup of the ray class group on the given generators gen. If flag is non-zero, check that pol indeed defines an Abelian extension, return 0 if it does not. The library syntax is rnfconductor(rnf, pol, flag).
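As a quick illustration of how a couple of the functions documented above are called, here is a small sketch that evaluates them through the cypari2 Python bindings; the bindings and the example polynomials are our own choices, not part of this manual (the same expressions can be typed directly at the gp prompt).

# A minimal sketch, assuming the cypari2 bindings to the PARI library are available.
# The polynomials below are illustrative choices, not taken from the manual.
import cypari2

pari = cypari2.Pari()

# polredabs: canonical defining polynomial of small T2 norm.
# x^2 - 12 defines Q(sqrt(3)), so a simpler defining polynomial is expected back.
print(pari("polredabs(x^2 - 12)"))

# rnfcharpoly: characteristic polynomial of x + 1 in nf[X]/(T) over nf = Q(sqrt(2)),
# with T = x^2 - y defining a relative quadratic extension.
print(pari("rnfcharpoly(nfinit(y^2 - 2), x^2 - y, x + 1)"))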
3.6.116 rnfdedekind(nf , pol , pr ): given a number field nf as output by nfinit and a polynomial pol with coefficients in nf defining a relative extension L of nf , evaluates the relative Dedekind criterion over the order defined by a root of pol for the prime ideal pr and outputs a 3-component vector as the result. The first component is a flag equal to 1 if the enlarged order could be proven to be pr -maximal and to 0 otherwise (it may be maximal in the latter case if pr is ramified in L), the second component is a pseudo-basis of the enlarged order and the third component is the valuation at pr of the order discriminant. The library syntax is rnf dedekind(nf , pol , pr ). 3.6.117 rnfdet(nf , M ): given a pseudo-matrix M over the maximal order of nf , computes its determinant. The library syntax is rnf det(nf , M ). 3.6.118 rnfdisc(nf , pol ): given a number field nf as output by nfinit and a polynomial pol with coefficients in nf defining a relative extension L of nf , computes the relative discriminant of L. This is a two-element row vector [D, d], where D is the relative ideal discriminant and d is the 2 relative discriminant considered as an element of nf /nf . The main variable of nf must be of lower priority than that of pol , see Section 2.6.2. The library syntax is rnf discf (bnf , pol ). 3.6.119 rnfeltabstorel(rnf , x): rnf being a relative number field extension L/K as output by rnfinit and x being an element of L expressed as a polynomial modulo the absolute equation rnf .pol, computes x as an element of the relative extension L/K as a polmod with polmod coefficients. The library syntax is rnf elementabstorel(rnf , x). 3.6.120 rnfeltdown(rnf , x): rnf being a relative number field extension L/K as output by rnfinit and x being an element of L expressed as a polynomial or polmod with polmod coefficients, computes x as an element of K as a polmod, assuming x is in K (otherwise an error will occur). If x is given on the relative integral basis, apply rnfbasistoalg first, otherwise PARI will believe you are dealing with a vector. The library syntax is rnf elementdown(rnf , x). 128 3.6.121 rnfeltreltoabs(rnf , x): rnf being a relative number field extension L/K as output by rnfinit and x being an element of L expressed as a polynomial or polmod with polmod coefficients, computes x as an element of the absolute extension L/Q as a polynomial modulo the absolute equation rnf .pol. If x is given on the relative integral basis, apply rnfbasistoalg first, otherwise PARI will believe you are dealing with a vector. The library syntax is rnf elementreltoabs(rnf , x). 3.6.122 rnfeltup(rnf , x): rnf being a relative number field extension L/K as output by rnfinit and x being an element of K expressed as a polynomial or polmod, computes x as an element of the absolute extension L/Q as a polynomial modulo the absolute equation rnf .pol. If x is given on the integral basis of K, apply nfbasistoalg first, otherwise PARI will believe you are dealing with a vector. The library syntax is rnf elementup(rnf , x). 3.6.123 rnfequation(nf , pol , {flag = 0}): given a number field nf as output by nfinit (or simply a polynomial) and a polynomial pol with coefficients in nf defining a relative extension L of nf , computes the absolute equation of L over Q. 
If flag is non-zero, outputs a 3-component row vector [z, a, k], where z is the absolute equation of L over Q, as in the default behaviour, a expresses as an element of L a root α of the polynomial defining the base field nf, and k is a small integer such that θ = β + kα, where θ is a root of z and β a root of pol. The main variable of nf must be of lower priority than that of pol (see Section 2.6.2).
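To make the flag behaviour concrete, here is a small sketch, again through the cypari2 Python bindings (an assumption on our part, not something the manual prescribes); the base field Q(sqrt(2)) and the relative polynomial x^2 - 3 are illustrative choices only.

# A minimal sketch of rnfequation, assuming cypari2 is installed.
import cypari2

pari = cypari2.Pari()

# Default behaviour: absolute equation of L = Q(sqrt(2), sqrt(3)) over Q.
print(pari("rnfequation(nfinit(y^2 - 2), x^2 - 3)"))

# flag = 1: also returns [z, a, k] with theta = beta + k*alpha as described above.
print(pari("rnfequation(nfinit(y^2 - 2), x^2 - 3, 1)"))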
Q458. If you are doing a thesis this year or have a general topic in mind for one in the future, pick two theorists, each from a different section of the course, and show what you have learned about their ideas by "applying" them to your research topic. Q317. Match the name with the projection. South Polar Azimuthal | Mercator | Albers Equal Area Conic | Mollweide | Lambert Conformal Conic | North Polar Azimuthal Q314. Show how to set snapping options to make vertex matching while drawing polygons by hand easier. Q105. Following on problem 104, suppose the test is not painless or without its own risks. Suppose the "cost" of the test is 5. And suppose the treatment is also not so nice and the cost of the treatment is 15. But if you have the disease and you are not treated, the results are nasty : 50. Do we have enough information to recommend a course of action? What should we do? Q417. In the context of game theory/prisoner's dilemma, what does "words are cheap" mean? Q236. In a prisoner’s dilemma game, the rational thing for both players is to defect. This makes mutual defection an equilibrium, though it is not a preferred one (the collective would be better off with another outcome). In other words, in a single game of prisoner’s dilemma, cooperation is “impossible.” But cooperation does happen in the world. Demonstrate your understanding of Axelrod's ideas by describing the mechanism and conditions under which this can happen without assuming anything “social” about the agents. Explain how this works and how it adds to or modifies Smith’s and Hayek’s story about how markets can be a source of social order. Q84. If the weather is nice, plant a garden. Otherwise paint the office. For the garden, make a decision between flowers and vegetables. If you go for vegetables, buy compost, seeds, and stakes; till the soil, and hook up the irrigation. If it's flowers this year, go to the garden store and if they have 4 inch plants buy enough for the plot and plant them. If they don't then get flats of smaller plants and bring them home and let them get acclimated for a week and then plant them next week. To till the soil, if the ox is healthy, do it with the animal plow, otherwise get out the rototiller. Q120. My bathtub fills at 10 gallons per minute. It has a leak, though, whereby it loses 10% of it's volume per minute. It's a neat rectangular tub in which each 10 gallons is 2 inches of depth. How does it behave over time? Q80. If A: until B do C and then, do D if E, otherwise do F while G. Otherwise if H, then if I do J else do K. Do L. Q101. If the farmer plants early and the spring is warm, she can get a 20% increase in her harvest. But if she plants early and there's a late frost she can lose 50% of her harvest. Historically, these late frosts happen one year in four (25% of the time). Use a decision tree to determine how much she would be willing to invest in a perfect forecast. Q374. "Social order triumphs over the state of nature by coercion…." Attribute and explain. Q366. A storyboard is a technique for graphically organizing the telling of a story. Think about how you would explain Mead's theory of the social self as a theory of how the human individual is a vehicle for the generation of social order. Imagine how you would represent the theory visually and how you would explain it textually. Q127. Consider the singles bar scene. Develop a model along the lines of the market for lemons (Wikipedia), that would suggest that information asymmetries could possibly kill the scene. 
What institutional interventions prevent this from happening?. Q2 Sketch, anew, the decision tree for the embassy party described in the text book. "The officer in charge of a United States Embassy recreation program has decided to replenish the employees club funds by arranging a dinner. It rains nine days out of ten at the post and he must decide whether to hold the dinner indoors or out. An enclosed pavilion is available but uncomfortable, and past experience has shown turnout to be low at indoor functions, resulting in a 60 per cent chance of gaining $100 from a dinner held in the pavilion and a 40 per cent chance of losing$20. On the other hand, an outdoor dinner could be expected to earn $500 unless it rains, in which case the dinner would lose about$10" (Stokey & Zeckhauser 1977, 202). Q394. What does Weber mean by "the routinization of charisma"? (118ff) Q221. For each of the problems described below, say whether it is best thought of as an analog to diet, transport, activity, or assignment as outlined above. 1. S&Z problem #1 Incinerators DIET TRANSPORT ACTIVITY ASSIGNMENT 2. S&Z problem #2 Police Shifts DIET TRANSPORT ACTIVITY ASSIGNMENT 3. S&Z problem #3 Hospitals and disasters DIET TRANSPORT ACTIVITY ASSIGNMENT 4. S&Z problem #4 Electricity generation and pollution DIET TRANSPORT ACTIVITY ASSIGNMENT 5. S&Z text example – transit maintenance DIET TRANSPORT ACTIVITY ASSIGNMENT Q231. A transport company has two types of trucks, Type A and Type B. Type A has a refrigerated capacity of 20 m3 and a non-refrigerated capacity of 40 m3 while Type B has the same overall volume with equal sections for refrigerated and non-refrigerated stock. A grocer needs to hire trucks for the transport of 3,000 m3 of refrigerated stock and 4 000 m3 of non-refrigerated stock. The cost per kilometer of a Type A is $30, and$40 for Type B. How many trucks of each type should the grocer rent to achieve the minimum total cost? Alternatively A school district has two types of lower division schools, type A and type B. Type A school buildings have capacity for 200 little kids and 400 big kids. Type B buildings have capacity for 300 little kids and 300 big kids. Next year the district expects enrollments of 3000 little kids and 4000 big kids. Type A buildings cost 30,000 per year to maintain while type B buildings cost 40,000. What mix of school buildings will allow the district to handle the expected enrollment at the lowest maintenance cost? (From VITutor) Q182. Which threshold frequency distribution corresponds to the following description: The community has a few people who will join no matter what, a few more who will join if some others have joined, still more who will join if a goodly number are on board and so on all the way up to a hesitant few but even they will join if it appears everyone else has. A. B. C. D. E. Q180. Which of the cumulative frequency distributions below corresponds to this frequency distribution A. B. C. D. E. Q328. Have a look at this recent release from Bureau of Labor Statistics (BLS). The data separates those without a job into unemployed but "in the labor force" and "marginally attached to the labor force" and a subset of these called "discouraged" - the former would like to work but have not looked in the last four weeks and so are not counted as unemployed. The latter are not actively looking for work having given up on the idea that its possible to find. These groups are not included in the denominator when the unemployment rate is calculated. 
The simple version of the unemployment rate is, then, (1) \begin{align} UR = \frac {Unemployed} {Employed + Unemployed} \end{align} Some recent op-eds have counseled caution about optimism that the overall unemployment rate has been going down because it might reflect growth in the number of people no longer looking for work. We'll think about that with a Markov model. We'll simplify the
from cmr import CollectionQuery, GranuleQuery
from urllib import request
import pygrib
import numpy as np
#from datetime import timedelta, fromtimestamp
import datetime
#from dateutil.parser import parse
#import re
#import os
#import glob
from pathlib import Path
import json

print('hello', flush=True)

# Set the bounding box of West Virginia in GPS coordinates
max_wv_lat = 40.638801
min_wv_lat = 37.201483
max_wv_lon = -77.719519
min_wv_lon = -82.644739

# Get username and password from secrets file
with open('/run/secrets/nldas-username', 'r') as file:
    username = file.read().replace('\n', '')
with open('/run/secrets/nldas-password', 'r') as file:
    password = file.read().replace('\n', '')

# Create handlers for downloading files
if username is not None and password is not None:
    redirectHandler = request.HTTPRedirectHandler()
    cookieProcessor = request.HTTPCookieProcessor()
    passwordManager = request.HTTPPasswordMgrWithDefaultRealm()
    passwordManager.add_password(None, "https://urs.earthdata.nasa.gov", username, password)
    authHandler = request.HTTPBasicAuthHandler(passwordManager)
    opener = request.build_opener(redirectHandler, cookieProcessor, authHandler)
    request.install_opener(opener)


# Define the evaporation route.
def handleEvaporation(r, t):
    print('Inside Evaporation request', flush=True)
    try:
        # Pull the start/end dates from the form variables
        start_date = datetime.datetime.fromtimestamp(t['start_date'])
        end_date = datetime.datetime.fromtimestamp(t['end_date'])
        # Print the variables to the console (view in Docker Dashboard)
        print('Request received for range: {} to {}'.format(start_date, end_date))

        key = 'evaporation-{}-{}'.format(int(start_date.timestamp()), int(end_date.timestamp()))
        # Check if work has already been done
        result = r.get(key)
        print('Result is {}'.format(type(result)), flush=True)
        if result is not None:
            print('Already did job {}'.format(key), flush=True)
            return

        # Create an evaporation dictionary to hold the data
        evaporation = {}
        count = 0
        evaporation['average'] = 0
        evaporation['history'] = []

        api = GranuleQuery()
        granules = api.short_name("NLDAS_FORA0125_H") \
            .temporal(start_date, end_date) \
            .get()

        # Make sure the granules cache directory exists
        Path('./cache').mkdir(parents=True, exist_ok=True)

        # Loop through the granules from CMR
        for granule in granules:
            # Get the URL of the granule
            url = granule["links"][0]["href"]
            # Create the filename of the granule by combining NLDAS with the id
            filename = Path('./cache/') / granule["producer_granule_id"]
            # Test if the file already exists
            if Path(filename).is_file():
                print('Pulling from cache: {}'.format(filename))
            elif username is not None and password is not None:
                # Announce filename for diagnostic purposes (goes to Docker Dashboard)
                print('Downloading {}'.format(filename))
                # Create the file and open it
                with request.urlopen(url) as response, open(filename, 'wb') as file:
                    # read in contents of downloaded file
                    data = response.read()
                    # Write the contents to the local file
                    file.write(data)
            else:
                print('ERROR: Set environment variables USERNAME/PASSWORD for NLDAS2 download or prebuild cache')
                return '{error:"Set environment variables USERNAME/PASSWORD for NLDAS2 download or prebuild cache"}'

            # Open the file with pyGRIB
            grbs = pygrib.open(str(filename))
            # Select parameter 228 (potential evaporation)
            grb = grbs.select(parameterName='228')[0]
            # Identify the indexes corresponding to the edges of the WV bounding box
            x1 = np.searchsorted(grb.distinctLongitudes, min_wv_lon)
            y1 = np.searchsorted(grb.distinctLatitudes, min_wv_lat)
            x2 = np.searchsorted(grb.distinctLongitudes, max_wv_lon, side='right')
            y2 = np.searchsorted(grb.distinctLatitudes, max_wv_lat, side='right')
            # Print those indexes for diagnostic purposes
            print("Latitude range: {} - {}".format(grb.distinctLatitudes[y1], grb.distinctLatitudes[y2]), flush=True)
            print("Longitude range: {} - {}".format(grb.distinctLongitudes[x1], grb.distinctLongitudes[x2]), flush=True)
            # Pull the numpy subset of values inside the bounding box
            data = grb.values[y1:y2, x1:x2]
            # Calculate average using numpy (speed boost!)
            avg = np.average(data)
            # Append this history event to the output variable
            # (cast to a plain float so the dictionary stays JSON-serialisable)
            evaporation['history'].append({
                'date': grb.dataDate,
                'time': grb.dataTime,
                'value': float(avg)
            })
            # Add the average to the running total
            evaporation['average'] += float(avg)
            # Increase count for later division
            count += 1

        # Divide 'average' by number of granules to produce real average
        if count > 0:
            evaporation['average'] /= count

        # Store the data in Redis in JSON format
        r.set(key, json.dumps(evaporation))
        # print(json.dumps(evaporation, indent=2), flush=True)
    except Exception as e:
        print('A daggone exception occurred')
        print(e)
    except:
        print('A daggone exceptional exception occurred')


# Define the precipitation route.
def handlePrecipitation(r, t):
    print('Inside Precipitation request', flush=True)
    try:
        # Pull the start/end dates from the form variables
        start_date = datetime.datetime.fromtimestamp(t['start_date'])
        end_date = datetime.datetime.fromtimestamp(t['end_date'])
        # Print the variables to the console (view in Docker Dashboard)
        print('Request received for range: {} to {}'.format(start_date, end_date))

        key = 'precipitation-{}-{}'.format(int(start_date.timestamp()), int(end_date.timestamp()))
        # Check if work has already been done
        result = r.get(key)
        print('Result is {}'.format(type(result)), flush=True)
        if result is not None:
            print('Already did job {}'.format(key), flush=True)
            return

        # Create a precipitation dictionary to hold the data
        precipitation = {}
        count = 0
        precipitation['average'] = 0
        precipitation['history'] = []

        api = GranuleQuery()
        granules = api.short_name("NLDAS_FORA0125_H") \
            .temporal(start_date, end_date) \
            .get()

        # Make sure the granules cache directory exists
        Path('./cache').mkdir(parents=True, exist_ok=True)

        # Loop through the granules from CMR
        for granule in granules:
            # Get the URL of the granule
            url = granule["links"][0]["href"]
            # Create the filename of the granule by combining NLDAS with the id
            filename = Path('./cache/') / granule["producer_granule_id"]
            # Test if the file already exists
            if Path(filename).is_file():
                print('Pulling from cache: {}'.format(filename))
            elif username is not None and password is not None:
                # Announce filename for diagnostic purposes (goes to Docker Dashboard)
                print('Downloading {}'.format(filename))
                # Create the file and open it
                with request.urlopen(url) as response, open(filename, 'wb') as file:
                    # read in contents of downloaded file
                    data = response.read()
                    # Write the contents to the local file
                    file.write(data)
            else:
                print('ERROR: Set environment variables USERNAME/PASSWORD for NLDAS2 download or prebuild cache')
                return '{error:"Set environment variables USERNAME/PASSWORD for NLDAS2 download or prebuild cache"}'

            # Open the file with pyGRIB
            grbs = pygrib.open(str(filename))
            # Select parameter 61 (precipitation hourly total)
            grb = grbs.select(parameterName='61')[0]
            # Identify the indexes corresponding to the edges of the WV bounding box
            x1 = np.searchsorted(grb.distinctLongitudes, min_wv_lon)
            y1 = np.searchsorted(grb.distinctLatitudes, min_wv_lat)
            x2 = np.searchsorted(grb.distinctLongitudes, max_wv_lon, side='right')
            y2 = np.searchsorted(grb.distinctLatitudes, max_wv_lat, side='right')
            # Print those indexes for diagnostic purposes
            print("Latitude range: {} - {}".format(grb.distinctLatitudes[y1], grb.distinctLatitudes[y2]), flush=True)
            print("Longitude range: {} - {}".format(grb.distinctLongitudes[x1], grb.distinctLongitudes[x2]), flush=True)
            # Pull the numpy subset of values inside the bounding box
            data = grb.values[y1:y2, x1:x2]
            # Calculate average using numpy (speed boost!)
            avg = np.average(data)
            # Append this history event to the output variable
            # (cast to a plain float so the dictionary stays JSON-serialisable)
            precipitation['history'].append({
                'date': grb.dataDate,
                'time': grb.dataTime,
                'value': float(avg)
            })
            # Add the average to the running total
            precipitation['average'] += float(avg)
            # Increase count for later division
            count += 1

        # Divide 'average' by number of granules
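A minimal sketch of how these handlers might be driven, assuming a Redis client for the result cache and a task dictionary carrying Unix-timestamp bounds; the Redis connection details and the example dates are illustrative assumptions, not part of the file above.

# Hypothetical driver for the handlers above; assumes a reachable Redis instance.
import datetime
import json
import redis

r = redis.Redis(host='localhost', port=6379)

task = {
    'start_date': int(datetime.datetime(2020, 6, 1).timestamp()),
    'end_date': int(datetime.datetime(2020, 6, 2).timestamp()),
}

handleEvaporation(r, task)
handlePrecipitation(r, task)

# Each handler caches its result under a key like 'evaporation-<start>-<end>'
key = 'evaporation-{}-{}'.format(task['start_date'], task['end_date'])
cached = r.get(key)
if cached is not None:
    print(json.dumps(json.loads(cached), indent=2))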
$$\cdots + (p+\theta-c)\int_{\hat q}^{q_m} V''(R_2(q))\big(p\hat q+\theta\hat q+c(q-\hat q)\big)f(q)\,dq\Big)\,d\beta \;-\; (1-\beta)\Big(p\int_0^{\hat q} V''(R_1(q))\,f(q)\,dq + (p+\theta-c)\int_{\hat q}^{q_m} V''(R_2(q))\,f(q)\,dq\Big)\,d\alpha \;=\; 0$$

Thus, we have a system of equations given by eqs [36] and [37], that we can write in a compact form as

$$[38]\qquad d\alpha + \Gamma\, d\hat q + \Lambda\, d\beta = 0, \qquad\quad \Phi\, d\alpha - \Psi\, d\hat q + \Upsilon\, d\beta = 0,$$

where we use the following notation:

$$\Gamma \equiv \beta\Big(p\int_0^{\hat q} f(q)\,dq + (p+\theta-c)\int_{\hat q}^{q_m} f(q)\,dq\Big) > 0$$

$$\Lambda \equiv \int_0^{\hat q}\big(p\hat q+\theta q\big)f(q)\,dq + \int_{\hat q}^{q_m}\big(p\hat q+\theta\hat q+c(q-\hat q)\big)f(q)\,dq > 0$$

$$\Phi \equiv -(1-\beta)\Big(p\int_0^{\hat q} V''(R_1(q))f(q)\,dq + (p+\theta-c)\int_{\hat q}^{q_m} V''(R_2(q))f(q)\,dq\Big) > 0$$

$$\Psi \equiv -\frac{\partial^2 W}{\partial \hat q^2} > 0$$

$$\Upsilon \equiv p\int_0^{\hat q} V'(R_1(q))f(q)\,dq + (p+\theta-c)\int_{\hat q}^{q_m} V'(R_2(q))f(q)\,dq - (1-\beta)\Big(p\int_0^{\hat q} V''(R_1(q))\big(p\hat q+\theta q\big)f(q)\,dq + (p+\theta-c)\int_{\hat q}^{q_m} V''(R_2(q))\big(p\hat q+\theta\hat q+c(q-\hat q)\big)f(q)\,dq\Big) > 0$$

Note that if we assume risk neutrality, the system simplifies to

$$[39]\qquad d\alpha + \Gamma\, d\hat q + \Lambda\, d\beta = 0$$

$$[40]\qquad -\hat\Psi\, d\hat q + \hat\Upsilon\, d\beta = 0,$$

where $\hat\Psi$ and $\hat\Upsilon$ represent the corresponding values of $\Psi$ and $\Upsilon$ when $V''(\cdot)=0$. Note that eq. [40] tells us that $d\hat q/d\beta > 0$, and eq. [39] tells us that $\alpha$ adjusts accordingly to satisfy the equation.

## Appendix B

### B.1 Technical analysis of Remark 4

The first-order condition for the hospital is given by,

$$[41]\qquad \frac{\partial W}{\partial \hat q} = f(\hat q)\,\Delta U(b) - p\int_0^{\hat q} V'\big(R-p\hat q-\theta q\big)f(q)\,dq - (p+\theta-c)\int_{\hat q}^{q_m} V'\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq = 0.$$

To obtain the impact of the policy change on technology adoption (that is, on $\hat q$), we totally differentiate eq. [41] with respect to $\hat q$, $p$, and $\theta$, and impose $d\theta = -\lambda\, dp$, where $\lambda = \hat q \big/ \int_0^{\hat q} q f(q)\,dq$. Total differentiation of the first-order condition yields,

$$[42]\qquad \frac{\partial^2 W}{\partial \hat q^2}\,d\hat q - \Big(\int_0^{\hat q} V'\big(R-p\hat q-\theta q\big)f(q)\,dq\Big)dp + \Big(p\hat q\int_0^{\hat q} V''\big(R-p\hat q-\theta q\big)f(q)\,dq\Big)dp - \Big(\int_{\hat q}^{q_m} V'\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big)dp + \Big((p+\theta-c)\hat q\int_{\hat q}^{q_m} V''\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big)dp - \Big(\int_{\hat q}^{q_m} V'\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big)d\theta + \Big(p\int_0^{\hat q} V''\big(R-p\hat q-\theta q\big)\,q f(q)\,dq\Big)d\theta + \Big((p+\theta-c)\hat q\int_{\hat q}^{q_m} V''\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big)d\theta = 0$$

Substituting $d\theta = -\lambda\, dp$ and collecting terms we can rewrite eq. [42] as

$$[43]\qquad \frac{\partial^2 W}{\partial \hat q^2}\,d\hat q = \Big[\int_0^{\hat q} V'\big(R-p\hat q-\theta q\big)f(q)\,dq\Big]dp - \Big[p\hat q\int_0^{\hat q} V''\big(R-p\hat q-\theta q\big)f(q)\,dq - p\lambda\int_0^{\hat q} V''\big(R-p\hat q-\theta q\big)\,q f(q)\,dq\Big]dp + \Big[(1-\lambda)\int_{\hat q}^{q_m} V'\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big]dp + \Big[(\lambda-1)\hat q\,(p+\theta-c)\int_{\hat q}^{q_m} V''\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big]dp,$$

and further collecting terms, eq. [43] becomes,

$$[44]\qquad \frac{\partial^2 W}{\partial \hat q^2}\,d\hat q = \Big[\int_0^{\hat q} V'\big(R-p\hat q-\theta q\big)f(q)\,dq\Big]dp - \Big[p\hat q\int_0^{\hat q} V''\big(R-p\hat q-\theta q\big)\Big(1-\frac{q}{\int_0^{\hat q} q f(q)\,dq}\Big)f(q)\,dq\Big]dp + \Big[(1-\lambda)\int_{\hat q}^{q_m} V'\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big]dp + \Big[(\lambda-1)\hat q\,(p+\theta-c)\int_{\hat q}^{q_m} V''\big(R-p\hat q-\theta\hat q-c(q-\hat q)\big)f(q)\,dq\Big]dp$$

The first two terms in square brackets in the right-hand side are positive, while the third and fourth terms have negative signs. Therefore the impact on $\hat q$ will be ambiguous. This can be made clearer in the special case of risk neutrality, that is $V' = 1$ and $V'' = 0$. Then hospital decision makers care about expected profits from hospital activity and patient health gains. Under these assumptions, the right-hand side of eq. [44] can be rewritten as,

$$[45]\qquad \int_0^{\hat q} \big( R$$
\section{Introduction} \label{intro} Throughout the paper $P$ is a Markov kernel on a measurable space $(\mathbb{X},\mbox{$\cal X$})$, and $\{\widehat{P}_k\}_{k\geq1}$ is a sequence of nonnegative sub-Markov kernels on $(\mathbb{X},\mbox{$\cal X$})$. For any positive measure $\mu$ on $\mathbb{X}$ and any $\mu$-integrable function $f : \mathbb{X}\rightarrow\C$, $\mu(f)$ denotes the integral $\int fd\mu$. Let $V : \mathbb{X}\rightarrow[1,+\infty)$ be a measurable function such that $V(x_0)=1$ for some $x_0\in\mathbb{X}$. Let $(\mbox{$\cal B$}_1,\|\cdot\|_1)$ denote the weighted-supremum Banach space $$\mbox{$\cal B$}_1 := \big\{ \ f : \mathbb{X}\rightarrow\C, \text{ measurable }: \|f\|_1 := \sup_{x\in\mathbb{X}} |f(x)| V(x)^{-1} < \infty\ \big\}.$$ In the sequel, $PV/V$ and each $\widehat{P}_k V/V$ are assumed to be bounded on $\mathbb{X}$, so that both $P$ and $\widehat{P}_k$ are bounded linear operators on $\mbox{$\cal B$}_1$. Moreover $P$ (resp.~every $\widehat{P}_k$) is assumed to have an invariant probability measure $\pi$ (resp.~an invariant bounded positive measure $\widehat\pi_k$) on $(\mathbb{X},\mbox{$\cal X$})$ such that $\pi(V)< \infty$ (resp. $\widehat\pi_k(V)< \infty$). We suppose that there exists a bounded measurable function $\phi_k:\mathbb{X}\rightarrow[0,+\infty)$ such that $\widehat{P}_k\phi_k = \phi_k$ and $\widehat\pi_k(\phi_k) = 1$. Throughout the paper, every $\widehat{P}_k$ is of finite rank on $\mbox{$\cal B$}_1$ and the sequence $\{\widehat{P}_k\}_{k\ge 1}$ converges to $P$ in the weak sense (\ref{C01}) below. Finally we assume that \begin{equation} \label{cond-phi-k} \lim_{k\rightarrow+\infty} \pi(\phi_k) = 1 \quad \text{and} \quad\lim_{k\rightarrow+\infty} \widehat\pi_k(1_\mathbb{X}) = 1 . \end{equation} These two conditions are necessary for the sequence $\{\widehat\pi_k\}_{k\ge 1}$ to converge to $\pi$ in total variation distance since $\pi(\phi_k) - 1 = (\pi - \widehat\pi_k)(\phi_k)$ and $\widehat\pi_k(1_\mathbb{X}) - 1 = (\widehat\pi_k-\pi)(1_\mathbb{X})$. When $P$ is a Markov kernel on $\mathbb{X}:=\N$, a typical instance of sequence $\{\widehat{P}_k\}_{k\ge 1}$ is obtained by considering the extended sub-Markov kernel $\widehat{P}_k$ derived from the linear augmentation (in the last column) of the $(k+1)\times (k+1)$ northwest corner truncation $P_k$ of $P$ (e.g. see \cite{Twe98}). Then $\widehat\pi_k$ is the (extended) probability measure on $\N$ derived from the $P_k$-invariant probability measure and $\phi_k = 1_{B_k}$ with $B_k := \{0,\ldots,k\}$. In this work, the connection between the $V$-geometrical ergodicity of $P$ and that of $\widehat{P}_k$ is investigated. These properties are defined as follows. \leftmargini 0em {\it \begin{itemize} \item[] \textbf{ $P$ (resp.~$\widehat{P}_k$) is said to be $V$-geometrically ergodic} if there exist some rate $\rho\in(0,1)$ (resp.~$\rho_k\in(0,1)$) and constant $C>0$ (resp.~$C_k>0$) such that \begin{gather} \forall n\geq 0,\ \sup_{f\in{\cal B}_1,\|f\|_1\leq 1} \|P^nf- \pi(f)1_\mathbb{X}\|_1 \leq C\, \rho^n \tag{$V$}\label{V-geo-cond} \\ respectively: \quad \forall n\geq 0,\ \sup_{f\in{\cal B}_1,\|f\|_1\leq 1}\|{\widehat{P}_k}^{\;n} f - \widehat\pi_k(f)\phi_k\|_1 \leq C_k\, {\rho_k}^n. \qquad \qquad \tag{$V_k$} \label{Vk-geo-cond} \end{gather} \end{itemize}} \noindent Specifically, the two following issues are studied. \leftmargini 2.4em \begin{enumerate}[(Q1)] {\it \item \label{1} Suppose that $P$ is $V$-geometrically ergodic. 
\begin{enumerate}[(a)] \item Is $\widehat{P}_k$ a $V$-geometrically ergodic kernel for $k$ large enough? \item When $(\rho,C)$ is known in \emph{(\ref{V-geo-cond})}, can we deduce explicit $(\rho_k,C_k)$ in \emph{(\ref{Vk-geo-cond})} from $(\rho,C)$? \item Does the total variation distance $\| \widehat\pi_k - \pi \|_{TV}$ go to $0$ when $k\rightarrow+\infty$, and can we obtain an explicit bound for $\| \widehat\pi_k - \pi \|_{TV}$ when $(\rho,C)$ is known in~\emph{(\ref{V-geo-cond})}? \end{enumerate} \item \label{2} Suppose that $\widehat{P}_k$ is $V$-geometrically ergodic for every $k$. \begin{enumerate}[(a)] \item Is $P$ a $V$-geometrically ergodic kernel? \item When $(\rho_k,C_k)$ is known in \emph{(\ref{Vk-geo-cond})}, can we deduce explicit $(\rho,C)$ in \emph{(\ref{V-geo-cond})} from $(\rho_k,C_k)$, and consequently obtain a bound for $\| \widehat\pi_k - \pi \|_{TV}$ using the last part of (Q\ref{1}$(c)$)? \end{enumerate}} \end{enumerate} A natural way to solve (Q\ref{1}) is to see $\widehat{P}_k$ as a perturbed operator of $P$, and vice versa for (Q\ref{2}). The standard perturbation theory requires that $\{\widehat{P}_k\}_{k\ge 1}$ converges to $P$ in operator norm on $\mbox{$\cal B$}_1$. Unfortunately this condition may be very restrictive even if $\mathbb{X}$ is discrete (e.g.~see \cite{ShaStu00,FerHerLed11}): for instance, in our application to truncation of discrete kernels in Section~\ref{sec-appli}, this condition never holds (see Remark~\ref{Annexe_Continuity}). Here we use the weak perturbation theory due to Keller and Liverani \cite{KelLiv99,Liv01} (see also \cite{Bal00}) which invokes the weakened convergence property \begin{equation} \label{C01} \|\widehat{P}_k - P\|_{0,1} : = \sup_{f\in{\cal B}_0,\|f\|_0\leq 1} \|\widehat{P}_kf- Pf\|_1 \xrightarrow[k\rightarrow +\infty ]{} 0, \tag{$C_{0,1}$} \end{equation} where $\mbox{$\cal B$}_0$ is the Banach space of bounded measurable $\C$-valued functions on $\mathbb{X}$ equipped with its usual norm $\|f\|_0:=\sup_{x\in\mathbb{X}}|f(x)|$. In the truncation context of Section~\ref{sec-appli} where $\mathbb{X}:=\N$, Condition (\ref{C01}) holds provided that $\lim_k V(k)= +\infty$. The price to pay for using~(\ref{C01}) is that two functional assumptions are needed. The first one involves the Doeblin-Fortet inequalities: such dual inequalities can be derived for $P$ and every $\widehat{P}_k$ from Condition (\ref{drift-gene-cond-gene}) below. The second one requires to have an accurate bound of the essential spectral radius $r_{ess}(P)$ of $P$ acting on $\mbox{$\cal B$}_1$. Issues (Q\ref{1}) and (Q\ref{2}) are solved in Sections~\ref{sec-V-to-Vk} and \ref{sec-Vk-to-V}. The question in (Q\ref{1})-(Q\ref{2}) concerning $\| \widehat\pi_k - \pi \|_{TV}$ is solved by arguments inspired by \cite[Lem.~7.1]{Liv01} (see Proposition~\ref{pro-P-hatP-en3fois}). The other questions in both (Q\ref{1})-(Q\ref{2}) are addressed using \cite{Liv01}. Theorems~\ref{pro-bv} and \ref{cor-Vk-to-V} provide a positive and explicit answer to the issues (Q\ref{1}) and (Q\ref{2}) under Conditions~(\ref{V-geo-cond}), (\ref{Vk-geo-cond}), (\ref{C01}) and the following uniform weak drift condition: \begin{equation} \label{drift-gene-cond-gene} \exists\, \delta\in(0,1),\ \exists L>0,\ \forall k\in\N^*\cup\{\infty\}, \quad \widehat{P}_kV \leq \delta V + L\, 1_{\mathbb{X}} \tag{\text{UWD}} \end{equation} where by convention $\widehat{P}_\infty := P$. 
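A remark that is not stated explicitly above, but is standard and follows at once from (\ref{drift-gene-cond-gene}) together with the fact that each $\widehat{P}_k$ is a nonnegative (sub-)Markov kernel: iterating the drift inequality gives, by induction on $n$,
\begin{equation*}
\forall n\geq 0,\ \forall k\in\N^*\cup\{\infty\}, \qquad \widehat{P}_k^{\;n} V \,\leq\, \delta^n V + L\big(1+\delta+\cdots+\delta^{n-1}\big)1_{\mathbb{X}} \,\leq\, \delta^n V + \frac{L}{1-\delta}\,1_{\mathbb{X}},
\end{equation*}
so that, since $1_{\mathbb{X}}\leq V$, we have $\|\widehat{P}_k^{\;n} f\|_1 \leq \big(1+L/(1-\delta)\big)\|f\|_1$ for every $f\in\mbox{$\cal B$}_1$, uniformly in $n$ and $k$. This is the kind of uniform control in $k$ that the common pair $(\delta,L)$ of (\ref{drift-gene-cond-gene}) provides.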
The notion of essential spectral radius and the material on quasi-compactness used for bounding $r_{ess}(P)$ are reported in Section~\ref{sec-mino}. Our results are applied in Section~\ref{sec-appli} to truncation of discrete Markov kernels. Some useful theoretical complements on the weak perturbation theory are postponed to Section~\ref{WP_truncation}. The estimates of Theorems~\ref{pro-bv} and \ref{cor-Vk-to-V} are all the more precise that the real number $\delta$ in (\ref{drift-gene-cond-gene}) and the bound for $r_{ess}(P)$ are accurate. The weak drift inequality used in (\ref{drift-gene-cond-gene}) is a simple and well-known condition introduced in \cite{MeyTwe93} for studying $V$-geometric ergodicity. Managing $r_{ess}(P)$ is more delicate, even in discrete case. To that effect, it is important not to confuse $r_{ess}(P)$ with $\rho$ in (\ref{V-geo-cond}). Actually Inequality~(\ref{V-geo-cond}) gives\footnote{ (See e.g.~\cite{KonMey03} or apply Definition~\ref{def-q-c} with $H:=\{f\in\mbox{$\cal B$}_1 : \pi(f)=0\}$.)} $r_{ess}(P)\leq \rho$, but this bound may be very inaccurate since there exist Markov kernels $P$ such that $\rho-r_{ess}(P)$ is close to one (see Definition~\ref{def-q-c}). Moreover note that no precise rate $\rho$ in (\ref{V-geo-cond}) is known a priori in Issue (Q\ref{2}). Accordingly, our study requires to obtain a specific control of the essential spectral radius $r_{ess}(P)$ of a general Markov kernel $P$ acting on $\mbox{$\cal B$}_1$. This study, which has its own interest, is presented in Section~\ref{sec-mino} where the two following results are presented. \leftmargini 3.8em \begin{enumerate}[(i)] {\it \item If $P$ satisfies some classical drift/minorization conditions, then $r_{ess}(P)$ can be bounded in terms of the constants of these drift/minorization conditions (see Theorem~\ref{main}). \item If $PV \leq \delta V + L\, 1_{\mathbb{X}}$ for some $\delta\in(0,1)$ and $L>0$, and if $P : \mbox{$\cal B$}_0\rightarrow\mbox{$\cal B$}_1$ is compact, then $r_{ess}(P)\leq \delta$ (see Proposition~\ref{pro-qc-bis}).} \end{enumerate} To the best of our knowledge, the general result~$(i)$ is not known in the literature. Result~$(ii)$ is a simplified version of \cite[Th.~3.11]{Wu04}. For convenience we present a direct and short proof using \cite[Cor.~1]{Hen93}. The bound $r_{ess}(P) \leq \delta$ obtained in $(ii)$ is more precise than that obtained in $(i)$ (and sometimes is optimal). The compactness property in $(ii)$ must not be confused with that of $P:\mbox{$\cal B$}_1\rightarrow\mbox{$\cal B$}_1$ or $P:\mbox{$\cal B$}_0\rightarrow\mbox{$\cal B$}_0$, which are much stronger conditions. If $P$ is an infinite matrix ($\mathbb{X}:=\N$), then $P:\mbox{$\cal B$}_0\rightarrow\mbox{$\cal B$}_1$ is compact when $\lim_{k}V(k)=+\infty$, while in general $P$ is compact neither on $\mbox{$\cal B$}_1$ nor on $\mbox{$\cal B$}_0$. Let us give a brief review of previous works related to Issues (Q\ref{1}) or (Q\ref{2}). Various probabilistic methods have been developed to derive explicit rate and constant $(\rho,C$) in Inequality~(\ref{V-geo-cond}) from the constants of drift conditions (see \cite{MeyTwe94,LunTwe96,Bax05} and the references therein). To the best of our knowledge, these methods, which are not concerned with approximation issues, provide a computable rate $\rho$ which is often unsatisfactory, except for reversible or stochastically monotone~$P$. 
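As a purely illustrative companion to the weakened convergence (\ref{C01}) in the discrete truncation setting mentioned above, the following short sketch computes $\|\widehat{P}_k-P\|_{0,1}$ for linearly augmented truncations of a toy birth-death kernel on $\N$ with $V(n)=2^n$; on a discrete state space this norm reduces to $\sup_{x} V(x)^{-1}\sum_{y}|\widehat{P}_k(x,y)-P(x,y)|$. The chain, the function $V$ and the truncation sizes are our own choices, not taken from the text.
\begin{verbatim}
# Toy illustration of the weak norm ||P_k - P||_{0,1}; the chain, V and the
# truncation sizes are illustrative choices only.
import numpy as np

N = 200                  # finite window standing in for the state space N
p, q = 0.3, 0.5          # birth and death probabilities of a lazy birth-death chain

P = np.zeros((N, N))     # Markov kernel on {0, ..., N-1}, reflecting at the edges
for x in range(N):
    P[x, min(x + 1, N - 1)] += p
    P[x, max(x - 1, 0)] += q
    P[x, x] += 1 - p - q

V = 2.0 ** np.arange(N)  # V(n) = 2^n, so V(k) -> infinity

def truncation(P, k):
    # (k+1)x(k+1) northwest corner truncation, linearly augmented in the last
    # column, extended by zero outside {0, ..., k}.
    Pk = np.zeros_like(P)
    Pk[: k + 1, : k + 1] = P[: k + 1, : k + 1]
    Pk[: k + 1, k] += P[: k + 1, k + 1 :].sum(axis=1)  # push the lost mass to column k
    return Pk

def weak_norm(Pk, P, V):
    # sup_x V(x)^{-1} * sum_y |Pk(x, y) - P(x, y)|
    return (np.abs(Pk - P).sum(axis=1) / V).max()

for k in (5, 10, 20, 40):
    print(k, weak_norm(truncation(P, k), P, V))   # decreases as k grows
\end{verbatim}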
Convergence of $\{\widehat\pi_k\}_{k\geq1}$ to $\pi$ has been studied in \cite{Twe98} for truncation approximations of $V$-geometrically ergodic Markov kernels with discrete $\mathbb{X}$. Specifically it is proved that $\sup_{|f|\leq V}|\widehat\pi_k(f) - \pi(f)|$ goes to $0$ when $k\rightarrow+\infty$. In particular
write the bit that works out which n-gram should come next! This bit goes inside the while loop we created above (as you might suspect). First, let's fetch a list of n-grams that would actually make sense coming next. // The substring that the next ngram in the chain needs to start with string nextStartsWith = lastNgram.Substring(1); // Get a list of possible n-grams we could choose from next List<string> nextNgrams = ngrams.FindAll(gram => gram.StartsWith(nextStartsWith)); With a bit of Linq (Language-INtrgrated Query), that isn't too tough :-) If you haven't seen linq before, then I'd highly recommend you check it out! It makes sorting and searching datasets much easier. The above is quite simple - I just filter our list of n-grams through a function that extracts all the ones that start with the appropriate letter. It's at this point that we can insert the second of our two stopping conditions. If there aren't any possible n-grams to pick from, then we can't continue. // If there aren't any choices left, we can't exactly keep adding to the new string any more :-( if(nextNgrams.Count == 0) break; With our list of possible n-grams, we're now in a position to pick one at random to add to the word. It's LINQ to the rescue again: // Pick a random n-gram from the list string nextNgram = nextNgrams.ElementAt(rand.Next(0, nextNgrams.Count)); This is another simple one - it just extract the element in the list at a random location in the list. In hindsight I could have used the array operator syntax here ([]), but it doesn't really matter :-) Now that we've picked the next n-gram, we can add it to the word we're building: // Add the last character from the n-gram to the string we're building result += nextNgram[nextNgram.Length - 1]; and that's the markov chain practically done! Oh, we mustn't forget to update the lastNgram variable (I forgot this when building it :P): lastNgram = nextNgram; And that wraps up our unweighted markov chain. Here's the whole class in full: using System; using System.Collections.Generic; using System.Linq; namespace SBRL.Algorithms.MarkovGrams { /// <summary> /// An unweighted character-based markov chain. /// </summary> public class UnweightedMarkovChain { /// <summary> /// The random number generator /// </summary> Random rand = new Random(); /// <summary> /// The ngrams that this markov chain currently contains. /// </summary> List<string> ngrams; /// <summary> /// Creates a new character-based markov chain. /// </summary> /// <param name="inNgrams">The ngrams to populate the new markov chain with.</param> public UnweightedMarkovChain(IEnumerable<string> inNgrams) { ngrams = new List<string>(inNgrams); } /// <summary> /// Returns a random ngram that's currently loaded into this UnweightedMarkovChain. /// </summary> /// <returns>A random ngram from this UnweightMarkovChain's cache of ngrams.</returns> public string RandomNgram() { return ngrams[rand.Next(0, ngrams.Count)]; } /// <summary> /// Generates a new random string from the currently stored ngrams. /// </summary> /// <param name="length"> /// The length of ngram to generate. /// Note that this is a target, not a fixed value - e.g. passing 2 when the n-gram order is 3 will /// result in a string of length 3. Also, depending on the current ngrams this markov chain contains, /// it may end up being cut short. 
/// </param> /// <returns>A new random string.</returns> public string Generate(int length) { string result = RandomNgram(); string lastNgram = result; while(result.Length < length) { // The substring that the next ngram in the chain needs to start with string nextStartsWith = lastNgram.Substring(1); // Get a list of possible n-grams we could choose from next List<string> nextNgrams = ngrams.FindAll(gram => gram.StartsWith(nextStartsWith)); // If there aren't any choices left, we can't exactly keep adding to the new string any more :-( if(nextNgrams.Count == 0) break; // Pick a random n-gram from the list string nextNgram = nextNgrams.ElementAt(rand.Next(0, nextNgrams.Count)); // Add the last character from the n-gram to the string we're building result += nextNgram[nextNgram.Length - 1]; lastNgram = nextNgram; } return result; } } } I've released the full code for my markov generator (with a complete command line interface!) on my personal git server. The repository can be found here: sbrl/MarkovGrams. To finish this post off, I'll leave you with a few more words that I've generated using it :D 1 2 3 4 5 mecuc uipes jeraq acrin nnvit blerbopt drsacoqu yphortag roirrcai elurucon pnsemophiqub omuayplisshi udaisponctec mocaltepraua rcyptheticys eoigemmmpntartrc rattismemaxthotr hoaxtancurextudu rrgtryseumaqutrc hrpiniglucurutaj ## Markov Chains Part 1: N-Grams After wanting to create a markov chain to generate random words for ages, I've recently had the time to actually write one :D Since I had a lot of fun writing it, I thought I'd share it here. A markov chain, in simple terms, is an algorithm to take a bunch of input, and generate a virtually unlimited amount of output in the style of the input. If I put my 166 strong wordlist of sciencey words through a markov chain, I get a bunch of words like this: a b c d raccession bstrolaneu aticl lonicretiv mpliquadri tagnetecta subinverti catorp ssignatten attrotemic surspertiv tecommultr ndui coiseceivi horinversp icreflerat landargeog eograuxila omplecessu ginverceng evertionde chartianua spliqui ydritangt grajecubst ngintagorp ombintrepe mbithretec trounicabl ombitagnai ccensorbit holialinai cessurspec dui mperaneuma yptintivid ectru llatividet imaccellat siondl tru coo treptinver gnatiartia nictrivide pneumagori entansplan uatellonic Obviously, some of the above aren't particularly readable, but the majority are ok (I could do with a longer input wordlist, I think). To create our very own markov chain that can output words like the above, we need 2 parts: An n-gram generator, to take in the word list and convert it into a form that we can feed into the second part - the markov chain itself. In this post, I'm going to just look at the n-gram generator - I'll cover the markov chain itself in the second part of this mini-series. An n-gram is best explained by example. Take the word refractive, for example. Let's split it up into chunks: ref efr fra rac act cti tiv ive See what I've done? I've taken the original word and split it into chunks of 3, but I've only moved along the word by 1 character at a time, so some characters have been duplicated. These are n-grams of order 3. The order, in the case of an n-gram, is the number of characters per chunk. We could use any order we like: refra efrac fract racti activ ctive The order of the above is 5. If you're wondering how this could possibly be useful - don't worry: All will be explained in due time :-) For now though, writing all these n-grams out manually is rather annoying and tedious. 
Let's write some code! Generating n-grams from a single word like we did above is actually pretty simple. Here's what I came up with: /// <summary> /// Generates a unique list of n-grams from the given string. /// </summary> /// <param name="str">The string to n-gram-ise.</param> /// <param name="order">The order of n-gram to generate.</param> /// <returns>A unique list of n-grams found in the specified string.</returns> public static IEnumerable<string> GenerateFlat(string str, int order) { List<string> results = new List<string>(); for(int i = 0; i <= str.Length - order; i++) { // Grab the chunk of `order` characters starting at this position results.Add(str.Substring(i, order)); } return results.Distinct(); } I'm using C♯ here, but you can use whatever language you like. Basically, I enter a loop and crawl along the word, adding the n-grams I find to a list, which I then de-duplicate and return. Generating n-grams for just one word is nice, but we need to process a whole bunch of words. Thankfully, that's easy to automate too with a sneaky overload: /// <summary> /// Generates a unique list of n-grams from the given list of words. /// </summary> /// <param name="words">The words to turn into n-grams.</param> /// <param name="order">The order of n-gram to generate.</param> /// <returns>A unique list of n-grams found in the given list of words.</returns> public static IEnumerable<string> GenerateFlat(IEnumerable<string> words, int order) { List<string> results = new List<string>(); foreach(string word in words) { // Run each word through the single-word overload above results.AddRange(GenerateFlat(word, order)); } return results.Distinct(); } All the above does is take a list of words, run them all through the n-gram generation method we wrote above and return the de-duplicated results. Here are a few that it generated from the same wordlist I used above in order 3: 1 2 3 4 5 6 7 8 hor sig ign gna str tre ren ngt sol old lde oli sor sou oun tel lla sub ubs bst tem emp mpe atu tur err ert thr hre dim ime men nsi ack cki kin raj aje jec tor ans nsa sat nsf sfe nsl sla slu luc uce nsm smi nsp are nsu tan Next time, I'll show you my (unweighted) markov chain I've written that uses the n-grams generated
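If you fancy experimenting with the same idea outside of C♯, here's a rough Python sketch of the whole pipeline - the n-gram generator plus the unweighted chain - following the same logic as the code above. It's my own translation for illustration (the function names are made up), not code from the MarkovGrams repository:

```python
import random

def generate_ngrams(words, order=3):
    """Split every word into overlapping chunks of `order` characters."""
    ngrams = set()
    for word in words:
        for i in range(len(word) - order + 1):
            ngrams.add(word[i:i + order])
    return list(ngrams)

def generate_word(ngrams, length=10):
    """Unweighted markov chain: start from a random n-gram, then repeatedly
    pick a random n-gram whose start matches the last n-gram's tail, and
    append its final character to the word being built."""
    result = random.choice(ngrams)
    last = result
    while len(result) < length:
        starts_with = last[1:]
        candidates = [g for g in ngrams if g.startswith(starts_with)]
        if not candidates:
            break  # dead end: nothing follows on from the last n-gram
        nxt = random.choice(candidates)
        result += nxt[-1]
        last = nxt
    return result

words = ["refractive", "reflection", "transmission", "amplitude"]
print(generate_word(generate_ngrams(words, order=3), length=10))
```

Feed it a decent-sized wordlist and you'll get output much like the examples above.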
was still awake at this point in the show: dan.awake <- c( TRUE,TRUE,TRUE,TRUE,TRUE,FALSE,FALSE,FALSE,FALSE,FALSE ) Now that we’ve got all three variables in the workspace (assuming you loaded the nightgarden.Rdata data earlier in the chapter) we can construct our three way cross-tabulation, using the table() function. xtab.3d <- table( speaker, utterance, dan.awake ) xtab.3d ## , , dan.awake = FALSE ## ## utterance ## speaker ee onk oo pip ## makka-pakka 0 2 0 2 ## tombliboo 0 0 1 0 ## upsy-daisy 0 0 0 0 ## ## , , dan.awake = TRUE ## ## utterance ## speaker ee onk oo pip ## makka-pakka 0 0 0 0 ## tombliboo 1 0 0 0 ## upsy-daisy 0 2 0 2 Hopefully this output is fairly straightforward: because R can’t print out text in three dimensions, what it does is show a sequence of 2D slices through the 3D table. That is, the , , dan.awake = FALSE part indicates that the 2D table that follows below shows the 2D cross-tabulation of speaker against utterance only for the dan.awake = FALSE instances, and so on.128 ## Ordered factors One topic that I neglected to mention when discussing factors previously (Section 4.7 is that there are actually two different types of factor in R, unordered factors and ordered factors. An unordered factor corresponds to a nominal scale variable, and all of the factors we’ve discussed so far in this book have been unordered (as will all the factors used anywhere else except in this section). However, it’s often very useful to explicitly tell R that your variable is ordinal scale, and if so you need to declare it to be an ordered factor. For instance, earlier in this chapter we made use of a variable consisting of Likert scale data, which we represented as the likert.raw variable: likert.raw ## [1] 1 7 3 4 4 4 2 6 5 5 We can declare this to be an ordered factor in by using the factor() function, and setting ordered = TRUE. To illustrate how this works, let’s create an ordered factor called likert.ordinal and have a look at it: likert.ordinal <- factor( x = likert.raw, # the raw data levels = seq(7,1,-1), # strongest agreement is 1, weakest is 7 ordered = TRUE ) # and it's ordered print( likert.ordinal ) ## [1] 1 7 3 4 4 4 2 6 5 5 ## Levels: 7 < 6 < 5 < 4 < 3 < 2 < 1 Notice that when we print out the ordered factor, R explicitly tells us what order the levels come in. Because I wanted to order my levels in terms of increasing strength of agreement, and because a response of 1 corresponded to the strongest agreement and 7 to the strongest disagreement, it was important that I tell R to encode 7 as the lowest value and 1 as the largest. Always check this when creating an ordered factor: it’s very easy to accidentally encode your data “upside down” if you’re not paying attention. In any case, note that we can (and should) attach meaningful names to these factor levels by using the levels() function, like this: levels( likert.ordinal ) <- c( "strong.disagree", "disagree", "weak.disagree", "neutral", "weak.agree", "agree", "strong.agree" ) print( likert.ordinal ) ## [1] strong.agree strong.disagree weak.agree neutral ## [5] neutral neutral agree disagree ## [9] weak.disagree weak.disagree ## 7 Levels: strong.disagree < disagree < weak.disagree < ... < strong.agree One nice thing about using ordered factors is that there are a lot of analyses for which R automatically treats ordered factors differently from unordered factors, and generally in a way that is more appropriate for ordinal data. However, since I don’t discuss that in this book, I won’t go into details. 
Like so many things in this chapter, my main goal here is to make you aware that R has this capability built into it; so if you ever need to start thinking about ordinal scale variables in more detail, you have at least some idea where to start looking! ## Dates and times Times and dates are very annoying types of data. To a first approximation we can say that there are 365 days in a year, 24 hours in a day, 60 minutes in an hour and 60 seconds in a minute, but that’s not quite correct. The length of the solar day is not exactly 24 hours, and the length of solar year is not exactly 365 days, so we have a complicated system of corrections that have to be made to keep the time and date system working. On top of that, the measurement of time is usually taken relative to a local time zone, and most (but not all) time zones have both a standard time and a daylight savings time, though the date at which the switch occurs is not at all standardised. So, as a form of data, times and dates suck. Unfortunately, they’re also important. Sometimes it’s possible to avoid having to use any complicated system for dealing with times and dates. Often you just want to know what year something happened in, so you can just use numeric data: in quite a lot of situations something as simple as this.year <- 2011 works just fine. If you can get away with that for your application, this is probably the best thing to do. However, sometimes you really do need to know the actual date. Or, even worse, the actual time. In this section, I’ll very briefly introduce you to the basics of how R deals with date and time data. As with a lot of things in this chapter, I won’t go into details because I don’t use this kind of data anywhere else in the book. The goal here is to show you the basics of what you need to do if you ever encounter this kind of data in real life. And then we’ll all agree never to speak of it again. To start with, let’s talk about the date. As it happens, modern operating systems are very good at keeping track of the time and date, and can even handle all those annoying timezone issues and daylight savings pretty well. So R takes the quite sensible view that it can just ask the operating system what the date is. We can pull the date using the Sys.Date() function: today <- Sys.Date() # ask the operating system for the date print(today) # display the date ## [1] "2018-12-30" Okay, that seems straightforward. But, it does rather look like today is just a character string, doesn’t it? That would be a problem, because dates really do have a numeric character to them, and it would be nice to be able to do basic addition and subtraction to them. Well, fear not. If you type in class(today), R will tell you that the class of the today variable is "Date". What this means is that, hidden underneath this text string that prints out an actual date, R actually has a numeric representation.129 What that means is that you actually can add and subtract days. For instance, if we add 1 to today, R will print out the date for tomorrow: today + 1 ## [1] "2018-12-31" Let’s see what happens when we add 365 days: today + 365 ## [1] "2019-12-30" This is particularly handy if you forget that a year is a leap year since in that case you’d probably get it wrong is doing this in your head. R provides a number of functions for working with dates, but I don’t want to talk about them in any detail. 
I will, however, make passing mention of the weekdays() function which will tell you what day of the week a particular date corresponded to, which is extremely convenient in some situations: weekdays( today ) ## [1] "Sunday" I'll also point out that you can use the as.Date() function to convert various different kinds of data into dates. If the data happen to
\in M^\sigma$ there \red{exists} a subvariety $\red{Z'_{x,y}}\subset Z_x$ such that $\theta$ induces isomorphism: $$U\cap M_y^+ \simeq (U\cap M_x^+)\times \red{Z'_{x,y}} \,.$$ \end{enumerate} \end{adf} \begin{pro} Suppose that a projective smooth \red{$\C^*$}-variety $M$ satisfies the local product condition. Then the cotangent variety $T^*M$ satisfies the $(\star)$ condition. \end{pro} \begin{proof} It is enough to prove that for every fixed point $F_0 \in M^\sigma$ there is an inclusion $$\overline{\nu^*(M_{F_0}^+\subset M)}\subset \bigsqcup_{F\in M^\sigma}\nu^*(M_{F}^+\subset M) \,. $$ It is equivalent to claim that for arbitrary fixed points $F_0,F$ $$\overline{\nu^*(M_{F_0}^+\subset M)}\cap T^*M_{|M_F^+}\subset \nu^*(M_{F}^+\subset M) \,. $$ Denote by $U$ the neighbourhood of $F$ from \red{the} definition of the local product condition. All of the subsets in the above formula are $\sigma$ equivariant. Thus\red{,} it is enough to prove that $$\overline{\nu^*(M_{F_0}^+\cap U\subset U)}\cap T^*U_{|M_F^+}\subset \nu^*(M_{F}^+\cap U\subset U) \,. $$ The \red{local} product property implies existence of isomorphisms $$ U\simeq M_F^+\times Z, \ \ M_F^+ \simeq M_F^+\times \{pt\}, \ \ M_{F_0}^+ \simeq M_F^+\times Z' \,, $$ for some subvariety $Z'\subset Z$ and point $pt\in Z$. Denote by $E$ the subbundle $$M_F^+\times T^*Z \subset T^*U.$$ Note that \begin{align*} &\nu^*(M_{F}^+\cap U\subset U)=E_{|M_{F}^+} \\ &\nu^*(M_{F_0}^+\cap U\subset U)=M_F^+\times \nu^*(Z'\subset Z) \subset E_{|M^+_{F_0}} \end{align*} Thus \begin{multline*} \overline{\nu^*(M_{F_0}^+\cap U\subset U)}\cap T^*U_{|M_F^+}\subset \overline{E_{|M^+_{F_0}}} \cap T^*U_{|M_F^+} \subset \\ \subset E \cap T^*U_{|M_F^+}=E_{|M_F^+}= \nu^*(M_{F}^+\cap U\subset U) \,. \end{multline*} \end{proof} \red{\section{Motivic Chern class} The motivic Chern class is defined in \cite{BSY}. The equivariant version is due to \cite[section 4]{AMSS} and \cite[section 2]{FRW}. Here we recall the definition of the torus equivariant motivic Chern class. Consult \cite{AMSS,FRW} for a detailed account.} \red{\begin{adf}[after {\cite[section 2.3]{FRW}}] Let $A$ be an algebraic torus. The motivic Chern class assigns to every $A$-equivariant map of quasi-projective $A$-varieties $f:X \to M$ an element $$mC_y^A(f)=mC_y^A(X \xto{f} M) \in G^A(M)[y]$$ such that the following properties are satisfied \begin{description} \item[1. Additivity] If a $A$-variety $X$ decomposes as a union of closed and open invariant subvarieties $X=Y\sqcup U$, then $$mC_y^A(X\xto{f} M)=mC_y^A(Y\xto{f_{|Y}} M)+mC_y^A(U\xto{f_{|U}} M)\,.$$ \item[2. Functoriality] For an equivariant proper map $f:M\to M'$ we have $$mC_y^A(X\stackrel{f\circ g}\to M')=f_*mC_y^A(X\stackrel{g}\to M)\,.$$ \item[3. Normalization] For a smooth $A$-variety $M$ we have $$mC_y^A(id_M)=\lambda_y(T^*M):=\sum_{i=0}^{\rank T^*M}[\Lambda^iT^*M]y^i \,.$$ \end{description} The motivic Chern class is the unique assignment satisfying the above properties. \end{adf}} \section{Comparison with the motivic Chern classes} \label{s:mC} In this section we aim to compare the stable envelopes for \red{the} trivial slope with the motivic Chern classes of BB-cells. Our main results are \begin{pro} \label{pro:mC} Let $M$ be a projective, smooth variety \red{equipped} with an action of an algebraic torus $A$. \red{Suppose that the fixed point set $M^A$ is finite.} \old{with a finite number of fixed points.} Consider the variety $X=T^*M$ \red{equipped} with the action of the torus $\T=\C^*\times A$. 
Choose any weight chamber $\mathfrak{C}$ of the torus $A$, polarization $T^{1/2}=TM$ and the trivial line bundle $\theta$ as a slope. Then\red{,} the elements $$ \frac{mC_{-y}^A(M^+_F \to M)}{y^{\dim M_F^+}} \in K^A(X)[y,y^{-1}] \simeq K^\T(X) \simeq K^\T(T^*X) $$ satisfy the axioms {\bf b)} and {\bf c)} of the stable envelope \red{$Stab^\theta(F)$.} \end{pro} \begin{rem} In this proposition we \red{do not} assume that $M$ \red{satisfies} the local product condition or even that $X$ satisfies the $(\star)$ condition. \end{rem} \begin{atw} \label{tw:mC} Consider the situation such as in proposition \ref{pro:mC}. Suppose that the variety $M$ with the action of a one dimensional torus $\sigma \in \mathfrak{C}$ satisfies the local product condition (definition \ref{df:prd}). Then \red{the element} $$ \frac{mC_{-y}^A(M^+_F \to M)}{y^{\dim M_F^+}} \in K^A(X)[y,y^{-1}] \simeq K^\T(X) \simeq K^\T(T^*X) $$ \old{determine}\red{is equal to} the $K$-theoretic stable envelope \red{$Stab^\theta(F)$.} \old{$y^{-\frac{1}{2}\dim M_F^+}Stab^\theta_{\mathfrak{C},T^{1/2}}(F)$.} \end{atw} \red{ Our main examples of varieties satisfying the local product property are homogenous spaces (see appendix \red{\hyperref[s:G/P]{B}}). Let $G$ be a reductive, complex Lie group with a chosen maximal torus $A$. Let $B$ be a Borel subgroup and $P$ a parabolic subgroup. We consider the action of the torus $A$ on the variety $G/P$.} \red{ A choice of weight chamber $\mathfrak{C} \subset\mathfrak{a}$ induces a choice of Borel subgroup $B_\mathfrak{C}\subset G$. Let $F\in (G/P)^A$ be a fixed point. It is a classical fact that the BB-cell $(G/P)_F^+$ (with respect to the chamber $\mathfrak{C}$) coincides with the $B_\mathfrak{C}$-orbit of $F$. These orbits are called Schubert cells. } \begin{cor} \red{In the situation presented above} the stable envelopes for \red{the} trivial slope are equal to the motivic Chern classes of Schubert cells $$\frac{mC_{-y}^A((G/P)^+_F \to G/P)}{y^{\dim (G/P)^+_F}} =y^{-\frac{1}{2}\dim (G/P)^+_F}Stab^\theta_{\mathfrak{C},T(G/P)}(1_{F}).$$ \end{cor} \begin{proof} Theorem \ref{tw:prod} \red{implies} that homogenous varieties satisfy the local product condition. Thus, the corollary follows from theorem \ref{tw:mC}. \end{proof} \begin{rem} In the case of flag varieties $G/B$\red{,} our results for \red{the} trivial slope agree with the previous results of \cite{AMSS} (theorem 7.5) for a small anti-ample slope up to \red{a} change of $y$ to $y^{-1}$. \red{This} difference is a consequence of the fact that in \cite{AMSS} the inverse action of $\C^*$ on the fibers of cotangent bundle is considered. \end{rem} Before the proof of theorem \ref{tw:mC} we make several simple observations. Let $\tilde{\nu}_F \simeq TM_{|F}$ denote the normal space to the fixed point $F$ in $M$. Denote by $\tilde{\nu}^-_F$ and $\tilde{\nu}^+_F$ its decomposition into the positive and negative part induced by the weight chamber $\mathfrak{C}$. Let $$\nu_F\red{\simeq TX_{|F}} \simeq TM_{|F}\oplus (T^*M_{|F}\red{\otimes \C_y})$$ denote the normal space to the fixed point $F$ in the variety $X=T^*M$. It is a straightforward observation that \begin{align*} & \nu_F^-\simeq\tilde{\nu}^-_F\oplus y(\tilde{\nu}^+_F)^* \red{\,,} \\ & \nu_F^+\simeq\tilde{\nu}^+_F\oplus y(\tilde{\nu}^-_F)^* \red{\,,} \\ &T^{1/2}_{F,>0}= \tilde{\nu}^+_F. \end{align*} In the course of proofs we use the following computation: \begin{alemat} \label{lem:comp} Let $V$ be a $\T$-vector space. 
We have an equality $$ \frac{\lambda_{-1}\left(y^{-1}V\right)}{\det V} =\frac{\lambda_{-y}(V^*)}{(-y)^{\dim V}} $$ in the $\T$-equivariant $K$-theory of a point. \end{alemat} \begin{proof} Both sides of the formula are multiplicative with respect to the direct sums of $\T$-vector spaces. Every $\T$-vector space decomposes as a sum of one dimensional spaces, so it is enough to check the equality for $\dim V=1$. Then it simplifies to trivial form: $$\frac{1-\frac{\alpha}{y}}{\alpha}=\frac{1-\frac{y}{\alpha}}{-y} ,$$ where $\alpha$ is \red{the} character of the action of the torus $\T$ on the linear space $V$. \end{proof} \begin{proof}[Proof of proposition \ref{pro:mC}] We start the proof by checking the axiom {\bf b)}. We need to show that \begin{align*} \frac{mC_{-y}^A(M^+_F \to M)_{|F}}{y^{\dim M_F^+}}= eu(\nu^-_F)\frac{(-1)^{\rank T^{1/2}_{F,>0}}}{\det T^{1/2}_{F,>0}} \,. \end{align*} which is equivalent to \begin{align} \label{wyr:b} \frac{mC_{-y}^A(M^+_F \to M)_{|F}} {(-y)^{\dim \tilde{\nu}^+_F}}= eu(\tilde{\nu}^{-}_F) \frac{\lambda_{-1}(y^{-1}\tilde{\nu}^{+}_F)}{\det \tilde{\nu}^+_F} \,. \end{align} The BB-cell $M_F^+$ is a locally closed subvariety. Choose an open neighbourhood $U$ of the fixed point $F$ in $M$ such that the morphism $M_F^+\cap U \subset U$ is a closed immersion. The functorial properties of the motivic Chern class (cf. paragraph 2.3 of \cite{FRW}, or theorem 4.2 from \cite{AMSS}) imply that: \begin{align*} &mC_{-y}^A\left(M^+_F \xto{i} M\right)_{|F}= mC_{-y}^A\left(M^+_F \cap U \xto{i} U\right)_{|F}= \\ &=i_*mC_{-y}^A\left(id_{M^+_F \cap U}\right)_{|F}= i_*\left(\lambda_{-y}\left(T^*(M^+_F \cap U)\right)\right)_{|F}= \lambda_{-y}\left(\tilde{\nu}^{+*}_F\right)eu(\tilde{\nu}^-_F) \,. \end{align*} So the left hand side of expression (\ref{wyr:b}) is equal to $$eu(\tilde{\nu}^-_F) \frac{\lambda_{-y}\left(\tilde{\nu}^{+*}_F\right)}{(-y)^{\dim \tilde{\nu}^{+}_F}}.$$ Lemma \ref{lem:comp} implies that the right hand \red{side} is also of this form. We proceed to the axiom {\bf c)}. Consider a pair of fixed points $F,F'$ such that $F'<F$. We need to prove the inclusion: \begin{align} \label{wyr:Ninc} N^A\left(\frac{mC_{-y}^A(M^+_F \to M)_{|F'}}{y^{\dim M_F^+}}\right) \subseteq N^A(eu(\nu^-_{F'}))-\det T^{1/2}_{F',>0} \end{align} and take care of the distinguished point \begin{align} \label{wyr:Npt} -\det T^{1/2}_{F',>0} \notin N^\red{A}\left(\frac{mC_{-y}^A(M^+_F \to M)_{|F'}}{y^{\dim M_F^+}}\right). \end{align} Let's concentrate on the inclusion (\ref{wyr:Ninc}). There is an equality of polytopes $$ N^A(eu(\nu^-_{F'}))-\det T^{1/2}_{F',>0}= N^A\left(eu(\tilde{\nu}^{-}_{\red{F'}}) \frac{\lambda_{-1}(y^{-1}\tilde{\nu}^{+}_{\red{F'}})}{\det \tilde{\nu}^+_{\red{F'}}}\right)= N^A\left(eu(\tilde{\nu}^{-}_{F'})\lambda_{-y}\left(\tilde{\nu}^{+*}_{F'}\right)\right), $$ where the second equality follows from lemma \ref{lem:comp}. After substitution of $y=1$ \red{into} the class $eu(\tilde{\nu}^{-}_{F'})\lambda_{-y}\left(\tilde{\nu}^{+*}_{F'}\right)$ we obtain the class $eu(\tilde{\nu}_{F'})$. Thus, proposition \ref{lem:New} (e) implies that \old{there is an inclusion} $$ N^A(eu(\tilde{\nu}_{F'})) \subseteq N^A\left(eu(\tilde{\nu}^{-}_{F'})\lambda_{-y}\left(\tilde{\nu}^{+*}_{F'}\right)\right)\,. 
$$ Moreover $$N^A\left(\frac{mC_{-y}^A(M^+_F \to M)_{|F'}}{y^{\dim M_F^+}}\right)= N^A\left(mC_{-y}^A(M^+_F \to M)_{|F'}\right)= N^A\left(mC_{y}^A(M^+_F \to M)_{|F'}\right) \,.$$ Theorem 4.2 from \cite{FRW} implies that there is an inclusion $$ N^A\left(mC_{y}^A(M^+_F \to M)_{|F'}\right) \subseteq N^A(eu(\tilde{\nu}_{F'})) \,. $$ To conclude\red{,} we have proven inclusions \begin{multline*} N^A\left(mC_{y}^A(M^+_F \to M)_{|F'}\right) \subseteq N^A(eu(\tilde{\nu}_{F'}))\subseteq \\ \subseteq N^A\left(eu(\tilde{\nu}^{-}_{F'})\lambda_{-y}\left(\tilde{\nu}^{+*}_{F'}\right)\right)= N^A(eu(\nu^-_{F'}))-\det T^{1/2}_{F',>0} \,. \end{multline*} The next step is the proof of the formula (\ref{wyr:Npt}). We proceed in a manner similar to the proof of corollary 4.5 in \cite{FRW}. Consider a general enough one dimensional subtorus $\sigma\in \mathfrak{C}$. Proposition \ref{lem:Npi} implies that \begin{align} \label{wyr:gens} N^\sigma\left(mC_{y}^A(M^+_F \to M)_{|F'}|_\sigma\right)=\pi_\sigma\left(N^A\left(mC_{y}^A(M^+_F \to M)_{|F'}\right)\right). \end{align} Theorem 4.2 from \cite{FRW} (cf. also theorem 10 from \cite{WeBB}), with the limit in $\infty$ changed to the limit in $0$ to get positive BB-cell instead of negative one, implies that $$ \lim_{\xi\to 0}\left(\left.\frac{mC_y^A(M_F^+\subset M)_{|F'}}{eu(\red{\tilde{\nu}}_{F'})}\right|_\sigma\right)=\;\chi_y(M_F^+\cap M^+_{F'})=\chi_y(\varnothing)=0\,, $$ \red{where $\xi$ is the chosen primitive character of the torus $\sigma$} and the class $\chi_y$ is the Hirzebruch genus (cf. \cite{chiy, BSY}). Thus\red{,} the lowest term of line segment $N^\sigma\left(mC_{y}^A(M^+_F \to X)_{|F'}|_\sigma\right)$ is greater than the lowest term of line segment $ N^\sigma\left(eu(\red{\tilde{\nu}}_{F'})|_\sigma\right)$, which is equal to $\pi_\sigma(-\det T^{1/2}_{F',>0})$ by proposition \ref{lem:ver}. Thus $$\pi_\sigma(-\det T^{1/2}_{F',>0}) \notin \pi_\sigma\left(N^A\left(mC_{y}^A(M^+_F \to X)_{|F'}\right)\right).$$ \red{This} implies $$-\det T^{1/2}_{F',>0} \notin N^A\left(\frac{mC_{-y}^A(M^+_F \to X)_{|F'}}{y^{\dim M_F^+}}\right),$$ as demanded in (\ref{wyr:Npt}). \end{proof} To prove theorem \ref{tw:mC} we need the following technical lemma. \begin{alemat}[cf. {\cite[Remark after Theorem 3.1]{RTV'}} {\cite[Lemma 5.2-4]{RTV}}] \label{lem:supp} Let
something similar, basically $$Q\equiv I\cdot t = C\cdot V$$ The current multiplied by the time for which the capacitor is capable of producing it is equal to the capacitance times the voltage at the beginning, before it gets discharged. We want to know how the components of the circuits influence currents and voltages because these are the basic quantities circuits work with. Currents go through wires and voltages are provided e.g. by batteries. Resistors affect the behavior of circuits according to their own rules and the constants $R,C$ describe how. Inductance of inductors (coils etc.) is similar except that the time appears in the opposite way: $V=L \cdot dI/dt$. The voltage of the inductor is proportional to the time derivative of the current (the rate at which the current is changing with time), and the coefficient is known as inductance. So components of circuits have some effect on voltages and currents – the only major "intrinsically electromagnetic quantities" that are relevant in a circuit – and the circuits also operate in time, which means that we may want to know how the currents or voltages are changing or how these changes are correlated with other things. A circuit achieves a certain job, and capacitors and inductors (and especially transistors!) may be shrunk while the functionality of the circuit stays the same. That's why we need to know the relevant or required parameters to "keep the functionality the same". We use $C=Q/V$ because those were useful things to measure. It's often easy to forget, but many of the equations we use are chosen because they work, and because other equations didn't work. Never underestimate that part of the reality. We don't use "charge per unit volume" because that number is not constant. You can charge a capacitor up without changing its volume. Charge divided by voltage is constant. I think the most important question you asked is: Or, according to the equation $C=\frac{Q}{V}$, why would increasing voltage, while keeping charge constant, have any effect on the ability of a body to store charge. I like this question because it's slightly backwards, suggesting you're thinking about it in a different way. I like when people think about something backwards, because it shows they're really thinking, and willing to take a stab at trying to figure out what's going on! The trick to this is that you will find you can't increase the voltage across the capacitor while keeping the charge constant, without doing some physical modifications to the capacitor itself. Reality simply won't let you. If you try to increase the voltage, you will find exactly enough charge will flow into the capacitor to balance the voltage out. More interestingly, consider the case where you instantaneously change the voltage, say from 1V to 10V. In theory, that should "increase the voltage without increasing the charge," because there hasn't been any time for current to flow. You could draw this up in a circuit simulator, like PSPICE, and change the voltage at t=0. It looks like you have to be changing the capacitance. In reality, we see a different effect. What we see is that, even though we increased the voltage over the system, the voltage across the capacitor will actually remain exactly the same! This makes sense from the equation, because we know the charge and capacitance didn't change, so voltage can't change. But now it looks like we have a broken circuit: somehow we have 10V on the input, but only 1V over the capacitor! Everyone knows that doesn't add up.
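In equations, the contradiction looks like this (a quick sketch that treats the source and wires as perfectly ideal, which is precisely the assumption that's about to break down): from $Q = C V_C$, a jump of the capacitor voltage from 1V to 10V would require an extra charge of $$\Delta Q = C\,\Delta V_C = C \cdot 9\,\mathrm{V},$$ and since the charging current is $I = C\,\dfrac{dV_C}{dt}$, delivering that charge in zero time would take an infinitely large current spike. No real source can supply that, which is the hint that something is missing from the ideal picture.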
What we find happens in reality is that there are "parasitic resistances" in every device we use. The battery has a resistance, the capacitor has a resistance, even those wires you use to connect them have a resistance. So your real circuit isn't just a voltage source and a capacitor, it's a voltage source, a capacitor, and a bunch of small resistors. In 99% of circumstances, we can ignore these resistors because they just don't change the circuit all that much. However, in this slightly pathological situation, they actually matter a lot. They are what "soak up" that extra voltage. You'll end up with 1V across the capacitor and 9V across the sum total of all of those resistors. Now the fun begins. Because current through a resistor obeys $V=IR$, we can calculate the current going through the system. The more ideal the wires and batteries are, the more current has to flow to account for that 9V. That current is a flow of charge. Where does it flow to? The capacitor. You will immediately start seeing the charge on the capacitor go up, as current flows through it, until eventually there's enough charge on the capacitor to generate 10V of potential across it. At that point, there's no more voltage left across the resistors, so the current drops to 0, and the circuit stays constant. (Realistically there are some exponential terms in there, and it never technically gets to 10V exactly, but in realistic scenarios, we tend to get close enough to handwave away that set of extra complexities) • Another aspect: One can pull a charged capacitor apart, without changing the charge. But as different spatial dimensions mean different capacity, the formula suggests that the voltage should increase by this, even without any external power source connected - and it really does! (Actually, the power source is your muscles pulling the thing apart) Apr 28 '16 at 21:40 A capacitor is used to store energy in the form of an electric field. This electric field is created by the charges on the plates of the capacitor. So, basically you are storing charge on capacitors. Clearly, you reply "I may store 1mC or 100mC, depending on the potential difference you apply across the capacitor." So, you need a standard to tell how much charge you can store at some universal condition. The standard is 1V. Hence, the charge stored in a capacitor at the standard of 1V is called the capacitance of the capacitor. The standard was chosen as 1V because calculations become easy. Why don't we measure the ability to store something by the volume it takes, i.e. why not charge per unit volume? $C = \epsilon_0\frac{A}{d} = \epsilon_0\frac{Ad}{d^2} = \epsilon_0\frac{V}{d^2}$ So, there is a relationship for volume too. But the relation is not too direct. If you keep d constant and increase V, the charge you can store increases. Instead, if you keep A constant and then change V, it decreases. Charge stored per unit volume can actually be given other names, like charge density (or name it Smith :-) as you want). This term may be useful to calculate the size of capacitor required in any device. But the more direct use is of the potential difference across the capacitor. Changing V for storing charge is much easier than changing the volume of capacitors. Or, according to the equation $C=\frac{Q}{V}$, why would increasing voltage, while keeping charge constant, have any effect on the ability of a body to store charge. You are storing charge in a capacitor. If you apply more PD, you can store more charge (I need not explain it).
If you can store more charge and hence more energy for the same PD applied, won't it make you happy? So, capacitance is charge stored, and if you can store more charge for the same PD of 1V, you say it has more capacitance. • I myself like my answer. +1 to me. Apr 28 '16 at 15:47 I understand that capacitance is the ability of a body to store an electrical charge and the formula is $C = {Q \over V}$ Perhaps you just need to stop thinking of capacitance as that. "Capacitance" sounds like "capacity", which leads to an intuitive trap like this: If I have a basket with a capacity of 2 apples, then a basket with more capacity can hold more than 2 apples. So if
compared with Scenario I (see Figure~\ref{peak2}). We can also deduce that ToU-based tariffs perform worse than flat tariffs (\textit{Flat} and \textit{FlatD}) as DER is progressively added. This is due to the creation of new peaks when all batteries charge at off-peak times to minimise customers' electricity costs. \subsection{Annual electricity cost} \par In this section, we analyse the annual electricity costs for all scenarios using the results from Section~\ref{annualcost}, as illustrated in Figure~\ref{annual_cost}. Overall, customers pay less for electricity as DER is progressively added. While \textit{demand-based tariffs} result in a lower electricity cost compared to \textit{energy-based tariffs} in Scenario I, this slightly levels off in Scenarios II and III. This is because when prosumers' grid power import is clipped due to demand charges, they compensate for this by exporting more power to the grid. Nevertheless, the FiT rates are small compared to the retail rates, so the net savings are minimal. With PV and batteries (Scenario III), however, large power export pays off under a \textit{ToU} tariff, which results in the least annual electricity cost for consumers, but this might not be the most beneficial for DNSPs. Generally, we can conclude that customers are likely to be indifferent between these tariff types, since the annual cost values are quite close. \subsection{Effects of network tariffs on line loading} \par In this section, we analyse the feeder head loading for the different PV-battery penetration levels (Figure~\ref{loading_voltage_profiles_a}). The loading levels are generally high because we have shown the phases with the highest loading (other phases follow a similar pattern) for each feeder and also examined the maximum feeder head loading over the year for each MC simulation. The results show that the \textit{ToU} tariff performs worst as the battery penetration level increases, which is in conformity with the results in~\cite{pimm2018time}. This is due to the batteries' response to ToU pricing by charging at off-peak times, thereby creating new peaks. Furthermore, ToU-based tariffs (\textit{ToU} and \textit{ToUD}) can adversely affect line loading due to large grid imports at off-peak times and reverse power flows resulting from power export. This can be mitigated by adding a demand charge (\textit{ToUD}) to at least clip the grid import levels, with the aid of batteries. As observed, line loading increased with higher battery penetration under the \textit{ToU} tariff, while it was reduced under the \textit{ToUD} tariff. By contrast, the \textit{Flat} tariff results in lower line loading for all feeders. By adding a demand charge to the flat tariff (\textit{FlatD}), line loading is reduced even further, as seen in all three feeders. This works well with increasing battery penetration in both fairly balanced (Feeders 1 and 2) and unbalanced LV networks (Feeder 3), since there are no incentives for large grid power exports as with ToU tariffs. \subsection{Effects of network tariffs on customer voltage level} In terms of customer voltage profiles, Figure~\ref{loading_voltage_profiles_b} shows that the \textit{ToU} tariff results in more voltage problems in all three feeders compared to the other tariffs. This is particularly obvious in the case of the unbalanced feeder (Feeder 3), but can be mitigated by adding a demand charge to the ToU tariff (\textit{ToUD}). In this case, batteries are useful in reducing voltage problems.
The \textit{Flat} tariff, on the other hand, performs better than ToU-based tariffs in keeping customer voltage at the right levels. Again, by adding a demand charge to the flat tariff (\textit{FlatD}), there is a slight improvement in the customer voltage profiles. \section{Conclusions and further work} \label{conclusion} \par In this research, we have shown that in the presence of DER, adding a peak demand charge to either a Flat or ToU tariff effectively reduces peak demand and, subsequently, line loading. \par To reduce a customer's peak demand, we have proposed a computationally efficient optimisation formulation that avoids the computationally expensive min-max formulation used in alternative approaches. We have demonstrated that the novel formulation, which can be seamlessly integrated into a customer’s HEMS, can be used in conjunction with DER-specific tariffs to achieve better network management and a more equitable allocation of network charges. Generally, flat tariffs perform better than ToU tariffs for mitigating voltage problems and alleviating line congestion. We conclude that, in the context of reducing network peaks, flat tariffs with a peak demand charge will be most beneficial for DNSPs. With respect to customer economic benefits, the best tariff depends on the amount of DER a customer possesses. However, the cost savings achieved by switching to another tariff type are marginal. Moreover, with reference to our previous work (all customers without EWH)~\cite{azuatalam2017impacts}, we can also conclude that the EWH has equal impacts across all tariff types in terms of line loading. However, with EWH, the line loading is generally higher. \par In this study, we have not explicitly tested these tariffs for cost-reflectivity, although this is implicit in the results. In this regard, our next task will focus on the design of these tariffs using established principles in economic theory rather than using already published tariffs from DNSPs.
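As an illustration of the idea (a sketch consistent with the nomenclature, not necessarily the exact constraints of the proposed formulation), a monthly peak demand charge can be modelled without a nested min-max by introducing the dummy variable $\hat{p}$: the term $\sum_{m\in\mathcal{M}} T^{\mathrm{pk}}\,\hat{p}_m$ is added to the cost objective together with the linear constraints $$\hat{p}_m \ \geq\ p^{\mathrm{g+}}_{h,d'} \qquad \forall\, h\in\mathcal{H},\ d'\in\mathcal{D}' \,,$$ so that, at the optimum, $\hat{p}_m$ settles at the monthly peak grid import $p^{\mathrm{pk}}_m = \max_{h,d'} p^{\mathrm{g+}}_{h,d'}$ and the problem remains a MILP.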
\bibliographystyle{elsarticle-num} \subsection*{Sets} \nomenclature[A]{$\mathcal{D}$}{Set of days, $d \in \mathcal{D}$ in a year, $\mathcal{D} = \{1,...,365\}$} \nomenclature[A]{$\mathcal{D}'$}{Set of days, $d' \in \mathcal{D}'$ in a month, $\mathcal{D}' \subset \mathcal{D}$} \nomenclature[A]{$\mathcal{H}$}{Set of half-hour time-slots, $h \in \mathcal{H}$ in a day, \\ $\mathcal{H} = \{1,...,48\}$} \nomenclature[A]{$\mathcal{M}$}{Set of months, $m \in \mathcal{M}$ in a year, $\mathcal{M} = \{1,...,12\}$} \nomenclature[B]{$\hat{p}$}{Dummy variable for modelling demand-based tariffs} \nomenclature[B]{$p^\mathrm{g+/-}$}{Power flowing from/to grid} \nomenclature[B]{$p^\mathrm{b+/-}$}{Battery charge/discharge power} \nomenclature[B]{$e^\mathrm{b}$}{Battery state of charge} \nomenclature[B]{$s^\mathrm{b}$}{Battery charging status (0: discharge, 1: charge)} \nomenclature[B]{$d^\mathrm{g}$}{direction of grid power flow (0: demand to grid, 1: grid to demand)} \nomenclature[C]{$\eta^\mathrm{b+/-}$}{Battery charging/discharging efficiency} \nomenclature[C]{$\bar{p}^\mathrm{b+/-}$}{Maximum battery charge/discharge power} \nomenclature[C]{$\eta^\mathrm{b+/-}$}{Battery charging/discharging efficiency} \nomenclature[C]{$\bar{e}^\mathrm{b}$}{Battery maximum state of charge} \nomenclature[C]{$\barbelow{e}^\mathrm{b}$}{Battery minimum state of charge} \nomenclature[C]{$p^\mathrm{pv}$}{Power from solar PV} \nomenclature[C]{$\Delta h$}{Half hourly time steps} \nomenclature[C]{$p^\mathrm{d}$}{Total customer demand} \nomenclature[C]{$p^\mathrm{res}$}{Net demand} \nomenclature[C]{$\bar{p}^\mathrm{g}$}{Maximum power taken from/to grid} \nomenclature[D]{LV}{Low voltage} \nomenclature[D]{PV}{Photovoltaic} \nomenclature[D]{DER}{Distributed energy resources} \nomenclature[D]{DNSP}{Distribution network service provider} \nomenclature[D]{FiT}{Feed in tariff} \nomenclature[D]{ToU}{Time of use} \nomenclature[D]{MILP}{Mixed integer linear programming} \nomenclature[D]{HEMS}{Home energy management system} \nomenclature[D]{EWH}{Electric water heater} \nomenclature[E]{$T^\mathrm{flt}$}{Flat energy charge} \nomenclature[E]{$T^\mathrm{tou}$}{Time-of-use energy charge} \nomenclature[E]{$T^\mathrm{fix}$}{Fixed daily charge} \nomenclature[E]{$T^\mathrm{fit}$}{Feed-in-tariff (FiT)} \nomenclature[E]{$p^\mathrm{pk}$}{monthly peak} \nomenclature[E]{$T^\mathrm{pk}$}{Monthly Peak demand charge} \section{Introduction} Investment in customer-owned PV-battery systems is growing rapidly across the globe, as they become cost-effective in certain jurisdictions. For example, the total installed capacity of residential PV-battery systems in Australia is projected to increase from 5~\si{GW} in 2017 to 19.7~\si{GW} in 2037~\cite{aemosmall,aemosolar}. In Germany, the total installed capacity of PV systems alone currently stands at 43~\si{GW}, and projected to increase to 150~\si{GW} by 2050~\cite{isefraunhofer,wirth2018recent}; while battery storage systems are expected to follow suit, with currently 100,000 installations (approx. 6~\si{GWh}) and projections for this to double within the next two years~\cite{bswsolar}. The trend towards more residential PV-battery systems is being driven by two major factors. On one hand, average household electricity prices in OECD countries have increased by over 33\% between 2006 and 2017 (using purchasing power parity). In particular, in Australia and Germany, prices have risen to about 20.4 and 39.17~\si{US. c/kWh}, respectively, from roughly 12.52~\si{US. c/kWh} (in Australia) and 20.83~\si{US. 
c/kWh} (in Germany) in the year 2006~\cite{energytaxes}; while feed-in-tariff (FiT) rates for PV generation have been simultaneously reduced in these countries. On the other hand, costs of PV and battery systems have seen precipitous falls in recent times. These energy price hikes and asset cost reductions are driving customers to increase their levels of energy self-consumption by investing in energy storage technology, to complement rooftop PV systems. This presents a dilemma to distribution network service providers (DNSPs) and vertically-integrated electricity utilities --- how to design tariffs that reflect the long-run marginal cost of electricity network assets, so that all consumers receive a price signal indicating the extent to which they each contribute to network peak demand, while (i) not encouraging customers with DER to defect from the grid, and (ii) not unfairly apportioning network costs to customers without PV or other DER. This has proven to be a difficult task that has received much attention in the professional and academic literature~\cite{aemcrule,energy2014towards,lu2018designing,eutariff}. More broadly, recent studies have considered the economic impacts of \textit{energy-} and \textit{demand-based tariffs} on residential customers and on utilities'
# tf.nn.bias_add(conv_acts, split_biases[0]) if len(split_weights) > 1: # Create conv_b conv_b_name = op.name+'_b' conv_b_weights = np.array(split_weights[1]).reshape(split_conv_b_w_shape).transpose(2, 3, 1, 0) conv_b_w = tf.Variable(initial_value=conv_b_weights, name=conv_b_name+'_w', dtype=tf.float32) logger.debug('%s weight shape: %s', conv_b_name, str(conv_b_weights.shape)) # pylint: disable=no-member conv_acts = tf.nn.conv2d(conv_acts, conv_b_w, strides=strides, data_format=data_format, padding=pad_mode, name=conv_b_name) #dilation_rate=dilation_rate if bias_op: conv_b_bias = tf.Variable(initial_value=split_biases[1], name=conv_b_name+'_bias', dtype=tf.float32) conv_acts = conv_acts + conv_b_bias # tf.nn.bias_add(conv_acts, split_biases[1]) ratio = self._compute_per_layer_compression_ratio([conv_a_w.shape, conv_b_w.shape], conv_acts.shape, w_shape, "Conv2D") # Only create a third conv layer when performing successive SVD if len(split_weights) > 2 and len(svd_ranks) >= 2 and attr.mode == pymo.TYPE_SUCCESSIVE: # Create conv_c, using default strides (1,1) conv_c_name = op.name+'_c' conv_c_weights = np.array(split_weights[2]).reshape(split_conv_c_w_shape).transpose(2, 3, 1, 0) conv_c_w = tf.Variable(initial_value=conv_c_weights, name=conv_c_name+'_w', dtype=tf.float32) logger.debug('%s weight shape: %s', conv_c_name, str(conv_c_weights.shape)) # pylint: disable=no-member conv_acts = tf.nn.conv2d(conv_acts, conv_c_w, strides=[1, 1, 1, 1], data_format=data_format, padding=pad_mode, name=conv_c_name) if bias_op: conv_c_bias = tf.Variable(initial_value=split_biases[2], name=conv_c_name+'_bias', dtype=tf.float32) conv_acts = conv_acts + conv_c_bias # tf.nn.bias_add(conv_acts, split_biases[2]) consumers = [] rerouted_inputs = [bias_op.outputs[0]] if bias_op else [op.outputs[0]] for inp in rerouted_inputs: for consumer in inp.consumers(): consumers.append(consumer) _ = graph_editor.reroute_ts(conv_acts, rerouted_inputs, can_modify=consumers) return ratio def _split_fc_layer(self, sess, svd_ranks, op_name, bias_op_name=None): """ Split a given conv layer given a rank :param sess: tf.compat.v1.Session :param svd_ranks: Rank to split the layer with (two ranks in case of SSVD) :param op_name: Name of the op to split :param bias_op_name: Name of the corresponding bias op (if any) :return: None """ # pylint: disable=too-many-statements, too-many-locals logger.info('Splitting fully connected op: %s', op_name) # Retrieve the op(s) from the current graph op = sess.graph.get_operation_by_name(op_name) bias_op = None if bias_op_name: bias_op = sess.graph.get_operation_by_name(bias_op_name) # Print current conv weight shape query = core.OpQuery(sess.graph) w_shape = query.get_weights_for_op(op).get_shape().as_list() logger.debug('Original %s weight shape: %s', op.name, str(w_shape)) split_weights, weight_sizes = [], [] split_biases, bias_sizes = [], [] # FC weights are: [w_shape[2],svd_ranks[0]] in [I,O] order. # We must reshape the split weights to SVD format [O,I] and then transpose to NHWC split_fc_a_w_shape = (svd_ranks[0], w_shape[0]) fc_a_weights = np.zeros(split_fc_a_w_shape) fc_a_bias = np.zeros(svd_ranks[0]) split_weights.append(fc_a_weights.flatten().tolist()) weight_sizes.append(fc_a_weights.size) if bias_op: split_biases.append(fc_a_bias.flatten().tolist()) bias_sizes.append(fc_a_bias.size) # FC b weights are: [svd_ranks[0],num_filters] in [H,W,I,O] order. 
# We must reshape the split weights to SVD format [O,I,H,W] and then transpose to NHWC split_fc_b_w_shape = (w_shape[1], svd_ranks[0]) fc_b_weights = np.zeros(split_fc_b_w_shape) split_weights.append(fc_b_weights.flatten().tolist()) weight_sizes.append(fc_b_weights.size) if bias_op: fc_b_bias = np.zeros(w_shape[1]) split_biases.append(fc_b_bias.flatten().tolist()) bias_sizes.append(fc_b_bias.size) # Split the weights and biases according to the number of layers and ranks split_weights = self._svd.SplitLayerWeights(op.name, split_weights, weight_sizes, svd_ranks) split_biases = self._svd.SplitLayerBiases(op.name, split_biases, bias_sizes, svd_ranks) if split_weights: fc_a_name = op.name+'_a' fc_a_weights = np.array(split_weights[0]).reshape(split_fc_a_w_shape).transpose(1, 0) fc_a_w = tf.Variable(initial_value=fc_a_weights, name=fc_a_name+'_w', dtype=tf.float32) logger.debug('%s weight shape: %s', fc_a_name, str(fc_a_weights.shape)) # Create fc_a using default strides (1,1) fc_acts = tf.matmul(op.inputs[0], fc_a_w, name=fc_a_name) if bias_op: fc_a_bias = tf.Variable(initial_value=split_biases[0], name=fc_a_name+'_bias', dtype=tf.float32) fc_acts = fc_acts + fc_a_bias if len(split_weights) > 1: # Create fc_b fc_b_name = op.name+'_b' fc_b_weights = np.array(split_weights[1]).reshape(split_fc_b_w_shape).transpose(1, 0) fc_b_w = tf.Variable(initial_value=fc_b_weights, name=fc_b_name+'_w', dtype=tf.float32) logger.debug('%s weight shape: %s', fc_b_name, str(fc_b_weights.shape)) fc_acts = tf.matmul(fc_acts, fc_b_w, name=fc_b_name) if bias_op: fc_b_bias = tf.Variable(initial_value=split_biases[1], name=fc_b_name+'_bias', dtype=tf.float32) fc_acts = fc_acts + fc_b_bias ratio = self._compute_per_layer_compression_ratio([fc_a_w.shape, fc_b_w.shape], fc_acts.shape, w_shape, 'MatMul') consumers = [] rerouted_inputs = [bias_op.outputs[0]] if bias_op else [op.outputs[0]] for inp in rerouted_inputs: for consumer in inp.consumers(): consumers.append(consumer) _ = graph_editor.reroute_ts(fc_acts, rerouted_inputs, can_modify=consumers) return ratio def _split_layers(self, sess, rank_index, use_best_ranks): """ Split all the selected layers given a rank index :param sess: tf.compat.v1.Session :param rank_index: Rank index to use for finding the ranks :param use_best_ranks: Use the best rank index (for final compressed network) :return: None """ layer_stats = list() for i, op in enumerate(self._compressible_ops): # If op is not a selected layer, skip if not any(op is layer.layer_ref for layer in self._selected_layers): continue # Bias is taken care of as part of the Conv/FC op if op.type in ['Add', 'BiasAdd']: continue # Get the stored attributes for this op attr = self._svd.GetLayerAttributes(op.name) if not attr: raise RuntimeError("Layer attributes not available for layer"+op.name) if use_best_ranks: svd_ranks = attr.bestRanks else: svd_ranks = self._svd.GetCandidateRanks(op.name, rank_index) if svd_ranks: bias_op = None if i+1 < len(self._compressible_ops): bias_op = self._compressible_ops[i+1] bias_op = bias_op.name if bias_op.type in ['Add', 'BiasAdd'] else None if op.type in ['Conv2D']: ratio = self._split_conv_layer(sess, svd_ranks, attr, op.name, bias_op) elif op.type in ['MatMul']: ratio = self._split_fc_layer(sess, svd_ranks, op.name, bias_op) per_layer_stats = stats_u.SvdStatistics.PerSelectedLayer(op.name, svd_ranks, ratio) layer_stats.append(per_layer_stats) return layer_stats def _create_compressed_network(self, sess, rank_index, use_best_ranks): """ Create a compressed network for a given rank 
index :param sess: tf.compat.v1.Session :param rank_index: Rank index to use for finding the ranks :param use_best_ranks: Use the best rank index (for final compressed network) :return: None """ # Split the network layers and update the connections per_layer_stats = self._split_layers(sess, rank_index, use_best_ranks) return per_layer_stats def _perform_rank_selection(self): """ Perform rank selection procedure :return: None """ # pylint: disable=too-many-locals stats_per_rank_index = list() self._svd.ComputeNetworkCost() self._num_ranks = self._svd.SetCandidateRanks(self._num_ranks) if not self._num_ranks: raise RuntimeError('No good candidate ranks found for compressing specified layers.') # Ranks are in order from least compression to highest best_index = -1 optimal_score = 0.0 for rank_index in range(self._num_ranks): g = tf.Graph() with g.as_default(): # Create a new network for each rank_index self._svd.PrintCandidateRanks(rank_index, False) # Load the default graph so we are operating on a fresh copy of the original graph sess, saver = self._load_graph(g, self._default_meta_graph, self._default_checkpoint) per_layer_stats = self._create_compressed_network(sess, rank_index, False) # Save the temp model output_file = os.path.join(self._output_dir, 'svd_rank_index_' + str(rank_index)) self._save_graph(sess, saver, output_file) # Reset the session and start a new graph for loading the compressed model self._reset_session(sess) g = tf.Graph() with g.as_default(): # In TF after making changes to the graph you must save and reload, then evaluate sess, saver = self._load_graph(g, output_file+'.meta', output_file) model_perf = self._run_graph(sess, self._generator, self._eval_names, self._eval_func, self._iterations) logger.info('%s performance: %s', output_file, str(model_perf)) self._model_performance_candidate_ranks.append(model_perf * 100) # Estimate relative compression score for this rank_index compression_score = self._compute_compression_ratio(sess, self._metric) objective_score = self._compute_objective_score(model_perf, compression_score) rank_data = stats_u.SvdStatistics.PerRankIndex(rank_index=rank_index, model_accuracy=model_perf, model_compression_ratio=compression_score, layer_stats_list=per_layer_stats) stats_per_rank_index.append(rank_data) logger.info('Compressed network with rank_index %i/%i: accuracy = %f percent ' 'with %f percent compression (%r option) and an objective score of %f', rank_index, self._num_ranks, model_perf * 100, compression_score * 100, self._metric, objective_score) if rank_index == 0: optimal_score = objective_score logger.info('Initializing objective score to %f at rank index %i', optimal_score, rank_index) if model_perf + self._error_margin/100 < self._baseline_perf: logger.info('Model performance %f falls below %f percent of baseline performance %f' ' Ending rank selection', model_perf, self._error_margin, self._baseline_perf) break else: if objective_score <= optimal_score: optimal_score = objective_score logger.info('Found a better value for the objective score %f at rank_index %i', optimal_score, rank_index) best_index = rank_index if best_index != -1: self._svd.StoreBestRanks(best_index) memory_compression_ratio = self._compute_compression_ratio(sess, CostMetric.memory) mac_compression_ratio = self._compute_compression_ratio(sess, CostMetric.mac) stats = stats_u.SvdStatistics(self._baseline_perf, model_perf, self._metric, best_index, mem_comp_ratio=memory_compression_ratio, mac_comp_ratio=mac_compression_ratio, 
rank_stats_list=stats_per_rank_index) # close the session and reset the default graph self._reset_session(sess) return stats # close the session and reset the default graph self._reset_session(sess) raise RuntimeError('No suitable ranks found to compress model within defined error bounds.') def manual_rank_svd(self): """ Set provided ranks in the PyMo library :return: None """ # Store total net cost self._svd.ComputeNetworkCost() # Ensure proper layer names are provided in no_eval mode if not self._layer_ranks: raise ValueError('Layer names MUST be specified in no_eval mode.') # Ensure layer_ranks is in list of tuples format if not all(isinstance(item, tuple) for item in self._layer_ranks): raise ValueError('layer_ranks should be in list of tuples format for both SVD and SSVD') # Check number of input ranks match with number of input layers if len(self._layers_to_compress) != self._num_layer_ranks: raise ValueError('Number of Input SVD ranks does not match number of layers.') for layer_name, rank in zip(self._layers_to_compress, self._layer_ranks): rank_list = list() rank_list.append(rank[1]) if self.svd_type == _SVD_TYPES['ssvd']: rank_list.append(rank[1]) self._svd.StoreBestRanks(layer_name, rank_list) stats = self._stats_for_manual_rank_svd() return stats @staticmethod def _save_graph(sess, saver, output_graph): """ Utility function to save a graph :param sess: tf.compat.v1.Session :param saver: TF save :param output_graph: Filename and path for saving the output :return: """ logger.info('Saving graph: %s', output_graph) saver.save(sess, output_graph) _ = tf.compat.v1.summary.FileWriter(os.path.dirname(output_graph)+"/models", sess.graph) def _save_compressed_network(self): """ Create and save a compressed network (using the best ranks identified) :return: """ logger.info('Saving final compressed network') g = tf.Graph() with g.as_default(): sess, saver = self._load_graph(g, self._default_meta_graph, self._default_checkpoint) per_layer_stats = self._create_compressed_network(sess, 0, True) # Save the
\section{Introduction} Lepton flavor violation (LFV) appears in various extensions of the Standard Model (SM). In particular, lepton-flavor-violating $\tau^-\to\ell^-\ell^+\ell^-$ (where $\ell = e$ or $\mu$ ) decays are discussed in various supersymmetric models~\cite{cite:susy1,cite:susy2,cite:susy3,cite:susy4,cite:susy5,cite:susy6,cite:susy7,cite:susy8}, models with little Higgs~\cite{cite:littlehiggs1,cite:littlehiggs2}, left-right symmetric models~\cite{cite:leftright} as well as models with heavy singlet Dirac neutrinos~\cite{cite:amon} and very light pseudoscalar bosons~\cite{cite:pseudo}. Some of these models with certain combinations of parameters predict that the branching fractions for $\tau^-\to\ell^-\ell^+\ell^-$ decays can be as large as $10^{-7}$, which is in the range already accessible in high-statistics $B$ factory experiments. Searches for lepton flavor violation in $\tau^- \to \ell^- \ell^+ \ell^-$ (where $\ell = e$ or $\mu$) decays have been performed since 1982~\cite{PDG}, starting from the pioneering experiment MARKII~\cite{MARKII}. In the previous high-statistics analyses, Belle (BaBar) reached 90\% confidence level upper limits on the branching fractions of the order of $10^{-8}$~\cite{cite:3l_belle,cite:3l_babar}, based on samples with about 535 (376) fb${}^{-1}$ of data. Here, we update our previous results with a larger data set (782 fb$^{-1}$), collected with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider~\cite{kekb}, taken at the $\Upsilon(4S)$ resonance and 60 MeV below it. We apply the same selection criteria as in the previous analysis, but optimized for the new data sample. The Belle detector is a large-solid-angle magnetic spectrometer that consists of a silicon vertex detector (SVD), a 50-layer central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC), a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter comprised of CsI(Tl) crystals (ECL), all located inside a superconducting solenoid coil that provides a 1.5~T magnetic field. An iron flux-return located outside the coil is instrumented to detect $K_{\rm{L}}^0$ mesons and to identify muons (KLM). The detector is described in detail elsewhere~\cite{Belle}. Leptons are identified using likelihood ratios calculated from the response of various subsystems of the detector. For electron identification, the likelihood ratio is defined as ${\cal P}(e) = {\cal{L}}_e/({\cal{L}}_e+{\cal{L}}_x)$, where ${\cal{L}}_e$ and ${\cal{L}}_x$ are the likelihoods for electron and non-electron hypotheses, respectively, determined using the ratio of the energy deposit in the ECL to the momentum measured in the SVD and CDC, the shower shape in the ECL, the matching between the position of charged track trajectory and the cluster position in the ECL, the hit information from the ACC and the $dE/dx$ information in the CDC~\cite{EID}. For muon identification, the likelihood ratio is defined as (${\cal P}(\mu) = {\cal{L}_\mu}/({\cal{L}}_\mu+{\cal{L}}_{\pi}+{\cal{L}}_{K})$), where ${\cal{L}}_\mu$, ${\cal{L}}_\pi$ and ${\cal{L}}_K$ are the likelihoods for muon, pion and kaon hypotheses, respectively, based on the matching quality and penetration depth of associated hits in the KLM~\cite{MUID}. In order to optimize the event selection and to estimate the signal efficiency, we use Monte Carlo (MC) samples. The signal and the background (BG) events from generic $\tau^+\tau^-$ decays are generated by KORALB/TAUOLA~\cite{KKMC}. 
In the signal MC, we generate $\tau^+\tau^-$ pairs, where one $\tau$ decays into three leptons and the other $\tau$ decays generically. All leptons from $\tau^-\to\ell^-\ell^+\ell^-$ decays are assumed to have a phase space distribution in the $\tau$ lepton's rest frame~\cite{XX}. Other backgrounds including $B\bar{B}$ and $e^+e^-\to q\bar{q}$ ($q=u,d,s,c$) processes, Bhabhas, $e^+e^-\rightarrow\mu^+\mu^-$, and two-photon processes are generated by EvtGen~\cite{evtgen}, BHLUMI~\cite{BHLUMI}, KKMC~\cite{KKMC}, and AAFHB~\cite{AAFH}, respectively. All kinematic variables are calculated in the laboratory frame unless otherwise specified. In particular, variables calculated in the $e^+e^-$ center-of-mass (CM) system are indicated by the superscript ``CM''. \section{Event Selection} We search for $\tau^+\tau^-$ events in which one $\tau$ decays into three leptons~(signal $\tau$), while the other $\tau$ decays into one charged track, any number of additional photons, and neutrinos~(tag $\tau$)\footnotemark[2]. Candidate $\tau$-pair events are required to have four tracks with zero net charge. \footnotetext[2]{Unless otherwise stated, charge-conjugate decays are implied throughout this paper.} The following $\tau^-$ decays into three leptons are searched for: $e^-e^+e^-$, $\mu^-\mu^+\mu^-$, $e^-\mu^+\mu^-$, $\mu^-e^+e^-$, $\mu^-e^+\mu^-$, and $e^-\mu^+e^-$. Since each decay mode has a different mix of backgrounds, the event selection is optimized mode by mode. We optimize the selection criteria to improve the prospect of observing evidence of a genuine signal, rather than to minimize the expected upper limits, as detailed later. The event selection starts by reconstructing four charged tracks and any number of photons within the fiducial volume defined by $-0.866 < \cos\theta < 0.956$, where $\theta$ is the polar angle relative to the direction opposite to that of the incident $e^+$ beam in the laboratory frame. The transverse momentum ($p_t$) of each charged track and energy of each photon ($E_{\gamma}$) are required to satisfy the requirements $p_t> $ 0.1 GeV/$c$ and $E_{\gamma}>0.1$ GeV, respectively. For each charged track, the distance of the closest approach with respect to the interaction point is required to be within $\pm$0.5 cm in the transverse direction and within $\pm$3.0 cm in the longitudinal direction. The particles in an event are then separated into two hemispheres referred to as the signal and tag sides using the plane perpendicular to the thrust axis, as calculated from the observed tracks and photon candidates~\cite{thrust}. The tag side contains a charged track while the signal side contains three charged tracks. We require all charged tracks on the signal side to be identified as leptons. The electron (muon) identification criteria are ${\cal P}(e) > 0.9$ (${\cal P}(\mu) > 0.9$) for momenta greater than 0.3 GeV/$c$ (0.6 GeV/$c$). The electron (muon) identification efficiency for our selection criteria is 91\% (85\%) while the probability of misidentifying a pion as an electron (muon) is below 0.5\% (2\%). To ensure that the missing particles are neutrinos rather than photons or charged particles that pass outside the detector acceptance, we impose additional requirements on the missing momentum $\vec{p}_{\rm miss}$, which is calculated by subtracting the vector sum of the momenta of all tracks and photons from the sum of the $e^+$ and $e^-$ beam momenta. 
We require that the magnitude of $\vec{p}_{\rm miss}$ be greater than 0.4 GeV/$c$ and that its direction point into the fiducial volume of the detector. To reject $q\bar{q}$ background, the magnitude of thrust ($T$) should lie in the range 0.90 $< T <$ 0.97 for all modes except for $\tau^-\to e^-e^+e^-$, for which we require 0.90 $< T <$ 0.96. The $T$ distribution for $\tau^- \to \mu^- \mu^+ \mu^-$ is shown in Fig.~\ref{fig:thrust}. We also require $5.29$ GeV $< E^{\mbox{\rm{\tiny{CM}}}}_{\rm{vis}} < 9.5$ GeV, where $E^{\mbox{\rm{\tiny{CM}}}}_{\rm{vis}}$ is the total visible energy in the CM system, defined as the sum of the energies of the three leptons, the charged track on the tag side (with a pion mass hypothesis) and all photon candidates.
\begin{figure} \begin{center} \resizebox{0.5\textwidth}{0.5\textwidth}{\includegraphics{Fig1.eps}} \caption{ Distribution of the thrust magnitude $T$ for the $\tau^- \to \mu^- \mu^+ \mu^-$ selection. The points with error bars are data, and the open histogram shows the BG estimated by MC. The shaded histogram is the BG from $e^+e^- \to q \bar{q}$. The dashed histogram is signal MC. The region indicated by the arrow between the two vertical lines is selected. } \label{fig:thrust} \end{center} \end{figure}
Since neutrinos are emitted only on the tag side, the direction of $\vec{p}_{\rm miss}$ should lie within the tag side of the event. The cosine of the opening angle between $\vec{p}_{\rm miss}$ and the charged track on the tag side in the CM system, $\cos \theta^{\mbox{\rm \tiny CM}}_{\rm tag-miss}$, is therefore required to lie in the range $0.0<\cos \theta^{\mbox{\rm \tiny CM}}_{\rm tag-miss}<0.98$. The requirement $\cos \theta^{\mbox{\rm \tiny CM}}_{\rm tag-miss}<0.98$ suppresses Bhabha, $\mu^+\mu^-$ and two-photon backgrounds, since an undetected radiated photon results in missing momentum pointing towards the same ECL cluster as the tag-side track~\cite{cite:tau_egamma}. The reconstructed mass on the tag side using a charged track (with a pion mass hypothesis) and photons, $m_{\rm tag}$, is required to be less than 1.78 GeV/$c^2$. Conversions ($\gamma\to e^+e^-$) are a large background for the $\tau^-\to e^-e^+e^-$ and $\mu^-e^+e^-$ modes. We require $M_{ee}>0.2$ GeV/$c^2$, where $M_{ee}$ is the invariant mass of the $e^+e^-$ pair, to reduce these backgrounds further. For the $\tau^-\to e^-e^+e^-$ and $\tau^-\to e^-\mu^+\mu^-$ modes, the charged track on the tag side is required not to be an electron. We apply the requirement ${\cal P}(e)<0.1$ since a large background from two-photon and Bhabha events still remains. Furthermore, we reject the event if the projection of the charged track on the tag side falls in the gaps between the ECL barrel and endcaps. To reduce backgrounds from Bhabha and $\mu^+\mu^-$ events with extra tracks due to interaction with the detector material, we require that the momentum in the CM system of the charged track on the tag side be less than 4.5 GeV/$c$ for the $\tau^-\to e^-e^+e^-$ and $\tau^-\to\mu^-e^+e^-$ modes. Finally, to suppress backgrounds from generic $\tau^+\tau^-$ and $q\bar{q}$ events, we apply a selection based on the magnitude of the missing momentum ${p}_{\rm{miss}}$ and the missing mass squared $m^2_{\rm{miss}}$ for all modes
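To make the lepton-identification and kinematic requirements described above concrete, here is a small illustrative Python sketch. It is not Belle analysis code: the likelihood inputs and helper names are placeholders, while the formulas and numerical thresholds are the ones quoted in the text.

# Illustrative sketch only -- not Belle analysis code. The likelihood inputs
# and helper names are placeholders; the formulas and thresholds come from
# the selection described in the text.

def electron_likelihood_ratio(L_e, L_x):
    """P(e) = L_e / (L_e + L_x), as defined for electron identification."""
    return L_e / (L_e + L_x)

def muon_likelihood_ratio(L_mu, L_pi, L_K):
    """P(mu) = L_mu / (L_mu + L_pi + L_K), as defined for muon identification."""
    return L_mu / (L_mu + L_pi + L_K)

def passes_common_selection(thrust, e_vis_cm, cos_tag_miss, m_tag, mode="mu-mu+mu-"):
    """Apply the thrust, visible-energy, opening-angle and tag-mass cuts quoted above."""
    t_max = 0.96 if mode == "e-e+e-" else 0.97
    return (0.90 < thrust < t_max
            and 5.29 < e_vis_cm < 9.5          # GeV
            and 0.0 < cos_tag_miss < 0.98
            and m_tag < 1.78)                  # GeV/c^2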
co_design_efficacy_sum/avg_ctr #result = {"rate":{}, "efficacy":{}} #rate_column_co_design = {} plt.figure() plotdata = pd.DataFrame(column_co_design_dist_avg, index=y_column_name_list) fontSize = 10 plotdata.plot(kind='bar', fontsize=fontSize) plt.xticks(fontsize=fontSize, rotation=6) plt.yticks(fontsize=fontSize) plt.xlabel("co design parameter", fontsize=fontSize) plt.ylabel("co design distance", fontsize=fontSize) plt.title("co desgin distance of different parameters", fontsize=fontSize) # dump in the top folder output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "cross_workloads/co_design_rate") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.savefig(os.path.join(output_dir,"_".join(experiment_name_list) +"_"+"co_design_avg_dist"+'_'.join(y_column_name_list)+".png")) plt.close('all') plt.figure() plotdata = pd.DataFrame(column_co_design_efficacy_avg, index=y_column_name_list) fontSize = 10 plotdata.plot(kind='bar', fontsize=fontSize) plt.xticks(fontsize=fontSize, rotation=6) plt.yticks(fontsize=fontSize) plt.xlabel("co design parameter", fontsize=fontSize) plt.ylabel("co design dis", fontsize=fontSize) plt.title("co desgin efficacy of different parameters", fontsize=fontSize) # dump in the top folder output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "cross_workloads/co_design_rate") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.savefig(os.path.join(output_dir,"_".join(experiment_name_list) +"_"+"co_design_efficacy"+'_'.join(y_column_name_list)+".png")) plt.close('all') def plot_codesign_rate_efficacy_per_workloads(input_dir_names, res_column_name_number): #itrColNum = all_res_column_name_number["iteration cnt"] #distColNum = all_res_column_name_number["dist_to_goal_non_cost"] trueNum = all_res_column_name_number["move validity"] move_name_number = all_res_column_name_number["move name"] # experiment_names file_full_addr_list = [] for dir_name in input_dir_names: file_full_addr = os.path.join(dir_name, "result_summary/FARSI_simple_run_0_1_all_reults.csv") file_full_addr_list.append(file_full_addr) axis_font = {'fontname': 'Arial', 'size': '4'} x_column_name = "iteration cnt" #y_column_name_list = ["high level optimization name", "exact optimization name", "architectural principle", "comm_comp"] y_column_name_list = ["exact optimization name", "architectural principle", "comm_comp", "workload"] #y_column_name_list = ["high level optimization name", "exact optimization name", "architectural principle", "comm_comp"] column_co_design_cnt = {} column_non_co_design_cnt = {} column_co_design_rate = {} column_non_co_design_rate = {} column_co_design_efficacy_rate = {} column_non_co_design_efficacy_rate = {} column_non_co_design_efficacy = {} column_co_design_efficacy= {} last_col_val = "" for file_full_addr in file_full_addr_list: experiment_name = get_experiments_name(file_full_addr, res_column_name_number) column_co_design_cnt = {} for y_column_name in y_column_name_list: y_column_number = res_column_name_number[y_column_name] x_column_number = res_column_name_number[x_column_name] dis_to_goal_column_number = res_column_name_number["dist_to_goal_non_cost"] ref_des_dis_to_goal_column_number = res_column_name_number["ref_des_dist_to_goal_non_cost"] column_co_design_cnt[y_column_name] = [] column_non_co_design_cnt[y_column_name] = [] column_non_co_design_efficacy[y_column_name] = [] column_co_design_efficacy[y_column_name] = [] all_values = 
get_all_col_values_of_a_folders(input_dir_names, all_res_column_name_number, y_column_name) with open(file_full_addr, newline='') as csvfile: resultReader = csv.reader(csvfile, delimiter=',', quotechar='|') rows = list(resultReader) for i, row in enumerate(rows): if i >= 1: last_row = rows[i - 1] if row[y_column_number] not in all_values or row[trueNum] == "False" or row[move_name_number]=="identity": continue col_value = row[y_column_number] col_values = col_value.split(";") for idx, col_val in enumerate(col_values): delta_x_column = (float(row[x_column_number]) - float(last_row[x_column_number]))/len(col_values) value_to_add_1 = (float(last_row[x_column_number]) + idx * delta_x_column, 1) value_to_add_0 = (float(last_row[x_column_number]) + idx * delta_x_column, 0) # only for improvement if float(row[ref_des_dis_to_goal_column_number]) - float(row[dis_to_goal_column_number]) < 0: continue if not col_val == last_col_val: column_co_design_cnt[y_column_name].append(value_to_add_1) column_non_co_design_cnt[y_column_name].append(value_to_add_0) column_co_design_efficacy[y_column_name].append((float(row[ref_des_dis_to_goal_column_number]) - float(row[dis_to_goal_column_number]))/float(row[ref_des_dis_to_goal_column_number])) column_non_co_design_efficacy[y_column_name].append(0) else: column_co_design_cnt[y_column_name].append(value_to_add_0) column_non_co_design_cnt[y_column_name].append(value_to_add_1) column_co_design_efficacy[y_column_name].append(0) column_non_co_design_efficacy[y_column_name].append((float(row[ref_des_dis_to_goal_column_number]) - float(row[dis_to_goal_column_number]))/float(row[ref_des_dis_to_goal_column_number])) last_col_val = col_val # co_des cnt x_values_co_design_cnt = [el[0] for el in column_co_design_cnt[y_column_name]] y_values_co_design_cnt = [el[1] for el in column_co_design_cnt[y_column_name]] y_values_co_design_cnt_total =sum(y_values_co_design_cnt) total_iter = x_values_co_design_cnt[-1] # non co_des cnt x_values_non_co_design_cnt = [el[0] for el in column_non_co_design_cnt[y_column_name]] y_values_non_co_design_cnt = [el[1] for el in column_non_co_design_cnt[y_column_name]] y_values_non_co_design_cnt_total =sum(y_values_non_co_design_cnt) column_co_design_rate[y_column_name] = y_values_co_design_cnt_total/total_iter column_non_co_design_rate[y_column_name] = y_values_non_co_design_cnt_total/total_iter # co_des efficacy y_values_co_design_efficacy = column_co_design_efficacy[y_column_name] y_values_co_design_efficacy_total =sum(y_values_co_design_efficacy) # non co_des efficacy y_values_non_co_design_efficacy = column_non_co_design_efficacy[y_column_name] y_values_non_co_design_efficacy_total =sum(y_values_non_co_design_efficacy) column_co_design_efficacy_rate[y_column_name] = y_values_co_design_efficacy_total/(y_values_non_co_design_efficacy_total + y_values_co_design_efficacy_total) column_non_co_design_efficacy_rate[y_column_name] = y_values_non_co_design_efficacy_total/(y_values_non_co_design_efficacy_total + y_values_co_design_efficacy_total) result = {"rate":{}, "efficacy":{}} rate_column_co_design = {} result["rate"] = {"co_design":column_co_design_rate, "non_co_design": column_non_co_design_rate} result["efficacy_rate"] = {"co_design":column_co_design_efficacy_rate, "non_co_design": column_non_co_design_efficacy_rate} # prepare for plotting and plot plt.figure() plotdata = pd.DataFrame(result["rate"], index=y_column_name_list) fontSize = 10 plotdata.plot(kind='bar', fontsize=fontSize, stacked=True) plt.xticks(fontsize=fontSize, rotation=6) 
plt.yticks(fontsize=fontSize) plt.xlabel("co design parameter", fontsize=fontSize) plt.ylabel("co design rate", fontsize=fontSize) plt.title("co desgin rate of different parameters", fontsize=fontSize) # dump in the top folder output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "single_workload/co_design_rate") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.savefig(os.path.join(output_dir,experiment_name +"_"+"co_design_rate_"+'_'.join(y_column_name_list)+".png")) plt.close('all') plt.figure() plotdata = pd.DataFrame(result["efficacy_rate"], index=y_column_name_list) fontSize = 10 plotdata.plot(kind='bar', fontsize=fontSize, stacked=True) plt.xticks(fontsize=fontSize, rotation=6) plt.yticks(fontsize=fontSize) plt.xlabel("co design parameter", fontsize=fontSize) plt.ylabel("co design efficacy rate", fontsize=fontSize) plt.title("co design efficacy rate of different parameters", fontsize=fontSize) # dump in the top folder output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "single_workload/co_design_rate") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.savefig(os.path.join(output_dir,experiment_name+"_"+"co_design_efficacy_rate_"+'_'.join(y_column_name_list)+".png")) plt.close('all') def plot_codesign_progression_per_workloads(input_dir_names, res_column_name_number): #itrColNum = all_res_column_name_number["iteration cnt"] #distColNum = all_res_column_name_number["dist_to_goal_non_cost"] trueNum = all_res_column_name_number["move validity"] # experiment_names experiment_names = [] file_full_addr_list = [] for dir_name in input_dir_names: file_full_addr = os.path.join(dir_name, "result_summary/FARSI_simple_run_0_1_all_reults.csv") file_full_addr_list.append(file_full_addr) experiment_name = get_experiments_name(file_full_addr, res_column_name_number) experiment_names.append(experiment_name) axis_font = {'size': '20'} x_column_name = "iteration cnt" y_column_name_list = ["high level optimization name", "exact optimization name", "architectural principle", "comm_comp"] experiment_column_value = {} for file_full_addr in file_full_addr_list: experiment_name = get_experiments_name(file_full_addr, res_column_name_number) for y_column_name in y_column_name_list: y_column_number = res_column_name_number[y_column_name] x_column_number = res_column_name_number[x_column_name] experiment_column_value[experiment_name] = [] all_values = get_all_col_values_of_a_folders(input_dir_names, all_res_column_name_number, y_column_name) all_values_encoding = {} for idx, val in enumerate(all_values): all_values_encoding[val] = idx with open(file_full_addr, newline='') as csvfile: resultReader = csv.reader(csvfile, delimiter=',', quotechar='|') rows = list(resultReader) for i, row in enumerate(rows): #if row[trueNum] != "True": # continue if i >= 1: if row[y_column_number] not in all_values: continue col_value = row[y_column_number] col_values = col_value.split(";") for idx, col_val in enumerate(col_values): last_row = rows[i-1] delta_x_column = (float(row[x_column_number]) - float(last_row[x_column_number]))/len(col_values) value_to_add = (float(last_row[x_column_number])+ idx*delta_x_column, col_val) experiment_column_value[experiment_name].append(value_to_add) # prepare for plotting and plot axis_font = {'size': '20'} fontSize = 20 fig = plt.figure(figsize=(12, 8)) plt.rc('font', **axis_font) ax = fig.add_subplot(111) x_values = [el[0] for el in 
experiment_column_value[experiment_name]] #y_values = [all_values_encoding[el[1]] for el in experiment_column_value[experiment_name]] y_values = [el[1] for el in experiment_column_value[experiment_name]] #ax.set_title("experiment vs system implicaction") ax.tick_params(axis='both', which='major', labelsize=fontSize, rotation=60) ax.set_xlabel(x_column_name, fontsize=20) ax.set_ylabel(y_column_name, fontsize=20) ax.plot(x_values, y_values, label=y_column_name, linewidth=2) ax.legend(bbox_to_anchor=(1, 1), loc='upper left', fontsize=fontSize) # dump in the top folder output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "single_workload/progression") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.tight_layout() fig.savefig(os.path.join(output_dir,experiment_name+"_progression_"+'_'.join(y_column_name_list)+".png")) # plt.show() plt.close('all') fig = plt.figure(figsize=(12, 8)) plt.rc('font', **axis_font) ax = fig.add_subplot(111) x_values = [el[0] for el in experiment_column_value[experiment_name]] # y_values = [all_values_encoding[el[1]] for el in experiment_column_value[experiment_name]] y_values = [el[1] for el in experiment_column_value[experiment_name]] # ax.set_title("experiment vs system implicaction") ax.tick_params(axis='both', which='major', labelsize=fontSize, rotation=60) ax.set_xlabel(x_column_name, fontsize=20) ax.set_ylabel(y_column_name, fontsize=20) ax.plot(x_values, y_values, label=y_column_name, linewidth=2) ax.legend(bbox_to_anchor=(1, 1), loc='upper left', fontsize=fontSize) # dump in the top folder output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "single_workload/progression") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.tight_layout() fig.savefig(os.path.join(output_dir, experiment_name + "_progression_" + y_column_name + ".png")) # plt.show() plt.close('all') def plot_3d(input_dir_names, res_column_name_number): # experiment_names experiment_names = [] file_full_addr_list = [] for dir_name in input_dir_names: file_full_addr = os.path.join(dir_name, "result_summary/FARSI_simple_run_0_1.csv") file_full_addr_list.append(file_full_addr) experiment_name = get_experiments_name(file_full_addr, res_column_name_number) experiment_names.append(experiment_name) axis_font = {'size': '10'} fontSize = 10 column_value = {} # initialize the dictionary column_name_list = ["budget_scaling_power", "budget_scaling_area","budget_scaling_latency"] under_study_vars =["iteration cnt", "local_bus_avg_theoretical_bandwidth", "local_bus_max_actual_bandwidth", "local_bus_avg_actual_bandwidth", "system_bus_avg_theoretical_bandwidth", "system_bus_max_actual_bandwidth", "system_bus_avg_actual_bandwidth", "global_total_traffic", "local_total_traffic", "global_memory_total_area", "local_memory_total_area", "ips_total_area", "gpps_total_area","ip_cnt", "max_accel_parallelism", "avg_accel_parallelism", "gpp_cnt", "max_gpp_parallelism", "avg_gpp_parallelism"] # get all the data for file_full_addr in file_full_addr_list: with open(file_full_addr, newline='') as csvfile: resultReader = csv.reader(csvfile, delimiter=',', quotechar='|') experiment_name = get_experiments_name( file_full_addr, res_column_name_number) for i, row in enumerate(resultReader): #if row[trueNum] != "True": # continue if i >= 1: for column_name in column_name_list + under_study_vars: if column_name not in column_value.keys() : column_value[column_name] = [] column_number = 
res_column_name_number[column_name] col_value = row[column_number] col_values = col_value.split(";") if "=" in col_values[0]: column_value[column_name].append(float((col_values[0]).split("=")[1])) else: column_value[column_name].append(float(col_values[0])) for idx,under_study_var in enumerate(under_study_vars): fig_budget_blkcnt = plt.figure(figsize=(12, 12)) plt.rc('font', **axis_font) ax_blkcnt = fig_budget_blkcnt.add_subplot(projection='3d') img = ax_blkcnt.scatter3D(column_value["budget_scaling_power"], column_value["budget_scaling_area"], column_value["budget_scaling_latency"], c=column_value[under_study_var], cmap="bwr", s=80, label="System Block Count") for idx,_ in enumerate(column_value[under_study_var]): coordinate = column_value[under_study_var][idx] coord_in_scientific_notatio = "{:.2e}".format(coordinate) ax_blkcnt.text(column_value["budget_scaling_power"][idx], column_value["budget_scaling_area"][idx], column_value["budget_scaling_latency"][idx], '%s' % coord_in_scientific_notatio, size=fontSize) ax_blkcnt.set_xlabel("Power Budget", fontsize=fontSize) ax_blkcnt.set_ylabel("Area Budget", fontsize=fontSize) ax_blkcnt.set_zlabel("Latency Budget", fontsize=fontSize) ax_blkcnt.legend() cbar = fig_budget_blkcnt.colorbar(img, aspect=40) cbar.set_label("System Block Count", rotation=270) #plt.title("{Power Budget, Area Budget, Latency Budget} VS System Block Count: " + subDirName) plt.tight_layout() output_base_dir = '/'.join(input_dir_names[0].split("/")[:-2]) output_dir = os.path.join(output_base_dir, "3D/case_studies") if not os.path.exists(output_dir): os.makedirs(output_dir) plt.savefig(os.path.join(output_dir, under_study_var+ ".png")) # plt.show() plt.close('all') def plot_convergence_per_workloads(input_dir_names, res_column_name_number): #itrColNum = all_res_column_name_number["iteration cnt"] #distColNum = all_res_column_name_number["dist_to_goal_non_cost"] trueNum = all_res_column_name_number["move validity"] move_name_number = all_res_column_name_number["move name"] # experiment_names experiment_names = [] file_full_addr_list = [] for dir_name in input_dir_names: file_full_addr = os.path.join(dir_name, "result_summary/FARSI_simple_run_0_1_all_reults.csv") file_full_addr_list.append(file_full_addr) experiment_name = get_experiments_name(file_full_addr, res_column_name_number) experiment_names.append(experiment_name) color_values = ["r","b","y","black","brown","purple"] column_name_color_val_dict = {"best_des_so_far_power":"purple", "power_budget":"purple","best_des_so_far_area_non_dram":"blue", "area_budget":"blue", "latency_budget_hpvm_cava":"orange", "latency_budget_audio_decoder":"yellow", "latency_budget_edge_detection":"red", "best_des_so_far_latency_hpvm_cava":"orange", "best_des_so_far_latency_audio_decoder": "yellow","best_des_so_far_latency_edge_detection": "red", "latency_budget":"white" } axis_font = {'size': '20'} fontSize = 20 x_column_name = "iteration cnt" y_column_name_list = ["power", "area_non_dram", "latency", "latency_budget", "power_budget","area_budget"] experiment_column_value = {} for file_full_addr in file_full_addr_list: experiment_name = get_experiments_name(file_full_addr, res_column_name_number) experiment_column_value[experiment_name] = {} for y_column_name in y_column_name_list: if "budget" in y_column_name: prefix = "" else: prefix = "best_des_so_far_" y_column_name = prefix+y_column_name y_column_number = res_column_name_number[y_column_name] x_column_number = res_column_name_number[x_column_name] #dis_to_goal_column_number = 
res_column_name_number["dist_to_goal_non_cost"] #ref_des_dis_to_goal_column_number = res_column_name_number["ref_des_dist_to_goal_non_cost"] if not y_column_name == prefix+"latency": experiment_column_value[experiment_name][y_column_name] = [] with open(file_full_addr, newline='') as csvfile: resultReader = csv.reader(csvfile, delimiter=',', quotechar='|') for i, row in enumerate(resultReader): if i > 1: if row[trueNum] == "FALSE" or row[move_name_number]=="identity": continue col_value = row[y_column_number] if ";" in col_value: col_value = col_value[:-1] col_values = col_value.split(";") for col_val in col_values: if "=" in col_val: val_splitted = col_val.split("=") value_to_add = (float(row[x_column_number]), (val_splitted[0], val_splitted[1])) else: value_to_add = (float(row[x_column_number]), col_val) if y_column_name
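The plotting helpers above are excerpted from a larger module (the last function is cut off mid-statement) and rely on imports and module-level objects that are not shown here, e.g. matplotlib.pyplot as plt, pandas as pd, os, csv, all_res_column_name_number, get_experiments_name and get_all_col_values_of_a_folders. As a hedge, the following self-contained sketch reproduces only the recurring pattern they all share: build a pandas DataFrame, draw a bar chart, and save it under an output directory derived from the input path. All data and paths in it are placeholders.

# Minimal, self-contained sketch of the plotting pattern used by the helpers
# above (build a DataFrame, draw a bar chart, save it under a directory derived
# from the input path). The data and paths here are placeholders.
import os
import pandas as pd
import matplotlib
matplotlib.use("Agg")          # render off-screen, as the batch code above does
import matplotlib.pyplot as plt

def save_bar_chart(values_by_series, index_labels, input_dir, subdir, file_stem):
    plotdata = pd.DataFrame(values_by_series, index=index_labels)
    ax = plotdata.plot(kind="bar", fontsize=10)
    ax.set_xlabel("co design parameter", fontsize=10)
    ax.set_ylabel("value", fontsize=10)
    # mirror the "dump in the top folder" convention: strip the last two path
    # components of the input directory and create the output folder there
    output_base_dir = "/".join(input_dir.split("/")[:-2])
    output_dir = os.path.join(output_base_dir, subdir)
    os.makedirs(output_dir, exist_ok=True)
    plt.savefig(os.path.join(output_dir, file_stem + ".png"))
    plt.close("all")

# Example usage with placeholder data:
# save_bar_chart({"co_design": {"comm_comp": 0.4}, "non_co_design": {"comm_comp": 0.6}},
#                ["comm_comp"], "runs/exp1/result_summary",
#                "cross_workloads/co_design_rate", "demo")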
a diabetogenic property in extracts of the anterior lobe of the pituitary served to emphasize the possibility that a close relationship existed between the hypophysis and the pancreas. However, so far as clinical diabetes is concerned the connection is not so obvious: there can be little doubt that in certain cases, for example acromegalics, the diabetes is due to the presence of an anterior pituitary factor, but the great majority of cases of diabetes mellitus present no clinical evidence of any overactivity of the hypophysis. It may be that this failure to recognise any overactivity of the anterior pituitary lobe in the majority of cases of diabetes mellitus is due to the absence of any method of measuring the activity of this gland; in this way recognition of hyperpituitarism cannot be made until obvious clinical signs develop, such as disturbance of growth. However, it is not improbable that minor degrees of oversecretion and temporary overactivity may pass unnoticed. In this connection the work of Young (1937, 1938) in producing permanent diabetes in dogs by means of the injection of anterior pituitary extract is of great interest. As has been mentioned, this work represents the first convincing demonstration that permanent diabetes could be produced in experimental animals by the injection of an extract of the pituitary, and an important feature of the experiment was that the diabetes persisted after the extract had been stopped. Young (1939) makes the interesting comment that, if the results obtained in producing experimental diabetes in animals are of significance in human diabetes mellitus, it is possible that a short period of pituitary overactivity in a human subject might result in damage to the islets of Langerhans so as to produce a diabetic state, although no persistent sign of pituitary overaction might be found. In such a case the diabetes might appear to result primarily from islet lesions. This suggestion may prove to be of importance, but it is impossible of proof in the absence of any means of assessing pituitary activity other than by the presence of gross clinical signs. It is, however, possible to examine some of the factors which are believed to be of importance in deciding the onset of diabetes mellitus in the light of this suggestion.

Until now no comment has been made concerning either the composition or the identity of the factor present in crude anterior pituitary extracts and playing an important part in the metabolism of carbohydrate. No useful purpose would be served by considering the views of various workers concerning the composition of this substance, since there is no general agreement on this subject; but there is a suggestion regarding its identity that is of great present interest and must therefore be mentioned. Young (1939), in reviewing the relation of the anterior pituitary to carbohydrate metabolism, remarks that the growth hormone may be identical with the diabetogenic substance and says that the evidence at present available is compatible with such an idea. If this is so then it is possible to consider the incidence of diabetes mellitus in human subjects at the period of active growth, and in this way consider the possible importance of overaction of the anterior pituitary as a causal factor in the production of the disorder. Attention has already been directed towards this field of study by certain investigators.

White (1935, 1936) in particular has studied this question: in a series of 303 diabetic children overheight was noticed, amounting to an average of 2.4 inches in 87 per cent of the cases. If this overgrowth is regarded as a sign of temporary overactivity of the pituitary, these cases may be compared to the dogs treated intensively with anterior pituitary extract and developing permanent diabetes as the result. Personal experience of one case was striking: a boy, aged 18 years, grew a total of 4 inches between his 17th and 18th birthdays; at the end of this period diabetes developed and required 40 to 50 units of insulin daily to control the glycosuria. Since the normal increase in height at this age is only 0.5 inches, there is some justification for considering this as an example of an overactive anterior lobe of the pituitary. There is therefore some evidence that overactivity of the pituitary may be a factor in the causation of certain cases of diabetes in childhood. Although it may seem reasonable to suppose that a temporary overactivity of the pituitary may play a part in the production of diabetes in early life, no such close relationship can be demonstrated in connection with the majority of cases - those occurring in adult life. In adult life some measurement of pituitary activity may be obtained by a consideration of the sexual processes, and Joslin (1935) points out that the incidence of diabetes is somewhat increased at puberty and at the menopause in females, but pregnancy has not been found to cause an increased incidence. The influence of puberty and the menopause in this connection may be due to the increase in pituitary activity at these times, but even if this is so only a small number of cases can be explained on this basis. On the other hand, 80 out of 100 adult cases of diabetes mellitus have a history of previous obesity, and this relationship is too close to be merely a matter of coincidence (Joslin, 1935). Obesity is therefore the most important single cause of diabetes mellitus, and so far as the present writer is aware no worker has ever blamed obesity on overaction of the pituitary; indeed the tendency is to regard many examples of obesity as the result of hypopituitarism. Of course all these arguments are based on indirect evidence concerning the activity of the pituitary in clinical diabetes mellitus; direct evidence on this point is scanty. De Wesselow and Griffiths (1936) were able to demonstrate that the plasma of certain elderly diabetics, when injected into rabbits, was capable of checking the development of insulin hypoglycaemia in these animals. The interest of this observation rested in the resemblance between this action of diabetic plasma and that of extracts of the anterior lobe of the pituitary. Further work may demonstrate the existence of a pituitary factor in the majority of cases, but until this is done there seems little reason at present to blame the pituitary as a causal factor of paramount importance in clinical diabetes mellitus. Pathology is also of little assistance in deciding the importance of the pituitary gland in this respect, except in providing negative evidence: Eisenhardt (1938) examined serial and random sections of the pituitary in a series of cases of diabetes mellitus. The cases were chosen so as to give a fair cross section of the diabetic population, but no constant significant change in the pituitary was found.

This conclusion is in agreement with that reached by Warren (1938). Houssay (1936a) attempted to explain the incidence of diabetes mellitus in terms of the degree of activity of the pituitary, and suggested that the pituitary exerted its influence in two ways, hyperactivity and hypoactivity. Hyperactivity was observed clinically in the occurrence of overgrowth in diabetic children, and also in the increased incidence of the disease at puberty and the menopause, and was theoretically associated with an excess of the diabetogenic factor of Houssay. In contrast to this, obesity in the adult and dwarfism in the child suggested hypoactivity of the pituitary, and could be associated in theory with a lack of the pancreotropic hormone of Anselmino and Hoffmann. This explanation may be regarded as an ingenious attempt to apply experimental findings to a clinical problem, but it is not difficult to find obstacles that stand in the way of its acceptance. Such an explanation does not take into account the fact that injection of the diabetogenic substance in the usual quantities into normal animals produces only a temporary glycosuria, and not a permanent diabetes. It is true that the work of Young has altered this, at least for very large quantities, but this was not known at the time of Houssay's suggestion. Again, to explain the existence of diabetes in cases of dwarfism as being due to the absence of a pancreotropic hormone secreted by the pituitary is at first sight attractive, but, as has been mentioned previously, there is no convincing evidence that any such substance exists. Therefore, for the present writer at least, this hypothesis will not bear critical analysis.
For example, f(g(x)) is the composite function that is formed when g(x) is substituted for x in f(x); it can also be written as (f ∘ g)(x) or fg(x), and in the composition (f ∘ g)(x) the inputs to f are the values g(x). A function g is one-to-one (also written 1-1) if every element of the range of g corresponds to exactly one element of the domain of g; such functions have inverses.

In a polynomial, terms are separated by + or - signs. For each term, find its degree by adding the exponents of each variable in it; the largest such degree is the degree of the polynomial. An example of a polynomial in one variable is f(x)=x^4+x^3-18x^2-16x+32, and an example of a polynomial with more than one variable is f(x,y)=x^4+x^3-18x^2-16x+32-y^2.

The domain of a function of one variable is a subset of the real line { x | x ∈ R }, and the range of a real-valued function f is the collection of all real numbers f(x). For many commonly used real functions, the domain is the whole set of real numbers. A linear function is a function that graphs to a straight line; if the function contains more variables, then the other variables should be held constant (known) for it to remain a linear function. The theory of functions of one complex variable contains some of the most powerful and widely useful tools in all of mathematical analysis.

Our first step is to explain what a function of more than one variable is, starting with functions of two independent variables. Temperature, for example, depends on position, so we can define a temperature function T(x,y); other examples are temperature functions T(x,y,t), density functions p(x,y,z) for a three-dimensional solid, and concentration functions C(x,y,z,t), where x, y, and z represent position and t represents time. It is difficult to completely represent a function of more than two variables graphically, since a function of n variables needs an (n+1)-dimensional space; the graph of the function z=sin(sqrt(x^2+y^2)), for instance, is a surface over the plane. If (a_1, ..., a_n) is a point of the interior of the domain of the function f, we can fix the values of x_2, ..., x_n to a_2, ..., a_n, respectively, to get a function of one variable. In this section we will also take a look at limits involving functions of more than one variable: \lim_{(x,y) \to (a,b)} f(x,y) does not exist if the function approaches different values along different paths to (a,b). Fortunately, the functions we will examine will typically be continuous almost everywhere, and the following problems involve the continuity of a function of one variable. The application of derivatives of a function of one variable to the determination of maximum and/or minimum values is also important for functions of two or more variables, but the introduction of more independent variables leads to more possible outcomes for the calculations.

For functions of a random variable, let X denote a random variable with known density f_X(x) and distribution F_X(x), and consider the transformation Y = g(X); this is a transformation of the random variable X into the random variable Y. Y is also a random variable since, for any outcome e, Y(e) = g(X(e)). For any number s, the values of u such that g(u) ≤ s fall in a set of intervals I_s; for example, Y ≤ v1 if u ≤ a, Y ≤ v2 if u ≤ b or c ≤ u ≤ d, and Y ≤ v3 if u ≤ e.

Octave supports five different adaptive quadrature algorithms for computing the integral of a function f over the interval from a to b; these include quad, quadl (numerical integration using an adaptive Lobatto rule), quadv (numerical integration using an adaptive vectorized Simpson's rule), and quadgk. If you want to access a variable from the base workspace, declare the variable at the command line. In some languages a variable declared without a value will have the value undefined, and optional arguments can have default values and types other than Variant.

In programming, a function declaration gives the name of the function and the order of its arguments; the argument list contains the variable names along with their data types, and the block of code is the set of statements which will be executed whenever a call is made to the function. For example, a function which is used to add two integer variables will have two integer arguments. In C++, there are different types of variables (defined with different keywords); for example, int stores integers (whole numbers, without decimals), such as 123 or -123, and double stores floating-point numbers, with decimals, such as 19.99 or -19.99. Like nested loops, we can also have nested functions in Python: we simply create a function using def inside another function to nest two functions.
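As a small illustration of the composition and nested-function ideas above, here is a short Python sketch; the particular functions f and g are arbitrary examples.

# Composing functions (f o g)(x) = f(g(x)) using a nested function
# (a def inside another def). The choices of f and g are arbitrary.

def compose(f, g):
    """Return the composite function x -> f(g(x))."""
    def h(x):            # nested function
        return f(g(x))
    return h

f = lambda x: x**2 + 1
g = lambda x: 3*x - 2
fg = compose(f, g)
print(fg(2))             # f(g(2)) = f(4) = 17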
\section{Self-Stabilizing Distributed Reset Algorithm}\label{sec:algo}

\subsection{Overview of the Algorithm}

In this section, we present our distributed cooperative reset algorithm, called {\tt SDR}\xspace. The formal code of {\tt SDR}\xspace, for each process $u$, is given in Algorithm~\ref{alg:A}. This algorithm aims at reinitializing an input algorithm {\tt I} when necessary. {\tt SDR}\xspace is self-stabilizing in the sense that the composition {\tt I} $\circ$ {\tt SDR}\xspace is self-stabilizing for the specification of {\tt I}. Algorithm {\tt SDR}\xspace works in anonymous networks and is actually multi-initiator: a process $u$ can initiate a reset whenever it locally detects an inconsistency in {\tt I}, {\em i.e.}, whenever the predicate $\neg\mathbf{P\_ICorrect}\xspace(u)$ holds ({\em i.e.}, {\tt I} is locally checkable). So, several resets may be executed concurrently. In this case, they are coordinated: a reset may be partial since we try to prevent resets from overlapping.

\subsection{The Variables}

Each process $u$ maintains two variables in Algorithm {\tt SDR}\xspace: $\variable{st}_u \in \{C,RB,RF\}$, the {\em status} of $u$ with respect to the reset, and $\variable{d}_u \in \mathds{N}$, the {\em distance} of $u$ in a reset.

\paragraph{Variable $\variable{st}_u$.} If $u$ is not currently involved in a reset, then it has status $C$, which stands for {\em correct}. Otherwise, $u$ has status either $RB$ or $RF$, which respectively mean {\em reset broadcast} and {\em reset feedback}. Indeed, a reset is based on a (maybe partial) {\em Propagation of Information with Feedback (PIF)} where processes reset their local state in {\tt I} (using the macro $\variable{reset}$) during the broadcast phase. When a reset locally terminates at process $u$ ({\em i.e.}, when $u$ goes back to status $C$ by executing $\mathbf{rule\_C}\xspace(u)$), each member $v$ of its closed neighborhood satisfies $\mathbf{P\_reset}\xspace(v)$, meaning that they are in a pre-defined initial state of {\tt I}. At the global termination of a reset, every process $u$ involved in that reset has a state in {\tt I} which is consistent {\em w.r.t.} that of its neighbors, {\em i.e.}, $\mathbf{P\_ICorrect}\xspace(u)$ holds. Notice that, to ensure that $\mathbf{P\_ICorrect}\xspace(u)$ holds at the end of a reset and for liveness issues, we enforce that each process $u$ stops executing {\tt I} whenever a member of its closed neighborhood (in particular, the process itself) is involved in a reset: whenever $\neg \mathbf{P\_Clean}\xspace(u)$ holds, $u$ is not allowed to execute {\tt I}.

\paragraph{Variable $\variable{d}_u$.} This variable is meaningless when $u$ is not involved in a reset ({\em i.e.}, when $u$ has status $C$). Otherwise, the distance values are used to arrange the processes involved in resets as a {\em Directed Acyclic Graph (DAG)}. This distributed structure allows us to prevent both livelock and deadlock. Any process $u$ initiating a reset (using rule $\mathbf{rule\_R}\xspace(u)$) takes distance 0. Otherwise, when a reset is propagated to $u$ ({\em i.e.}, when $\mathbf{rule\_RB}\xspace(u)$ is executed), $\variable{d}_u$ is set to the minimum distance of a neighbor involved in a broadcast phase plus 1; see the macro $\variable{compute}(u)$.

\subsection{Typical Execution}\label{sub:normalexec}

Assume the system starts from a configuration where, for every process $u$, $\variable{st}_u = C$. A process $u$ detecting an inconsistency in {\tt I} ({\em i.e.}, when $\neg\mathbf{P\_ICorrect}\xspace(u)$ holds) stops executing {\tt I} and initiates a reset using $\mathbf{rule\_R}\xspace(u)$, unless one of its neighbors $v$ is already broadcasting a reset, in which case it joins the broadcast of some neighbor by $\mathbf{rule\_RB}\xspace(u)$. To initiate a reset, $u$ sets $(\variable{st}_u,\variable{d}_u)$ to $(RB,0)$, meaning that $u$ is the root of a reset (see macro $\variable{beRoot}(u)$), and resets its variables of {\tt I} to a pre-defined state of {\tt I}, which satisfies $\mathbf{P\_reset}\xspace(u)$, by executing the macro $\variable{reset}(u)$. Whenever a process $v$ has a neighbor involved in a broadcast phase of a reset (status $RB$), it stops executing {\tt I} and joins an existing reset using $\mathbf{rule\_RB}\xspace(v)$, even if its state in {\tt I} is correct ({\em i.e.}, even if $\mathbf{P\_ICorrect}\xspace(v)$ holds). To join a reset, $v$ also switches its status to $RB$ and resets its variables of {\tt I} ($\variable{reset}(v)$), yet it sets $\variable{d}_v$ to the minimum distance of its neighbors involved in a broadcast phase plus 1; see the macro $\variable{compute}(v)$. Hence, if the configuration of {\tt I} is not legitimate, then within at most $n$ rounds, each process receives the broadcast of some reset. Meanwhile, processes (temporarily) stop executing {\tt I} until the reset terminates in their closed neighborhood, thanks to the predicate $\mathbf{P\_Clean}\xspace$. When a process $u$ involved in the broadcast phase of some reset realizes that all its neighbors are involved in a reset ({\em i.e.}, have status $RB$ or $RF$), it initiates the feedback phase by switching to status $RF$, using $\mathbf{rule\_RF}\xspace(u)$. The feedback phase is then propagated up in the DAG described by the distance values: a broadcasting process $u$ switches to the feedback phase once no neighbor $v$ has status $C$ and every neighbor $v$ with $\variable{d}_v > \variable{d}_u$ has status $RF$. This way the feedback phase is propagated up the DAG within at most $n$ additional rounds. Once a root of some reset has status $RF$, it can initiate the last phase of the reset: all processes involved in the reset have to switch to status $C$, using $\mathbf{rule\_C}\xspace$, meaning that the reset is done. The values $C$ are propagated down the reset DAG within at most $n$ additional rounds. A process $u$ can execute {\tt I} again when all members of its closed neighborhood (that is, including $u$ itself) have status $C$, {\em i.e.}, when it satisfies $\mathbf{P\_Clean}\xspace(u)$. Hence, overall in this execution, the system reaches a configuration $\gamma$ where all resets are done within at most $3n$ rounds. In $\gamma$, all processes have status $C$. However, a process has not necessarily kept a state satisfying $\mathbf{P\_reset}\xspace$ ({\em i.e.}, the initial pre-defined state of {\tt I}) in this configuration. Indeed, some process may have started executing {\tt I} again before $\gamma$. However, the predicate $\mathbf{P\_Clean}\xspace$ ensures that no resetting process has been involved in these latter (partial) executions of {\tt I}. Hence, {\tt SDR}\xspace rather ensures that, from $\gamma$ on, all processes are in states of {\tt I} that are coherent with each other. That is, $\gamma$ is a so-called {\em normal configuration}, where $\mathbf{P\_Clean}\xspace(u) \And \mathbf{P\_ICorrect}\xspace(u)$ holds for every process $u$.
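To make the typical execution above easier to follow, here is an illustrative, synchronous-round Python simulation of the broadcast ($RB$), feedback ($RF$) and cleanup ($C$) phases on a toy graph. It is not the authors' Algorithm {\tt SDR} (which is asynchronous, multi-initiator and composed with an input algorithm {\tt I}); the graph, the single initiator and the round scheduler are assumptions made only for illustration.

# Toy, synchronous-round abstraction of the broadcast/feedback/cleanup phases
# described above. NOT the authors' Algorithm SDR: the graph, the initiator
# set and the scheduler are illustrative assumptions.

def simulate_reset(neighbors, initiators):
    st = {u: "C" for u in neighbors}            # status: C, RB or RF
    d = {u: None for u in neighbors}            # distance, meaningful only when not C
    for u in initiators:                        # roots of the reset
        st[u], d[u] = "RB", 0
    for rnd in range(1, 3 * len(neighbors) + 1):
        new_st, new_d = dict(st), dict(d)
        for u, nbrs in neighbors.items():
            if st[u] == "C" and any(st[v] == "RB" for v in nbrs):
                # join the broadcast, one hop farther than the closest RB neighbor
                new_st[u] = "RB"
                new_d[u] = min(d[v] for v in nbrs if st[v] == "RB") + 1
            elif st[u] == "RB" and all(st[v] != "C" for v in nbrs) \
                    and all(st[v] == "RF" for v in nbrs if d[v] > d[u]):
                new_st[u] = "RF"                # start the feedback phase
            elif st[u] == "RF" and (d[u] == 0 or all(
                    st[v] == "C" for v in nbrs if d[v] is not None and d[v] < d[u])):
                new_st[u], new_d[u] = "C", None # cleanup propagates down the DAG
        st, d = new_st, new_d
        if all(s == "C" for s in st.values()):
            return rnd                          # rounds until the reset is done
    return None

# A path 0-1-2 with a single initiator terminates within 3n rounds:
print(simulate_reset({0: [1], 1: [0, 2], 2: [1]}, initiators=[0]))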
\subsection{Stabilization of the Reset}\label{err:corr} If a process $u$ is in an incorrect state of Algorithm {\tt SDR}\xspace ({\em i.e.}, if $\mathbf{P\_R1}\xspace(u) \vee \mathbf{P\_R2}\xspace(u)$ holds), we proceed as for inconsistencies in Algorithm {\tt I}. Either it joins an existing reset (using $\mathbf{rule\_RB}\xspace(u)$) because at least one of its neighbors is in a broadcast phase, or it initiates its own reset using $\mathbf{rule\_R}\xspace(u)$. Notice also that starting from an arbitrary configuration, the system may contain some reset in progress. However, similarly to the typical execution, the system stabilizes within at most $3n$ rounds to a normal configuration. Algorithm {\tt SDR}\xspace is also efficient in moves. Indeed, in Sections~\ref{sect:alliance} and~\ref{sect:unison} we will give two examples of composition {\tt I} $\circ$ {\tt SDR}\xspace that stabilize in a polynomial number of moves. Such complexities are mainly due to the coordination of the resets which, in particular, guarantees that if a process $u$ is enabled to initiate a reset ($\mathbf{P\_Up}\xspace(u)$) or the root of a reset with status $RB$, then it satisfies this disjunction since the initial configuration ({\em cf.}, Theorem~\ref{theo:pseudoRoots}, page \pageref{theo:pseudoRoots}). \subsection{Requirements on the Input Algorithm}\label{sect:require} According to the previous explanation, Algorithm {\tt I} should satisfy the following prerequisites: \begin{enumerate} \item Algorithm {\tt I} should not write into the variables of {\tt SDR}\xspace, {\em i.e.}, variables $\variable{st}_u$ and $\variable{d}_u$, for every process $u$. \label{RQ1} \item For each process $u$, Algorithm {\tt I} should provide the two input predicates $\mathbf{P\_ICorrect}\xspace(u)$ and $\mathbf{P\_reset}\xspace(u)$ to {\tt SDR}\xspace, and the macro $\variable{reset}(u)$. Those inputs should satisfy: \begin{enumerate} \item $\mathbf{P\_ICorrect}\xspace(u)$ does not involve any variable of {\tt SDR}\xspace and is closed by Algorithm {\tt I}. \label{RQ2} \item $\mathbf{P\_reset}\xspace(u)$ involves neither a variable of {\tt SDR}\xspace nor a variable of a neighbor of $u$. \label{RQ4} \item If $\neg \mathbf{P\_ICorrect}\xspace(u) \vee \neg \mathbf{P\_Clean}\xspace(u)$ holds ({\em n.b.} $\mathbf{P\_Clean}\xspace(u)$ is defined in {\tt SDR}\xspace), then no rule of Algorithm {\tt I} is enabled at $u$. \label{RQ3} \item If $\mathbf{P\_reset}\xspace(v)$ holds, for every $v \in \variable{N}[u]$, then $\mathbf{P\_ICorrect}\xspace(u)$ holds.\label{RQ6} \item If $u$ performs a move in $\gamma \mapsto \gamma'$, where, in particular, it modifies its variables in Algorithm {\tt I} by executing $\variable{reset}(u)$ (only), then $\mathbf{P\_reset}\xspace(u)$ holds in $\gamma'$.\label{RQ5} \end{enumerate} \end{enumerate} \input{code} \section{$(f,g)$-alliance}\label{sect:alliance} \subsection{The Problem} The $(f,g)$-alliance problem has been defined by Dourado {\em et al.}~\cite{DouradoPRS11}. Given a graph $G =(V,E)$, and two non-negative integer-valued functions on nodes $f$ and $g$, a subset of nodes $A \subseteq
of the same design paradigms that digital computers have traditionally utilized. One such class of models, comprising the Differentiable Neural Computer (DNC) \citep{DNC16} and its predecessor, the Neural Turing Machine (NTM) \citep{NTM14}, structures the architecture to explicitly separate memory from computation. The DNC has a recurrent neural controller that can access an external memory resource by executing differentiable read and write operations. This allows the DNC to act and memorize in a structured manner resembling a computer processor, where read and write operations are sequential and data is stored distinctly from computation. The DNC has been used successfully to solve complicated algorithmic tasks, such as finding shortest paths in a graph or querying a database for entity relations. Building on these previous external memories, we introduce a new architecture called the Neural Map, a structured memory designed specifically for reinforcement learning agents in 3D environments. The Neural Map architecture overcomes some of the shortcomings of the previously mentioned neural memories. First, it uses an adaptable write operation, so its size and computational cost do not grow with the time horizon of the environment as they do with memory networks. Second, we impose a particular inductive bias on the write operation so that it 1) is well suited to 3D environments where navigation is a core component of successful behaviours, and 2) uses a sparse write operation that prevents the frequent overwriting of memory locations that can occur with NTMs and DNCs. To accomplish this, we structure a DNC-style external memory in the form of a 2-dimensional map, where each position in the map is a distinct memory. To demonstrate the effectiveness of the Neural Map, we run it on a variety of 2D partially-observable maze-based environments and test it against LSTM and memory network policies. Finally, to establish its scalability, we run a Neural Map agent on a challenging 3D maze environment based on the video game Doom. \section{Neural Map} In this section, we will describe the details of the neural map. We assume we want our agent to act within some 2- or 3-dimensional environment. The neural map is the agent's internal memory storage that can be read from and written to during interaction with its environment, but where the write operator is selectively limited to affect only the part of the neural map that represents the area where the agent is currently located. For this paper, we assume for simplicity that we are dealing with a 2-dimensional map. This can easily be extended to 3-dimensional or even higher-dimensional maps (e.g., a 4D map with a 3D sub-map for each cardinal direction the agent can face). Let the agent's position be $(x,y)$ with $x \in\mathbb{R}$ and $y \in\mathbb{R}$, and let the neural map $M$ be a $C\times H\times W$ feature block, where $C$ is the feature dimension, $H$ is the vertical extent of the map and $W$ is the horizontal extent. Assume there exists some coordinate normalization function $\psi(x,y)$ such that every unique $(x,y)$ can be mapped into $(x',y')$, where $x'\in \{0,\hdots,W\}$ and $y'\in \{0,\hdots,H\}$. For ease of notation, suppose in the sequel that all coordinates have been normalized by $\psi$ into neural map space. Let $s_t$ be the current state embedding, $M_t$ be the current neural map, and $(x_t,y_t)$ be the current position of the agent within the neural map.
The Neural Map is defined by the following set of equations: \begin{align} r_t &= read(M_t) \\ c_t &= context(M_t, s_t, r_t) \\ w_{t+1}^{(x_t,y_t)} &= write(s_t, r_t, c_t, M_t^{(x_t,y_t)}) \\ M_{t+1} &= update(M_t, w_{t+1}^{(x_t,y_t)}) \\ o_t &= [r_t, c_t, w_{t+1}^{(x_t,y_t)}] \\ \pi_t(a|s) &= \text{Softmax}(f(o_t)), \end{align} where $w_{t}^{(x_t,y_t)}$ represents the feature at position $(x_t,y_t)$ at time $t$, $[x_1,\hdots,x_k]$ represents a concatenation operation, and $o_{t}$ is the output of the neural map at time $t$ which is then processed by another deep network $f$ to get the policy outputs $\pi_t(a|s)$. We will now separately describe each of the above operations in more detail. \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{neuralmap_compgraphnew.pdf} \caption{\small A visualization of two time steps of the neural map.} \vspace{-0.1in} \end{figure} \subsection{Global Read Operation} The $read$ operation passes the current neural map $M_t$ through a deep convolutional network and produces a $C$-dimensional feature vector $r_t$. The global read vector $r_t$ summarizes information about the entire map. \subsection{Context Read Operation} The $context$ operation performs context-based addressing to check whether certain features are stored in the map. It takes as input the current state embedding $s_t$ and the current global read vector $r_t$ and first produces a query vector $q_t$. The inner product of the query vector and each feature $M_t^{(x,y)}$ in the neural map is then taken to get scores $a_t^{(x,y)}$ at all positions $(x,y)$. The scores are then normalized to get a probability distribution $\alpha_t^{(x,y)}$ over every position in the map, also known as ``soft attention''~\citep{bahdanau2015nmt}. This probability distribution is used to compute a weighted average $c_t$ over all features $M_t^{(x,y)}$. To summarize: \begin{align} \label{eq:context1} q_t &= W [s_t, r_t] \\ \label{eq:context2} a_t^{(x,y)} &= q_t \cdot M_t^{(x,y)} \\ \label{eq:contextprob} \alpha_t^{(x,y)} &= \frac{e^{a_t^{(x,y)}}}{\sum_{(w,z)} e^{a_t^{(w,z)}}} \\ \label{eq:context3} c_t &= \sum_{(x,y)} \alpha_t^{(x,y)} M_t^{(x,y)}, \end{align} where $W$ is a weight matrix. The context read operation allows the neural map to operate as an associative memory: the agent provides some possibly incomplete memory (the query vector $q_t$) and the operation will return the completed memory that most closely matches $q_t$. So, for example, the agent can query whether it has seen something similar to a particular landmark that is currently within its view. \subsection{Local Write Operation} Given the agent's current position $(x_t,y_t)$ at time $t$, the $write$ operation takes as input the current state embedding $s_t$, the global read output $r_t$, the context read vector $c_t$ and the current feature at position $(x_t,y_t)$ in the neural map $M_t^{(x_t,y_t)}$ and produces, using a deep neural network $f$, a new C-dimensional vector $w_{t+1}^{(x_t,y_t)}$. This vector functions as the new local write candidate vector at the current position $(x_t,y_t)$: \begin{align} w_{t+1}^{(x_t,y_t)} &= f([s_t, r_t, c_t, M_t^{(x_t,y_t)}]) \end{align} \subsection{Map Update Operation} The $update$ operation creates the neural map for the next time step. 
The new neural map $M_{t+1}$ is equal to the old neural map $M_t$, except at the current agent position $(x_t,y_t)$, where the current write candidate vector $w_{t+1}^{(x_t,y_t)}$ is stored: \begin{align} M_{t+1}^{(a,b)} = \left\{ \begin{array}{lr} w_{t+1}^{(x_t,y_t)}, & \text{for } (a,b)=(x_t,y_t) \\ M_t^{(a,b)}, & \text{for } (a,b)\neq(x_t,y_t) \end{array}\right. \end{align} \subsection{Operation Variants} There are several modifications that can be made to the standard operations as defined above. Below we discuss some variants. \subsubsection{Localized Read Operation} Instead of passing the entire neural map through a deep convolutional network, a spatial subset of the map can be passed instead. For example, a Spatial Transformer Network~\citep{STN15} can be used to attentively subsample the neural map at particular locations and scales. This can be helpful when the environment requires a large high-resolution map which can be computationally expensive to process in its entirety at each time step. \subsubsection{Key-Value Context Read Operation} We can impose a stronger bias on the context addressing operation by splitting each feature of the neural map into two parts $M_t^{(x,y)} = [k_t^{(x,y)}, v_t^{(x,y)}]$, where $k_t^{(x,y)}$ is the $(C/2)$-dimensional ``key'' feature and $v_t^{(x,y)}$ is the $(C/2)$-dimensional ``value'' feature~\cite{key_value}. The key features are matched against the query vector (which is now a $(C/2)$-dimensional vector) to get the probability distribution $\alpha_t^{(x,y)}$, and the weighted average is taken over the value features. Concretely: \begin{align} q_t &= W [s_t, r_t] \\ M_t^{(x,y)} &= [k_t^{(x,y)}, v_t^{(x,y)}] \\ a_t^{(x,y)} &= q_t \cdot k_t^{(x,y)} \\ \alpha_t^{(x,y)} &= \frac{e^{a_t^{(x,y)}}}{\sum_{(w,z)} e^{a_t^{(w,z)}}} \\ c_t &= \sum_{(x,y)} \alpha_t^{(x,y)} v_t^{(x,y)} \end{align} Having distinct key-value features allows the network to more explicitly separate the addressing feature space from the content feature space. \subsubsection{GRU-based Local Write Operation} As previously defined, the write operation simply replaces the vector at the agent's current position with a new feature produced by a deep network. Instead of this hard rewrite of the current position's feature vector, we can use a gated write operation based on the recurrent update equations of
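For readers who want to see the shapes involved, the following minimal numpy sketch implements the context read (soft attention over map positions) and the local write/update defined above. The deep networks that produce the query and the write candidate are replaced here by a random weight matrix and a tanh placeholder, and all sizes are arbitrary assumptions.

# Minimal numpy sketch of the context read, local write and map update defined
# above. Shapes and names follow the text; the query and write networks are
# replaced by placeholder linear/tanh maps.
import numpy as np

C, H, W = 8, 5, 5
M = np.random.randn(C, H, W)             # neural map M_t
s = np.random.randn(16)                  # state embedding s_t
r = M.reshape(C, -1).mean(axis=1)        # stand-in for the global read vector r_t

# Context read: q_t = W[s_t, r_t]; soft attention over all map positions.
W_q = np.random.randn(C, s.size + r.size)
q = W_q @ np.concatenate([s, r])                      # query vector q_t
scores = np.einsum("c,chw->hw", q, M)                 # a_t^{(x,y)} = q_t . M_t^{(x,y)}
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                                  # soft-attention weights
c = np.einsum("hw,chw->c", alpha, M)                  # context vector c_t

# Local write + update: only the agent's current cell is overwritten.
x_t, y_t = 2, 3                                       # agent position in map space
w_new = np.tanh(s[:C] + r + c + M[:, y_t, x_t])       # placeholder for the write network
M_next = M.copy()
M_next[:, y_t, x_t] = w_new                           # M_{t+1}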
# Model Tools

POMDPTools contains assorted tools that are not part of the core POMDPs.jl interface for working with (PO)MDP models.

## Interface Extensions

POMDPTools contains several interface extensions that provide shortcuts and standardized ways of dealing with extra data. Programmers should use these functions whenever possible in case optimized implementations are available, but all of the functions have default implementations based on the core POMDPs.jl interface. Thus, if the core interface is implemented, all of these functions will also be available.

### Weighted Iteration

Many solution techniques, for example value iteration, require iteration through the support of a distribution and evaluation of the probability mass for each value. In some cases, looking up the probability mass is expensive, so it is more efficient to iterate through value => probability pairs. weighted_iterator provides a standard interface for this.

POMDPTools.POMDPDistributions.weighted_iterator (Function)

    weighted_iterator(d)

Return an iterator through pairs of the values and probabilities in distribution d. This is designed to speed up value iteration. Distributions are encouraged to provide a custom optimized implementation if possible.

Example:

    julia> d = BoolDistribution(0.7)
    BoolDistribution(0.7)

    julia> collect(weighted_iterator(d))
    2-element Array{Pair{Bool,Float64},1}:
      true => 0.7
     false => 0.3

### Observation Weight

Sometimes, e.g. in particle filtering, the relative likelihood of an observation is required in addition to a generative model, and it is often tedious to implement a custom observation distribution type. For this case, the shortcut function obs_weight is provided.

POMDPTools.ModelTools.obs_weight (Function)

    obs_weight(pomdp, s, a, sp, o)

Return a weight proportional to the likelihood of receiving observation o from state sp (and a and s if they are present). This is a useful shortcut for particle filtering so that the observation distribution does not have to be represented.

### Ordered Spaces

It is often useful to have a list of states, actions, or observations ordered consistently with the respective index function from POMDPs.jl. Since the POMDPs.jl interface does not demand that spaces be ordered consistently with index, the states, actions, and observations functions are not sufficient. Thus POMDPTools provides ordered_actions, ordered_states, and ordered_observations to provide this capability.

POMDPTools.ModelTools.ordered_actions (Function)

    ordered_actions(mdp)

Return an AbstractVector of actions ordered according to actionindex(mdp, a). ordered_actions(mdp) will always return an AbstractVector{A} v containing all of the actions in actions(mdp) in the order such that actionindex(mdp, v[i]) == i. You may wish to override this for your problem for efficiency.

POMDPTools.ModelTools.ordered_states (Function)

    ordered_states(mdp)

Return an AbstractVector of states ordered according to stateindex(mdp, s). ordered_states(mdp) will always return an AbstractVector{S} v containing all of the states in states(mdp) in the order such that stateindex(mdp, v[i]) == i. You may wish to override this for your problem for efficiency.

POMDPTools.ModelTools.ordered_observations (Function)

    ordered_observations(pomdp)

Return an AbstractVector of observations ordered according to obsindex(pomdp, o). ordered_observations(pomdp) will always return an AbstractVector{O} v containing all of the observations in observations(pomdp) in the order such that obsindex(pomdp, v[i]) == i.
You may wish to override this for your problem for efficiency. ### Info Interface It is often the case that useful information besides the belief, state, action, etc is generated by a function in POMDPs.jl. This information can be useful for debugging or understanding the behavior of a solver, updater, or problem. The info interface provides a standard way for problems, policies, solvers or updaters to output this information. The recording simulators from POMDPTools automatically record this information. To specify info from policies, solvers, or updaters, implement the following functions: POMDPTools.ModelTools.action_infoFunction a, ai = action_info(policy, x) Return a tuple containing the action determined by policy 'p' at state or belief 'x' and information (usually a NamedTuple, Dict or nothing) from the calculation of that action. By default, returns nothing as info. POMDPTools.ModelTools.solve_infoFunction policy, si = solve_info(solver, problem) Return a tuple containing the policy determined by a solver and information (usually a NamedTuple, Dict or nothing) from the calculation of that policy. By default, returns nothing as info. POMDPTools.ModelTools.update_infoFunction bp, i = update_info(updater, b, a, o) Return a tuple containing the new belief and information (usually a NamedTuple, Dict or nothing) from the belief update. By default, returns nothing as info. ## Model Transformations POMDPTools contains several tools for transforming problems into other classes so that they can be used by different solvers. ### Linear Algebra Representations For some algorithms, such as value iteration, it is convenient to use vectors that contain the reward for every state, and matrices that contain the transition probabilities. These can be constructed with the following functions: POMDPTools.ModelTools.transition_matricesFunction transition_matrices(p::SparseTabularProblem) Accessor function for the transition model of a sparse tabular problem. It returns a list of sparse matrices for each action of the problem. transition_matrices(m::Union{MDP,POMDP}) transition_matrices(m; sparse=true) Construct transition matrices for (PO)MDP m. The returned object is an associative object (usually a Dict), where the keys are actions. Each value in this object is an AbstractMatrix where the row corresponds to the state index of s and the column corresponds to the state index of s'. The entry in the matrix is the probability of transitioning from state s to state s'. POMDPTools.ModelTools.reward_vectorsFunction reward_vectors(m::Union{MDP, POMDP}) Construct reward vectors for (PO)MDP m. The returned object is an associative object (usually a Dict), where the keys are actions. Each value in this object is an AbstractVector where the index corresponds to the state index of s and the entry is the reward for that state. ### Sparse Tabular MDPs and POMDPs The SparseTabularMDP and SparseTabularPOMDP represents discrete problems defined using the explicit interface. The transition and observation models are represented using sparse matrices. Solver writers can leverage these data structures to write efficient vectorized code. A problem writer can define its problem using the explicit interface and it can be automatically converted to a sparse tabular representation by calling the constructors SparseTabularMDP(::MDP) or SparseTabularPOMDP(::POMDP). 
See the following docs to know more about the matrix representation and how to access the fields of the SparseTabular objects: POMDPTools.ModelTools.SparseTabularMDPType SparseTabularMDP An MDP object where states and actions are integers and the transition is represented by a list of sparse matrices. This data structure can be useful to exploit in vectorized algorithm (e.g. see SparseValueIterationSolver). The recommended way to access the transition and reward matrices is through the provided accessor functions: transition_matrix and reward_vector. Fields • T::Vector{SparseMatrixCSC{Float64, Int64}} The transition model is represented as a vector of sparse matrices (one for each action). T[a][s, sp] the probability of transition from s to sp taking action a. • R::Array{Float64, 2} The reward is represented as a matrix where the rows are states and the columns actions: R[s, a] is the reward of taking action a in sate s. • terminal_states::Set{Int64} Stores the terminal states • discount::Float64 The discount factor Constructors • SparseTabularMDP(mdp::MDP) : One can provide the matrices to the default constructor or one can construct a SparseTabularMDP from any discrete state MDP defined using the explicit interface. Note that constructing the transition and reward matrices requires to iterate over all the states and can take a while. To learn more information about how to define an MDP with the explicit interface please visit https://juliapomdp.github.io/POMDPs.jl/latest/explicit/ . • SparseTabularMDP(smdp::SparseTabularMDP; transition, reward, discount) : This constructor returns a new sparse MDP that is a copy of the original smdp except for the field specified by the keyword arguments. POMDPTools.ModelTools.SparseTabularPOMDPType SparseTabularPOMDP A POMDP object where states and actions are integers and the transition and observation distributions are represented by lists of sparse matrices. This data structure can be useful to exploit in vectorized algorithms to gain performance (e.g. see SparseValueIterationSolver). The recommended way to access the transition, reward, and observation matrices is through the provided accessor functions: transition_matrix, reward_vector, observation_matrix. Fields • T::Vector{SparseMatrixCSC{Float64, Int64}} The transition model is represented as a vector of sparse matrices (one for each action). T[a][s, sp] the probability of transition from s to sp taking action a. • R::Array{Float64, 2} The reward is represented as a matrix where the rows are states and the columns actions: R[s, a] is the reward of taking action a in sate s. • O::Vector{SparseMatrixCSC{Float64, Int64}} The observation model is represented as a vector of sparse matrices (one for each action). O[a][sp, o] is the probability of observing o from state sp after having taken action a. • terminal_states::Set{Int64} Stores the terminal states • discount::Float64 The discount factor Constructors • SparseTabularPOMDP(pomdp::POMDP) : One can provide the matrices to the default constructor or one can construct a
\ref{eq_eta_den}) to consider the existence of a representative value for specific regions. In the following, since we only deal with one type of region, the 500 pc wide ones, we simplify the notation and refer to the analysed region-averaged quantities (e.g. $\overline{\Sigma}_{\rm{SFR\thinspace recent}}$, $\overline{\Sigma}_{\rm{SFR\thinspace past}}$, $\dot{\overline{{\Sigma}}}_{\rm{net \thinspace flow}}$) without the average symbols, i.e. simply as $\Sigma_{\rm{SFR\thinspace recent}}$, $\Sigma_{\rm{SFR\thinspace past}}$, and $\dot{{\Sigma}}_{\rm{net \thinspace flow}}$. Therefore, when we present an average, it will be an average over several 500 pc wide regions. Assuming the instantaneous recycling approximation \citep{2014ARA&A..52..415M} for stars more massive than $3\rm{M_{\odot}}$ ($\tau_{\rm{MS}}\sim 0.6\rm{Gyr}$, where $\tau_{\rm{MS}}$ is the main sequence lifetime), and a Chabrier IMF \citep{2003PASP..115..763C}, we obtain a value of $R=0.27$. We use the values $A=10^{-4.32}\rm{M_{\odot}/kpc^2/yr}$ and $N=1.56$ for the KS law \citep{2007ApJ...671..333K}. \section{Results} \begin{figure} \includegraphics[width=0.5\textwidth]{ugc11001_500_sfr1_sfr2.pdf} \caption{Recent star formation rate surface density, $\Sigma_{\rm{SFR\thinspace recent}}$, versus the past star formation rate surface density, $\Sigma_{\rm{SFR\thinspace past}}$, for the UGC 11001 galaxy. The red dots are the regions identified as those on the envelope. We plot the fit of Eq. \ref{eq_etafit} to the regions on the envelope, together with the resulting parameters and the 1-$\sigma$ uncertainty range of the fit shown as a shaded region. } \label{fig_diagram_ex} \end{figure} As an example, we plot the $\Sigma_{\rm{SFR\thinspace recent}}$ versus $\Sigma_{\rm{SFR\thinspace past}}$ diagram for one of the galaxies, UGC 11001, in Fig. \ref{fig_diagram_ex}. The corresponding diagrams for all of the galaxies are shown in Fig. \ref{sfr_diagrams}. Each point in these diagrams represents the relation between $\Sigma_{\rm{SFR\thinspace recent}}$ and $\Sigma_{\rm{SFR\thinspace past}}$ for one region, which depends on the values of $\dot{\Sigma}_{\rm{net\thinspace flow}}$ and $\eta$ (Eq. \ref{eq_etafit}). We identify the regions having the maximum $\Sigma_{\rm{SFR\thinspace recent}}$ per bin of $\Sigma_{\rm{SFR\thinspace past}}$ as the regions on the envelope. These envelope regions are shown as red dots in Fig. \ref{fig_diagram_ex}, and as red squares in the MUSE-recovered false-color image of UGC 11001 in Fig. \ref{fig_colormuse}.
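As a schematic illustration of the envelope selection and fit described above, the sketch below bins the regions in $\Sigma_{\rm{SFR\thinspace past}}$, keeps the region with the maximum $\Sigma_{\rm{SFR\thinspace recent}}$ in each bin, and fits those envelope points by least squares. The straight-line model is only a placeholder for Eq. \ref{eq_etafit} (whose exact form, involving $\eta$ and $\dot{\Sigma}_{\rm{flow\thinspace max}}$, is defined earlier in the paper), and all function and variable names are illustrative.

```python
import numpy as np

def envelope_indices(sfr_past, sfr_recent, n_bins=20):
    """Indices of the regions on the envelope: the region with the maximum
    recent SFR surface density within each bin of past SFR surface density."""
    edges = np.linspace(sfr_past.min(), sfr_past.max(), n_bins + 1)
    which_bin = np.clip(np.digitize(sfr_past, edges) - 1, 0, n_bins - 1)
    idx = []
    for b in range(n_bins):
        members = np.flatnonzero(which_bin == b)
        if members.size:
            idx.append(members[np.argmax(sfr_recent[members])])
    return np.asarray(idx)

def fit_envelope(sfr_past, sfr_recent):
    """Least-squares straight line through the envelope regions
    (a stand-in for fitting Eq. (eq_etafit) for eta and the flow term)."""
    env = envelope_indices(sfr_past, sfr_recent)
    slope, intercept = np.polyfit(sfr_past[env], sfr_recent[env], 1)
    return slope, intercept, env
```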
\begin{table} \centering \caption{Estimated mass-loading factors, $\eta$, maximum flow gas surface density term, $\dot{\Sigma}_{\rm{flow\thinspace max}}$, and the associated average stellar mass surface density for the regions on the envelope, $\Sigma_{*}$.} \label{tab_results} \begin{tabular}{rrrr} \hline Galaxy identifier & $\eta$ $^a$ & $\dot{\Sigma}_{\rm{flow\thinspace max}}$ $^b$ & $\Sigma_{*}$ $^c$ \\ & & $10^{-8}\rm{M_{\odot}\thinspace yr^{-1}\thinspace kpc^{-2}}$ & $10^{6}\rm{M_{\odot}\thinspace kpc^{-2}}$ \\ \hline pgc33816 & $4.8 \pm 0.9$ & $4.9\pm 0.9$ & $27\pm 15 $ \\ eso184-g082 & $5.0 \pm 1.0$ & $8.7\pm 0.6$ & $49\pm 20 $ \\ eso467-062 & $8.0 \pm 2.0$ & $14.0\pm 1.0$ & $51\pm 39 $ \\ ugc272 & $3.4 \pm 0.2$ & $6.9\pm 0.4$ & $57\pm 44 $ \\ ngc5584 & $2.2 \pm 0.4$ & $4.0\pm 1.0$ & $60\pm 31 $ \\ eso319-g015 & $5.0 \pm 2.0$ & $11.0\pm 3.0$ & $66\pm 65 $ \\ ugc11214 & $2.6 \pm 0.6$ & $9.0\pm 2.0$ & $84\pm 33 $ \\ ngc6118 & $2.19 \pm 0.09$ & $2.8\pm 0.3$ & $90\pm 51 $ \\ ic1158 & $6.8 \pm 0.4$ & $16.3\pm 0.3$ & $109\pm 28 $ \\ ngc5468 & $2.2 \pm 0.8$ & $23.0\pm 2.0$ & $113\pm 66 $ \\ eso325-g045 & $1.7 \pm 0.2$ & $12.0\pm 1.0$ & $121\pm 57 $ \\ ngc1954 & $3.3 \pm 0.5$ & $23.8\pm 0.8$ & $121\pm 31 $ \\ ic5332 & $3.0 \pm 1.0$ & $12.0\pm 2.0$ & $120\pm 100 $ \\ ugc04729 & $2.8 \pm 0.6$ & $7.0\pm 2.0$ & $126\pm 57 $ \\ ngc2104 & $1.7 \pm 0.6$ & $4.0\pm 2.0$ & $132\pm 62 $ \\ eso316-g7 & $2.0 \pm 1.0$ & $12.0\pm 6.0$ & $136\pm 39 $ \\ eso298-g28 & $6.0 \pm 0.7$ & $47.0\pm 2.0$ & $136\pm 46 $ \\ mcg-01-57-021 & $7.0 \pm 1.0$ & $17.0\pm 2.0$ & $137\pm 24 $ \\ pgc128348 & $2.9 \pm 0.1$ & $11.6\pm 0.7$ & $140\pm 92 $ \\ pgc1167400 & $2.3 \pm 0.3$ & $4.1\pm 0.8$ & $141\pm 78 $ \\ ngc2835 & $2.2 \pm 0.7$ & $7.0\pm 3.0$ & $144\pm 40 $ \\ ic2151 & $2.0 \pm 0.5$ & $5.0\pm 3.0$ & $146\pm 61 $ \\ ngc988 & $1.2 \pm 0.4$ & $1.0\pm 2.0$ & $158\pm 63 $ \\ ngc1483 & $3.0 \pm 2.0$ & $13.0\pm 5.0$ & $158\pm 40 $ \\ ngc7421 & $1.1 \pm 0.3$ & $1.0\pm 0.9$ & $167\pm 78 $ \\ fcc290 & $2.0 \pm 0.4$ & $3.0\pm 1.0$ & $169\pm 42 $ \\ ic344 & $2.5 \pm 0.2$ & $11.0\pm 1.0$ & $171\pm 78 $ \\ ngc3389 & $4.4 \pm 0.7$ & $32.4\pm 0.7$ & $190\pm 130 $ \\ eso246-g21 & $2.9 \pm 0.7$ & $7.0\pm 2.0$ & $188\pm 67 $ \\ pgc170248 & $4.9 \pm 0.7$ & $16.0\pm 1.0$ & $192\pm 77 $ \\ ngc7329 & $4.1 \pm 0.2$ & $7.8\pm 0.4$ & $200\pm 120 $ \\ ugc12859 & $2.8 \pm 0.3$ & $5.1\pm 0.9$ & $202\pm 90 $ \\ ugc1395 & $2.7 \pm 0.3$ & $8.0\pm 1.0$ & $200\pm 160 $ \\ ngc5339 & $2.9 \pm 0.4$ & $5.0\pm 1.0$ & $210\pm 150 $ \\ ngc1591 & $2.3 \pm 0.7$ & $19.0\pm 4.0$ & $212\pm 93 $ \\ pgc98793 & $1.7 \pm 0.1$ & $6.6\pm 0.8$ & $214\pm 95 $ \\ ugc5378 & $2.7 \pm 0.4$ & $10.0\pm 1.0$ & $223\pm 93 $ \\ ngc4806 & $1.9 \pm 0.2$ & $10.3\pm 0.9$ & $230\pm 170 $ \\ ngc1087 & $1.3 \pm 0.3$ & $14.0\pm 2.0$ & $230\pm 170 $ \\ ngc4980 & $1.8 \pm 0.3$ & $6.6\pm 0.9$ & $240\pm 170 $ \\ ngc6902 & $2.4 \pm 0.3$ & $8.0\pm 1.0$ & $240\pm 13 $ \\ ugc11001 & $2.5 \pm 0.1$ & $21.8\pm 0.5$ & $250\pm 140 $ \\ ic217 & $1.2 \pm 0.4$ & $3.0\pm 2.0$ & $266\pm 88 $ \\ eso506-g004 & $3.4 \pm 0.2$ & $13.9\pm 0.4$ & $270\pm 170 $ \\ ic2160 & $2.4 \pm 0.4$ & $12.0\pm 2.0$ & $270\pm 260 $ \\ ngc1385 & $0.9 \pm 0.1$ & $9.7\pm 0.9$ & $272\pm 44 $ \\ mcg-01-33-034 & $1.0 \pm 0.09$ & $7.9\pm 0.5$ & $270\pm 150 $ \\ ngc4603 & $0.9 \pm 0.1$ & $3.0\pm 1.0$ & $276\pm 85 $ \\ ngc4535 & $4.1 \pm 0.7$ & $15.0\pm 2.0$ & $280\pm 200 $ \\ ngc1762 & $2.3 \pm 0.2$ & $8.2\pm 0.7$ & $290\pm 140 $ \\ ngc3451 & $3.7 \pm 0.4$ & $11.0\pm 1.0$ & $300\pm 180 $ \\ ngc4790 & $1.2 \pm 0.3$ & $9.0\pm 2.0$ & 
$334\pm 63 $ \\ ngc3244 & $1.8 \pm 0.2$ & $8.0\pm 1.0$ & $340\pm 270 $ \\ ngc628 & $1.8 \pm 0.1$ & $19.0\pm 1.0$ & $360\pm 170 $ \\ pgc30591 & $0.7 \pm 0.4$ & $0.0\pm 2.0$ & $360\pm 160 $ \\ ngc5643 & $1.1 \pm 0.3$ & $3.0\pm 2.0$ & $410\pm 160 $ \\ ngc1309 & $1.3 \pm 0.2$ & $17.0\pm 2.0$ & $420\pm 220 $ \\ \end{tabular} \\ \hrulefill \\ {\raggedright $^a$ Mass-loading factor derived in this work.\ $^b$ Maximum flow gas surface density term derived in this work.\ $^c$ Stellar mass surface density for the regions on the envelope obtained in this work. } \\ \hrulefill \end{table} \begin{table} \centering \contcaption{} \begin{tabular}{rrrr} \hline Galaxy identifier & $\eta$ $^a$ & $\dot{\Sigma}_{\rm{flow\thinspace max}}$ $^b$ & $\Sigma_{*}$ $^c$ \\ & & $10^{-8}\rm{M_{\odot}\thinspace yr^{-1}\thinspace kpc^{-2}}$ & $10^{6}\rm{M_{\odot}\thinspace kpc^{-2}}$ \\ \hline ngc1084 & $0.6 \pm 0.2$ & $13.0\pm 6.0$ & $420\pm 170 $ \\ ngc7580 & $1.6 \pm 0.2$ & $18.0\pm 1.0$ & $420\pm 110 $ \\ ngc692 & $2.7 \pm 0.2$ & $14.0\pm 1.0$ & $420\pm 220 $ \\ eso462-g009 & $4.0 \pm 1.0$ & $8.0\pm 2.0$ & $440\pm 310 $ \\ ic5273 & $1.5 \pm 0.7$ & $9.0\pm 4.0$ & $450\pm 220 $ \\ pgc3140 & $1.0 \pm 0.1$ & $4.0\pm 1.0$ & $460\pm 210 $ \\ ic1553 & $0.6 \pm 0.3$ & $5.0\pm 2.0$ & $460\pm 290 $ \\ ugc11289 & $1.8 \pm 0.3$ & $8.0\pm 3.0$ & $472\pm 46 $ \\
to expand M without shrinking B, then it has to buy stuff with the M--and ideally it has to buy stuff with the M that is as far from being a substitute for B as possible. Bridges seem good. So does cholesterol-reducing medicine. And smaller school class sizes. Voila! The case for fiscal stimulus. That is the argument for fiscal policy from the LM-side of the IS-LM scissors--and it is an argument that I think captures the spirit of Keynes's Treatise on Money. There is another argument for fiscal policy that starts from the IS-side of the IS-LM scissors--and that is the argument of the General Theory. But, as Hicks (1937) pointed out, it's a scissors: to ask which blade does the cutting--PY = MV(i) or I(i) = S--is to ask the wrong question.

## Hoisted from Comments: Robert Waldmann on the Durability of Capital

Robert Waldmann: A Free Lunch for America: You are assuming that infrastructure depreciates away in 30 years. This isn't true. Also, I think [infrastructure] is a one-horse shay -- no depreciation for 30 years, then poof. I'd say that, with normal maintenance whose cost is included in the correct calculation of returns, infrastructure lasts a long long time. The Brooklyn bridge is still there (oh and I regularly drive on the Appian way). Also, as always, another free lunch would be to sell bonds and invest the proceeds in the S&P 500. Or is that a free supper?

"Regularly drive on the Appian Way" wins the infrastructure debate. First, at a meeting I was at a couple of weeks ago, Robert Solow lamented that he knew his age when he said "one-horse-shay depreciation" and was rewarded with blank looks from an entire room of economists. He would be happy to hear you say it. Second, selling government bonds and investing the proceeds in equity index funds is not a free lunch, exactly. It is the largest hedge-fund operation ever contemplated--and in my view a good thing to do…

## Scholastic Theology of Inflation Watch

Scholastic econotheologians say inflation can arise from no previous direct cause. But can inflation arise ex nihilo? Modeled Behavior says no. Immaculate Inflation: "We can take a step back and interpret these events as saying liquidity demand is being satiated. Or, we can take a micro perspective and say that the demand for goods and services in these markets is increasing. Either way we look at it, however, demand-driven inflation should drive a rise in production. In an economy with little unemployment we would expect this to bid up wages as employers competed for scarce labor. The result would simply be higher prices and wages and a distortion of long term contracts like mortgages. However, in an economy with high unemployment we should expect some of this to result in an increase in hiring. Thus when I see rents rising, I think that means that construction employment will rise. When I see new car prices rising I think that means manufacturing employment will rise.... [B]y looking at the economy on a market by market basis we should be able to tell which is which. This is one reason why inflation driven by gasoline prices is "bad." It almost certainly represents an increase in the price of a commodity – oil – and a reduction in the supply of gasoline. This means that we expect a contraction in the gasoline market. In addition, through income effects we should expect a contraction in demand in other individual markets.
When inflation is coming through the commodity markets it means that either the commodity is in short supply generally or that it is being pulled away from the US market by demand elsewhere. In either case the result is less real resources available for US households and firms. However, when inflation is coming through the final goods market it means that real resources are being pulled towards US households and firms. That implies both that US households and firms are trading out of cash and into real goods and that the net effect in each individual market will be an increase in output. ## David Wessel Says: It Is 1931 Again David Wessel: The Perils of Ignoring History: There is an optimistic scenario for the U.S. economy: Europe gets its act together. The pace of world growth quickens, igniting demand for U.S. exports. American politicians agree to a credible compromise that gives the economy a fiscal boost now and restrains deficits later. The housing market turns up. Relieved businesses hire. Relieved consumers spend. But there are at least two unpleasant scenarios: One is that Europe becomes the epicenter of a financial earthquake on the scale of the crash of 1929 or Lehman Brothers 2008. The other is that Europe muddles through, but the U.S. stagnates for another five years, mired in slow growth, high unemployment and ugly politics…. No one would intentionally choose the second or third, yet policy makers look more likely to stumble into one of those holes than find a path to the happier ending. Why? Liaquat Ahamed has been pondering that question…. "Is it because people don't know what to do (or there's disagreement about what to do)," he wonders, "or is it the politics, particularly the reluctance to ask some people to pay for the mistakes of others?" "In the '20s," he says, "there was much more ignorance"—the disastrous fealty to the gold standard, the Federal Reserve's failure to understand its role as lender of last resort. Today? Mr. Ahamed can't decide if it's ignorance or insurmountable political barriers that keep governments from doing what needs to be done. In the 1920s, two crises fed on each other: a banking crisis in the U.S. and a sovereign-debt crisis in Europe. (Sound familiar?) In our time, the U.S. handled its banking crisis better than it did back then. (Yes, much better, despite missteps and criticism.) But Europe? The problems go well beyond the inevitable Greek default on its debts. "We are discussing a broken ankle in the presence of organ failure," Lawrence Summers, the former U.S. Treasury secretary, quipped last week about the fixation on Greece…. Mr. Ahamed sees another, largely unappreciated lesson from the '20s. The few moves in the right direction then were too small for the scale of the economic disaster. After 1929, the Fed did open the credit spigot—a bit. And Herbert Hoover did push through an increase in public-works spending and an income-tax cut, but they were small. In our time, says Mr. Ahamed, "I don't think Keynesians or even monetarists ever realized that the numbers to make their policies work are so gigantic. Everyone had sticker shock." The Obama stimulus seemed huge and the Fed's quantitative easing—printing money to buy bonds—looked massive, but in retrospect perhaps they weren't sufficiently large. To be sure, some advocates of the earlier fiscal and monetary stimulus, such as Harvard University's Robert Barro, doubt that another big dose now would do much good. Today, political stalemates in Europe and the U.S. 
block both the short-term policies these economies need to avoid a return to recession and—importantly—also block the long-term course corrections required to get the economies growing faster in the future. Europe needs to avoid financial calamity now and to decide whether and how it will move toward economic and fiscal integration or less integration. It cannot stay where it is. In the U.S., it's hard to see what will power the economy over the next couple of years. It won't be consumers, still laden with debt. It won't be housing. Exports are up, but overseas economies are slowing. Local, state and federal governments are retrenching. Small businesses can't get credit, and big businesses look at all of the above and won't hire…. A senior U.S.
the fibers $X_b$ are Riemann surfaces, every $(0,1)$-form is primitive so every horizontal lift $\xi_\tau$ is a primitive horizontal lift of $\tau$. Thus, Theorem \ref{Bgeneral} follows for one-dimensional fibers even though $\mathcal{N}_b^{0,2}$ does not exist. In the case of the trivial fibration $\pi: X\to B$, i.e., when $X= \Omega \times B$, the tangent bundle of $X$ splits canonically as $T_X^{1,0} = p_1^*T^{1,0}_{\Omega}\oplus \pi^*T^{1,0}_B$ where $p_1:X\to \Omega$ is the projection onto the first factor. A vector field $\tau \in \Gamma(B, T^{1,0}_B)$ can be lifted as $\xi_\tau = (0,\tau)$ with respect to this splitting. These lifts satisfy $\bar \partial \xi_{\tau} \equiv 0$, so they are primitive horizontal lifts. Thus, one does not need the global regularity of $\mathcal{N}^{0,2}$ for $\Omega$ to prove Theorem \ref{Bgeneral}. Another feature of the trivial fibration is that these primitive horizontal lifts lie in the kernel of the Levi form of $\partial X$, i.e., $\partial \bar \partial\rho(\xi_\tau, \overline{\xi}_\tau)=0$ for lifts $\xi_\tau = (0,\tau)$ as above. Since the curvature formula \eqref{hcurv} is independent of the choice of horizontal lifts, it follows that the formula \eqref{hcurv} for a trivial fibration does not contain a term involving integral over $\partial \Omega$. The quantity $\kappa$ (as in \eqref{kappa}) appearing in the formula for curvature of ${\mathcal H}$ is related to the deformation of complex structure on $X_b$. Observe that in the case of trivial fibration (when all the fibers are biholomorphic), for any vector field $\tau$ on $B$ we were able to produce lifts $\xi_\tau$ such that $\bar \partial \xi_\tau \equiv 0$, so the term containing $\kappa$ is also absent from the curvature formula of a trivial fibration. An interpretation of \eqref{hcurv} is that for families of domains, the curvature of $(E,h)\to X$, the Levi form of $\partial X$ in directions transverse to the fibers of $X\to B$ and the deformation of complex structure on the fibers contribute to the curvature of ${\mathcal H}$. The relation between the deformation of complex structure on $X_b$ and the curvature formula with regard to the contribution of $\kappa$ is better understood when $\pi: X\to B$ is a proper map (see \cite[Theorem 4]{V2021}, \cite[Theorem 1.1]{BerndtssonStrict} for instance). In order to get a better understanding of how the positivity of $\Theta^{{\mathcal H}}$ relates to the geometry of the family $(E,h)\to X\to B$ it would be desirable to get an exact formula for the square norm of the \emph{second fundamental map}, which is the quantity appearing on the left hand side of \eqref{2ndfunestimateintro} (see Section \ref{iBLS} for precise definition). When $\widetilde{X}$ is a Stein manifold we obtain a couple of exact formulae for the square norm of the second fundamental map, which we record below. However, it seems difficult to determine the positivity of $\Theta^{{\mathcal H}}$ from either of the formulae. \begin{theorem}[= Theorem \ref{2ndfunexact}]\label{2ndfunexactintro} Let $(E,h)\to X\to B$ be as in Theorem \ref{smoothcurvintro}. Also suppose that the ambient manifold $\widetilde{X}$ is Stein and that the Neumann operator $N_b^{n,q}$ acting on $E|_{X_b}$-valued $(n,q)$-forms is globally regular for $1\leq q \leq n$. 
Then \begin{align} \ipr{P_b^{\perp} L^{1,0}_{\xi_{\sigma_1}}u_1,P_b^{\perp}L^{1,0}_{\xi_{\sigma_2}}u_2}_{{\mathcal H}_b} &= -\frac{1}{2} \ipr{N^{(n,1)}_b\iota_{X_b}^*\bar \partial\nabla^{1,0}(\xi_{\sigma_1} \lrcorner u_1), \iota_{X_b}^*(\xi_{\sigma_2}\lrcorner \Theta^{E})u_2+\iota_{X_b}^*\nabla^{1,0}(\bar \partial\xi_{\sigma_2}\lrcorner u_2)}_{{\mathcal H}_b} \nonumber \\ &\quad- \frac{1}{2} \ipr{ \iota_{X_b}^*(\xi_{\sigma_1}\lrcorner \Theta^{E})u_1+\iota_{X_b}^*\nabla^{1,0}(\bar \partial\xi_{\sigma_1}\lrcorner u_2), N^{(n,1)}_b\iota_{X_b}^*\bar \partial\nabla^{1,0}(\xi_{\sigma_2} \lrcorner u_2)}_{{\mathcal H}_b}, \label{2ndfunex} \end{align} where $u_i$ are $E$-valued $(n,0)$-forms satisfying \eqref{ufcond} such that $\iota_{X_b}^* u\in {\mathcal H}_b$ and $\xi_{\sigma_i}$ are horizontal lifts of holomorphic $(1,0)$-vector fields $\sigma_i$ on an open set containing $b$. \end{theorem} \begin{theorem}[= Theorem \ref{2ndfunexact2}] Let $(E,h)\to X \to B$ be as in Theorem \ref{smoothcurvintro}. Also suppose that the ambient manifold $\widetilde{X}$ is Stein and that the Neumann operator $N_b^{n,q}$ acting on $E|_{X_b}$-valued $(n,q)$-forms is globally regular for $1\leq q \leq n$. Further assume that for all $b\in B$ the Neumann operator $\mathcal{N}^{(0,2)}_b$ acting on $(0,2)$-forms on $X_b$ is globally regular. Let $u$ be an $E$-valued $(n,0)$-form satisfying \ref{ufcond} such that $\iota_{X_b}^*u\in {\mathcal H}_b$. Then \begin{equation*} \norm{P_b^{\perp}L^{1,0}_{\xi_\sigma}u}^2_{{\mathcal H}_b} = \ipr{\alpha, \iota_{X_b}^*(\xi_\sigma\lrcorner \Theta^E)u}_{{\mathcal H}_b} + \ipr{\bar \partial \Lambda_{\omega_b}\alpha, \iota_{X_b}^*(\bar \partial\xi_\sigma \lrcorner u)}_{{\mathcal H}_b} + \int_{\partial X_b}\left\{\iota_{X_b}^*(\overline{\xi}_\sigma \lrcorner \partial \bar \partial \rho)\wedge\Lambda_{\omega_b}\alpha ,u \right\}dS_b, \end{equation*} where $\alpha := \sqrt{\scalebox{1.5}[0.9]{-}1} N^{(n,1)}_b \iota_{X_b}^*\bar \partial \nabla^{1,0} (\xi_\sigma \lrcorner u)$. Here $\sigma$ is a holomorphic $(1,0)$-vector field on a neighborhood of $b$ and $\xi_{\sigma}$ is its primitive horizontal lift. \end{theorem} \subsection{Scope of results} The main theorems in this paper are Theorem \ref{smoothcurvintro} and Theorem \ref{2ndfunpropintro}. These theorems rely on the technical assumption that ${\mathcal H} \to B$ is an iBLS field. The following are examples of fibrations $X\to B$ for which this assumption holds. (See Section \ref{finalremarks} for details.) \begin{enumerate} \item Let $B\subset {\mathbb C}^m$ be the unit ball, let $\widetilde{X} = B\times {\mathbb C}^n$ and let $E\to \widetilde{X}$ be the trivial line bundle with a nontrivial metric. Let $\Omega \subset {\mathbb C}^n$ be a smoothly bounded pseudoconvex domain. Take $X = B\times \Omega \subset \widetilde{X}$ and $\pi :X\to B$ to be the projection onto the first factor. This scenario was studied in \cite{B2009}. \item Let $E\to \widetilde{X}$ be a hermitian holormophic line bundle over an $n+m$-dimensional complex manifold $\widetilde{X}$ and let $\widetilde{\pi}:\widetilde{X}\to B$ be a holomorphic submersion, where $B\subset {\mathbb C}^m$ is the unit ball. Take $X\subset \widetilde{X}$ to be the domain $X = \{ \rho <0\}$, where $\rho$ is a smooth real valued function on $\widetilde{X}$ such that for all $b\in B$ (i) $\rho|_{\widetilde{\pi}^{-1}(b)}$ is a strictly plurisubharmonic function in a neighborhood of the closure of $X_b := X\cap \widetilde{\pi}^{-1}(b)$ and (ii) $d\rho(x)\neq 0$ for all $x \in \partial X_b$. 
Take $\pi:X\to B$ to be the map $\widetilde{\pi}|_X$. This scenario was studied in \cite{W2017}. \item Let $\widetilde{X} = B\times {\mathbb C}^n$ where $B\subset {\mathbb C}^m$ is a domain, let $E\to \widetilde{X}$ be the trivial line bundle with nontrivial metric and let $\widetilde{\pi}:\widetilde{X}\to B$ be the projection onto the first factor. Take $X\subset \widetilde{X}$ to be the domain $X = \{ \rho <0\}$, where $\rho$ is a smooth real valued function on $\widetilde{X}$ such that for all $b\in B$ (i) $X_b$ is a bounded domain in ${\mathbb C}^n$ (ii) $\rho|_{\widetilde{\pi}^{-1}(b)}$ is a plurisubharmonic function and (iii) $d\rho(x)\neq 0$ for all $x \in \partial X_b$. Take $\pi:X\to B$ be the map $\widetilde{\pi}|_X$. \item Let $\widetilde{X} = B\times {\mathbb C}^n$ where $B\subset {\mathbb C}^m$ is a domain, let $E\to \widetilde{X}$ be the trivial line bundle with nontrivial metric and let $\widetilde{\pi}:\widetilde{X}\to B$ be the projection onto the first factor. Take $X\subset \widetilde{X}$ to a smoothly bounded domain such that the domains $X_b := X\cap \widetilde{\pi}^{-1}(b) \subset {\mathbb C}^n$ admit good Stein neighborhood bases in the sense of \cite{straube2001}. Take $\pi:X\to B$ be the map $\widetilde{\pi}|_X$. \end{enumerate} \subsubsection{An example with compact fibers}\label{compactexample} Here we reproduce an example from \cite{V2021} of a fibration $\pi: X\to B$ whose fibers are compact manifolds (without boundary). It can be shown that for any line bundle $(E,h)\to X\to B$ the associated family of Hilbert spaces is an iBLS field when the fibers are manifolds without boundary (see \cite[Proposition 4.15]{V2021}). Let $\mathbb{H} \subset {\mathbb C}$ denote the upper half plane and let $X = {\mathbb C}\times \mathbb{H}/\sim$ where $(z,b) \sim (z',b')$ if $b=b'$ and there exist $m, n \in {\mathbb Z}$ such that $z-z' = m+ nb$. Then $\pi: X\to \mathbb{H}$ is the map induced by the projection ${\mathbb C}\times \mathbb{H} \to \mathbb{H}$. Thus, the fiber $X_b$ is the torus corresponding to the lattice spanned by $\{1,b\}$. Let $D_a$ be the smooth hypersurface on $X$ obtained from $\{a\}\times \mathbb{H} \subset {\mathbb C}\times \mathbb{H}$ after passing to the quotient. Let $E\to X$ be the line bundle associated to the divisor $D:= D_0- D_{\sqrt{\scalebox{1.5}[0.9]{-}1}}$. The divisor $\Delta_b := D\cap X_b$ is trivial if there are $m,n \in {\mathbb Z}$ such that $\sqrt{\scalebox{1.5}[0.9]{-}1} = m+nb$ and otherwise is $[0]-[\sqrt{\scalebox{1.5}[0.9]{-}1}] \in X_b$. Let $B\subset \mathbb{H}$ denote the discrete set of $b$ for which there are $m,n \in {\mathbb Z}$ such that $\sqrt{\scalebox{1.5}[0.9]{-}1} = m+nb$. Then it can be shown that \[ {\mathcal H}_b = \begin{cases} {\mathbb C} \quad \text{if} \quad b\in B \\ \{0\} \quad \text{if} \quad
#### Subject 1. Capital Budgeting: Introduction

"Capital" refers to long-term assets. The "budget" is a plan which details projected cash inflows and outflows during a future period.

The typical steps in the capital budgeting process:

- Generating good investment ideas to consider.
- Analyzing individual proposals (forecasting cash flows, evaluating profitability, etc.).
- Planning the capital budget. How does the project fit within the company's overall strategies? What's the timeline and priority?
- Monitoring and post-auditing. The post-audit is a follow-up of capital budgeting decisions and a key element of capital budgeting. By comparing actual results with predicted results and then determining why differences occurred, decision-makers can:
  - Improve forecasts (based on which good capital budgeting decisions can be made). Otherwise, you will have the GIGO (garbage in, garbage out) problem.
  - Improve operations, thus making capital decisions well-implemented.

Project classifications:

- Replacement projects. There are two types of replacement decisions:
  - Replacement decisions to maintain a business. The issue is twofold: should the existing operations be continued? If yes, should the same processes continue to be used? Maintenance decisions are usually made without detailed analysis.
  - Replacement decisions to reduce costs. Cost reduction projects determine whether to replace serviceable but obsolete equipment. These decisions are discretionary and a detailed analysis is usually required.

  The cash flows from the old asset must be considered in replacement decisions. Specifically, in a replacement project, the cash flows from selling old assets should be used to offset the initial investment outlay. Analysts also need to compare revenue/cost/depreciation before and after the replacement to identify changes in these elements.
- Expansion projects. Projects concerning expansion into new products, services, or markets involve strategic decisions and explicit forecasts of future demand, and thus require detailed analysis. These projects are more complex than replacement projects.
- Regulatory, safety and environmental projects. These projects are mandatory investments, and are often non-revenue-producing.
- Others. Some projects need special considerations beyond traditional capital budgeting analysis (for example, a very risky research project in which cash flows cannot be reliably forecast).

LOS a. describe the capital budgeting process and distinguish among the various categories of capital projects;
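Because the post-audit step turns on comparing predicted with actual results, here is a small illustrative sketch of that comparison in Python; the cash flows and the 10% discount rate are invented for the example and are not from the reading.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: forecast at approval time vs. actual results found in the post-audit.
forecast = [-1000, 300, 400, 500, 300]
actual   = [-1050, 250, 380, 470, 280]

rate = 0.10
forecast_npv = npv(rate, forecast)
actual_npv = npv(rate, actual)

# The post-audit asks why the differences occurred (forecasting bias? cost overruns?)
# so that future forecasts and operations can be improved.
print(f"forecast NPV: {forecast_npv:,.2f}")
print(f"actual NPV:   {actual_npv:,.2f}")
print(f"difference:   {actual_npv - forecast_npv:,.2f}")
```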
# zbMATH — the first resource for mathematics ## Li, Ker-Chau Compute Distance To: Author ID: li.ker-chau Published as: Li, K. C.; Li, Ker-Chau Documents Indexed: 41 Publications since 1982 all top 5 #### Co-Authors 13 single-authored 5 Cheng, Chingshui 4 Chen, Chun-Houh 4 Duan, Naihua 3 Shedden, Kerby A. 2 Ando, Tomohiro 2 Carroll, Raymond James 1 Agnan, C. Thomas 1 Aragon, Yve 1 Bina, Minou 1 Elashoff, Robert M. 1 Filliben, James J. 1 Hall, Peter Gavin 1 Hwang, Jiunn Tzon 1 Lee, Young Jack 1 Lue, Heng-Hui 1 Wang, Jane-Ling 1 Xie, Jun 1 Yan, Ming 1 Ylvisaker, Nils Donald 1 Yuan, Shinsheng all top 5 #### Serials 17 The Annals of Statistics 11 Journal of the American Statistical Association 5 Statistica Sinica 1 Biometrika 1 Journal of Econometrics 1 Journal of Multivariate Analysis 1 Technometrics 1 Statistics & Probability Letters 1 Linear Algebra and its Applications all top 5 #### Fields 39 Statistics (62-XX) 2 Numerical analysis (65-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Probability theory and stochastic processes (60-XX) 1 Biology and other natural sciences (92-XX) 1 Information and communication theory, circuits (94-XX) #### Citations contained in zbMATH 34 Publications have been cited 1,446 times in 884 Documents Cited by Year Sliced inverse regression for dimension reduction. Zbl 0742.62044 Li, Ker-Chau 1991 On principal Hessian directions for data visualization and dimension reduction: Another application of Stein’s lemma. Zbl 0765.62003 Li, Ker-Chau 1992 Regression analysis under link violation. Zbl 0753.62041 Li, Ker-Chau; Duan, Naihua 1989 Asymptotic optimality for $$C_ p$$, $$C_ L$$, cross-validation and generalized cross-validation: Discrete index set. Zbl 0653.62037 Li, Ker-Chau 1987 On almost linearity of low dimensional projections from high dimensional data. Zbl 0782.62065 Hall, Peter; Li, Ker-Chau 1993 Slicing regression: A link-free regression method. Zbl 0738.62070 Duan, Naihua; Li, Ker-Chau 1991 Asymptotic optimality of $$C_ L$$ and generalized cross-validation in ridge regression with application to spline smoothing. Zbl 0629.62043 Li, Ker-Chau 1986 Can SIR be as popular as multiple linear regression? Zbl 0897.62069 Chen, Chun-Houh; Li, Ker-Chau 1998 Honest confidence regions for nonparametric regression. Zbl 0681.62047 Li, Ker-Chau 1989 From Stein’s unbiased risk estimates to the method of generalized cross- validation. Zbl 0605.62047 Li, Ker-Chau 1985 Dimension reduction for multivariate response data. Zbl 1047.62059 Li, Ker-Chau; Aragon, Yve; Shedden, Kerby; Agnan, C. Thomas 2003 Measurement error regression with unknown link: Dimension reduction and data visualization. Zbl 0765.62002 Carroll, Raymond J.; Li, Ker-Chau 1992 A model-averaging approach for high-dimensional regression. Zbl 1367.62209 Ando, Tomohiro; Li, Ker-Chau 2014 Robust designs for nearly linear regression. Zbl 0481.62056 Li, K. C.; Notz, W. 1982 Dimension reduction for censored regression data. Zbl 0932.62050 Li, Ker-Chau; Wang, Jane-Ling; Chen, Chun-Houh 1999 Minimaxity of the method of regularization on stochastic processes. Zbl 0497.62078 Li, Ker-Chau 1982 A weight-relaxed model averaging approach for high-dimensional generalized linear models. Zbl 1421.62094 Ando, Tomohiro; Li, Ker-Chau 2017 Nonlinear confounding in high-dimensional regression. Zbl 0873.62071 Li, Ker-Chau 1997 Robust regression designs when the design space consists of finitely many points. Zbl 0546.62048 Li, Ker-Chau 1984 Interactive tree-structured regression via principal Hessian directions. 
Zbl 1013.62074 Li, Ker-Chau; Lue, Heng-Hui; Chen, Chun-Houh 2000 Consistency for cross-validated nearest neighbor estimates in nonparametric regression. Zbl 0538.62030 Li, Ker-Chau 1984 Binary regressors in dimension reduction models: A new look at treatment comparisons. Zbl 0828.62033 Carroll, R. J.; Li, Ker-Chau 1995 A study of the method of principal Hessian direction for analysis of data from designed experiments. Zbl 0828.62068 Cheng, Ching-Shui; Li, Ker-Chau 1995 Distribution-free and link-free estimation for the sample selection model. Zbl 0657.62046 Duan, Naihua; Li, Ker-Chau 1987 The strong consistency of M-estimators in linear models. Zbl 0542.62057 Cheng, Ching-Shui; Li, Ker-Chau 1984 A minimax approach to sample surveys. Zbl 0524.62015 Cheng, Ching-Shui; Li, Ker-Chau 1983 A bias bound for least squares linear regression. Zbl 0824.62057 Duan, Naihua; Li, Ker-Chau 1991 Regression models with infinitely many parameters: Consistency of bounded linear functionals. Zbl 0544.62062 Li, Ker-Chau 1984 Optimality criteria in survey sampling. Zbl 0628.62009 Cheng, Ching-Shui; Li, Ker-Chau 1987 The data-smoothing aspect of Stein estimates. Zbl 0557.62007 Li, Ker-Chau; Hwang, Jiunn Tzon 1984 Identification of shared components in large ensembles of time series using dimension reduction. Zbl 1048.62086 Li, Ker-Chau; Shedden, Kerby 2002 A simple statistical model for depicting the cdc15-synchronized yeast cell-cycle regulated gene expression data. Zbl 1005.62093 Li, Ker-Chau; Yan, Ming; Yuan, Shinsheng 2002 A systematic approach to the analysis of complex interaction patterns in two-level factorial designs. Zbl 0892.62058 Filliben, James J.; Li, Ker-Chau 1997 Minimaxity for randomized designs: Some general results. Zbl 0517.62077 Li, Ker-Chau 1983

#### Cited by 1,175 Authors

61 Zhu, Lixing 28 Cook, Ralph Dennis 26 Zhu, Liping 24 Yin, Xiangrong 19 Yu, Zhou 18 Li, Bing 18 Saracco, Jérôme 17 Dong, Yuexiao 15 Yoo, Jae Keun 14 Zhang, Xinyu 13 Li, Lexin 13 Wang, Qihua 11 Zou, Guohua 10 Wen, Xuerong Meggie 10 Zhang, Jun 9 Huang, Zhensheng 9 Lin, Lu 9 Wiens, Douglas P. 9 Zhang, Riquan 9 Zhu, Xuehu 8 Artemiou, Andreas 8 Chiaromonte, Francesca 8 Prendergast, Luke Anthony 8 Wang, Qin 8 Xue, Liugen 7 Guo, Xu 7 Lue, Heng-Hui 7 Tsai, Chihling 7 Yang, Yuhong 7 Zhao, Junlong 6 Davies, Patrick Laurie 6 Feng, Zhenghui 6 Liang, Hua 6 Liquet, Benoit 6 Lukas, Mark A. 6 Shin, Seungjun 6 Sugiyama, Masashi 6 Wahba, Grace 6 Wu, Yichao 6 Xia, Yingcun 5 Brown, Lawrence David 5 Fan, Jianqing 5 Girard, Stéphane 5 Li, Gaorong 5 Li, Runze 5 Lian, Heng 5 Peng, Heng 5 Wang, Guochang 5 Xu, Wangli 5 Zhang, Jingxiao 5 Zhou, Julie 4 Arlot, Sylvain 4 Beran, Rudolf J. 4 Bura, Efstathia 4 Cai, Tony Tony 4 Chen, Xin 4 Comte, Fabienne 4 Constantine, Paul G. 4 Ding, Shanshan 4 Forzani, Liliana 4 Härdle, Wolfgang Karl 4 Hilafu, Haileab 4 Kou, S. C. 4 Lin, Huazhen 4 Lu, Xuewen 4 Ma, Yanyuan 4 McNicholas, Paul D.
4 Mukherjee, Sayan 4 Scrucca, Luca 4 Stenger, Horst 4 Velilla, Santiago 4 Wang, Jane-Ling 4 Wasserman, Larry Alan 4 Weisberg, Sanford 4 Wu, Qiang 4 Xu, Peirong 4 Xu, Xingzhong 4 Xue, Yuan 4 Yao, Weixin 4 Zhao, Shangwei 4 Zhou, Jingke 3 Adragni, Kofi Placid 3 Akritas, Michael G. 3 Bach, Francis R. 3 Baraud, Yannick 3 Belitser, Eduard 3 Cai, Tianxi 3 Chen, Chun-Shu 3 Christou, Eliana 3 Ding, Xiaobo 3 Draper, Norman R. 3 Fang, Kai-Tai 3 Forbes, Florence 3 Gabler, Siegfried 3 Gai, Yujie 3 Gannoun, Ali 3 Gao, Yan 3 Gardes, Laurent 3 Genovese, Christopher R. 3 Gu, Chong ...and 1,075 more Authors all top 5 #### Cited in 112
$$V_o(t) = \int_{-\infty}^{t} V_i(x)\, g(t-x)\, dx \qquad (21)$$ Causality requires that the upper limit be $t$ rather than $\infty$; but analysis (after the fact) usually is concerned with a fully specified time history (so that $t \to \infty$). It is seen from Equation (21) that for every desired value of the time in $V_o(t)$, a separate integral is required, unless $V_i(t)$ is simple enough to yield a complete analytic solution. In practice, numerical processing is usually necessary. The only practical way to generate $V_o(t)$ is by means of the convolution theorem. Consider, for example, the present paper's use of 2048 sampling points for each time record that is to be presented. Since it is unrealistic to integrate Equation (21) 2048 times in a "brute force" manner, the following elegant methodology is used, based on the fast Fourier transform (FFT). (As compared to the discrete Fourier transform, the FFT is faster by a factor of 190 when dealing with 2K = 2048 point record lengths.)

#### 4.1.1  Convolution Theorem

The convolution theorem [involving Equation (21)] says that $$L[V_o] = L[V_i]\, L[g] \qquad (22)$$ where $L[f]$ is the "transform" of $f$. This may be any of several transforms, such as Laplace's. (Note: It is the author's experience that engineers are primarily trained with the Laplace transform, whereas physicists are trained with the Fourier transform, which is a subset of the former and therefore less general. The Laplace transform is more elegant when dealing with analytic solutions, whereas the FFT, developed by Cooley and Tukey [], makes the Fourier transform more useful in numerical work.)

Equation (22) makes for an extremely powerful numerical tool, which is used as follows. From the time record of $V_i(t)$, one obtains its Fourier transform, using the FFT. This transform is then multiplied by the FFT of the Green's function (a product involving 2 x 2048 real and imaginary pairs in the present paper). Finally, the inverse FFT is computed, which gives $V_o(t)$ for the assumed $V_i(t)$. Although the reader may perceive this operation to be cumbersome and time consuming, the entire process takes just a few seconds on a 90 MHz Pentium based PC. It's also a user friendly process because of the extensive algorithm development that has been performed for doing FFT's and their inverse [].

### 4.2  Transient Response - Condenser Microphone

For the examples that follow, the Green's function of Equation (20) was used. The Fourier transform of $g$ could have been readily evaluated analytically, since $$L[g] = G(\omega) = \int_{-\infty}^{\infty} e^{-i\omega t}\left(\delta(t) - \frac{1}{\tau}\, e^{-t/\tau}\right) dt = \frac{i\omega\tau}{1 + i\omega\tau} \qquad (23)$$ the right hand side of which is equivalent to Equation (12), as it must be. Note: Although the Fourier transform range of integration is $\pm\infty$, it should be noted that the integrand is zero for negative times, since $g(t) = 0$ for $t < 0$. It's also worth noting that there are six different ways to specify the transform and its inverse, all of them acceptable. Some users put $1/(2\pi)$ in front of one of the pair of equations, others put it with the opposite equation; still others go "symmetric" by placing $1/\sqrt{2\pi}$ with each equation. Also, the $-i\omega t$ term may be replaced with $i\omega t$. The only necessary condition is that the signs be different in the pair. The present algorithms used the time form of $g(t)$ and computed the transform numerically with the FFT, rather than using the result given in Equation (23). Although more work is thus required of the computer, the software accommodates any Green's function the user should want to use.
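The FFT-based procedure just described (transform the input record, multiply by the transform of the Green's function, then invert) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the author's code; it uses a discrete stand-in for the lead-circuit Green's function $g(t) = \delta(t) - (1/\tau)e^{-t/\tau}$ from Equation (23), the 2048-point record and four-pulse input of width 50 used in the figures, and an assumed pulse spacing and intermediate time constant.

```python
import numpy as np

# Illustrative parameters: 2048-point record, time constant in sample units.
N = 2048
tau = 200.0
t = np.arange(N)

# Discrete stand-in for g(t) = delta(t) - (1/tau) exp(-t/tau).
g = -(1.0 / tau) * np.exp(-t / tau)
g[0] += 1.0                       # the delta function contributes 1 at t = 0

# Example input: a train of four rectangular pulses of width 50 samples.
Vi = np.zeros(N)
for start in range(0, 4 * 400, 400):   # pulse spacing of 400 samples is assumed
    Vi[start:start + 50] = 1.0

# Convolution theorem, Eq. (22): L[Vo] = L[Vi] * L[g], evaluated with the FFT.
# (The FFT product gives circular convolution; with records that decay to zero
# or are zero-padded, this approximates the causal convolution of Eq. (21).)
Vo = np.fft.ifft(np.fft.fft(Vi) * np.fft.fft(g)).real
```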
#### 4.2.1  Example Waveforms

In the examples which follow, several cases are considered, covering a wide range of the "time constant spectrum". Understanding these cases will provide an appreciation for how the low frequency rolloff may or may not be important to a given problem. Because of the properties of the Dirac delta function, it is readily seen from Equation (20) that the output signal will faithfully reproduce the input in the limit as $\tau \to \infty$. Steady state analysis in such a case is not meaningful, since a steady condition could never be realized. Figure 4 is an approximation to this case, and is seen to be consistent with expectations. On the other hand, when $\tau$ is very small, the differentiator characteristic of the lead circuit is evident, as shown in Figure 6. Shape distortion is severe for this case. The intermediate time constant case shown in Figure 5 illustrates not only some distortion, but another property of the lead circuit as well: the long term time average of the output signal must be zero, once steady state has been reached. Those familiar with electronics will not find this result surprising, since capacitors are used to "block" DC, while "passing" AC. Such a blocking capacitor is even used with non-electret condenser microphones, so that the large polarizing voltage is not present on the input to the amplifier. It is large enough to be inconsequential to the present analysis.

Figure 4: Response to a pulse train, long time constant.
Figure 5: Response to a pulse train, intermediate time constant.
Figure 6: Response to a pulse train, extremely short time constant.

Relative to the total record length of 2048, the time constant was set respectively at 2000, 200, and 2 in Figures 4 through 6. Each of the pulses, in the 4-pulse input train, was of width 50 (every figure). Thus the pulse width was respectively $0.025\tau$, $0.25\tau$, and $25\tau$. These numbers relate to the corresponding rolloff frequencies for the three cases, according to

$$f_R = \frac{1}{2\pi\tau} \qquad (24)$$

A pulse of sound input to the microphone is distorted greatly when the pulse width $\Delta t > 2.5\tau$. This is illustrated in Figure 7, for the case of an input pulse of width 50, whose shape is one cycle of an offset cosine. This shape was chosen to approximate a low frequency blast for which there is molecular flow (not a shock of the characteristic "N-wave" type). (Some experiments with an SDC pressure sensor seem to support this case for some conditions, although it may not be representative of battlefield conditions.) The offset cosine was chosen instead of one half a sine wave, because the latter has infinite slope at the beginning and end points.

Figure 7: Response to a pulse for which $\Delta t = 2.5\tau$.

Stated in terms of rolloff frequency, pulse distortion begins to become significant as $f_R \to 1/(2\pi\,\Delta t)$ and is severe for $f_R > (2.5\,\Delta t)^{-1}$. Many bullets create a shock whose width is in the neighborhood of $\Delta t \approx 1$ - $2$ ms. Thus, shape distortion is serious for rolloff frequencies of the order of 200 - 400 Hz, which is where many inexpensive electret microphones experience significant low frequency cutoff. The problem with trying to use these microphones in reconstruction of pressure profiles is thus obvious. Figure 8 illustrates "pulse" distortion with a more likely shock input, in the form of an N-wave. For comparison with Figure 7, the ratio of pulse width to time constant was retained at 2.5; however, the ratio used to generate Figure 9 was 25.
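The numbers quoted above are easy to check. The snippet below (illustrative only; times are in units of the sample spacing) evaluates Equation (24) and the distortion criterion for the three time constants of Figures 4 - 6:

```python
# Illustrative check of Equation (24), f_R = 1/(2*pi*tau), and of the criterion
# "pulse width > 2.5*tau" for the three cases of Figures 4-6 (pulse width 50).
import math

width = 50.0                                   # pulse width, in samples
for tau in (2000.0, 200.0, 2.0):               # time constants of Figures 4, 5, 6
    f_R = 1.0 / (2.0 * math.pi * tau)          # rolloff frequency, cycles/sample
    print(f"tau = {tau:6.0f}   width = {width / tau:6.3g} tau   "
          f"f_R = {f_R:.2e}   severe distortion: {width > 2.5 * tau}")
```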
From this figure it can be seen how the complications due to natural "ringing" (echoes) that result from reflection of a shock from various obstacles will be exacerbated when the low frequency cutoff of the microphone is severely limited. Any process to estimate direction to a shock source, using arrival time data, will be made more difficult because of the distortion.

Figure 8: Response to an N-wave, $\Delta t = 2.5\tau$.
Figure 9: Response to an N-wave pulse, extremely short time constant.

### 4.3  Physical basis for $f_R$

The basis for low frequency cutoff in the condenser microphone is easy to understand. The value of $R$ in $\tau = RC$ is collectively determined by all the components to which the backing plate of the microphone is connected. In times past [before the field effect transistor (FET)], the dominant factor was oftentimes the input resistance of the amplifier. For example, a microphone with 50 pF capacitance
include only the neutral density as a source term in Poisson's equation (\ref{poissoneqa}). Similarly, the gravitational and thermal-pressure forces (per unit volume) on the plasma (ions and electrons) have been neglected in the plasma force equation (\ref{ionforceqa}). One can easily show that, for the physical conditions in typical molecular clouds, they are completely negligible in comparison to the magnetic force exerted on the plasma, except in a direction almost exactly parallel to the magnetic field. Ignoring them parallel to the magnetic field implies that we are neglecting the ion acoustic waves and the (extremely long-wavelength) Jeans instability in the ions. The quantity $\sigin$ in equations (\ref{tnieq}) and (\ref{tineq}) is the elastic collision rate between ions and neutrals. For $\HCO-\HII$ collisions, $\sigin=1.69\times 10^{-9}~\ccb~\rm{s}^{-1}$ (McDaniel \& Mason 1973). The factor 1.4 in equations (\ref{tnieq}) and (\ref{tineq}) accounts for the inertial effect of He on the motion of the neutrals (for a discussion, see \S~2.1 of Mouschovias \& Ciolek 1999). \vspace{-4ex} \subsection{Linear System} To investigate the propagation, dissipation, and growth of HM waves in molecular clouds, we follow the original analysis by Jeans (1928; see also Spitzer 1978, \S~13.3a; and Binney \& Tremaine 1987, \S~5.1), and assume that the zeroth-order state is uniform, static (i.e., $\vvec_\alpha=0$), and in equilibrium. \footnote{It is well known that the assumption that the gravitational potential is uniform in the zeroth-order state is not consistent with Poisson's equation (\ref{poissoneqa}) (e.g., see Spitzer 1978; Binney \& Tremaine 1987). However, it is not well known that, for an infinite uniform system, no such inconsistency exists; i.e., there is no net gravitational force on any fluid element, hence this state is a true, albeit unstable, equilibrium state.} We consider only adiabatic perturbations; therefore, the net heating rate ($\Gamma_{\rm n}-\Lambda_{\rm n}$) on the right-hand side of equation (\ref{heateqa}) vanishes. We write any scalar quantity or component of a vector $q_{\rm{tot}}(\rvec,t)$ in the form $q_{\rm{tot}}(\rvec,t)=q_{0}+ q(\rvec,t)$, where $q_0$ refers to the zeroth-order state, and the first-order quantity $q$ satisfies the condition $|q| \ll |q_0|$. We thus obtain from equations (\ref{contineqa}) - (\ref{gausseqa}) the linearised system \leteq \begin{eqnarray} \label{contineqb} \frac{\partial \rhon}{\partial t} &=& - \rhono \left(\del \cdot \vnvec\right), \\ \label{ioncontineqb} \frac{\partial \rhoi}{\partial t} &=& - \rhoio \left(\del \cdot \vivec\right) + \frac{\rhoio}{\mn} \xxio \alphdr \rhon - 2 \frac{\rhoio}{\mi} \alphdr \rhoi, \\ \label{neutforceqb} \rhono \frac{\partial \vnvec}{\partial t} & = & - \del P_{\rm n} - \rhono \del \psi - \frac{\rhono}{\tnio}\left(\vnvec-\vivec\right), \\ \label{ionforceqb} \rhoio \frac{\partial \vivec}{\partial t} &=& \frac{\left(\del \cross \Bvec \right) \cross \Bveco}{4 \pi} -\frac{\rhoio}{\tino}\left(\vivec - \vnvec \right), \\ \label{heateqb} \frac{1}{T_0}\frac{\partial T}{\partial t} &=& \left(\gamma - 1\right) \frac{1}{\rhono}\frac{\partial \rhon}{\partial t} , \\ \label{inducteqb} \frac{\partial \Bvec}{\partial t} &=& \del \cross \left(\vivec \cross \Bveco\right), \\ \label{gaseqb} \frac{P_{\rm n}}{P_{\rm{n,0}}} &=& \frac{\rhon}{\rhono} + \frac{T}{T_0}, \\ \label{poissoneqb} \del^2 \psi &=& 4 \pi G \rhon, \\ \label{gausseqb} \del \cdot \Bvec &=& 0 . 
\end{eqnarray} \beq Equation (\ref{ioncontineqb}) has been simplified by using the relation \begin{eqnarray} \label{zeroorderchargeeq} \zcr \frac{\rhono}{\mn} = \alphdr \left(\frac{\rhoio}{\mi}\right)^{2} , \end{eqnarray} which expresses equilibrium of the ion density in the zeroth-order state, as a result of balance between the rate of creation of ions from ionisation of neutral matter by high-energy ($E \gtrsim 100~\rm{MeV}$) cosmic rays and the rate of destruction of ions by electron$-$molecular-ion dissociative recombinations. This relation allowed us to replace $\zcr$ by $\alphdr(\rhoio/\mi)^2(\mn/\rhono)$ in equation (\ref{ioncontineqb}). The quantity $\xxio \equiv \nio/\nno$ in equation (\ref{ioncontineqb}) is the degree of ionisation (where $\nio$ and $\nno$ are the number densities of ions and neutrals in the unperturbed state). For an ideal gas (with only translational degrees of freedom), $\gamma=5/3$ in equation (\ref{heateqb}). We seek plane-wave solutions of the form $q(\rvec, t)=\overline{q}\exp (i\kvec\cdot \rvec - i \omega t)$, where $\kvec$ is the propagation vector, $\omega$ the frequency, and $\overline{q}$ the amplitude (in general, complex) of the perturbation. Equations (\ref{contineqb}) - (\ref{gausseqb}) reduce to \leteq \begin{eqnarray} \label{contineqc} \omega \rhon &=& \rhono \kvec \cdot \vnvec, \\ \label{ioncontineqc} \omega \rhoi &=& \rhoio \kvec \cdot \vivec + i \frac{\rhoio}{\mn} \xxio \alphdr \rhon - i2 \frac{\rhoio}{\mi} \alphdr \rhoi, \\ \label{neutforceqc} \omega \vnvec &=& \left(\czero^2 k - \frac{1}{\tffo^2 k}\right)\frac{\rhon}{\rhono} \frac{\kvec}{k} - \frac{i}{\tnio} \vnvec + \frac{i}{\tnio} \vivec , \\ \label{ionforceqc} \omega \vivec &=& -\frac{\left(\kvec \cross \Bvec\right)\cross \Bveco}{4 \pi \rhoio} + \frac{i}{\tino}\vnvec - \frac{i}{\tino}\vivec, \\ \label{inducteqc} \omega \Bvec &=& - \kvec \cross \left(\vivec \cross \Bveco\right), \\ \label{gausseqc} \kvec \cdot \Bvec &=&0, \end{eqnarray} \beq where \begin{eqnarray} \label{czerodefeq} \czero \equiv \left(\frac{\gamma P_{\rm{n,0}}}{\rhono}\right)^{1/2} =\left(\frac{\gamma \kB T_{0}}{\mn}\right)^{1/2} \end{eqnarray} is the adiabatic speed of sound in the neutrals, and \begin{eqnarray} \label{tffdefeq} \tffo \equiv \left(4 \pi G \rhono\right)^{-1/2} \end{eqnarray} is the (one-dimensional) neutral free-fall timescale. The quantities $T$, $P_{\rm n}$, and $\psi$ have been eliminated by using equations (\ref{heateqb}), (\ref{gaseqb}), and (\ref{poissoneqb}), respectively. \subsection{The Dimensionless Problem} We put equations (\ref{contineqc}) - (\ref{gausseqc}) in dimensionless form by adopting $\rhono$, $\Bo$, $\tffo$, and $\czero$ as units of density, magnetic field strength, time, and speed, respectively. The implied unit of length is $\czero \tffo$, which is proportional to the one-dimensional thermal Jeans lengthscale $\lambda_{\rm{J,th}}$ (see \S~3.1.1). For convenience, we adopt a cartesian coordinate system such that the propagation vector $\kvec$ is in the $x$-direction and the zeroth-order magnetic field $\Bveco$ is in the $(x,z)$-plane at an angle $\theta$ with respect to $\kvec$ (see Fig. 1). Then the three unit vectors are \leteq \begin{eqnarray} \label{xdefeq} \ehat_{x} &\equiv& \frac{\kvec}{k}, \\ \label{ydefeq} \ehat_{y} &\equiv& \frac{\Bveco \cross \kvec}{|\Bveco \cross \kvec|}, \\ \label{zdefeq} \ehat_{z} &\equiv& \ehat_{x} \cross \ehat_{y} . 
\end{eqnarray} \beq \begin{center} \begin{figure} \includegraphics[width=70mm]{MCMf1.eps} \caption{{\it Coordinate system used in analysing the linearised hydromagnetic equations.} The $x$-axis is aligned with the propagation vector $\kvec$, and the magnetic field $\Bveco$ is in the $(x,z)$-plane at an angle $\theta$ with respect to $\kvec$ (see eqs. [\ref{xdefeq}] - [\ref{zdefeq}]).} \end{figure} \end{center} One may write $\Bveco = \ehat_{x} \Bo \cos \theta + \ehat_{z} \Bo \sin \theta$, and the dimensionless form of equations (\ref{contineqc}) - (\ref{gausseqc}) can be written in component form as \leteq \begin{eqnarray} \label{contineqd} \omegad \rhond &=& \kd \vnxd, \\ \label{ioncontineqd} \omegad \rhoid &=& \rhoiod \kd \vixd + i \rhoiod^2 \alphdrd \rhond - i2 \rhoiod \alphdrd \rhoid , \\ \label{xneutforceq} \omegad \vnxd &=& \left(\kd - \frac{1}{\kd}\right) \rhond - \frac{i}{\tniod}\vnxd + \frac{i}{\tniod}\vixd, \\ \label{xionforceq} \omegad \vixd &=& \frac{i}{\tinod}\vnxd - \frac{i}{\tinod}\vixd + \vaiod^2 \kd \Bzd \sin \theta, \\ \label{xinducteq} \omegad \Bxd &=& 0, \\ \nonumber \\ \label{yneutforceq} \omegad \vnyd &=& - \frac{i}{\tniod} \vnyd + \frac{i}{\tniod} \viyd, \\ \label{yionforceq} \omegad \viyd &=& \frac{i}{\tinod} \vnyd - \frac{i}{\tinod} \viyd - \vaiod^2 \kd \Byd \cos \theta , \\ \label{yinducteq} \omegad \Byd &=& - \kd \viyd \cos \theta , \\ \nonumber \\ \label{zneutforceq} \omegad \vnzd &=& - \frac{i}{\tniod}\vnzd + \frac{i}{\tniod}\vizd, \\ \label{zionforceq} \omegad \vizd &=& \frac{i}{\tinod} \vnzd - \frac{i}{\tinod} \vizd - \vaiod^2 \kd \Bzd \cos \theta , \\ \label{zinducteq} \omegad \Bzd &=& \kd \vixd \sin \theta - \kd \vizd \cos \theta , \\ \label{gausseqd} \kd \Bxd &=& 0 . \end{eqnarray} \beq Note that equation (\ref{gausseqd}) is redundant in that it gives the same information as equation (\ref{xinducteq}), namely, that there cannot be a nonvanishing component of the perturbed magnetic field in the direction of propagation. The dimensionless free parameters appearing in equations (\ref{contineqd}) - (\ref{gausseqd}) are given by \leteq \begin{eqnarray} \label{tnioddefeq} \tniod &\equiv& \frac{\tnio}{\tffo} = 0.506 \left(\frac{10^{-7}}{\xxio}\right) \left(\frac{10^3~{\cc}}{\nno}\right)^{1/2}, \\ \label{tinoddefeq} \tinod &\equiv& \frac{\tino}{\tffo} = 6.29 \times 10^{-7} \left(\frac{10^3{~\cc}}{\nno}\right)^{1/2}, \\ \label{vaioddefeq} \vaiod &\equiv& \frac{\vaio}{\czero} = 5.00 \times 10^3 \left(\frac{\Bo}{30~\mu \rm{G}}\right) \left(\frac{10^{-7}}{\xxio}\right)^{1/2} \left(\frac{10~{\rm K}}{T}\right)^{1/2} \left(\frac{10^3~{\cc}}{\nno}\right)^{1/2}, \\ \label{alphdrddefeq} \alphdrd &\equiv& \frac{\alphdr}{\mi} \rhono \tffo = 1.41 \times 10^9 \left(\frac{\alphdr}{10^{-6}~\rm{cm}^3~\rm{s}^{-1}}\right)\left(\frac{\nno} {10^3~\cc}\right)^{1/2} \left(\frac{29~{\rm{amu}}}{\mi}\right); \end{eqnarray} they represent, respectively, the neutral-ion collision time, the ion-neutral collision time, the ion {\Alf} speed $\vaio=\Bo/(4 \pi\rhoio)^{1/2}$, and the electron$-$molecular-ion dissociative recombination rate per unit ion mass. In evaluating the numerical constants in equations (\ref{tnioddefeq}) - (\ref{alphdrddefeq}) we have used $\mu=2.33$ and $\gamma=5/3$; we have also normalized the ion mass to that of $\HCO$ (= 29 amu). 
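As a quick numerical illustration (a sketch added here, not part of the original analysis), the scaling relations (\ref{tnioddefeq}) - (\ref{alphdrddefeq}) can be evaluated directly; the fiducial values $\xxio=10^{-7}$, $\nno=10^{3}~\cc$, $\Bo=30~\mu\rm{G}$, $T=10$ K, and $\alphdr=10^{-6}~\rm{cm}^3~\rm{s}^{-1}$ used below are assumptions of the sketch, chosen so that each ratio is unity and the quoted prefactors are recovered.
\begin{verbatim}
# Sketch only: evaluate the dimensionless free parameters using the scaling
# relations quoted in the text (mu = 2.33, gamma = 5/3, m_i = 29 amu assumed).
def dimensionless_parameters(x_i0=1e-7, n_n0=1e3, B0_uG=30.0, T=10.0,
                             alpha_dr=1e-6, m_i_amu=29.0):
    tau_ni = 0.506 * (1e-7 / x_i0) * (1e3 / n_n0) ** 0.5
    tau_in = 6.29e-7 * (1e3 / n_n0) ** 0.5
    v_Ai   = 5.00e3 * (B0_uG / 30.0) * (1e-7 / x_i0) ** 0.5 \
             * (10.0 / T) ** 0.5 * (1e3 / n_n0) ** 0.5
    a_dr   = 1.41e9 * (alpha_dr / 1e-6) * (n_n0 / 1e3) ** 0.5 * (29.0 / m_i_amu)
    return tau_ni, tau_in, v_Ai, a_dr

# Fiducial values return the prefactors quoted in the text:
# (0.506, 6.29e-07, 5000.0, 1410000000.0)
print(dimensionless_parameters())
\end{verbatim}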
For any given ion mass $\mi$ and mean mass per neutral particle $\mu$ (in units of $m_{\rm H}$), the {\em ion mass fraction} $\rhoiod$ and the cosmic-ray ionisation rate $\zcrd$ are not free parameters in the problem; the former is determined by the ratio $\tinod/\tniod$ and the latter by the product ${\rhoiod}^2 \alphdrd$, where $\alphdrd$ is the dimensionless dissociative-recombination coefficient for molecular ions. \beq We note that $\tniod = 1/\vffo$, where $\vffo$ is the {\em collapse retardation factor}, which is a parameter that measures the effectiveness with which magnetic forces are transmitted to the neutrals via neutral-ion collisions (Mouschovias 1982), and appears naturally in the timescale for the formation of protostellar cores by ambipolar diffusion (e.g., see reviews by Mouschovias 1987a, \S~2.2.5; 1987b, \S~3.4; 1991b, \S~2.3.1; and discussions in Fiedler \& Mouschovias 1992, 1993; Ciolek \& Mouschovias 1993, 1994, 1995; Basu \& Mouschovias 1994, 1995). It is essentially the factor by which ambipolar diffusion in a magnetically supported cloud retards the formation and contraction of a protostellar fragment (or core) relative to free fall up to the stage at which the mass-to-flux ratio exceeds the critical value for collapse. It is discussed further in \S~3.2.1. Equations (\ref{contineqd}) - (\ref{zinducteq}) govern the behaviour of small-amplitude disturbances in a weakly ionised cloud; they (without eq. [\ref{xinducteq}]) form a $10 \times 10$ homogeneous system. In general, the dispersion relation $\omegad(\kd)$ can be obtained by setting the determinant
272: 'independent': is that true? the transfer factor derive from the same fit, i.e. the same gaussian. How could N_est^i,fake be independent? l 326: among the systematic uncertainties, the one associated to the ISR modelling is the largest; nevertheless is it sufficient? You just consider 1sigma of statistical fluctuation (according to the AN, Section 8.2.5), a quantity that, in principle, you can reduce by increasing the statistics. Is there no systematic associated to the reweighing method itself? I think it is needed given the large correction factors you end up with (350%). A possibility (indeed extreme but for sure conservative) would be to evaluate the efficiency change with and without correction factors. Which would be the systematic in this case? <!--/twistyPlugin--> ## Questions from Joe Pastika HN August 7 <!--/twistyPlugin twikiMakeVisibleInline--> L254: "bad charged hadron filter" is listed as "not recommended" on the JetMET twiki. Is there a reason this is still included in your filter list? Changed: < < In the next version "(2017 only)" is added to this filter. The recommendation was changed between our analysis of 2017 and 2018 data, and it wasn't feasible to re-process 2017 for this; all of the MET filters listed remove only 2% of the /MET/ dataset, so it is a small issue. > > The recommendation was changed between our analysis of 2017 and 2018 data, and it wasn't feasible to re-process 2017 for this; all of the MET filters listed remove only 2% of the /MET/ dataset, so it is a small issue. In the next AN version "(2017 only)" is added here. L268: Could you use a difference symbol in the text/tables for when you use dxy/dz referenced from 0,0,0 (maybe dz_000 or something else reasonable) to differentiate it clearly from measurement w.r.t. the beamspot? Line: 113 to 152 This was normalized incorrectly and will be corrected to one in the next AN version. Changed: < < L458: Can you add plots (at least a few examples) to the AN of the Z->ll mass distributions in the OS and SS categories used for the T&P method? > > L458: Can you add plots (at least a few examples) to the AN of the Z->ll mass distributions in the OS and SS categories used for the T&P method? We will add these plots. L490: I don't understand how this test shows that P_veto does not depend on track pT. How significant is the KS test for distributions with so few events? Changed: < < This question was asked in the pre-approval (see below in the responses). Investigating we found that the estimate as presented is statistically consistent with several other hypotheses of P_veto's pt-dependence, for example a linear dependence. In short we do not have the statistics to determine a pt-dependence or for any potential dependence to matter. > > This question was asked in the pre-approval (see below in the responses). Investigating we found that the estimate as presented is statistically consistent with several other hypotheses of P_veto's pt-dependence, for example a linear dependence. In short we do not have the statistics to determine a pt-dependence or for any potential dependence to affect the estimate. L523: Can you say a few more words about how the trigger efficiency is calculated? Line: 168 to 209 nLayers = 5 1.013 +- 0.088 1.0 +- 1.4 nLayers >= 6 1.0 +- 0.21 1.02 +- 0.83 Changed: < < To use those weights in the estimate would need a more careful treatment of the statistical uncertainties, but even ignoring them these do not change the estimates at all. 
> > Despite the plots above these average weights are very consistent with one, e.g. the estimate does not depend on this. If possible, find the justification for using dxy with respect to the origin for the track isolation pileup subtraction. Line: 176 to 217 Suggestion: compare the nominal Gaussian fit to a flat line for NLayers5, as there's a concern the bias towards the PV changes as nLayers increases. Changed: < < The transfer factor for |d0| ~ (0.05, 0.50) is 0.14 (0.12 ZtoEE), but with a flat line the transfer factor is purely geometric and needs to fit. It would always be 0.02 / 0.45 = 0.0444. To do the same comparison as the "second assumption" (L 646-658 in the AN): ZtoMuMu NLayers5 ZtoEE NLayers5 sideband d0 tracks 25 9 signal d0 tracks 2 5 t.f. * sideband count (from fit) 3.50 ± 1.85 1.08 ± 0.69 t.f. * sideband count (flat) 1.11 ± 0.02 0.40 ± 0.13 > > With a flat line the transfer factor is purely a normalization issue and has no uncertainty; it is always 0.02 / 0.45 = 0.0444. The table below is added to the AN: Changed: < < Due to the ~ 40-50% uncertainty from the fit, the projection to the peak using the fit from NLayers4 ends up being more compatible with the observation of the peak. > > Changed: < < ZtoMuMu NLayers5 (gaussian fit from NLayers4) ZtoMuMu NLayers5 (pol0 fit) > > ZtoMuMu NLayers5 (gaussian fit from NLayers4) ZtoMuMu NLayers5 (pol0 fit, finer binning) > > The flat assumption reduces the estimates by ~2/3 and the agreement is worse, especially as there would be no 40-50% fit uncertainty. <!--/twistyPlugin--> ## First set of comments from Kevin Stenson HN July 14 Line: 1112 to 1148 META FILEATTACHMENT attachment="comparePU_fakeCRs.jpg" attr="" comment="" date="1564787415" name="comparePU_fakeCRs.jpg" path="comparePU_fakeCRs.jpg" size="116660" user="bfrancis" version="1" attachment="comparePU_fakeCRs_ratio.jpg" attr="" comment="" date="1564787415" name="comparePU_fakeCRs_ratio.jpg" path="comparePU_fakeCRs_ratio.jpg" size="128471" user="bfrancis" version="1" attachment="tfFlat_ZtoMuMu_NLayers5.png" attr="" comment="" date="1565462840" name="tfFlat_ZtoMuMu_NLayers5.png" path="tfFlat_ZtoMuMu_NLayers5.png" size="14955" user="bfrancis" version="1" > > META FILEATTACHMENT attachment="sidebandProjections.png" attr="" comment="" date="1565712492" name="sidebandProjections.png" path="sidebandProjections.png" size="70993" user="bfrancis" version="1" META TOPICMOVED by="bfrancis" date="1556204305" from="Main.DisappearingTracks2017" to="Main.EXO19010" #### Revision 472019-08-10 - BrianFrancis Line: 1 to 1 META TOPICPARENT name="BrianFrancis" # Search for new physics with disappearing tracks Line: 174 to 174 We've found where we initially took this from several years ago, however it seems like it was taken incorrectly. We still however observe no pileup-dependence on the track isolation requirement. > > Suggestion: compare the nominal Gaussian fit to a flat line for NLayers5, as there's a concern the bias towards the PV changes as nLayers increases. The transfer factor for |d0| ~ (0.05, 0.50) is 0.14 (0.12 ZtoEE), but with a flat line the transfer factor is purely geometric and needs to fit. It would always be 0.02 / 0.45 = 0.0444. To do the same comparison as the "second assumption" (L 646-658 in the AN): ZtoMuMu NLayers5 ZtoEE NLayers5 sideband d0 tracks 25 9 signal d0 tracks 2 5 t.f. * sideband count (from fit) 3.50 ± 1.85 1.08 ± 0.69 t.f. 
* sideband count (flat) 1.11 ± 0.02 0.40 ± 0.13 Due to the ~ 40-50% uncertainty from the fit, the projection to the peak using the fit from NLayers4 ends up being more compatible with the observation of the peak. ZtoMuMu NLayers5 (gaussian fit from NLayers4) ZtoMuMu NLayers5 (pol0 fit) </> <!--/twistyPlugin--> ## First set of comments from Kevin Stenson HN July 14 Line: 1095 to 1111 META FILEATTACHMENT attachment="tf_ZtoMuMu_NLayers5.png" attr="" comment="" date="1563389063" name="tf_ZtoMuMu_NLayers5.png" path="tf_ZtoMuMu_NLayers5.png" size="17113" user="bfrancis" version="1" attachment="comparePU_fakeCRs.jpg" attr="" comment="" date="1564787415" name="comparePU_fakeCRs.jpg" path="comparePU_fakeCRs.jpg" size="116660" user="bfrancis" version="1" attachment="comparePU_fakeCRs_ratio.jpg" attr="" comment="" date="1564787415" name="comparePU_fakeCRs_ratio.jpg" path="comparePU_fakeCRs_ratio.jpg" size="128471" user="bfrancis" version="1" > > META FILEATTACHMENT attachment="tfFlat_ZtoMuMu_NLayers5.png" attr="" comment="" date="1565462840" name="tfFlat_ZtoMuMu_NLayers5.png" path="tfFlat_ZtoMuMu_NLayers5.png" size="14955" user="bfrancis" version="1" META TOPICMOVED by="bfrancis" date="1556204305" from="Main.DisappearingTracks2017" to="Main.EXO19010" #### Revision 462019-08-08 - BrianFrancis Line: 1 to 1 META TOPICPARENT name="BrianFrancis" # Search for new physics with disappearing tracks Line: 81 to 81 </> <!--/twistyPlugin--> > > ## Questions from Joe Pastika HN August 7 <!--/twistyPlugin twikiMakeVisibleInline--> L254: "bad charged hadron filter" is listed as "not recommended" on the JetMET twiki. Is there a reason this is still included in your filter list? In the next version "(2017 only)" is added to this filter. The recommendation was changed between our analysis of 2017 and 2018 data, and it wasn't feasible to re-process 2017 for this; all of the MET filters listed remove only 2% of the /MET/ dataset, so it is a small issue. L268: Could you use a difference symbol in the text/tables for when you use dxy/dz referenced from 0,0,0 (maybe dz_000 or something else reasonable) to differentiate it clearly from measurement w.r.t. the beamspot? Now used is $d_{z}^{0}$ and $d_{xy}^{0}$ where relevant. L311: What effect does the choice of "2 sigma" on the inefficiency of tracks
# Why hasn't anyone proved that the two standard approaches to quantizing Chern-Simons theory are equivalent? The two standard approaches to the quantization of Chern-Simons theory are geometric quantization of character varieties, and quantum groups plus skein theory. These two approaches were both first published in 1991 (the geometric quantization picture here and the skein theoretic approach here), and despite a tremendous amount of development since then, it is still not known whether they are equivalent! I guess that it is reasonable to say that the problem of their equivalence has been around for 20 years. There is (at least) one important theorem, namely the asymptotic faithfulness of the mapping class group representations produced by these two quantizations, which has proofs in both settings. The two proofs are of completely different character, and are of course logically independent, since the two representations are not known to be the same (this was proved for the quantum group skein representation by Freedman, Walker, and Wang, and for the geometric quantization representation by Andersen). Is there a good reason why the equivalence of these two viewpoints is not yet a theorem? Is there an idea for a proof, which hasn't been completed because "it's just a long calculation" or "everyone knows it's true" or "it's nice to know, but it wouldn't actually help us prove theorems"? Or is it that it's actually a hard problem that no one knows how to approach? Is it an "important" problem whose solution would have lots of consequences and applications, or at least advance our understanding of "quantization"? - About representation of mapping class group - my feelings - that in genus zero (with "marked points") - it is Kohno-Drinfeld theorem - which says that monodromy representation of Knizhnik-Zamolodchikov equation is given by quantum R-matrix of the corresponding quantum group. I am not sure that higher genus is known, but actually I would say it would not be surprising... I am not big expert but I think that there no problems of seeing equivalences on the physical level of rigour... There are not only these two approaches - people worked on CS quantization by standard QFT - gauge fixing... –  Alexander Chervov Jan 27 '12 at 7:52 One more comment. There is "elder brother" for CS - quantization of Teichmuller space. The relation is like: su(2)(compact) VS. sl(2,R) -non-compact (=> more complicated is Teichmuller). More formally in CS you quantize symplectic manifold Moduli($\pi_1->SU(2)$) in Teichmuller you quantize Moduli($\pi_1->PSL(2,R)$). As far as I understand last decade the progress in quantum Teichmuller was quite big - see paper by Vladimir Fock and coauthors in arXiv. As far as I understand they can prove that all quantizations coincide for Teichmuller... It is related to "Liouville QFT". –  Alexander Chervov Jan 27 '12 at 8:16 Anderson has claimed this equivalence in talks, and used it to prove that the colored Jones polynomials distinguish knots from the unknot. But I don't think this work has appeared, I think he has made progress with collaborators though. –  Ian Agol Jan 27 '12 at 18:55 @Agol you think equivalence is not just Kohno-Drinfeld ? Why ? –  Alexander Chervov Jan 27 '12 at 19:24 @Alexander, is it known that the KZ monodromy is equivalent to the monodromy of the Hitchin connection in geometric quantization? 
Actually, according to an answer to a previous question of mine (mathoverflow.net/questions/73729), it's nontrivial even to relate the quantum Hilbert space for the punctured sphere and the vector space on which the KZ equations act (I'd be happy to be wrong about this, though). –  John Pardon Jan 27 '12 at 20:22 Good question. I'm much more familiar with the QG/skein theory approach than the geometric quantization approach, so perhaps what I write here will be biased. I think the main reason there is not yet a proof that the two approaches are equivalent is that the geometric quantization side is difficult and unwieldy (in my biased opinion), though I'm willing to concede that it might also be beautiful and interesting. I think Jorgen Andersen has made the most progress on the GQ side, so you might want to look at his recent papers to get a feeling for what the state of the art is. A few years ago Andersen told me the outline of an argument for proving that the two representations of the mapping class group were the same. It's a nice idea, so I'll repeat it here. I'm not sure how close Andersen and/or others are to filling in all the details. The GQ Hilbert space for a surface $Y$ is (roughly) the space of holomorphic sections of a certain line bundle $L$ over the space of flat connections on $Y$. The holomorphic structure comes from a choice of complex structure on $Y$. Choose a pants decomposition of $Y$, and deform its complex structure by stretching transversely to the curves which define the pants decomposition. As we head toward the boundary of Teichmuller space, the holomorphic sections of $L$ will become more and more concentrated along a certain Lagrangian submanifold. In the limit, we get delta functions along this submanifold. Recall now that instead of a complex polarization we could have chosen a real polarization; see a paper of Jeffrey and Weitsman from the early 1990s. This real polarization determines a Lagrangian foliation of the space of flat connections. Certain of the leaves of this foliation have trivial holonomy (of $L$); these are called the Bohr-Sommerfeld orbits. Jeffrey and Weitsman showed that the number of Bohr-Sommerfeld orbits matched the expected dimension of the Hilbert space. The first punch line: The Lagrangian submanifold in the Andersen picture is exactly the Bohr-Sommerfeld orbits of the Jeffrey-Weitsman picture. This shows that GQ gives the same answer with real or complex polarization. The second punch line: The connected components of the Bohr-Sommerfeld orbits correspond to the connections where the holonomies around the pants curves take on certain discrete values in $SU(2)$. In other words, we have a finite label set (the set of allowable holonomies), and a basis of the Hilbert space for the real polarization is indexed by labelings of the pants curves by this discrete set. The skein basis is indexed by an exactly similar set of labelings of the pants curves. This gives an isomorphism between the real polarization basis and the skein basis. I'll repeat my caveats: I'm not an expert in the above story (so I may have gotten some of the details wrong), and I think that even the experts cannot presently fill in all the details. But it seems like a nice and plausible argument to me. So far as I know it has not appeared in print, so I thought it was worth mentioning here. - Remark: The number of holomorphic sections $H^0(L^k)$ is given by the famous Verlinde formula. As Kevin mentioned, this is the Hilbert space of the theory in the holomorphic polarization.
The level $k$ in the Verlinde formula corresponds to the $k$-th power of the basic line bundle –  Alexander Chervov Jan 27 '12 at 10:05 @Kevin: Interesting ideas! What does "stretching transversely" mean in the sentence about the deformation of the complex structure? I do not quite understand what deformation you mean and why it approaches the boundary... –  Alexander Chervov Jan 27 '12 at 10:08 "The Lagrangian submanifold in the Andersen picture is exactly the Bohr-Sommerfeld orbits of the Jeffrey-Weitsman picture" ... 1) Maybe you also need to take the classical limit $k \to \infty$? 2) Do you mean that, independently of the way one approaches the Teichmuller boundary, any holomorphic section will become a linear combination of "delta"-functions concentrated along the BS tori? Or does it depend somehow? Or does the choice of pants decomposition select a unique way to approach the boundary? –  Alexander Chervov Jan 27 '12 at 10:16 @Alexander Chervov: In terms of the corresponding
# Diversity in Living Organisms ## Science ### NCERT 1   Why do we classify organisms? ##### Solution : There are a wide range of life forms (about $10$ million -$13$ million species) around us. These life forms have existed and evolved on the Earth over millions of years ago. The huge range of these life forms makes it very difficult to study them one by one. Therefore, we look for similarities among them and classify them into different classes to study these different classes as a whole. Thus, classification makes our study easier. 2   Give three examples of the range of variations that you see in life-forms around you. ##### Solution : Examples of range of variations observed in daily life are:$\\$ (i) Variety of living organisms in terms of size ranges from microscopic bacteria to tall trees of $100$ metres.$\\$ (ii) The colour, shape, and size of snakes are completely different from those of lizards.$\\$ (iii) The life span of different organisms is also quite varied. For example, a crow lives for only $15$ years, whereas a parrot lives for about $140$ years. 3   Which do you think is a more basic characteristic for classifying organisms?$\\$ (a) The place where they live.$\\$ (b) The kind of cells they are made of. Why? ##### Solution : The kind of cells that living organisms are made up of is a more basic characteristic for classifying organisms, than on the basis of their habitat. This is because on the basis of the kind of cells, we can classify all living organisms into eukaryotes and prokaryotes. On the other hand, a habitat or the place where an organism lives is a very broad characteristic to be used as the basis for classifying organisms. For example, animals that live on land include earthworms, mosquitoes, butterfly, rats, elephants, tigers, etc. These animals do not resemble each other except for the fact that they share a common habitat. Therefore, the nature or kind of a cell is considered to be a fundamental characteristic for the classification of living organisms. 4   What is the primary characteristic on which the first division of organisms is made? ##### Solution : The primary characteristic on which the first division of organisms is made is the nature of the cell. It is considered to be the fundamental characteristic for classifying all living organisms. Nature of the cell includes the presence or absence of membrane-bound organelles. Therefore, on the basis of this fundamental characteristic, we can classify all living organisms into two broad categories of eukaryotes and prokaryotes. Then, further classification is made on the basis of cellularity or modes of nutrition. 5   On what basis are plants and animals put into different categories? ##### Solution : Plants and animals differ in many features such as the absence of chloroplasts, presence of cell wall, etc. But, locomotion is considered as the characteristic feature that separates animals from plants. This is because the absence of locomotion in plants gave rise to many structural changes such as the presence of a cell wall (for protection), the presence of chloroplasts (for photosynthesis) etc. Hence, locomotion is considered to be the basic characteristic as further differences arose because of this characteristic feature. 6   Which organisms are called primitive and how are they different from the so-called advanced organisms? 
##### Solution : A primitive organism or lower organism is the one which has a simple body structure and ancient body design or features that have not changed much over a period of time. An advanced organism or higher organism has a complex body structure and organization. For example, an Amoeba is more primitive as compared to a starfish. Amoeba has a simple body structure and primitive features as compared to a starfish. Hence, an Amoeba is considered more primitive than a starfish. 7   Will advanced organisms be the same as complex organisms? Why? ##### Solution : It is not always true that an advanced organism will have a complex body structure. But, there is a possibility that over the evolutionary time, complexity in body design will increase. Therefore, at times, advanced organisms can be the same as complex organisms. 8   What is the criterion for classification of organisms as belonging to kingdom Monera or Protista? ##### Solution : The criterion for the classification of organisms belonging to kingdom Monera or Protista is the presence or absence of a well-defined nucleus or membrane-bound organelles. Kingdom Monera includes organisms that do not have a well-defined nucleus or membrane-bound organelles and these are known as prokaryotes. Kingdom Protista, on the other hand, includes organisms with a well-defined nucleus and membrane-bound organelles and these organisms are called eukaryotes. 9   In which kingdom will you place an organism which is single-celled, eukaryotic and photosynthetic? ##### Solution : Kingdom Protista includes single celled, eukaryotic, and photosynthetic organisms. 10   In the hierarchy of classification, which grouping will have the smallest number of organisms with a maximum of characteristics in common and which will have the largest number of organisms? ##### Solution : In the hierarchy of classification, a species will have the smallest number of organisms with a maximum of characteristics in common, whereas the kingdom will have the largest number of organisms. 11   Which division among plants has the simplest organisms? ##### Solution : Thallophyta is the division of plants that has the simplest organisms. This group includes plants, which do not contain a well differentiated plant body. Their body is not differentiated into roots, stems, and leaves. They are commonly known as algae. 12   How are pteridophytes different from the phanerogams? ##### Solution : $\text{Pteridophyta}$ $\\$ $\bullet$They have inconspicuous or less differentiated reproductive organs.$\\$ $\bullet$They produce naked embryos called spores. $\\$ $\bullet$Ferns, Marsilea, Equisetum, etc. are examples of pteridophyta.$\\$ $\text{Phanerogams}$ $\\$ $\bullet$They have well developed reproductive organs.$\\$ $\bullet$They produce seeds.$\\$ $\bullet$Pinus, Cycas, fir, etc. are examples of phanerogams$\\$ 13   How do gymnosperms and angiosperms differ from each other? ##### Solution : $\text{Gymnosperm}$ $\\$ $\bullet$They are non-flowering plants.$\\$ $\bullet$Naked seeds not enclosed inside fruits are produced.$\\$ $\bullet$Pinus, Cedar, fir, Cycas, etc. are some examples of gymnosperms.$\\$ $\text{Angiosperm}$ $\\$ $\bullet$They are flowering plants.$\\$ $\bullet$Seeds are enclosed inside fruits.$\\$ $\bullet$Coconut, palm, mango, etc. are some examples of angiosperms. 14   How do poriferan animals differ from coelenterate animals? 
##### Solution : $\text{Porifera}$ $\\$ $\bullet$They are mostly marine, nonmotile, and found attached to rocks.$\\$ $\bullet$They show cellular level of organisation.$\\$ $\bullet$Spongilla, Euplectella, etc. are poriferans.$\\$ $\text{Coelenterate}$ $\\$ $\bullet$They are exclusively marine animals that either live in colonies or have a solitary existence.$\\$ $\bullet$They show tissue level of organisation.$\\$ $\bullet$ Hydra, sea anemone, corals, etc. are coelenterates. 15   How do annelid animals differ from arthropods? ##### Solution : $\text{Annelids}$ $\\$ $\bullet$The circulatory system of annelids is closed.$\\$ $\bullet$The body is divided into several identical segments.$\\$ $\text{Arthropods}$ $\\$ $\bullet$Arthropods have an open circulatory system.$\\$ $\bullet$The body is divided into few specialized segments.$\\$ 16   What are the differences between amphibians and reptiles? ##### Solution : $\text{Amphibian}$ $\\$ $\bullet$ They have a dual mode of life.$\\$ $\bullet$Scales are absent.$\\$ $\bullet$They lay eggs in water.$\\$ $\bullet$It includes frogs, toads, and salamanders.$\\$ $\text{Reptiles}$ $\\$ $\bullet$They are completely terrestrial.$\\$ $\bullet$Skin is covered with scales.$\\$ $\bullet$They lay eggs on land.$\\$ $\bullet$It includes lizards, snakes, turtles, chameleons, etc.$\\$ 17   What are the differences between animals belonging to the Aves group and those in the mammalia group? ##### Solution : $\text{Aves}$ $\\$ $\bullet$ Most birds have feathers and they possess a beak.$\\$ $\bullet$They lay eggs. Hence, they are oviparous.$\\$ $\text{Mammals}$ $\\$ $\bullet$They do not have feathers and the beak is also absent.$\\$ $\bullet$Some of them lay eggs and some give birth to young ones. Hence, they are both oviparous and viviparous. $\\$ 18   What are the advantages of classifying organisms? ##### Solution : There is a wide range of life forms (about $10$ million$-13$ million species) around us. These life forms have existed and evolved on the Earth over millions of years.
$Z_4$. Another alternative to correct the relation $M_e=M_d^T$ is to substitute the second Higgs quintet by a 45 dimensional Higgs representation~\cite{Perez:2007rm}. In this case the mass difference will be given by $M_d-M_{e}^{\top}=8\,\Gamma^2_d\,\text{v}^{\ast}_{45}$, where v$_{45}$ is the VEV of the 45. In any of those situations the up-quark mass matrix is no longer symmetric, which is the reason why we have considered arbitrary NNI mass matrices in section ~\ref{CS_sec:matrices}. \paragraph{Proton Decay} The proton decay can occur through the exchange of X and Y heavy gauge bosons or the exchange of the colour Higgs triplets, $T_1$ and $T_2$ contained in the quintets. For the proton decay via the exchange of heavy gauge bosons the decay width can be estimated~\cite{Langacker:1980js} as $\Gamma\approx \alpha_U^2 \frac{m_p^5}{M_V^4}$. Using the partial proton lifetime~\cite{Nakamura:2010zzi} $\tau(p\rightarrow \pi^0 e^+)>8.2\times10^{33}$ years the mass of the heavy gauge bosons is estimated as $M_V>(4.0-5.1)\times 10^{15}$ GeV for a unified gauge coupling in the range $\alpha_U^{-1}\approx 25 - 40$. Concerning the proton decay via the exchange of the colour Higgs triplets, the dimension 6 operators contributions at tree-level are given by \begin{equation} \label{CS_eq:contr1} \sum_{n=1,2} \frac{\left(\Gamma^n_u\right)_{ij}\left(\Gamma^n_d\right)_{kl}}{M^2_{T_n}} \left[\frac{1}{2} (Q_iQ_j)(Q_kL_l)+(u^c_ie^c_j)(u^c_kd^c_l)\right]\,, \end{equation} that in fact vanish due to the Yukawa matrices form. \paragraph{Unification} We have found unification of the gauge couplings at two-loop level without considering the threshold effects and performing the splitting between the masses of the $\Sigma_3$ and $\Sigma_8$. In our computation, we have set the fields X, Y, $T_1$, $T_2$ at GUT scale, $\Lambda$, and $H_1$, $H_2$ around electroweak scale. We found a GUT scale around $\Lambda\approx(1.3-2.4)\times 10^{14}$ GeV and the masses of the $\Sigma_3$ and $\Sigma_8$ components of $\Sigma$ in the range $M_Z \leq M_{\Sigma_3}\leq 1.8\times 10^4\,\text{GeV}$ and $5.4\times 10^{11}\,\text{GeV}\leq M_{\Sigma_8}\leq 1.3\times 10^{14}\,\text{GeV}$. Unfortunately, the unification scale found is smaller than what we expect from the computation of the proton decay through the exchange of the heavy X and Y gauge bosons and the mass splitting between $M_{\Sigma_3}$ and $M_{\Sigma_8}$ is unnaturally large. This discrepancy can be avoid by the introduction of a 24 fermionic representation \cite{Bajc:2006ia}. In such case the neutrino masses will get contributions also from type-III seesaw mechanism in addition to the usual type-I. \section{Mass matrices} \label{CS_sec:matrices} In Ref.~\cite{Branco:2010tx} we have shown that the quark mass matrices in the NNI form accommodate all observed up- and down-quark masses and the CKM mixing matrix. As a consequence of $SU(5)$ symmetry and since the NNI form has zeroes in symmetric positions, the charged lepton mass matrix, $M_e$, has also NNI form. 
Both quark and charged-lepton mass matrices can be written as, \begin{equation} M_{x}=\begin{pmatrix} 0&A_x(1-\epsilon_a^x)&0\\ A_x(1+\epsilon_a^x)&0&B_x(1-\epsilon_b^x)\\ 0&B_x(1+\epsilon_b^x)&C_x \end{pmatrix}\,, \end{equation} where $x=u,\,d,\,e$ and $\epsilon$ measures the deviation from the Hermiticity; a global measurement of the asymmetry in the quark, $\varepsilon_q$, and leptonic, $\varepsilon_{\ell}$, sectors is given by \begin{equation} \varepsilon_q\equiv\frac{1}{2}\sqrt{\epsilon^{u\,2}_a+\epsilon^{u\,2}_b+\epsilon^{d\,2}_a+\epsilon^{d\,2}_b}\,\quad \text{and}\quad \varepsilon_{\ell}\equiv\sqrt{\frac{\epsilon^{e\,2}_a+\epsilon^{e\,2}_b}2}\,. \end{equation} For $\varepsilon_q=\varepsilon_e=0$ one recovers the Fritzsch form~\cite{Fritzsch:1977vd,Li:1979zj,Fritzsch:1979zq}. The fact that the $Z_4$ neutrino charges are free parameters obliges us to scan all charge combinations and select the viable textures by confronting them with neutrino experimental data. \begin{table} \scriptsize \begin{center} \begin{tabular}{ccc} \hline parameters & NH & IH\\ \hline\\[-1ex] $\Delta m^2_{21}\,\left(\times 10^{-5}\,\text{eV}^2\right)$ & \multicolumn{2}{c}{$7.62\pm 0.19$}\\[2ex] $\left|\Delta m^2_{31}\right|\,\left(\times10^{-3}\,\text{eV}^2\right)$ & $2.53^{+0.08}_{-0.10}$ & $2.40^{+0.10}_{-0.07}$ \\[2ex] $\sin^2\theta_{12}$ & \multicolumn{2}{c}{$0.320^{+0.015}_{-0.017}$} \\[2ex] $\sin^2\theta_{23}$ & $0.49^{+0.08}_{-0.05}$ & $0.53^{+0.05}_{-0.07}$ \\[2ex] $\sin^2\theta_{13}$ & $0.026^{+0.003}_{-0.004}$ & $0.027^{+0.003}_{-0.004}$\\ \hline \end{tabular}\quad \begin{tabular}{l} $m_e(M_Z)=0.486661305\pm{0.000000056}\text{ MeV}$\,, \\[2ex] $m_{\mu}(M_Z)=102.728989\pm{0.000013}\text{ MeV}$\,,\\[2ex] $m_{\tau}(M_Z)=1746.28\pm{0.16}\text{ MeV}$\,, \end{tabular} \caption{\label{CS_tab:data} The three-flavour oscillation parameters with 1 $\sigma$ errors, from Ref.~\cite{Schwetz:2011zk}, for normal hierarchy (NH) and inverted hierarchy (IH) (on the left) and the charged-lepton mass at $M_Z$ scale (on the right)~\cite{EmmanuelCosta:2011jq}.} \end{center} \end{table} The effective neutrino mass matrix is given by the type-I seesaw formula~\cite{Minkowski:1977sc} $m_{\nu}=-m_D\,M^{-1}_R\,m^{\top}_D$ to an excellent approximation ($m_D\ll M_R$). Performing the scan of all $Z_4$ charges for $\phi_1$, $q_3$ and neutrinos one is able to determine the shape of the effective neutrino mass matrix. After its analysis one concludes that among the six different possibilities (see Ref.~\cite{EmmanuelCosta:2011jq}) only two textures are viable: II and II$_{(12)}$ where $\text{II}=P_{12}^{\top}\text{II}_{(12)}\,P_{12}$. \begin{equation} \label{CS_eq:textures} \text{II}=\begin{pmatrix} 0 & \ast & 0 \\ \ast & \ast & \ast\\ 0 & \ast & \ast \end{pmatrix}\,,\qquad \qquad \text{II}_{(12)}=\begin{pmatrix} \ast & \ast & \ast \\ \ast & 0 & 0\\ \ast & 0 & \ast \end{pmatrix}\,. \end{equation} In order to confront the predictions from $M_e$ and $m_\nu$ with the neutrino oscillation data at $M_Z$ energy scale one needs to diagonalize both $M_e$ and $m_\nu$ and compute the Pontecorvo-Maki-Nakagawa-Sakata~(PMNS) matrix~\cite{Pontecorvo:1957cp,Pontecorvo:1957qd,Maki:1962mu}. The leptonic mixing matrix is given by $U_{PMNS}=U^{\top}_{\ell}\,P_{12}\,U_{\nu}$ where $U_{\ell}$ and $U_\nu$ are the diagonalizing matrices of charged-leptons and neutrinos respectively. 
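As an illustration of the chain just described (a toy sketch with randomly chosen real matrices, added here; it is not the scan performed in this work), the seesaw formula and the construction of $U_{PMNS}$ read:
\begin{verbatim}
# Toy sketch (not the scan used in this work): type-I seesaw,
# m_nu = -m_D M_R^{-1} m_D^T, followed by U_PMNS = U_l^T P_12 U_nu.
import numpy as np

rng = np.random.default_rng(0)
m_D = 1e2 * rng.normal(size=(3, 3))        # illustrative Dirac neutrino mass matrix
M_R = np.diag([1e14, 5e14, 1e15])          # illustrative right-handed Majorana masses
M_e = rng.normal(size=(3, 3))              # illustrative charged-lepton mass matrix

m_nu = -m_D @ np.linalg.inv(M_R) @ m_D.T   # effective neutrino mass matrix

# Diagonalise: U_l from M_e M_e^T (real toy case), U_nu from the real
# symmetric m_nu; for complex matrices a Takagi factorisation would be needed.
_, U_l  = np.linalg.eigh(M_e @ M_e.T)
_, U_nu = np.linalg.eigh(m_nu)

P_12 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
U_PMNS = U_l.T @ P_12 @ U_nu
\end{verbatim}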
In our numerics we have varied all charged-lepton masses and neutrino mass differences within their allowed range (see Table~\ref{CS_tab:data}), scanned the mass of the lightest neutrino for different magnitudes below 2 eV and computed the other two masses through $\Delta m^2_{ij}\equiv m _i^2-m _j^2$ the mass squared difference, using the actual neutrino oscillation data~\cite{Schwetz:2011zk}; the free parameters of $M_e$ and $m_\nu$ were also properly taken into account (see Ref.~\cite{EmmanuelCosta:2011jq}). We have considered as additional constraints the effective Majorana mass~\cite{Pascoli:2001by,Pascoli:2002xq,Pascoli:2003ke} $m_{ee}\equiv\sum_{i=1}^3 m_i\,U^{\ast2}_{1i}$; the constraint from Tritium $\beta$ decay \cite{Nakamura:2010zzi} $m_{\nu_e}^2\equiv\sum_{i=1}^3\,m_i^2|U_{1i}|^2<\left(2.3\,\text{eV}\right)^2$ at 95\% C.L. and constraints on the sum of light neutrino masses from cosmological and astrophysical data \cite{Spergel:2006hy} $\mathcal{T}\equiv\sum_{i=1}^3 m_i < 0.68\,\text{eV}$ at 95\% C.L.. We got that texture II is compatible just with normal hierarchy~(NH) while texture II$_{(12)}$ is compatible just with inverted hierarchy~(IH). \begin{figure}[h] \centering \includegraphics[scale=0.35]{Papers/Simoes1.jpg} \caption{\label{CS_fig1} Plot of the effective majorana mass, $|m_{ee}|$, as a function of the lightest neutrino mass $m_1$ for the Textures-II (NH) (left) and $m_3$ for Texture-II$_{(12)}$ (IH) (right).} \end{figure} For texture II and normal hierarchy we found that the lightest neutrino mass varies in the range $m_1=\left[0.0015, 0.013\right]$ eV while the global deviation is $\varepsilon_{\ell}>0.005$; the effective Majorana mass found was $0.00097\,\text{eV} <|m_{ee}| <0.0021\,\text{eV}$. Concerning texture II$_{(12)}$ where inverted hierarchy applies we found the lightest neutrino mass to be in the range $m_3=\left[0.005, 0.010\right]$ eV, the global deviation is $\varepsilon_{\ell}>0.003$ and the $|m_{ee}|$ parameter is given by $0.015\,\text{eV}<|m_{ee}|<0.021\,\text{eV}$. \section{Conclusion} In this work we showed that it is possible to implement a $Z_4$ flavour symmetry, in the context of $SU(5)$ with minimal fermionic content plus three right-handed neutrinos and two Higgs quintets, that leads to quark mass matrices in the NNI form. We have studied the implications of this $SU(5)\times Z_4$ symmetry on the leptonic sector and found that, among the six possible textures for the effective neutrino mass matrix, only two are phenomenologically viable and it is possible to distinguish them by the light neutrino mass spectrum hierarchy. \section*{Acknowledgments} I would like to thank the organizers of FLASY12 - Workshop on Flavor Symmetries for the opportunity to participate and present this work. I would like to thank David Emmanuel-Costa for the encouragement to do the presentation and the proceeding. This work was partially supported by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT, Portugal) through the contract SFRH/BD/61623/2009 and the projects CERN/FP/123580/2011, PTDC/FIS/098188/2008 and CFTP-FCT Unit 777 which are partially funded through POCTI (FEDER). \chapter[Squark Flavor Implications from $\bar{B}\rightarrow \bar{K}^{(*)}l^+l^-$ (Schacht)]{Squark Flavor Implications from $\bar{B}\rightarrow \bar{K}^{(*)}l^+l^-$} \vspace{-2em} \paragraph{S. Schacht} \paragraph{Abstract} We present new results on supersymmetric flavor from the recently improved constraints on $\bar{B}\rightarrow \bar{K}^{(*)}l^+l^-$. 
In part of the parameter space the bound on the scharm-stop left-right mixing is as strong as $\left(\delta_{23}^u\right)_{LR} \lesssim 10\%$. We inspect the reach of Supersymmetry (SUSY) models with flavor violation and present implications for models based on Radiative Flavor Violation. \section{Introduction} \paragraph{SM and SUSY Flavor Puzzle} In the Standard Model (SM) the question for the origin of the hierarchy of the Yukawa couplings is not answered: The only natural Yukawa coupling is the one of the top quark $\lambda_{\text{Top}} \sim 1$ which is of the order of the gauge couplings. The other Yukawa couplings are small yet hierarchical, which forms the SM flavor puzzle. When we switch on Supersymmetry (SUSY) we do not only have a puzzle but a serious problem: SUSY itself says nothing about flavor violation in SUSY breaking so generically SUSY flavor violation can be $\sim\mathcal{O}(1)$. On the other hand, flavor changing neutral current (FCNC) data partly drastically constrains SUSY flavor violation, so also here, in the SUSY breaking, a non-generic structure is necessary. The many new sources of flavor violation in SUSY can be parametrized by $6\times 6$ squark mass matrices that are in general not diagonal. Commonly, one normalizes the off-diagonal elements of these
in \eqref{eq:U}, to measure the effectiveness of introducing vicinal functions and obtain a strategy to examine whether the choice of vicinal functions is suitable for the VRM-based learning setting. Finally, we provide a theoretical explanation of some existing VRM models including uniform distribution-based models, Gaussian distribution-based models, and mixup models. In contrast to classical statistical learning theory, obtaining the generalization bounds for VRM requires additional considerations. In view of VRM's inherent characteristics, we introduce the concept of {\it vicinity ghost samples} to exploit the positional relationship between samples and their ghosts. Then, we analyze the behavior of the sample pair composed of one sample and the ghost sample lying in its vicinity. In addition, to guarantee the existence of such a sample pair, we prove that the probability that the Euclidean distance between the sample and its vicinity ghost is large will exponentially decay to {\it zero} when the sample size $N$ goes to {\it infinity}. Based on the sample pairs, we introduce the {\it difference function class} {$\mathcal{P}:=\{f({\bf z}_1)-\phi(f,{\bf z}_2):f\in\mathcal{F},\;{\bf z_1},{\bf z_2}\in\mathcal{Z}\}$}, and then study the complexity of $\mathcal{P}$. We prove that the covering number of $\mathcal{P}$ can be controlled by that of $\mathcal{F}$. To obtain a distribution-free upper bound of the covering number of $\mathcal{P}$, we then develop two types of uniform entropy numbers (UENs) for $\mathcal{P}$: the first by selecting $N$ points from $\mathcal{Z}\times\mathcal{Z}$ and the second by selecting elements from a $\xi$-rectangle cover of $\mathcal{Z}\times\mathcal{Z}$. We highlight that the complexity of $\mathcal{P}$ is dominated by the latter UEN, which actually is defined based on the positional relationship between samples and their ghosts. This result also implies that the positional relationship plays an important role in studying VRM's properties. Since $\mathbb{E}_{{\bf Z} \sim (D(\mathcal{Z}))^N}\{\widehat{\mathrm{R}}f\}$ usually is not identical to $\mathbb{E}_{{\bf Z} \sim (D(\mathcal{Z}))^N}\{\mathrm{R}_vf\}$, we will apply the one-sided concentration inequality for the random variables with non-zero means. Moreover, we present the specific symmetrization inequality for VRM. Although the symmetrization inequality is of the similar form with that of the classical symmetrization for ERM, it has some specific considerations. The rest of this paper is organized as follows. In Section \ref{sec:ghost}, we introduce the concept of vicinity ghost samples and Section \ref{sec:uen} studies the complexity of difference function classes. In Section \ref{sec:main}, we present the generalization bounds for VRM and the last section concludes the paper. In Appendix~\ref{sup:sym}, we present the symmetrization and the concentration inequalities respectively. The proofs of main results are given in Appendix~\ref{sup:proof}. \section{Vicinity Ghost Samples}\label{sec:ghost} Let $\Pi$ be the collection of all permutations of the set $\{1,\dots,N\}$. Given an i.i.d.~sample set ${\bf Z}= \{{\bf z}_1,\cdots,{\bf z}_N\}$, let ${\bf Z}'=\{{\bf z}'_1,\cdots,{\bf z}'_N\}$ be its ghost sample set. It is noteworthy that \begin{align}\label{eq:permute} \widehat{\mathrm{R}}'f-\mathrm{R}_\nu f = \frac{1}{N}\sum_{n=1}^N\big( f({\bf z}'_n) - \phi(f,{\bf z}_n) \big) = \frac{1}{N}\sum_{n=1}^N\big( f({\bf z}'_{\pi(n)}) - \phi(f,{\bf z}_n) \big), \end{align} where $\pi\in\Pi$ is any permutation. 
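To make the pairing concrete, the short sketch below (an illustration added here, not taken from the original text) computes a permutation minimising the total Euclidean distance between a sample set and its ghost set, in the spirit of the definition of vicinity ghost samples that follows; the Hungarian-algorithm solver from scipy is an assumption of the sketch, not a construction appearing in the paper.
\begin{verbatim}
# Sketch: find the permutation pi_* minimising sum_n ||z'_{pi(n)} - z_n||_2
# for toy data, using scipy's Hungarian-algorithm solver (an assumption here).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
Z       = rng.normal(size=(100, 5))      # sample set: N = 100 points in R^5
Z_ghost = rng.normal(size=(100, 5))      # ghost sample set

cost = cdist(Z_ghost, Z)                 # cost[i, n] = ||z'_i - z_n||_2
row, col = linear_sum_assignment(cost)   # optimal ghost-to-sample assignment
pi_star = row[np.argsort(col)]           # pi_*(n): ghost index paired with z_n
pairs   = list(zip(pi_star, range(100))) # the sample pairs s_[n] = (z'_{pi_*(n)}, z_n)
total   = cost[pi_star, np.arange(100)].sum()
\end{verbatim}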
In view of VRM's characteristics, we would like to study the behavior of $f({\bf z}'_{\pi(n)})-\phi(f,{\bf z}_n)$ when ${\bf z}'_{\pi(n)}$ lies in the vicinity of ${\bf z}_n$. For this purpose, we introduce the concept of {\it vicinity ghost samples} to exploit the positional relationship between samples and their ghosts. We then present a theoretical guarantee that for any sample ${\bf z}_{n}\in {\bf Z}$, the corresponding ghost sample will lie in its vicinity with high probability. \begin{definition}[Vicinity Ghost Samples]\label{def:vgs} Given a sample set ${\bf Z}= \{{\bf z}_1,\cdots,{\bf z}_N\}$ and its ghost set ${\bf Z}'=\{{\bf z}'_1,\cdots,{\bf z}'_N\}$, let \begin{equation}\label{eq:permute} \pi_* = \mathop{\arg\min}_{\pi\in\Pi} \sum_{n=1}^N \|{\bf z}'_{\pi(n)}-{\bf z}_n \|_2. \end{equation} The resulting sequence {${\bf Z}_{\pi_*}^\dagger:=\{{\bf z}'_{\pi_*(1)},\cdots,{\bf z}'_{\pi_*(N)}\}$} is called the vicinity ghost samples. Moreover, denote the sample pair {${\bf s}_{[n]} := ({\bf z}'_{\pi_*(n)},{\bf z}_n)$} and ${\bf S}^\dag:= \{ {\bf s}_{[1]},\cdots,{\bf s}_{[N]}\}$. \end{definition} There are two things worth noting: 1) the sample pairs ${\bf s}_{[1]},\cdots,{\bf s}_{[N]}$ are no longer independent; and 2) {the point ${\bf z}'_{\pi_*(n)}$ could be far from ${\bf z}_n$ in some cases, for example, when} they lie in the tail of a distribution on $\mathcal{Z}$. Therefore, it is critical to show that the point ${\bf z}'_{\pi_*(n)}$ lies in the vicinity of its counterpart ${\bf z}_n$ with high probability. {One main challenge is to provide a bridge between the Euclidean distance and the probability distribution. Here, we introduce a probability-based distance to measure the difference between two points in $\mathbb{R}^K$ and then prove that the probability that ${\bf z}'_{\pi_*(n)}$ is far from ${\bf z}_n$ w.r.t. this distance will exponentially decay to {\it zero} as the sample size $N$ approaches {\it infinity}. \begin{definition}[CDF Distance]\label{def:CDFdistance} Let $\mathrm{F}({\bf t})$ be the cumulative distribution function (cdf) of a distribution on $\mathcal{Z}\subset\mathbb{R}^K$: \begin{equation*} \mathrm{F}({\bf t}) :=\int_{(-\infty,{\bf t}]} d \mathrm{P}({\bf z}),\quad \forall \,{\bf t}=(t^{(1)},\cdots,t^{(K)})\in\mathbb{R}^K, \end{equation*} where $(-\infty,{\bf t}]$ denotes the set $(-\infty,t^{(1)}]\times\cdots\times (-\infty,t^{(K)}]$, and denote its $k$-th marginal cdf as \begin{equation*} \mathrm{F}_k({\bf t}) := \lim_{t^{(1)},\cdots,t^{(k-1)},t^{(k+1)},\cdots,t^{(K)}\rightarrow +\infty} \mathrm{F}({\bf t}). \end{equation*} The cdf distance between two points ${\bf t}_1, {\bf t}_2\in\mathbb{R}^K$ is defined as \begin{equation*} d_{\mathrm{F}}({\bf t}_1, {\bf t}_2) : = \sqrt{\sum_{k=1}^K \big(\mathrm{F}_k ({\bf t}_1) -\mathrm{F}_k ({\bf t}_2)\big)^2 }. \end{equation*} \end{definition} It is easy to prove that $d_{\mathrm{F}}({\bf t}_1, {\bf t}_2)$ satisfies non-negativity, symmetry and the triangle inequality. For further details on the cdf distance, we refer to Chapter 4.2 of \citet{venturini2015statistical}. Since the distribution $\mathrm{P}({\bf z})$ is usually unknown, there is also an empirical version of the cdf distance: \begin{definition}[Empirical CDF Distance] Let ${\bf Z}= \{{\bf z}_1,\cdots,{\bf z}_N\}$ be an i.i.d.~sample set taken from a distribution on $\mathcal{Z}$ with ${\bf z}_n = (z_n^{(1)},\cdots,z_n^{(K)})$.
Denote $\widehat{\mathrm{F}}({\bf t})$ as the empirical cdf: \begin{equation*} \widehat{\mathrm{F}}({\bf t}):= \frac{1}{N} \sum_{n=1}^N {\bf 1}_{(-\infty,{\bf t}]}({\bf z}_n),\;\; \forall \,{\bf t}=(t^{(1)},\cdots,t^{(K)})\in\mathbb{R}^K, \end{equation*} and denote $\widehat{\mathrm{F}}_k ({\bf t})$ as the $k$-th marginal empirical cdf: \begin{equation*} \widehat{\mathrm{F}}_k ({\bf t}) := \frac{1}{N} \sum_{n=1}^N {\bf 1}_{(-\infty,t^{(k)}]}(z^{(k)}_n). \end{equation*} The empirical cdf distance between two points ${\bf t}_1, {\bf t}_2\in\mathbb{R}^K$ is defined as \begin{equation}\label{eq:ecdf} d_{\widehat{\mathrm{F}}}({\bf t}_1, {\bf t}_2) : = \sqrt{\sum_{k=1}^K \big(\widehat{\mathrm{F}}_k ({\bf t}_1) -\widehat{\mathrm{F}}_k ({\bf t}_2)\big)^2 }. \end{equation} \end{definition} As mentioned in \citet{venturini2015statistical}, it follows from the strong law of large numbers that the empirical cdf distance $d_{\widehat{\mathrm{F}}}({\bf t}_1, {\bf t}_2)$ will converge to the cdf distance $d_{\mathrm{F}}({\bf t}_1, {\bf t}_2)$. \begin{theorem}\label{thm:point} Let ${\bf Z}$ and ${\bf Z}^\dag$ be a sample set of $N$ elements and its vicinity ghost set taken from a distribution on $\mathcal{Z}\subset \mathbb{R}^K$, respectively. Then, for any $\xi>0$, \begin{align*} &\mathbb{P}\left\{ \max_{n\in\{1,\cdots,N\}}\big[d_{\widehat{\mathrm{F}}}({\bf z}_n, {\bf z}_{\pi_*(n)})\big]>\xi\right\} \leq c \cdot {\rm e}^{\frac{-N\xi^2}{2 K}}, \end{align*} where $c>0$ is an absolute constant. \end{theorem} This theorem shows that the probability that the empirical cdf distance between the sample ${\bf z}_n$ and its vicinity ghost ${\bf z}_{\pi_*(n)}$ is larger than a positive constant $\xi$ will exponentially decay to ${\it zero}$ when the sample size $N$ approaches {\it infinity}. Let $\mathcal{C}_{r}({\cal Z})$ be a cover of ${\cal Z}$ at radius $r>0$ w.r.t. the empirical distance $d_{\widehat{\mathrm{F}}} (\cdot,\cdot)$. Define the mutually exclusive events: \begin{itemize} \item $\mathcal{E}_1$ = ``For any sample pair $({\bf z}'_{\pi_*(n)},{\bf z}_{n})$, there exists an element $C\in\mathcal{C}_{r}({\cal Z})$ that contains ${\bf z}'_{\pi_*(n)}$ and ${\bf z}_n$;" \item $\mathcal{E}_2$ = ``There always exists at least one pair of samples ${\bf z}'_{\pi_*(n)}$ and ${\bf z}_{n}$ that cannot be simultaneously contained by any individual element $C\in\mathcal{C}_{r}({\cal Z})$." \end{itemize} It follows from \begin{equation*} \mathbb{P} \{ \mathcal{E}_2\}=\mathbb{P}\left\{ \max_{n\in\{1,\cdots,N\}}\big[ d_{\widehat{\mathrm{F}}}({\bf z}_n, {\bf z}_{\pi_*(n)})\big]>r\right\} \end{equation*} that \begin{equation*} \mathbb{P}\{\mathcal{E}_1 \} = 1-\mathbb{P}\{\mathcal{E}_2 \}\geq1- c \cdot {\rm e}^{\frac{-Nr^2}{2 K}}, \end{equation*} which implies that the sample pair $({\bf z}'_{\pi_*(n)},{\bf z}_{n})$ can be contained by an element $C\in\mathcal{C}_r(\mathcal{Z})$ with overwhelming probability, i.e., that the event $\mathcal{E}_1$ does not hold decays exponentially to zero when the sample size $N$ approaches {\it infinity}.} \section{Difference Function Classes}\label{sec:uen} For any $f\in\mathcal{F}$, denote the difference function $p({\bf s}):=f({\bf z}_1)-\phi(f,{\bf z}_2)$ with ${\bf s}=({\bf z}_1,{\bf z}_2)\in\mathcal{Z}\times\mathcal{Z}$. Moreover, define the difference function class: \begin{equation}\label{eq:P} \mathcal{P}:=\{f({\bf z}_1)-\phi(f,{\bf z}_2):f\in\mathcal{F}\}. 
\end{equation} Given a sample set ${\bf Z}=\{{\bf z}_{1},\cdots,{\bf z}_{N}\}$ and its vicinity ghost set ${\bf Z}^\dag=\{{\bf z}'_{\pi_*(1)},\cdots,{\bf z}'_{\pi_*(N)}\}$, we then have \begin{equation*} \mathbb{P}\Big\{\sup_{f\in\mathcal{F}}\widehat{\mathrm{R}}'f-\mathrm{R}_\nu f>\frac{\xi}{2}\Big\}= \mathbb{P}\Big\{\sup_{p\in\mathcal{P}}\frac{1}{N}\sum_{n=1}^N p({\bf s}_{[n]})>\frac{\xi}{2}\Big\}. \end{equation*} To obtain an upper bound on the above probability, the supremum $\sup_{p\in\mathcal{P}}$ has to be relaxed to a finite summation by means of a complexity measure of $\mathcal{P}$; this is also one of the main research issues of statistical learning theory. In this paper, we are mainly concerned with the covering number of the difference function class $\mathcal{P}$, and refer to \citet{mendelson2003few,zhou2003capacity} for further details. \begin{definition}\label{def:covnum} {Let $({\cal M}, d)$ be
hydrometeors were effectively enhanced. Qie et al. (2014b) constructed an empirical relationship between the total lightning flash rate and the ice-phase particle (graupel, ice, and snow) mixing ratio, which was used to adjust the mixing ratio of ice-phase particles within the mixed-phase region. They found that this method could improve short-term precipitation forecasts of mesoscale convective systems with high, and even moderate, lightning flash rates. Wang Y. et al. (2014) converted cloud-to-ground (CG) lightning data into proxy radar reflectivity using an assumed relationship between flash density and reflectivity in the Gridded Statistical Interpolation (GSI) system, and the proxy reflectivity was assimilated by using a physical initialization method. The results showed that predictions of reflectivity and precipitation were improved. The above review of the literature demonstrates that previous lightning data assimilation efforts have mostly employed the nudging method. However, a rather limited amount of research has been carried out using the more sophisticated variational or ensemble Kalman filter (EnKF) methods. Mansell (2014) performed an observing system simulation experiment, in which simulated total lightning data were assimilated by using EnKF, and the results showed that the method modulated the strength of convection and suppressed spurious convection. This method was further verified by real-data cases (Allen et al., 2016). Wang Y. D. et al. (2014) assimilated the CG-converted rainfall rates within the Weather Research and Forecasting (WRF) four-dimensional variational framework, and the results showed that both the initial field and the 6-h forecast were improved. Very recently, Fierro et al. (2016) set the water vapor mixing ratio to its saturation value in the layer between the lifted condensation level and an assumed fixed height of 15 km at observed lightning locations, and then the pseudo-observations for water vapor were assimilated by the three-dimensional variational (3DVAR) system of ARPS. The results demonstrated the potential value of assimilating total lightning data with the 3DVAR method. In the present study, total lightning data were assimilated by using the 3DVAR method in the WRF data assimilation system (Barker et al., 2003, 2004, 2012; hereafter referred to as WRFDA-3DVAR). The method of the present study is basically similar to that of Fierro et al. (2016), in that both assimilate lightning data in the 3DVAR framework. However, instead of assimilating the water vapor mixing ratio, we assimilate the relative humidity retrieved from the total lightning data as a sounding observation using WRFDA-3DVAR in cycling mode at 10-min intervals. Since relative humidity is a variable that can be observed directly, and can be directly assimilated into any operational assimilation system, assimilating the lightning-retrieved relative humidity is comparatively easy to implement in operational runs. To find an assimilation time-window length for the cycled assimilation that yields significant improvements in both the initial condition and the subsequent forecast, four experiments with different assimilation time-window lengths were conducted. After identifying the appropriate window length, the improvement in the subsequent precipitation and reflectivity forecasts was analyzed further.
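As background for the approach described above, the analysis produced by minimizing the 3DVAR cost function [Eq. (1) in Section 3.2 below] can be illustrated with a small toy example. The sketch below is purely illustrative and is not WRFDA code; the two-element state, the error covariances, and the single observation are invented numbers. It evaluates the cost function and checks it against the closed-form analysis x_a = x_b + BH^T(HBH^T + R)^(-1)(y_o - Hx_b) that minimizes Eq. (1) when H is linear.

# Toy 3DVAR illustration (invented numbers; not WRFDA code).
import numpy as np

x_b = np.array([0.60, 0.40])          # background (first guess), e.g. two humidity values
B   = np.array([[0.02, 0.01],
                [0.01, 0.02]])        # background error covariance (assumed)
H   = np.array([[1.0, 0.0]])          # linear observation operator: observe the 1st variable
y_o = np.array([0.85])                # the assimilated observation
R   = np.array([[0.01]])              # observation error covariance (assumed)

def J(x):
    # 3DVAR cost function of Eq. (1) for a linear observation operator H.
    db = x - x_b
    do = H @ x - y_o
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y_o - H @ x_b)                # analysis: minimizer of J for linear H

print("J(background) =", J(x_b))
print("J(analysis)   =", J(x_a))
print("analysis      =", x_a)

The analysis lowers the cost and draws the observed variable toward the observation, while the background error covariance spreads part of the correction to the unobserved variable.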
It should be noted that most current operational ground-based lightning location systems can only detect CG flashes, which account for only a small fraction (about one third) of total flashes. In fact, intracloud flashes, which account for the majority of flashes, have been shown to be better correlated with updrafts in clouds (MacGorman et al., 1989; Schultz et al., 2011). Hence, assimilating total lightning is expected to outperform assimilating only CG flashes. The lightning data used in this study were from SAFIR3000 (see Section 3.1), which detects both CG and intracloud flashes. The remainder of this paper is organized as follows. In Section 2, the thunderstorm case is introduced. In Section 3, the data sources, lightning data assimilation procedure, and experimental design are described. In Section 4, a detailed analysis of the results is presented, followed by a summary and discussion in Section 5.
2 Thunderstorm case
The case in the present study was a squall line that occurred on 10 July 2007 in North China. The weather process spanned more than 6 h and was accompanied by frequent lightning activity. The squall line initially developed in northwestern Beijing and propagated southeastwards across Beijing, Tianjin, and Hebei Province, before finally dissipating in Bohai Bay. During 1300–1500 UTC, hail events were recorded in some areas of Beijing and Tianjin. Figure 1 shows the 500-hPa geopotential height, temperature, and wind vectors at 1200 UTC 10 July 2007. It can be seen that North China and Northeast China were controlled by a trough. The low-level convergence and high-level divergence (Xu et al., 2016) resulted in the upward movement of air. In addition, there was abundant water vapor in the lower atmosphere (Xu et al., 2016), which was favorable for convection.
Figure 1 Horizontal distribution of geopotential height (blue line; gpm), temperature (red line; K), and wind vectors at 500 hPa from NCEP FNL (Final) analysis data at 1200 UTC 10 July 2007.
The sounding data for Beijing at 1200 UTC (Fig. 2) were analyzed to further investigate the thermodynamic and dynamic conditions. The results indicated highly unstable atmospheric conditions. In particular, the atmosphere was wet at low levels and dry at high levels. The convective available potential energy exceeded 2000 J kg–1, and the thickness of the unstable layers was more than 8 km. Moreover, there was obvious shear in wind direction and speed at low levels. From the above analysis, it is clear that the thermodynamic conditions were favorable for the occurrence of the squall line.
Figure 2 Sounding observation of Beijing station at 1200 UTC 10 July 2007.
3 Data and methods
3.1 Data sources
The lightning data used in this study were from the SAFIR3000 lightning detection system operated by the Beijing Meteorological Bureau, which consists of three substations (shown as triangles in Fig. 3) and detects total lightning. SAFIR3000 simultaneously detects lightning in two different frequency bands [very high frequency (110–118 MHz) and low frequency (300 Hz to 3 MHz)], which allows it to discriminate between intracloud and CG lightning. Its detection efficiency can reach 90% within a 200-km radius, with a location error of less than 2 km (Zheng et al., 2009; Liu et al., 2013). Owing to operational running requirements, the triggering threshold of SAFIR3000 was set relatively high; hence only a few radiation sources (typically 2–3) were detected per flash.
Because of the sparseness of the radiation sources, we did not attempt to group radiation sources into flashes in this study; each radiation source was regarded as a flash. The precipitation observations used in this study were from the hourly and three-hourly ground-based measurements of MICAPS (Meteorological Information Comprehensive Analysis and Process System). The radar echo data were from the mosaic of the S-band Doppler weather radars in Beijing, Tianjin, and Qinhuangdao (shown as red dots in Fig. 3), which provided reasonable coverage of the whole lifecycle of the storm.
3.2 Assimilation method
Data assimilation is the technique by which observations are combined with an NWP product (called the first guess or background forecast) and their respective error statistics, to provide an improved estimate (i.e., the analysis) of the atmospheric state. The 3DVAR assimilation technique achieves this through the iterative minimization of a prescribed cost function,
$J(x) = \frac{1}{2}(x - x_{\rm b})^{\rm T} B^{-1}(x - x_{\rm b}) + \frac{1}{2}(H(x) - y^{\rm o})^{\rm T} R^{-1}(H(x) - y^{\rm o})$,  (1)
where x is the analysis to be found that minimizes the cost function J(x), x_b is the first guess of the NWP model, y^o is the assimilated observation, B is the background error covariance, R is the observation error covariance, H is the observation operator that performs the necessary interpolation and transformation from model variables to observation space, and the superscript "T" denotes the matrix transpose. In this study, statistics of the differences between daily 24- and 12-h forecasts valid at 0000 UTC and 1200
Problem (Nth Digit): find the nth digit of the infinite integer sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ... obtained by writing the positive integers in a row in ascending order. Example 1: Input: 3, Output: 3. Another example: the 11th digit of the sequence is 0, which is part of the number 10; writing the numbers out as 123456789101112 makes this easy to see. Note: n is positive and fits within the range of a 32-bit signed integer (n < 2^31).
Solution outline (see the Python sketch after the notes below). Group the integers by digit length: the k-digit numbers run from the smallest k-digit integer 100...0 = 10^(k-1) (call it minKInt) up to 999...9 (k digits), so there are 9 * 10^(k-1) of them and together they contribute k * 9 * 10^(k-1) digits. First identify the region (the digit length k) in which the nth digit falls by subtracting, from the sequence index n, the digit counts of all 1-digit to (k-1)-digit numbers; call what remains the "leftover index". Dividing the leftover index by k gives the target integer in which the nth digit lies, and the remainder of that division picks the digit inside the target integer. Watch for off-by-one errors: the first (i = 1) two-digit number is 10, not 11. A naive alternative (convert to a long value and walk the sequence digit by digit) works, but takes more and more time as n grows. Quick check: the 5th digit of the sequence 1234567 is 5.
Related notes collected on the same page:
- The Dragon curve sequence is an infinite binary sequence. It starts with 1, and in each step it alternately adds 1s and 0s before and after each element of the previous term to form the next term.
- A sum-free sequence of increasing positive integers is one for which no number is the sum of any subset of the previous ones.
- The final digits of the Fibonacci numbers repeat with a cycle length of 60, which is also useful when asked for the last one or two digits of the nth Fibonacci number.
- A series is the sum of a list of numbers. The main purpose of a sequence calculator is to find an expression for the nth term of a given sequence; it can also identify whether the sequence is arithmetic or geometric, show the work in detail, and sum any number of terms.
- Tutorials on the page cover how to find the nth term of an arithmetic (linear) sequence, how to check the formula, and how to find other terms (for example, the 10th term); there is also an exercise on geometric sequences, including finding the nth term and the sum of any number of terms.
- Question: how do you find the nth term of the infinite sequence 2/3, 7/9, 8/9, 1, 10/9, ...? The common difference is 1/9 (not 1), so the nth term is 2/3 + (n - 1)/9 = (n + 5)/9.
- For each positive integer n, the nth term of the sequence S is 1 + (-1)^n.
- Exercise: find a recurrence relation for the number of n-digit binary sequences with no pair of consecutive 1s.
- Worksheet exercises (their formulas are omitted in the source): determine the nth term of a sequence; find the third, sixth and ninth terms of a sequence given by a formula; find the sum of the first five terms of a sequence given by a recurrence relation; determine whether a given sequence is bounded from below, bounded from above, or bounded.
- Digit-by-digit square root (fragment): let S be the positive number whose square root is required; at each step, place the digit just found as the next digit of the root, above the two digits of the square brought down, and subtract to form a new remainder.
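Returning to the Nth Digit problem, here is a compact sketch of the digit-counting approach outlined above. The page's own fragments are in Java; this Python rewrite is illustrative rather than the original author's code:

# Illustrative Python sketch of the digit-counting approach described above.
def find_nth_digit(n: int) -> int:
    k = 1            # current digit length (region)
    count = 9        # how many k-digit numbers exist: 9 * 10**(k-1)
    start = 1        # smallest k-digit number ("minKInt")
    # Skip whole regions while n exceeds the digits they contribute.
    while n > k * count:
        n -= k * count
        k += 1
        count *= 10
        start *= 10
    # n is now the 1-based leftover index inside the k-digit region.
    target = start + (n - 1) // k          # the integer containing the digit
    return int(str(target)[(n - 1) % k])   # pick the digit inside it

assert find_nth_digit(3) == 3
assert find_nth_digit(11) == 0   # the 11th digit of 123456789101112... is 0

Because each region multiplies the digit count by roughly a factor of 10, the while loop runs at most about 10 times for n < 2^31, so the method is effectively constant time and space.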
hydroxide, OH −; or carbonate, CO 3 2− ). 00⋅10−6moles NaOH = −1. pH is a measure of the [H+]. Which of the following salts, when added to water, would produce the most acidic solution?. Write the state (s, l, g, aq) for each substance. NH4NO3 acidic 8. When mixing two solutions the first thing that is found is the molecular equation which gives all the reactants and products in the reaction, for example, the two solutions silver nitrate and sodium hydroxide when mixed together the molecular equation would be AgNO3(aq. Model-fitting and model-free kinetic methods have been based on a single-step kinetic equation 22,23 where t is the time, Ris the extent of conversion, k (T ) is the rate constant, and f(R) is the reaction model. Consider the following unbalanced chemical equation for the; combustion of pentane (C5H12): C5H12 1l2 1 O2 1g2 hCO2 1g2 1 H2O1l2. 9-7 =2 7-6 = 1 Therefore compound X has a lower extent of dissociation that compound Y. Use MathJax to format equations. CaCl2 (aq) → Ca2+ (aq) + 2Cl- (aq) Calculate the number of ions obtained from CaCl2 when 222 g of it is dissolved in water. It is generally more convenient to report intensive properties, thus the heat capacity of a substance is usually reported as a specific heat capacity. What is left is the Net ionic equation. Example: Calculate the ratio of ammonium chloride to ammonia that is required to make a buffer solution with a pH of 9. pK a; sulfuric acid. The enthalpy change of solution refers to the amount of heat that is released or absorbed during the dissolving process (at constant pressure). So relatively speaking, 2 mol of N2 are released for every 2 mol of NH4NO3. Note: Water specific heat capacity at 25 °C is 4. ??? Part C FeCl2 Express your answer as a chemical equation. Enter an equation of a chemical reaction and click 'Balance'. Heat of Solution. Chemical reaction. Hasselbalch equation (Equation 17. Determine which category each of the following elements falls into: alkali metal, alkaline earth element, transition metal, halogen, noble gas, lanthanide, or actinide. 2 Calculate the a) [H3O+] b) the pH c) the pOH d) the % dissociation for a 0. These data are for ammonia in its pure gaseous state (i. Densities and Apparent Molar Volumes of Atmospherically Important Electrolyte Solutions. Li3N+NH4NO3=LiNO3+(NH4)3N: Li3N + 3NH4NO3. In this investigation, students classify chemical reactions as exothermic or endothermic. In this video we will look at the equation for HNO3 + H2O and write the products. The Virtual Titrator makes the simulation of the titration curve of any acid, base or mixture a breeze; flexibility in the selection of sample size, concentration of. Ionic charges are not yet supported and will be ignored. The acid equilibrium problems discussed so far have focused on a family of compounds known as monoprotic acids. Written by: Casey Rogers (ChemDemo) on June 8, 2011. The Virtual Titrator makes the simulation of the titration curve of any acid, base or mixture a breeze; flexibility in the selection of sample size, concentration of. Learn vocabulary, terms, and more with flashcards, games, and other study tools. EXAMPLE: Calculation of the Enthalpy of Dissolution An experiment was conducted in which 5. Ammonium Phosphate (NH4)3PO4 Molar Mass, Molecular Weight. But it is more correct to show it reacting with the water to form the hydronium ion:. 1982 Pergamon Press Ltd. 
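Two of the exercises repeated on this page can be worked through directly; the numbers used below are the usual textbook values and are assumptions on our part rather than values recovered from the fragmentary problem statements above. The dissociation of ammonium nitrate in water (omitting water from the equation) is NH4NO3(aq) → NH4+(aq) + NO3-(aq). For the ammonium chloride/ammonia buffer with pH 9, the Henderson–Hasselbalch equation gives the required ratio once Kb(NH3) is fixed; a short Python sketch:

# Hedged worked example; Kb and Kw below are assumed textbook values at 25 degrees C.
import math

Kw = 1.0e-14           # ion product of water
Kb = 1.8e-5            # base ionization constant of NH3 (assumed)
Ka = Kw / Kb           # acid ionization constant of NH4+ (about 5.6e-10)
pKa = -math.log10(Ka)  # about 9.26

pH = 9.0
# Henderson-Hasselbalch: pH = pKa + log10([NH3] / [NH4+])
ratio_acid_to_base = 10 ** (pKa - pH)   # [NH4Cl] / [NH3]
print(f"pKa(NH4+) = {pKa:.2f}")
print(f"required [NH4Cl]/[NH3] ratio = {ratio_acid_to_base:.2f}")   # about 1.8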
Their measurements led to thermochemical values that imply an enthalpy change of D298 = 98 ± 9 kJ mol−1 for the gas-phase dissociation of ammonium nitrate into NH3 and HNO3. Submitted by jturley9300 on Wed, 01/11/2012 - 22:56. In this formula, the ammonium (NH4+) ion and nitrate. How many moles of ions are produced by the dissociation of 1 mol of MgCl2? 3 mol The reaction represented by the equation 2KClO3(s) => 2KCl(s) + 3O2 (g) is a(n). Omit water from the equation because it is understood to be present. Complete this equation for the dissociation of K3PO4(aq). You will see such a situation starting in the fifth example as well as scattered through the additional problems. Atmospheric Environment Vol. Acidity Constant. Its dissociation equation is: Ba(NO3)2 ---> Ba^2+ + 2 NO3^-1. You do so with this equation: K a K b = K w. However, the water provides most of the heat for the reaction. Introduction to the concepts of reversible reactions and a dynamic chemical equilibrium through detailed explanation and example descriptions. Enthalpy of hydration is the energy change for converting 1 mol of an anhydrous substance to 1 mol of the hydrated substance. Ammonium Nitrate Properties. 7 AgCl s −127. ChemistryComplete this equation for the dissociation of NH4NO3(aq). 8 x 10-5) and 0. 1007/s00214-013-1415-z] 5 S. The degree of ionization is defined as the fraction of a weak acid or base which ionizes. 00 liter of an aqueous solution (a water solution) of 1. Following are some equation input format examples: 1. 00 M HCl and another 1. 11 g in 1kg water. • A solution that contains a large amount of solute relative to the solvent is a concentrated solution. ChemistryComplete this equation for the dissociation of NH4NO3(aq). For each, write a balanced equation for their dissociation in water. 58 g of CuNO3 is dissolved in water to make a 0. − = + [H O ][F ] 3 a [HF] K One point is earned for the correct expression. These data are for ammonia in its pure gaseous state (i. For example, sodium chloride breaks into sodium (Na+) and chloride (Cl-) ions that exist in aqueous form in the water. Use uppercase for the first character in the element and lowercase for the second character. Solid A dissolves in water to form a conducting solution. 20 atm at a temperature of 500 C. 1mole NaOH ⋅ −63. Often, these problems are given with the K b of the base and you have to calculate the value of the K a. 36 * 10-7-0. Use known values of the dissociation constants for the various species to determine the dominant equilibrium. First we put in the K sp value: 1. Example: Calculate the ratio of ammonium chloride to ammonia that is required to make a buffer solution with a pH of 9. Favourite answer. AgCl(s) <=> Ag + (aq) + Cl-(aq) <-----Addition of NaCl Shifts this equilibrium to the left. I would imagine your products would be NH3 + NaNO3 + CH3COOH. 04344g) gives us. Examples: Fe, Au, Co, Br, C, O, N, F. Calcium nitrate, also called Norgessalpeter (Norwegian saltpeter), is an inorganic compound with the formula Ca(NO 3) 2. NaOCl basic 2. Formation reactions and their enthalpies are important because these are the thermochemical data that are tabulated for any chemical reaction. The all-in-one modular and interactive design of CurTiPot is user-friendly and lets you rapidly calculate the pH of any aqueous solution, from the simplest to the most complex one. Dissociation constant of ammonium nitrate 267 represented to within 2% by the equation 618. How many moles of ions are produced by the dissociation of 1 mol of MgCl2? 
3 mol. 50 g, by temperature change and heat capacity. Show that the reaction. 40 M NH4Cl? Set up a reaction table for the base ionization of NH3 with water: NH3 + H2O ↔ NH4+ + OH-. Omit water from the equation because it is understood to be present; 2. Remember to include physical states. 0.060 mol/L of SO2 and 0. Everything Calcium Chloride, A Natural Wonder. We didn't have an acid but for HCl it is: HCl
(strongly, completely, partially) orthogonal decomposition of $Y$ with $ \sum_{k=1}^r \abs{\sigma_k}^2 \leq1$, and $\|v_{kj}\|= 1$. Cauchy-Schwarz gives \begin{equation} \abs{ \langle T, Y \rangle } \leq \sum_{k=1}^r \abs{ \sigma_k \langle T, \bigotimes_{j=1}^d v_{kj}\rangle} \leq \sqrt{\sum_{k=1}^r \abs{\langle T, \bigotimes_{j=1}^d v_{kj}\rangle}^2}, \end{equation} and equality is achieved when the $\sigma_k$ are proportional to $ \langle T, \bigotimes_{j=1}^d v_{kj}\rangle$. \end{proof} The expressions \begin{equation} \max_Y \left\{ \abs{\langle T, Y \rangle}: Y \in \mathcal{A}_r, \|Y\| \leq 1 \right\} \end{equation} clearly define four different norms, which we will denote by $\| T\|_{\mathcal{ON}_r, \mathbb{F}}$, $\| T\|_{\mathcal{SON}_r, \mathbb{F}}$, $\| T\|_{\mathcal{CON}_r, \mathbb{F}}$, and $\| T\|_{\mathcal{PCON}_{r,P}, \mathbb{F}}$, respectively. For $r =1$, all four expressions coincide with the spectral tensor norm $\|T\|_{\sigma, \mathbb{F}}$. For a tensor $T$ with real-valued entries, it is known that the value of the spectral norm depends on if the tensor is seen as having base field $\mathbb{R}$ or $\mathbb{C}$, i.e., $\|T\|_{\sigma, \mathbb{R}} \neq \|T\|_{\sigma, \mathbb{C}}$ in general. The analogous statements $\| T\|_{\mathcal{ON}_r, \mathbb{R}} \neq \| T\|_{\mathcal{ON}_r, \mathbb{C}}$, $\| T\|_{\mathcal{SON}_r, \mathbb{R}} \neq \| T\|_{\mathcal{SON}_r, \mathbb{C}}$, $\| T\|_{\mathcal{CON}_r, \mathbb{R}} \neq \| T\|_{\mathcal{CON}_r, \mathbb{C}}$, $\| T\|_{\mathcal{PCON}_{r,P}, \mathbb{R}} \neq \| T\|_{\mathcal{PCON}_{r,P}, \mathbb{C}}$, in general, are also true for any $r \geq 1$, in light of Proposition~\ref{prop:blockr} below. For the sake of completeness, we note the following result, which is an analogue of a result for the case $r=1$ presented in \cite{lim2014blind}. \begin{proposition} For $\mathcal{A}_r = \mathcal{ON}_r$, $ \mathcal{SON}_r$, $ \mathcal{CON}_r$ or $ \mathcal{PCON}_{r,P}$ and any $T$ in $\mathbb{F}^{n_1 \times \ldots \times n_d}$, the following inequalities hold \begin{align} \|T\|_{\sigma, \mathbb{F}} = \|T\|_{\mathcal{A}_1, \mathbb{F}} &\leq \|T\|_{\mathcal{A}_2, \mathbb{F}} \leq \ldots \leq \|T\| \leq \ldots \leq \|T\|_{\mathcal{A}_1, \mathbb{F}}^* = \|T\|_{*,\mathbb{F}}, \\ &\|T\|_{\mathcal{CON}_r,\mathbb{F}} \leq \|T\|_{\mathcal{SON}_r,\mathbb{F}} \leq \|T\|_{\mathcal{ON}_r,\mathbb{F}}. \end{align} Moreover, the dual norm $\|T\|_{\mathcal{A}_r,\mathbb{F}}^*$ can be characterized as \begin{equation}\label{eq:dualONr} \|T\|_{\mathcal{A}_r, \mathbb{F}}^* = \inf_{N,v_k} \left\{ \sum_{k = 1}^N \|v_k\| : T = \sum_{k=1}^N v_k, v_k \in \mathcal{A}_r \right\}, \end{equation} and for $Y$ in $\mathcal{A}_r$ with corresponding (strongly, completely, partially) orthogonal decomposition $Y = \sum_{k=1}^r \sigma_k \bigotimes_{j=1}^d v_{kj}$ where $\|v_{kj}\| = 1$, it holds that $\|Y\|_{\mathcal{A}_r,\mathbb{F}} = \|Y\|_{\mathcal{A}_r,\mathbb{F}}^* = \|Y\| = \sqrt{\sum_{k=1}^r \sigma_k^2 }$. \end{proposition} \begin{proof} The statements $\|T\|_{\mathcal{A}_k, \mathbb{F}} \leq \|T\|_{\mathcal{A}_{k+1}, \mathbb{F}}$ are clear by definition. To show that $\|T\|_{\mathcal{A}_k, \mathbb{F}} \leq \|T\|$, note that $\abs{\langle T, Y \rangle} \leq \|T\|$ for $\|Y\| \leq 1$, using Cauchy-Schwarz. The remaining inequalities follow by duality and the fact that $\|\cdot\|$ is self-dual. The second set of inequalities is clear from their definitions. 
The remaining statements can be proven by exactly the same argument as in the case $r=1$ in \cite[Lemma 21]{lim2014blind}. \end{proof} \section{Symmetric approximations to symmetric tensors}\label{sec:mainresults} This section contains our main results. For the remainder of the section, we let $T$ be a symmetric tensor in $S^d(\mathbb{F}^{n})$. The following theorem was proven by Banach \cite{banach1938homogene} and also rediscovered recently \cite{zhang2012cubic,zhang2012best,friedland2013best}. It shows that the optimal rank-one approximation of $T$ can in this case be chosen symmetric. \begin{theorem}[\cite{banach1938homogene}]\label{thm:banach} If $T \in S^d(\mathbb{F}^{n})$ is symmetric, then \begin{equation} \max_{\|x_k\| \leq 1} \abs{ \langle T, x_1 \otimes \ldots \otimes x_d \rangle }= \max_{\|x\| \leq 1} \abs{ \langle T, x^{\otimes d}\rangle} \end{equation} \end{theorem} The remainder of the article is devoted to exploring extensions of this result to $r \geq 1$, while imposing one of our four different notions of orthogonality. Somewhat surprisingly and in contrast to the matrix case $d=2$, none of the notions of orthogonality result in the existence of symmetric global maximizers, in general. An overview of the results is provided in Table~\ref{tab:summary}. \begin{table}[tbhp] {\footnotesize \caption{Summary of results in Section~\ref{sec:mainresults}.}\label{tab:summary} \begin{center} \begin{tabular}{c|c} \textbf{Orthogonality} & \textbf{Can optimal approximations always be chosen symmetric?} \\ \hline $\mathcal{ON}_r$ & In general, no (Thm.~\ref{thm:no_ON}) \\ \hline $\mathcal{SON}_r$ & In general, no (Thm.~\ref{thm:no_SON}, Thm.~\ref{thm:no_ON}) \\ \hline $\mathcal{CON}_r$ & Yes for $n=2$ (Thm.~\ref{thm:mainn2}), no for $n > 2$ (Thm.~\ref{thm:main})\\ \hline $\mathcal{PCON}_{r,P}$ & \makecell{In general, no (Thm.~\ref{thm:no_ON}). \\ Separate symmetry of dimensions in $P$ for $n=2$ (Thm.~\ref{thm:mainn2partial}) \\ Symmetry of tensor dimensions in $\{1, \ldots , d\} \smallsetminus P$ (Thm.~\ref{theorem:mainpartial})} \\ \hline \end{tabular} \end{center} } \end{table} The proofs of these statements are given in section~\ref{sec:main_proofs}. These will require a few results on the structure of symmetric tensors under orthogonality constraints, given in section \ref{sec:struct}, as well as a semidefinite programming formulation of the orthogonal approximation problem, provided in section~\ref{sec:SDP}. In addition, section~\ref{sec:examples} contains some further examples of how orthogonal approximations in the general tensor case differ from the matrix case, and section~\ref{sec:NP} concludes by showing that the approximation problem is in general NP-hard, for any $r\geq 1$. \subsection{Symmetric tensors under orthogonality constraints}\label{sec:struct} This section contains a number of structural results that are used to prove the main results in Table~\ref{tab:summary}. We would first like to point out the following distinction between symmetric tensors and symmetric decompositions of a tensor. For a tensor $Y \in \mathcal{A}_r$, one could ask for two seemingly different notions of symmetry: (i) for $Y$ to be symmetric with rank no more than $r$, or (ii) for the seemingly stronger condition that $Y$ has a symmetric decomposition of the form $Y = \sum_{k=1}^r \sigma_k v_k^{\otimes d}$. 
Without imposing any orthogonality conditions, the question of whether or not the sets in (i) and (ii) are equal is known in the literature as Comon's conjecture \cite{comon2008symmetric}, which has been proven in many special cases \cite{zhang2016comon,friedland2016remarks}, but is now known to not be true in general \cite{shitov2018counterexample}. For $Y \in \mathcal{CON}_r$, these two notions are however equivalent, because the terms in a rank decomposition of an orthogonally decomposable tensor can be uniquely computed by successively computing the optimal rank-one deflations \cite{zhang2001rank}, i.e., by recursively defining $Y_0 = 0$, $\sigma_i = \langle Y, Y_i \rangle$ and \begin{equation}\label{eq:deflate} Y_{i+1} := y_{i+1,1}\otimes \ldots \otimes y_{i+1, d} = \argmax_{\|y_j\| \leq 1} \abs{ \langle Y - \sum_{k=1}^{i} \sigma_kY_k, y_1 \otimes \ldots \otimes y_d \rangle}. \end{equation} By Theorem~\ref{thm:banach} and the uniqueness of rank$-1$ approximations, when $Y$ is symmetric, each $Y_i$ is symmetric as well, i.e., $Y_i = v_i^{\otimes d}$. We now show that this statement also holds for symmetric tensors in $\mathcal{PCON}_{r,P}$, for any non-empty subset $P \subseteq \{1, \ldots , d\}$, i.e., when imposing partial orthogonality, the resulting analogue of Comon's conjecture is true. We will need the following result: \begin{lemma}\label{lemma:symoperations} For a symmetric tensor $T \in S^d(\mathbb{F}^n)$ and any vector $v \in \mathbb{F}^n$ \begin{enumerate} \item $T\times_j v \in S^{d-1}(\mathbb{F}^n)$ is a symmetric tensor for any index $j$. \item $T \times_j v = T \times_k v$ for any indices $1 \leq j,k \leq d$. \end{enumerate} \end{lemma} \begin{proof} For the first statement, let $\varphi \in S^{d-1}$ be any permutation on $d-1$ elements. We have \begin{equation} \begin{split} (T \times_j v)&(i_{\varphi(1)}, \ldots , i_{\varphi(j-1)}, i_{\varphi(j+1)}, \ldots, i_{\varphi(d)}) =\\ &= \sum_{i_j =1}^n T (i_{\varphi(1)}, \ldots, i_j, \ldots, i_{\varphi(d)}) v_{i_j} \\ &= \sum_{i_j =1}^n T (i_{1}, \ldots, i_{d}) v_{i_j} = (T \times_j v)(i_1, \ldots , i_{j-1}, i_{j+1}, \ldots, i_d), \end{split} \end{equation} where the second equality comes from $T$ being a symmetric tensor. For the second statement, let $\varphi \in S^d$ be the permutation of $(1, \ldots , d)$ that swaps $j$ and $k$ and leaves the other indices unchanged. We can assume $j \leq k$ for notational purposes, since the complementary case follows by relabeling $k \leftrightarrow j$. The symmetry of $T$ implies that \begin{equation} \begin{split} (T \times_j v)(i_1, \ldots , i_{j-1}, & i_{j+1}, \ldots, i_d) = \sum_{i_j =1}^n T (i_1, \ldots, i_d) v_{i_j} \\ &= \sum_{i_j =1}^n T (i_{\varphi(1)}, \ldots, i_{\varphi(d)}) v_{i_j} \\ &= (T \times_k v)(i_1, \ldots , i_{j-1}, i_k, i_{j+1}, \ldots, i_{k-1}, i_{k+1}, \ldots, i_d) \\ &= (T \times_k v)(i_1, \ldots , i_{j-1}, i_{j+1}, \ldots, i_d), \end{split} \end{equation} where the second equality follows from the symmetry of $T$ and the last equality by the first statement of the Lemma. \end{proof} We next prove the first main result of this section, on the structure of symmetric and partially orthogonal tensors. \begin{proposition}\label{prop:symdecomp} Take $d\geq 3$ and $v_{kj}$ vectors with $\|v_{kj}\| = 1$. Let $T \in S^d(\mathbb{F}^n)$ be a symmetric tensor with decomposition $T = \sum_{k=1}^r \sigma_k \bigotimes_{j=1}^d v_{kj}$. Assume that there is an index $1 \leq j_* \leq d$ such that $v_{kj_*} \perp v_{k'j_*}$ for all $k \neq k'$. 
If $r$ is the minimal integer for which such a decomposition exists, then $v_{kj} = v_{k1}$ up to multiplication by a complex phase factor, for all $1 \leq j \leq d$ and $1 \leq k \leq r$. Moreover, if there are two distinct indices $1 \leq j_*, j_{**} \leq