discussion by considering $\vv{L} = \vv{I}$, so that $\| \vv{f} \|_0 = K$. Given measurements, $\vv{g} = \vv{H}\vv{f}$, the compressive sensing problem is \begin{equation} \label{eq:CS} \argmin_\vv{f} \|\vv{H}\vv{f} - \vv{g}\|^2_2 \quad \text{s.t.} \quad \| \vv{f} \|_0 \le K, \end{equation} where the $\ell_0$ norm, $\| \vv{f} \|_0$, counts the number of nonzero entries of $\vv{f}$. In general, \eqref{eq:CS} is very challenging to solve, however under certain conditions on $\vv{H}$~\cite{donoho_compressed_2006,candes_stable_2006}, the solution is unique and the problem is equivalent to \begin{equation} \label{eq:CSl1} \argmin_\vv{f} \|\vv{H}\vv{f} - \vv{g}\|^2_2 + \lambda\| \vv{f} \|_1 , \end{equation} where $\| \vv{f} \|_1 = \sum_n |[ \vv{f} ]_n|$, for a particular choice of $\lambda$. (Note the change from the $\ell_0$ to $\ell_1$ norm; this is what makes the problem tractable.) A more general formulation of the sparse recovery problem includes a sparsifying transform, $\vv{L}$, and is formulated as \begin{equation} \label{eq:CS-L} \argmin_\vv{f} \|\vv{H}\vv{f} - \vv{g}\|^2_2 \quad \text{s.t.} \quad \| \vv{L} \vv{f} \|_0 \le K, \end{equation} which, under certain conditions~\cite{candes_robust_2006}, we can equivalently reformulate into an $\ell_1$ problem, \begin{equation} \label{eq:l1} \argmin_\vv{f} \|\vv{H}\vv{f} - \vv{g}\|^2_2 + \lambda\| \vv{L} \vv{f} \|_1. \end{equation} While the formulation \eqref{eq:l1} is quite similar to the Tikhonov formulation \eqref{eq:problem_tikhonov}, changing to the $\ell_1$ norm has important consequences; we contrast these two formulations in Figures~\ref{fig:ellipse} and \ref{fig:ellipse_sparse}. \begin{figure}[!htbp] \centering \begin{tikzpicture} % \begin{axis}[ width = 5.5cm, xmin=-2, xmax=3.5, xlabel = $f_1$, ymin=-2, ymax=2, ylabel = $f_2$, ticks = none, axis lines = middle, x=1cm, y=1cm, axis line style={latex-latex}, every axis y label/.style={ at={(ticklabel* cs:1)}, anchor=east, }, ] % \fill [name path = ell, gray, opacity=0.1] (1.75, .5) ellipse [ x radius = 1.6180, y radius = 0.6180, rotate = 121.7175 ]; \fill [gray] (1.75, .5) circle (3pt); % \draw [color=red, dashed] (1.3385, 0) -- (0, 1.3385) -- (-1.3385, 0) -- (0, -1.3385) -- cycle; \fill [red] (1.3385, 0) circle (3pt); % \coordinate (B) at (1.0363, 0.5133); \node [draw, circle through=(B), color=blue] at (0,0) {}; \fill [blue] (B) circle (3pt); \end{axis} \end{tikzpicture} \caption{Comparison of $\ell_2$ and $\ell_1$ regularization when the number of measurements is not less than the dimension of the reconstruction. The gray point is the unique unregularized solution and the gray ellipse is $\{\vv{f} : \|\vv{H}\vv{f} - \vv{g}\|_2 ^2 \le \sigma^2\}$. The solid blue line is a level set of $\|\vv{f}\|_2$, and the blue point marks the solution of the $\ell_2$-regularized problem. Likewise, the dashed red line is a level set of $\|\vv{f}\|_1$ and the red point marks the $\ell_1$ solution. As expected, the $\ell_1$ solution is sparse (because it has only one non-zero element, $f_1$). 
} \label{fig:ellipse} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{.5\linewidth} \begin{tikzpicture} \begin{axis}[ clip=false, width = 5.5cm, xmin=-2, xmax=3.5, xlabel = $f_1$, ymin=-2, ymax=2, ylabel = $f_2$, ticks = none, axis lines = middle, every axis y label/.style={ at={(ticklabel* cs:1)}, anchor=east, }, x=1cm, y=1cm, axis line style={latex-latex} ] \def.618cm{.618cm} \coordinate (center) at (1.75, .5); \def135{121.7175} \path [rotate=135, fill, opacity=0.1] let \p1 = (center) in (\x1 - 2cm, \y1 -.618cm) rectangle (\x1 + 2cm, \y1 + .618cm); \draw [rotate=135, <->, name path=ell] let \p1 = (center) in (\x1-2cm , \y1 + .618cm) -- (\x1 + 2cm, \y1 + .618cm); \draw [rotate=135, <-> ] let \p1 = (center) in (\x1-2cm , \y1 - .618cm) -- (\x1 + 2cm, \y1 - .618cm); \draw [rotate=135, <->, gray] let \p1 = (center) in (\x1-2cm , \y1) -- (\x1 + 2cm, \y1); \draw [color=red, dashed] (1.3325, 0) -- (0, 1.3325) -- (-1.3325, 0) -- (0, -1.3325) -- cycle; \fill [red] (1.3325, 0) circle (3pt); \coordinate (B) at (.9642, .5959); \node [draw, circle through=(B), color=blue] at (0,0) {}; \fill [blue] (B) circle (3pt); \end{axis} \end{tikzpicture} \caption{} \end{subfigure}% \begin{subfigure}[b]{.5\linewidth} \begin{tikzpicture} \begin{axis}[ clip=false, width = 5.5cm, xmin=-2, xmax=3.5, xlabel = $f_1$, ymin=-2, ymax=2, ylabel = $f_2$, ticks = none, axis lines = middle, every axis y label/.style={ at={(ticklabel* cs:1)}, anchor=east, }, x=1cm, y=1cm, axis line style={latex-latex} ] \def.618cm{.618cm} \coordinate (center) at (1.75, .5); \def135{135} \path [rotate=135, fill, opacity=0.1] let \p1 = (center) in (\x1 - 1.5cm, \y1 -.618cm) rectangle (\x1 + 3cm, \y1 + .618cm); \draw [rotate=135, <->, name path=ell] let \p1 = (center) in (\x1-1.5cm , \y1 + .618cm) -- (\x1 + 3cm, \y1 + .618cm); \draw [rotate=135, <-> ] let \p1 = (center) in (\x1-1.5cm , \y1 - .618cm) -- (\x1 + 3cm, \y1 - .618cm); \draw [rotate=135, <->, gray] let \p1 = (center) in (\x1-1.5cm , \y1) -- (\x1 + 3cm, \y1); \coordinate (A) at (0, 1.3760); \coordinate (B) at (1.3760, 0); \draw [color=red, dashed] (1.3760, 0) -- (0, 1.3760) -- (-1.3760, 0) -- (0, -1.3760) -- cycle; \fill [red] (A) circle (3pt); \fill [red] (B) circle (3pt); \draw [color=red, line width=2pt] (A) -- (B); \coordinate (B) at (1.125-.437, 1.125-.437); % \node [draw, circle through=(B), color=blue] at (0,0) {}; \fill [blue] (B) circle (3pt); \end{axis} \end{tikzpicture} \caption{}\label{fig:ellipse_sparse:nonunique} \end{subfigure} \caption{(a) Effect of regularization when the number of measurements is less than the size of the reconstruction. In contrast to Figure~\ref{fig:ellipse}, the unregularized solution is nonunique and, specifically, is an affine subspace (gray line). Otherwise, the situation is the same as in Figure~\ref{fig:ellipse}, with a nonsparse $\ell_2$ solution and sparse $\ell_1$ solution. (b) For certain problems, the $\ell_1$ solution is nonunique (red line segment). Even in these cases, Theorem~\eqref{thm:l1-representer} states that the extreme points of the solution set (red points) are sparse. } \label{fig:ellipse_sparse} \end{figure} The formulation \eqref{eq:l1} is called the analysis form of the regularization, because the matrix $\vv{L}$ retrieves the sparse coefficients from the original signal. 
This is in contrast to the synthesis form, \begin{equation} \argmin_{\vv{\alpha} \in \mathbb{R}^N} \| \vv{H} \tilde{\vv{L}} \vv{\alpha} - \vv{g} \|^2_2 + \lambda \| \vv{\alpha} \|_1, \label{eq:sparse-synth} \end{equation} where the matrix $\tilde{\vv{L}}$ now acts to construct the signal from its sparse coefficients. If the synthesis transform has a left inverse, this is merely a change in notation; otherwise, the two forms are meaningfully different. For more discussion of these two forms, see \cite{elad_analysis_2007}. \section{Representer Theorems for \texorpdfstring{$\ell_2$}{l2} and \texorpdfstring{$\ell_1$}{l1} Problems} Another perspective on sparsity-promoting regularization is given by representer theorems that specify the form of solutions to certain minimization problems. For example, for Tikhonov regularization with $\vv{L}$ being the identity matrix, we can state the following representer theorem, which is a simplified special case of a result from \cite{unser_representer_2016}. \begin{theorem}[Convex Problem with $\ell_2$ Minimization] The problem \begin{equation*} \argmin_{\vv{f}} \| \vv{f} \|^2_2 \quad \text{s.t.} \quad \| \vv{H}\vv{f} - \vv{g} \|^2_2 \le \sigma^2, \end{equation*} has a unique solution of the form \begin{equation*} \tilde{\vv{f}} = \vv{H}^* \vv{a}, \end{equation*} for a suitable set of coefficients, $\vv{a} \in \mathbb{R}^M$. \label{thrm:l2-representer} \end{theorem} The useful insight here is that the solution to an $\ell_2$-regularized problem always has the form of a weighted sum of the original measurement vectors, i.e., the columns of $\vv{H}^*$ (equivalently, the rows of $\vv{H}$). Moreover, the number of elements in the sum is equal to the number of measurements, $M$, so there is no reason to expect $\vv{f}$ to be sparse, unless these measurement vectors are themselves sparse in some transform domain. If we choose instead to minimize a function including an invertible regularization operator, $\| \vv{L} \vv{f} \|^2_2$, we can state a similar theorem with $\vv{H}^*$ replaced by $(\vv{L}^* \vv{L})^{-1} \vv{H}^*$. Contrast Theorem~\ref{thrm:l2-representer} with the $\ell_1$ representer theorem~\cite{unser_representer_2016}, \begin{theorem}[Convex Problem with $\ell_1$ Minimization] The set \begin{equation*} \mathcal{V} = \argmin_{\vv{f}} \| \vv{f} \|_1 \quad \text{s.t.} \quad \| \vv{H}\vv{f} - \vv{g} \|^2_2 \le \sigma^2, \end{equation*} is convex, compact, and has extreme points of the form \begin{equation*} \tilde{\vv{f}} = \sum_{k=1}^K [\vv{a}]_k \vv{e}_{[\vv{n}]_k}, \end{equation*} where $\{\vv{e}_{n}\}_{n=1}^N$ is the standard basis for $\mathbb{R}^N$ (i.e., unit vectors pointing along each of the axes), for a suitable set of coefficients $\vv{a} \in \mathbb{R}^K$ and locations $\vv{n} \in \{1, 2, \dots, N \}^K$ with $K \le M$. \label{thm:l1-representer} \end{theorem} Looking at the form of $\tilde{\vv{f}}$, we see that it is sparse: it has at most as many nonzero terms as there are measurements. The amplitudes of its nonzero terms are given by $\vv{a}$ and their locations are given by $\vv{n}$. One complication is that the solution to the $\ell_1$ problem is not, in general, unique (though with additional conditions it is). Thus, the theorem is stated in terms of the extreme points of the solution set, i.e., those solutions that cannot be expressed as convex combinations of other solutions.
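As a concrete (and by no means unique) illustration of how \eqref{eq:CSl1} is typically solved in practice, consider the iterative shrinkage-thresholding algorithm (ISTA), sketched here for the case $\vv{L} = \vv{I}$; the general analysis problem \eqref{eq:l1} requires more care. Each iteration takes a gradient step on the data-fidelity term followed by the proximal operator of $\lambda \| \cdot \|_1$, which is elementwise soft-thresholding,
\begin{equation*}
\vv{f}^{(k+1)} = T_{\gamma\lambda}\bigl(\vv{f}^{(k)} - 2\gamma \vv{H}^*(\vv{H}\vv{f}^{(k)} - \vv{g})\bigr), \qquad [T_{\tau}(\vv{x})]_n = \operatorname{sign}([\vv{x}]_n)\,\max(|[\vv{x}]_n| - \tau, 0),
\end{equation*}
with a step size such as $\gamma = 1/(2\|\vv{H}\|_2^2)$. The thresholding step sets small coefficients exactly to zero at every iteration, which is another way of seeing how the $\ell_1$ penalty produces sparse solutions, consistent with the sparse extreme points described by Theorem~\ref{thm:l1-representer}.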
When $\vv{L}$ is not the identity, a similar theorem can be stated where the extreme points of the solution set are built out of a sparse linear combination of dictionary vectors that depend on $\vv{L}$~\cite{unser_representer_2016}. \section{Bayesian View} We can turn to a statistical perspective on sparsity-promoting regularization, basing the regularization term on a statistical model of the signal. When we expect the signal model to be accurate, this approach allows us to design reconstruction algorithms that are optimal with regard to chosen statistical criteria. Even when the signal model is only an educated guess, the statistical formulation can be helpful in designing new regularization terms with
[Roof plan drawing sheet A2.40, "ROOF PLAN", scale 3/32" = 1'-0". The recoverable text consists of roof plan general notes (verification of mechanical equipment platforms and roof penetrations, minimum 1/4 inch per foot roof slope, crickets on the high side of skylights and equipment platforms, lead flashing at roof drain and vent penetrations, Class A fire-retardant roofing), a roof drainage calculation (roof area approximately 44,950 sq. ft. served by 4-inch downspouts per IPC Table 1106.2), key notes (standing-seam metal roof, single-ply cool roof with reflectance 0.65 and emittance 0.85 over 1/4-inch roof board, R-25 rigid insulation, and metal deck, downspouts and gutters), and a roof plan legend (slope arrows, roof access hatch, roof drains); the drawing graphics and title-block entries are not reproducible as text.]
c. (followed by a space and not italicized) is preferred over circa, ca., or approx. The template {{circa}} may be used. ### Do not use unwarranted abbreviations Avoid abbreviations when they might confuse the reader, interrupt the flow, or appear informal. For example, do not use approx. for approximate or approximately, except in a technical passage where the term occurs many times or in an infobox or a data table to reduce width. ### Do not invent abbreviations or acronyms Generally avoid devising new abbreviations, especially acronyms (Doctors Without Borders is good as a translation of Médecins Sans Frontières, but neither it nor the reduction DWB is used by the organization; so use the original name and its official abbreviation, MSF). If it is necessary to abbreviate in a tight space, such as a column header in a table, use widely recognized abbreviations. For example, for New Zealand gross national product, use NZ and GNP, with a link if the term has not already been written out in the article: NZ GNP. Do not make up initialisms such as NZGNP. ### HTML elements Either the <abbr> element or the {{abbr}} template can be used for abbreviations and acronyms: <abbr title="World Health Organization">WHO</abbr> or {{abbr|WHO|World Health Organization}} will generate WHO; hovering over the rendered text causes a tooltip of the long form to pop up. MediaWiki, the software on which Wikipedia runs, does not support <acronym>. ### Ampersand In normal text and headings, the word and should be used instead of the ampersand (&); for example January 1 and 2, not January 1 & 2. Retain an ampersand when it is a legitimate part of a proper noun, such as in Up & Down or AT&T. Ampersands may be used with consistency and discretion where space is extremely limited (e.g. tables and infoboxes). 
Quotations (see also MOS:QUOTE) may be cautiously modified, especially for consistency where different editions are quoted, as modern editions of old texts routinely replace ampersands with and (just as they replace other disused glyphs, ligatures, and abbreviations). ## Italics ### Emphasis Whereas italics may be used sparingly for emphasis, boldface is normally not used for this purpose. Use italics when introducing or distinguishing terms. Overuse of emphasis reduces its effectiveness. When emphasis is intended, versus other uses of italics as described below, the semantic HTML markup <em></em>, or its template wrapper {{em}}, may be used: The vaccine is {{em|not}} a cure, but a prophylactic. This helps editors understand the intent of the markup as emphasis, allows user style sheets to distinguish emphasis and handle it in a customized way, and is an aid to re-users and translators, especially since other languages have different conventions for delineating emphasis.[1] 1. Ishida, Richard (2015). "Using b and i tags". World Wide Web Consortium. Retrieved 1 September 2016. ### Titles Use italics for the titles of works of literature and art, such as books, pamphlets, films (including short films), television series, named exhibitions, computer and video games (but not other software), music albums, and paintings. The titles of articles, chapters, songs, television episodes, research papers and other short works are not italicized; they are enclosed in double quotation marks. Italics are not used for major revered religious works (the Bible, the Quran, the Talmud). Many of these items should also be in title case. ### Words as words Use italics when mentioning a word or letter (see Use–mention distinction) or a string of words up to one full sentence (for example, the most common letter in English is e). When a whole sentence is mentioned, quotation marks may be used instead, with consistency. Mentioning (to discuss grammar, wording, punctuation, etc.) is different from quoting (in which something is usually expressed on behalf of a quoted source). ### Foreign words Use italics for phrases in other languages and for isolated foreign words that are not common in everyday English. Proper names (such as place names) in other languages, however, are not usually italicized, nor are terms in non-Latin scripts. ### Scientific names Use italics for the scientific names of plants, animals and other organisms at the genus level and below (italicize the genus and species, but not the family or higher ranks). 
The hybrid sign is not italicized, nor is the "connecting term" required in three-part botanical names. ### Quotations in italics For quotations, use only quotation marks (for short quotations) or block quoting (for long ones), not italics. (See Quotations below.) This means that (1) a quotation is not italicized inside quotation marks or a block quote just because it is a quotation, and (2) italics are no substitute for proper quotation formatting. To distinguish block quotations from ordinary text, you can use <blockquote> or {{quote}}. (See § Block quotations, below.) ### Italics within quotations Use italics within quotations if they are already in the source material. When adding emphasis on Wikipedia, add the editorial note [emphasis added] after the quotation. If the source has used italics (or some other styling) for emphasis and this is not otherwise evident, the editorial note [emphasis in original] should appear after the quotation. ### Effect on nearby punctuation Italicize only the elements of the sentence
Thus, carried over to our trampoline example, a very massive object makes a particularly large dent in spacetime. The heavier the ball is, the deeper the indentation. A ball approaching one of these very large dents inevitably rolls into it. Gravitation refers to the force with which two bodies attract each other on account of their masses. We spent some time looking at special relativity, so now it's time for the general variety. The equivalence principle tells us that the effects of gravity and acceleration are indistinguishable. If they are in a spacetime that is curved, the Riemann curvature tensor tells us how those initially parallel trajectories evolve as I move along those geodesics. Black holes are objects where gravity is so powerful that spacetime – a fabric of the three dimensions of space plus the fourth dimension of time, shown to be linked by Einstein's theory of relativity – is bent so far that it becomes a hole. Black holes are a curvature in spacetime, and they have a very high mass. In my opinion, the bending of spacetime is caused by energy density or quantities derived from it, such as mass or energy fields. Gravity is the curvature of spacetime. On this view, gravity would not be an attractive force between masses but an external pressure produced by the curvature of spacetime. To understand the connection, let's go closer to home and imagine a curved space we're all familiar with: the surface of the Earth. Note also that the curvature applies both to space and time (ergo, "spacetime") -- so both are stretched. Relativity comes in different flavors, as it happens. Gravity then provides a description of the dynamic interaction between matter and spacetime. It is here that Einstein connected the dots to suggest that gravity is the warping of space and time. If gravity is actually curved space, and if falling objects are simply following the natural curves of space, why does each object have its own curve? This can be explained quite simply in terms of the space itself. The geodetic effect as I've described it is actually one of two effects that cause our gyroscope to precess. So for now, I'll just mention that this is a related effect that we h… If the matter consists of a loose collection of falling objects, say raindrops, or a group of rocks, then each individual piece of matter will not expand or contract owing to curvature of spacetime as it falls (except a tiny bit as I will explain in a moment), but the distances between the objects will expand or contract, depending on what spacetime is locally doing. For example, if you drop two rocks above a massive body like the Earth, aligned radially but with slightly different altitudes, the distance between them will increase. Tidal gravity, or spacetime curvature in general relativity, is geodesic deviation: the trajectories of nearby freely falling particles diverge (or converge). Before we deal with the concept of space-time curvature, we explain first the concept of space-time. Space has three properties, length, width, and height; these three properties are referred to as dimensions. In physics, there is still a further dimension, namely time. Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. In general relativity, Einstein generalized Minkowski space-time to include the effects of acceleration. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. According to Einstein's theory of general relativity, massive objects warp the spacetime around them, and the effect a warp has on objects is what we call gravity. The curvature of space-time is a distortion of space-time caused by the gravitational field of matter. Rather than pulling directly on other bodies, massive objects stretch and compress the space around them; it is spacetime itself that carries the curvature. The insertion of an object will curve this spacetime. The example of the trampoline makes this easy to understand: put a ball on it, and a depression forms at that point. When it comes to the notion of spacetime curvature, this is exactly what general relativity refers to. But that doesn't matter—"separate existence" or not, physicists working in the field understand spacetime as a useful concept. First, let's try to understand what a warping of distance means. This is a great question which goes to the heart of why Einstein said gravity is the curvature of space-time, rather than just the curvature of space. In this way, the curvature of space-time near a star defines the shortest natural paths, or geodesics, much as the shortest path between any two points on Earth is not a straight line, which cannot be constructed on that curved surface, but the arc of a great circle route. This is the principle underlying the detection of gravitational waves – ripples in the fabric of spacetime produced by accelerating massive bodies. If such a perturbation passes by, it would change the curvature of spacetime between the two particles, leaving an imprint on the frequency of the light they exchange. When two black holes collide, they merge into an even larger black hole. Mass also has an effect on the overall geometry of the universe. You can imagine a flat universe like a sheet of paper that extends infinitely in all directions. The density of matter and energy in the universe determines whether the universe is open, closed, or flat. If the density is equal to the critical density, then the universe has zero curvature; it is flat. Measurements from the Wilkinson Microwave Anisotropy Probe (WMAP) have shown the observable universe to have a density very close to the critical density (within a 0.4% margin of error).
Acceleration is based on either the sum of all the net forces (Newton) acting on an object, or the net curvature of spacetime (Einstein) at one particular location in the Universe. In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. The force of gravity used to be considered a so-called fundamental force in physics. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. So, locally, spacetime is curved around every object with mass. Not only a black hole but every object with its own mass and gravity (you, I, every body in space) bends or warps the spacetime metric, continuum, or fabric. It is this curvature of spacetime that gives rise
When two galaxies collide (and sometimes merge), the huge collective gravities of each stretch the other out like taffy, and an off-center collision can cause vast arcs of gas and stars to be drawn out. Interestingly, it’s not clear to me where the other galaxy is that did this. It’s possible they merged completely, forming J08 as we see it now, disturbed and weird but probably beginning to settle down after the eons-long encounter. If they didn’t merge, though, it’s difficult to say what happened to the other galaxy just from examining this image alone. There are a couple of galaxies near J08 in the full picture, but without knowing their distance they could be located much closer or farther from Earth than J08 itself, completely unrelated to it. This galaxy was observed by Hubble to find out what it looks like in ultraviolet light, as part of a study of the structure of these galaxies. UV is strongly emitted by hot, massive, blue stars, which don’t live very long. As it happens, J08 is a starburst galaxy, cranking stars out at a high rate. A lot of those stars are the massive and hot kind, and they light up the gas and dust around them—these are strung out along the galaxy, which you can see as those blue regions in the Hubble picture. This is actually pretty typical after a big galaxy collision: gas clouds collide, collapse, and form stars at a furious rate. However, there’s more going on here. The UV light seen in the Hubble image above is pretty much emitted by stars and warm gas. But if you look farther into the ultraviolet, a new feature comes up, a very special color of UV strongly emitted by hydrogen gas. When you hit a hydrogen atom with enough energy, its sole electron will jump from one energy level to the next, like a person hopping up a step on a staircase. In this case, the electron jumps up from the bottom energy level to the next one up. After a time, it’ll plop back down and emit a UV photon at 121.6 nanometers wavelength (way outside what the human eye can see). This light is so special it has its own name: Lyman Alpha, or Lyα. The astronomers studying J08 used the orbiting GALEX observatory to take a look at the Lyα being emitted in the galaxy. They processed the data to remove a lot of unwanted light interfering with the Lyα, and what they found is interesting: In the upper image (a combination of several observations from Hubble and GALEX), red shows light from warm gas clouds, green from the massive stars, and the Lyα (normally invisible to the human eye) is colored blue. As you can see, quite a bit of Lyα appears to be coming from the outskirts of the galaxy. It’s coming from the interior as well, but that’s overwhelmed by the other light and hard to see here. The point is that the Lyα emission is also coming from parts of the galaxy well beyond where we see visible light being emitted. J08 is an extreme example (the galaxy itself is stretched out) but they found similar results in about a dozen other galaxies they looked at as well. It turns out that many galaxies are surrounded by a thin halo of hydrogen gas, but it’s very hard to detect because it’s spread out. It doesn’t emit optical light we can see, and it’s too cold to emit UV light on its own. But those massive hot stars are sending out light at all colors of UV, including Lyα, and the gas on the outskirts absorbs and re-emits it, betraying its presence. That’s why we see Lyα coming from the outer parts of the galaxy. 
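If you want to see where that 121.6 nanometer figure comes from, here is a minimal back-of-the-envelope sketch using the textbook Rydberg formula (constants rounded):

```python
# Lyman-alpha wavelength from the Rydberg formula: 1/lambda = R_H * (1/n1^2 - 1/n2^2)
# for the hydrogen transition from n2 = 2 down to n1 = 1.
R_H = 1.09678e7   # Rydberg constant for hydrogen, 1/m (rounded)

inv_lambda = R_H * (1 / 1**2 - 1 / 2**2)
wavelength_nm = 1e9 / inv_lambda

print(f"Lyman-alpha wavelength ~ {wavelength_nm:.1f} nm")  # ~121.6 nm, far ultraviolet
```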
The actual mechanism occurring is more complicated than this—isn’t it always?—but that’s the basics of it. J08 and the others were chosen because they’re relatively nearby, and their structures could be picked out by Hubble. Once we understand how and where Lyα is emitted by them, we can use that to better understand more distant galaxies where we can’t see the structure directly. This is important to know because Lyα is used to determine how many hot, massive stars are born in these kinds of galaxies. It also reveals the structures of these galaxies, including the location of gas that is otherwise invisible. It’s also used in other ways, like finding the distances to galaxies and vast gas clouds, and even what conditions were like in the early Universe. All that from a simple quirk in the simplest atom of them all. I find it fascinating that the Universe is so accommodating to our inquisitive nature. It leaves clues everywhere about itself, and all you need to learn about it is a bit of math and physics, technology, and above all curiosity. With those features in combination, the entire cosmos can be revealed.

## Galaxies

The ultraluminous infrared galaxy Markarian 273, a major merger of two gas-rich spiral galaxies

Dave Sanders and Joshua Barnes are studying the multiwavelength properties of a complete sample of far-infrared selected galaxies in the local universe, as part of the Great Observatories All-Sky LIRG Survey, in order to understand the origin and evolution of the most luminous infrared systems, with infrared luminosities 10–1000 times the total bolometric luminosity of our Milky Way. These "luminous infrared galaxies" (LIRGs) appear to be triggered through mergers of massive gas-rich spiral galaxies, an event which leads to powerful starbursts and the growth of supermassive black holes. The end stages of the merger process lead to quasarlike luminosities, including a final stage marked by a binary active galactic nucleus. The eventual merger of the two supermassive black holes is accompanied by a massive "blow-out" phase, expelling as much as several billion solar masses of gas and dust into the intergalactic medium, leaving a massive gas-poor elliptical as the merger remnant. This exotic process of galaxy transformation, although relatively rare in the local universe, is now believed to be one of the dominant processes of galactic evolution in the early universe, when the space density of LIRGs was 10,000 times larger than observed locally, coinciding with the peak epoch in the formation of quasars and superstarbursts.

1. Interactions are correlated with high IR luminosities, and mergers with very high IR luminosities.
2. Starbursts are the dominant underlying energy source in interacting galaxies.
3. The starburst initial mass function is biased against high-mass stars.
4. While optical and 2 µm spectroscopy showed no evidence for buried AGNs in a sample of 30 LIRGs, mid-IR spectroscopy revealed high-excitation coronal lines in

Thus interactions and mergers trigger bursts of star formation with a "bottom-heavy" initial mass function, and the starburst dominates the bolometric luminosity: the mergers are making elliptical galaxies. 
### Galaxy Collisions

Joshua Barnes uses N-body methods to simulate galactic collisions and other aspects of galactic dynamics. One area of ongoing effort is improving existing techniques for force calculation, construction of initial conditions, and simulation including star formation and recycling of interstellar material. A second area of emphasis is developing accurate models of well-observed interacting galaxies. Ultimately, one objective of this research is to test dark-matter models and prescriptions for star formation by comparing detailed models of specific interacting galaxies with observations.

Computer-generated model of NGC 4676 overlaid on maps of the actual HI and stellar distributions

### Chemical Abundances of Normal, Nearby Spiral Galaxies

Fabio Bresolin is studying the chemical abundances of young stars and HII regions in spiral galaxies. Among the most interesting results is the measurement of the metallicity of HII regions in
The least squares method, also called least squares approximation, is, in statistics, a method for estimating the true value of some quantity based on a consideration of errors in observations or measurements. The fit of a model to a data point is measured by its residual, defined as the difference between the actual value of the dependent variable and the value predicted by the model; the least-squares method finds the optimal parameter values by minimizing the sum of squared residuals of the points from the fitted curve. A simple data set consists of n points (data pairs) (x_i, y_i), where x_i is an independent variable and y_i is a dependent variable; a data point may also consist of more than one independent variable. The simplest empirical model in two dimensions is the line of best fit, y = a + bx. Setting the gradient of the sum of squares to zero gives the normal equations

∑y = na + b∑x
∑xy = a∑x + b∑x²

which, through elimination, determine the intercept a and the slope b. Polynomial least squares, an extension of this model, describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve; the residuals for a parabolic model can be calculated in the same way. In matrix form, with b = (X'X)⁻¹X'y, the residuals may be written as e = y − Xb = y − X(X'X)⁻¹X'y = My, where M = I − X(X'X)⁻¹X'; the matrix M is symmetric (M' = M) and idempotent (M² = M).

The formulas for linear least-squares fitting were derived independently by Gauss and Legendre; Gauss claimed to have been in possession of the method since 1795, and an early demonstration of its strength came when it was used to predict the future location of the newly discovered asteroid Ceres. The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. The method grew out of the problem of calculating the orbits of celestial bodies.

Least-squares problems fall into two categories, linear (ordinary) least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns; the two contexts have rather different implications. In linear least squares the solution has a closed form and is unique (provided the model matrix has full column rank), so non-convergence is not an issue. Nonlinear problems are usually solved iteratively: most algorithms involve choosing initial values for the parameters, the minimum of the sum of squares is found by setting the gradient to zero, and the resulting gradient equations, which apply to all least squares problems, are the defining equations of the Gauss–Newton algorithm; the update applied at each iteration is called the shift vector.

It is necessary to make assumptions about the nature of the experimental errors to statistically test the results. Inference is straightforward when the errors are assumed to follow a normal distribution, which implies that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables; under a normal error distribution the least-squares estimators are also the maximum-likelihood estimators, and more generally, when the observations come from an exponential family and mild conditions are satisfied, least-squares estimates and maximum-likelihood estimates are identical. The central limit theorem supports the use of the normal-error assumption in many settings. Least squares also underlies regression analysis, which is used to study the relation between two random variables x and y (the variables are said to be correlated if a linear relationship exists) and to provide a prediction rule for new data; for example, it is used by managerial accountants to estimate production costs, and fitting the extension of a spring under Hooke's law is a simple example drawn from physics.

In some contexts a regularized version of the least-squares solution may be preferable. Ridge regression adds a penalty α‖β‖², or equivalently constrains the L2-norm of the parameter vector to be no greater than a given value. An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which instead constrains ‖β‖₁, the L1-norm of the parameter vector; the advantage of Lasso is that it can set coefficients exactly to zero, whereas ridge regression never fully discards any features. Finally, although least squares and principal component analysis (PCA) use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally.
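As a small illustration of the normal equations above, here is a minimal sketch (with made-up data, for illustration only) that solves them directly and checks the result against numpy.polyfit:

```python
# Fit y = a + b*x by solving the normal equations
#   sum(y)  = n*a + b*sum(x)
#   sum(xy) = a*sum(x) + b*sum(x^2)
import numpy as np

# made-up data for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

n = len(x)
A = np.array([[n,       x.sum()],
              [x.sum(), (x**2).sum()]])
rhs = np.array([y.sum(), (x * y).sum()])
a, b = np.linalg.solve(A, rhs)           # intercept a, slope b

slope, intercept = np.polyfit(x, y, 1)   # same fit; polyfit returns highest power first
print(a, b)                              # from the normal equations
print(intercept, slope)                  # should match to floating-point precision
```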
2.1 & 1.93 & & SNRG327.1-01.1 & -55.08 & 238.65 & -- & 0.63 \\ & RGBJ2243+203 & 20.35 & 340.98 & -- & 1.29 & & RXJ0852.0-4622 & -46.37 & 133.00 & -- & 0.65 \\ & VERJ0521+211 & 21.21 & 80.44 & 1.2 & 1.84 & & RXJ1713.7-3946 & -39.75 & 258.25 & -- & 0.67 \\ & S20109+22 & 22.74 & 18.02 & -- & 1.30 & & W28 & -23.34 & 270.43 & 0.8 & 1.43 \\ & PKS1424+240 & 23.79 & 216.75 & -- & 1.12 & & SNRG015.4+00.1 & -15.47 & 274.52 & 0.2 & 1.34 \\ & MS1221.8+2452 & 24.61 & 186.10 & -- & 1.13 & & W44 & 1.38 & 284.04 & -- & 0.97 \\ & 1ES0647+250 & 25.05 & 102.69 & -- & 1.65 & & HESSJ1912+101 & 10.15 & 288.21 & -- & 1.03 \\ & S31227+25 & 25.30 & 187.56 & -- & 1.14 & & W51C & 14.19 & 290.75 & -- & 1.07 \\ & WComae & 28.23 & 185.38 & -- & 1.20 & & IC443 & 22.50 & 94.21 & -- & 1.12 \\ & 1ES1215+303 & 30.10 & 184.45 & -- & 1.26 & Sey2 & ESO139-G12 & -59.94 & 264.41 & -- & 0.82 \\ & 1ES1218+304 & 30.19 & 185.36 & -- & 1.21 & & CentaurusA & -43.02 & 201.36 & -- & 0.62 \\ & Markarian421 & 38.19 & 166.08 & -- & 1.59 & UNID & HESSJ1507-622 & -62.34 & 226.72 & -- & 0.62 \\ Binary & CirX-1 & -57.17 & 230.17 & -- & 0.84 & & HESSJ1503-582 & -58.74 & 226.46 & -- & 0.62 \\ & GX339-4 & -48.79 & 255.70 & -- & 0.63 & & HESSJ1023-575 & -57.76 & 155.83 & 1.5 & 1.08 \\ & LS5039 & -14.83 & 276.56 & -- & 1.19 & & HESSJ1614-518 & -51.82 & 243.58 & 0.7 & 0.96 \\ & SS433 & 4.98 & 287.96 & -- & 0.99 & & HESSJ1641-463 & -46.30 & 250.26 & -- & 0.78 \\ & HESSJ0632+057 & 5.81 & 98.24 & 2.7 & 2.40 & & HESSJ1741-302 & -30.20 & 265.25 & 0.6 & 1.29 \\ FSRQ & S30218+35 & 35.94 & 35.27 & 0.7 & 2.15 & & HESSJ1826-130 & -13.01 & 276.51 & -- & 1.07 \\ & B32247+381 & 38.43 & 342.53 & -- & 1.54 & & HESSJ1813-126 & -12.68 & 273.34 & -- & 0.90 \\ GC & GalacticCentre & -29.01 & 266.42 & 1.1 & 1.36 & & HESSJ1828-099 & -9.99 & 277.24 & 0.7 & 1.45 \\ PWN & HESSJ1356-645 & -64.50 & 209.00 & 0.4 & 0.98 & & HESSJ1834-087 & -8.76 & 278.69 & -- & 0.92 \\ & HESSJ1303-631 & -63.20 & 195.75 & -- & 0.64 & & 2HWCJ1309-054 & -5.49 & 197.31 & -- & 0.92 \\ & HESSJ1458-608 & -60.88 & 224.54 & 1.2 & 1.05 & & 2HWCJ1852+013* & 1.38 & 283.01 & -- & 0.97 \\ & HESSJ1616-508 & -50.97 & 243.97 & 0.5 & 0.96 & & 2HWCJ1902+048* & 4.86 & 285.51 & -- & 0.99 \\ & HESSJ1632-478 & -47.82 & 248.04 & -- & 0.73 & & MGROJ1908+06 & 6.27 & 286.99 & -- & 1.22 \\ & VelaX & -45.60 & 128.75 & -- & 0.62 & & 2HWCJ1829+070 & 7.03 & 277.34 & -- & 1.01 \\ & HESSJ1831-098 & -9.90 & 277.85 & -- & 0.95 & & 2HWCJ1907+084* & 8.50 & 286.79 & -- & 1.02 \\ & HESSJ1837-069 & -6.95 & 279.41 & -- & 1.30 & & ICPeV & 11.42 & 110.63 & -- & 1.03 \\ & MGROJ2019+37 & 36.83 & 304.64 & 0.4 & 2.08 & & 2HWCJ1914+117 & 11.72 & 288.68 & -- & 1.16 \\ Pulsar & PSRB1259-63 & -63.83 & 195.70 & -- & 0.64 & & 2HWCJ1921+131 & 13.13 & 290.30 & -- & 1.05 \\ & Terzan5 & -24.90 & 266.95 & -- & 1.09 & & 2HWCJ0700+143 & 14.32 & 105.12 & -- & 1.48 \\ & Geminga & 17.77 & 98.47 & 0.9 & 1.75 & & VERJ0648+152 & 15.27 & 102.20 & -- & 1.57 \\ & Crab & 22.01 & 83.63 & 0.1 & 1.64 & & 2HWCJ0819+157 & 15.79 & 124.98 & -- & 1.06 \\ Quasar & PKS1424-418 & -42.10 & 216.98 & 1.1 & 1.04 & & 2HWCJ1928+177 & 17.78 & 292.15 & -- & 1.26 \\ & SwiftJ1656.3-3302 & -33.04 & 254.07 & -- & 1.10 & & 2HWCJ1938+238 & 23.81 & 294.74 & -- & 1.24 \\ & PKS1622-297 & -29.90 & 246.50 & -- & 0.80 & & 2HWCJ1949+244 & 24.46 & 297.42 & -- & 1.60 \\ & PKS0454-234 & -23.43 & 74.27 & -- & 0.84 & & 2HWCJ1955+285 & 28.59 & 298.83 & -- & 1.18 \\ & PKS1830-211 & -21.07 & 278.42 & -- & 0.86 & & 2HWCJ1953+294 & 29.48 & 298.26 & -- & 1.20 \\ & QSO1730-130 & -13.10 & 263.30 & -- & 0.94 
& & 2HWCJ1040+308 & 30.87 & 160.22 & -- & 1.42 \\ & PKS0727-11 & -11.70 & 112.58 & 1.3 & 1.59 & & 2HWCJ2006+341 & 34.18 & 301.55 & -- & 1.38 \\ \end{tabular} } \end{minipage} } \end{table*} \begin{table} \centering \caption{The 13 IceCube muon track candidates from the IceCube HESE sample \cite{IC3years, IC4yproc} that are in the field of view of the ANTARES detector. The table gives the equatorial coordinates, the angular error estimate $\beta_\mathrm{IC}$ of the event and the $90\,\%$ C.L. upper limits on flux $\varPhi_0^{90\,\%}$ (in units of $10^{-8}$ GeV cm$^{-2}$ s$^{-1}$).\medskip} \label{tab:LimitsHESE} \begin{tabular}{crrcc} HESE ID &$\delta [\si{\degree}]$ & $\alpha [\si{\degree}]$ & $\beta_\mathrm{IC} [\si{\degree}]$ & $\varPhi_0^{90\,\%}$ \\ \hline 3 & -31.2 & 127.9 & 1.4 & 2.1 \\ 5 & -0.4 & 110.6 & 1.2 & 1.5 \\ 8 & -21.2 & 182.4 & 1.3 & 1.7 \\ 13 & 40.3 & 67.9 & 1.2 & 2.4 \\ 18 & -24.8 & 345.6 & 1.3 & 2.0 \\ 23 & -13.2 & 208.7 & 1.9 & 1.7 \\ 28 & -71.5 & 164.8 & 1.3 & 1.2 \\ 37 & 20.7 & 167.3 & 1.2 & 1.7 \\ 38 & 14.0 & 93.3 & 1.2 & 2.1 \\ 43 & -22.0 & 206.6 & 1.3 & 1.3 \\ 44 & 0.0 & 336.7 & 1.2 & 1.8 \\ 45 & -86.3 & 219.0 & 1.2 & 1.2 \\ 53 & -37.7 & 239.0 & 1.2 & 1.6 \\ \end{tabular}% \end{table} \subsection{Galactic Center region} The restricted search region is defined as an ellipse around the Galactic Center with semi-axes of \unit{20}{\degree} in galactic longitude and \unit{15}{\degree} in galactic latitude. Due to the smaller search area, the search for astrophysical sources is more sensitive than a full sky search because it is less probable for background events to randomly cluster together, mimicking the signature of a signal. Assuming the usual $E^{-2}$ spectrum, the most significant cluster found in this restricted region is located at $(\alpha,\delta) = (\unit{257.4}{\degree},\unit{-41.0 }{\degree})$ with a pre-trial p-value of 0.09\% and a fitted number of signal events of 2.3. The post-trial significance of this cluster, calculated as in the full sky search but in the restricted region around the Galactic Center, is 60\%. Other spectral indices ($\gamma$ = 2.1, 2.3, 2.5) and source extensions ($\sigma = \unit{0.5}{\degree}$, $\unit{1.0}{\degree}$, $\unit{2.0 }{\degree}$) are considered, yielding different most significant clusters. The source extension is quantified by the $\sigma$ of the gaussian distribution. For a spectral index of $\gamma$ = 2.5 and a point-source, the most significant cluster is found at $(\alpha,\delta) = (\unit{273.0}{\degree},\unit{-42.2 }{\degree})$, with a pre-trial p-value of 0.02\% and a post-trial significance of 30\%. The distribution of events for these
the rotational velocity measurementsthe uncertainties are typically 1.0 km s-1 for subgiants andgiants and 2.0 km s-1 for class II giants and Ib supergiants.These data will add constraints to studies of the rotational behaviourof evolved stars as well as solid informations concerning the presenceof external rotational brakes, tidal interactions in evolved binarysystems and on the link between rotation, chemical abundance and stellaractivity. In this paper we present the rotational velocity v sin i andthe mean radial velocity for the stars of luminosity classes IV, III andII. Based on observations collected at the Haute--Provence Observatory,Saint--Michel, France and at the European Southern Observatory, LaSilla, Chile. Table \ref{tab5} also available in electronic form at CDSvia anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html Catalogs of temperatures and [Fe/H] averages for evolved G and K starsA catalog of mean values of [Fe/H] for evolved G and K stars isdescribed. The zero point for the catalog entries has been establishedby using differential analyses. Literature sources for those entries areincluded in the catalog. The mean values are given with rms errors andnumbers of degrees of freedom, and a simple example of the use of thesestatistical data is given. For a number of the stars with entries in thecatalog, temperatures have been determined. A separate catalogcontaining those data is briefly described. Catalog only available atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html Evolution of X-ray activity and rotation on G-K giantsThe recent availability of stellar parallaxes provided by the Hipparcosstar catalogue (ESA 1997) enables an accurate determination of thepositions of single field giants in a theoretical H-R diagram and areliable estimate of their masses. The present study combines these newastrometric data with previously published X-ray fluxes and rotationalvelocities. The results confirm the existence of a sharp decrease ofX-ray emission at spectral type K1 for 2.5 M_sun < M < 5 M_sungiants. The study shows that the rotational velocity of these starsreaches a minimum at the same location in the H-R diagram. However, notight relationship between X-ray luminosities and projected equatorialvelocities was found among the sample stars. I suggest that theseresults could reflect the importance of differential rotation indetermining the level of coronal emission among >= 2.5Msun G and K giants. The restoration of rigid rotation at thebottom of the red giant branch could prevent the maintenance of largescale magnetic fields, thus explaining the sharp decrease of coronalX-ray emission at spectral type K1. A reliable transformation of HIPPARCOS H_p magnitudes into Johnson V and B magnitudesA comparison of accurate UBV magnitudes, derived from numerousobservations at Hvar and SkalnatePleso, and of the mean Hipparcos \hpmagnitudes for a number of constant stars showed a very good mutualcorrespondence of these two data sets. Simple transformation formul\ae\are presented which allow calculating Johnson V and B magnitudes fromthe \hp magnitude and known B-V and U-B colours. For constant stars withwell-known values of both colours the accuracy of the transformation isclearly better than 0\m01. At the same time, the transformation is notcritically sensitive to the exact values of B-V and U-B. 
It is applicable over a wide range of colours (B-V between -0.25 and 2.0) and works well also for reddened stars. However, since it was defined for stars brighter than about 8.0 mag and for reddenings smaller than about 1.0 mag, its application outside these limits should be made with some caution and further tested. Since the B-V and U-B colours are known for the majority of brighter stars and since there are many classes of variable stars which do change colours only very mildly during their light changes (like the majority of Be stars) or for which the instantaneous colours can be predicted or estimated from existing optical observations, the transformations presented here may turn out to be very useful for many researchers who need to combine Hipparcos and optical photometry into one homogeneous data set.

Classification and Identification of IRAS Sources with Low-Resolution Spectra. IRAS low-resolution spectra were extracted for 11,224 IRAS sources. These spectra were classified into astrophysical classes, based on the presence of emission and absorption features and on the shape of the continuum. Counterparts of these IRAS sources in existing optical and infrared catalogs are identified, and their optical spectral types are listed if they are known. The correlations between the photospheric/optical and circumstellar/infrared classification are discussed.

ICCD Speckle Observations of Binary Stars. XVII. Measurements During 1993-1995 From the Mount Wilson 2.5-M Telescope. Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114.1639H&db_key=AST

ICCD Speckle Observations of Binary Stars. XVI. Measurements During 1982-1989 from the Perkins 1.8-M Telescope. Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114.1623F&db_key=AST

A catalogue of [Fe/H] determinations: 1996 edition. A fifth Edition of the Catalogue of [Fe/H] determinations is presented herewith. It contains 5946 determinations for 3247 stars, including 751 stars in 84 associations, clusters or galaxies. The literature is complete up to December 1995. The 700 bibliographical references correspond to [Fe/H] determinations obtained from high resolution spectroscopic observations and detailed analyses, most of them carried out with the help of model atmospheres. The Catalogue is made up of three formatted files: File 1: field stars; File 2: stars in galactic associations and clusters, and stars in SMC, LMC, M33; File 3: numbered list of bibliographical references. The three files are only available in electronic form at the Centre de Donnees Stellaires in Strasbourg, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or via http://cdsweb.u-strasbg.fr/Abstract.html

MSC - a catalogue of physical multiple stars. The MSC catalogue contains data on 612 physical multiple stars of multiplicity 3 to 7 which are hierarchical with few exceptions. Orbital periods, angular separations and mass ratios are estimated for each sub-system. Orbital elements are given when available. The catalogue can be accessed through CDS (Strasbourg). Half of the systems are within 100 pc from the Sun. The comparison of the periods of close and wide sub-systems reveals that there is no preferred period ratio and all possible combinations of periods are found. The distribution of the logarithms of short periods is bimodal, probably due to observational selection. In 82% of triple stars the close sub-system is related to the primary of a wide pair.
However, the analysis of mass ratio distribution gives some support to the idea that component masses are independently selected from the Salpeter mass function. Orbits of wide and close sub-systems are not always coplanar, although the corresponding orbital angular momentum vectors do show a weak tendency of alignment. Some observational programs based on the MSC are suggested. Tables 2 and 3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Barium stars, galactic populations and evolution. In this paper HIPPARCOS astrometric and kinematical data together with radial velocities from other sources are used to calibrate both luminosity and kinematics parameters of Ba stars and to classify them. We confirm the results of our previous paper (where we used data from the HIPPARCOS Input Catalogue), and show that Ba stars are an inhomogeneous group. Five distinct classes have been found, i.e. some halo stars and four groups belonging to the disk population: roughly super-giants, two groups of giants (one on the giant branch, the other at the clump location) and dwarfs, with a few subgiants mixed with them. The confirmed or suspected duplicity, the variability and the range of known orbital periods found in each group give coherent results supporting the scenario for Ba stars that are not too highly massive binary stars in any evolutionary stages but that all were previously enriched with Ba from a more evolved companion. The presence in the sample of a certain number of "false" Ba stars is confirmed. The estimates of age and mass are compatible with models for stars with a strong Ba anomaly. The mild Ba stars with an estimated mass higher than 3 Msun may be either stars Ba enriched by themselves or "true" Ba stars, which imposes new constraints on models.

Absolute magnitudes and kinematics of barium stars. The absolute magnitude of barium stars has been obtained from kinematical data using a new algorithm based on the maximum-likelihood principle. The method allows to separate a sample into groups characterized by different mean absolute magnitudes, kinematics and z-scale heights. It also takes into account, simultaneously, the censorship in the sample and the
Thus the distance d between the point P and the line L can be calculated by figuring out the length of the perpendicular dropped from P onto L. We know that the slopes of two parallel lines are the same, therefore the equations of two parallel lines can be given as $$y = mx + c_1$$ and $$y = mx + c_2$$. The distance between two parallel lines is the perpendicular distance between them, measured along any line perpendicular to both, and it is the same wherever it is measured. If two lines intersect, then at the line of intersection the distance between them is zero, and for skew lines the distance is measured along the common perpendicular.

To derive the formula, reduce the problem to the distance of a point from a line. Line (1), y = mx + c_1, intersects the x-axis at the point A(-c_1/m, 0); the distance between the two lines is then the perpendicular distance from A to line (2), y = mx + c_2. Equivalently, take three points M, N, P and use the coordinate-geometry area formula, Area of ΔMPN = (1/2)[x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)], together with Area = (1/2) × Base × Height, so that the perpendicular distance is PQ = 2 × Area of ΔMPN / MN, where MN is obtained from the distance formula. Either way the result is

$$d = \frac{|c_1 - c_2|}{\sqrt{1 + m^2}}.$$

If the lines are written in the general form ax + by + c_1 = 0 and ax + by + c_2 = 0, the same distance is d = |c_1 - c_2| / √(a^2 + b^2). Another technique is to find the equation of a line perpendicular to both parallel lines, find its points of intersection with them, and measure the length of that segment. (For two line segments AB and CD, which may have different lengths and need not be aligned, the shortest distance is simply the smallest of the distances between point A and segment CD, B and CD, C and AB, or D and AB.)

Numerical examples. Find the distance between the parallel lines 3x - 4y + 7 = 0 and 3x - 4y + 5 = 0: d = |7 - 5| / √(3^2 + 4^2) = 2/5 = 0.4. Find the distance between parallel lines whose equations are y = -x + 2 and y = -x + 8: the vertical distance between the y-intercepts (0, 2) and (0, 8) is 6, but the perpendicular distance is d = |2 - 8| / √(1 + (-1)^2) = 6/√2 ≈ 4.24. Find the distance between the parallel lines 4x + 6y = -5 (that is, 4x + 6y + 5 = 0) and 4x + 6y + 7 = 0: d = |5 - 7| / √(4^2 + 6^2) = 2/√52 ≈ 0.28.
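To check these numbers, here is a minimal Python sketch (not part of the original text; the function names are illustrative only) that computes the distance for lines given either in slope-intercept or in general form.

```python
import math

def distance_slope_intercept(m, c1, c2):
    """Distance between the parallel lines y = m*x + c1 and y = m*x + c2."""
    return abs(c1 - c2) / math.sqrt(1 + m**2)

def distance_general_form(a, b, c1, c2):
    """Distance between the parallel lines a*x + b*y + c1 = 0 and a*x + b*y + c2 = 0."""
    return abs(c1 - c2) / math.hypot(a, b)

# Examples worked in the text above:
print(distance_general_form(3, -4, 7, 5))   # 3x - 4y + 7 = 0 vs 3x - 4y + 5 = 0 -> 0.4
print(distance_slope_intercept(-1, 2, 8))   # y = -x + 2 vs y = -x + 8 -> ~4.243
print(distance_general_form(4, 6, 5, 7))    # 4x + 6y + 5 = 0 vs 4x + 6y + 7 = 0 -> ~0.277
```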
# Personality Theory In the previous chapter, Yoga and Buddhism were presented as lifestyle choices, but it was acknowledged that they developed within a religious context.  In this chapter we continue that trend, but for Kabbalah, Christian mysticism, and Sufism, we cannot separate the lifestyle from the religion.  However, one can easily make the argument that we should not ignore the influence of religion on psychology.  After all, both spirituality and formal religion are significant factors in the lives of many people, regardless of whether some may not believe in the existence of God, or any other divine being(s).  It is also true that religion was a significant factor in the lives of many of the theorists we have examined in this book, and as a result, their spiritual beliefs helped to shape the nature of their personality theories. We will examine the mystical approaches that have developed within the Jewish, Christian, and Muslim religions.  These are the three Abrahamic religions, in the order in which they were established, and together they cover an extraordinary range of cultural groups, including some 3½ billion people (55 percent of the world’s population; Haviland, Prins, Walrath, & McBride, 2005).  Mysticism refers to the belief that one can know the spiritual truths of life and the universe that are beyond the intellect by being absorbed into the Deity through contemplation and self-surrender.  In practice, they share common elements with Yoga and Buddhism (particularly meditation), and by bringing these five practices together, we have truly begun to take a look at the personalities, within a cultural context, of people around the entire world. It is important to keep in mind, however, that any of the theories we have examined so far might play a role in personality development in any cultural group, in conjunction with the cultural influences of spirituality and religion.  Thus, the ideas presented in these last two chapters are not meant to offer alternatives to what we have discussed within traditional Western psychology.  In addition, there are other significant cultural factors beside spirituality and religion, though few of them have been studied or contemplated as deeply as religion.  And undoubtedly, no other cultural phenomenon has been actively promoted and spread around the world by missionaries of many different faiths, as has been the case with religion.  It is important to be open-minded and aware of some of the major factors underlying the dramatic cultural differences that exist around the world.  Only then can we honestly connect with other people in a global community. ## Judaism and Kabbalah We will begin our examination of spiritual/religious guidelines for living one’s life with the oldest, but smallest, of the Abrahamic religions.  Judaism holds a special place in the history of psychology, since nearly all of the early and most significant psychodynamic theorists were Jewish (even if they did not practice their faith).  In addition, since many of those Jewish psychoanalysts came to America during the 1930s, they then had a significant effect on the continued development of psychodynamic theory and psychoanalysis here in the United States. ### The Foundation of the Jewish Faith Judaism is one of the oldest religions in the world, and today some 14 million people practice this faith.  It is a monotheistic religion, thus believing that there is only one god:  Yahweh.  
They believe that Yahweh called Abraham out of his homeland to establish a new home, in the general area of modern-day Israel.  This occurred in approximately the year 1900 B.C.  However, the formal foundation of Judaism involved the establishment of Yahweh’s laws, known as the Torah.  The Torah is not merely a set of laws or cultural guidelines, but rather, they are a pattern for living that transforms the Jewish people into Yahweh’s people (Wilkins, 1967).  The Torah is quite long, consisting of five books, which include many complex rules for both the people and the priesthood.  However, the rules were greatly simplified in Yahweh’s special revelation to Moses on Mt. Sinai, around the year 1300 B.C., and these simplified guidelines for how to live one’s life are known as the Ten Commandments: I am the Lord your God…You shall have no other gods before me. You shall not make for yourself a graven image…you shall not bow down to them or serve them… You shall not take the name of the Lord your God in vain… Remember the sabbath day, to keep it holy. You shall not kill. You shall not steal. You shall not bear false witness against your neighbor. from Exodus, Chapter 20; Holy Bible As simple as it might seem to follow these ten guidelines for living one’s life, it is just as easy to ignore them.  Unfortunately, ignoring them has often been the case, even among some of the most famous people in Jewish history.  Thus, the mystical practice of Kabbalah has arisen, to both help people live a righteous life, and to help them do so without having to guide their behavior by simple, yet strict, commandments.  In other words, there was, and is, a need to transform people’s minds.  In order to effect real change, we cannot simply expect people to follow the rules, we need to help them make the rules a part of their life.  In this sense, Kabbalah, like Yoga, Buddhism, and as we shall see for Christian mysticism and Sufism, can be viewed as a sort of cognitive psychology, a redirection of one’s conscious personality development. Discussion Question:  If the Ten Commandments are simply rules, as opposed to being an inherent part of our lives, is anything missing?  Are there things we would still be allowed to do that would harm other people, or harm ourselves?  What can we do to make the Ten Commandments a way of life, how can we be mindful of them? ### Kabbalah Kabbalah is a path designed to teach people about their place in life and in the universe, particularly with regard to the divine.  It emphasizes that one’s daily life should not be separated from one’s spiritual life.  In more practical terms, Kabbalah deals with the everyday experience that we have unlimited desires, but only limited resources to satisfy them.  Thus, there will always be some degree of suffering in our lives if we focus only on the material world.  Kabbalah teaches a pathway toward experiencing something beyond simple materialism.  And yet, that path remains obscured in a certain degree of secrecy.  The principal books are available only in the Hebrew and Aramaic languages, and some believe that Kabbalists who are qualified to teach Kabbalah are all in the country of Israel (Besserman, 1997; Laitman, 2005).  Accordingly, a distinct degree of difficulty in the study of Kabbalah is to be expected: Whoever delves into mysticism cannot help but stumble, as it is written:  “This stumbling block is in your hand.”  You cannot grasp these things unless you stumble over them. (pg. 
163; Matt, 1995) Kabbalah is as old as Judaism itself, perhaps older.  Kabbalistic legend suggests that it may have begun with Enoch, the great-grandfather of Noah (as in Noah’s Ark; Halevi, 1986), but its formal practice recognizes a few key historical events.  In the sixth century B.C., a collection of manuals called Maaseh Merkavah emerged, and these manuals included a formal  practice.   For those who engaged in this practice, the goal was to directly experience the Deity by concentrating on mandala-like images that showed a path to the Throne of God (remember that Carl Jung also meditated on Mandala images).  Their emphasis on out-of-body experiences distinguished them from similar Babylonian schools of spirituality that emphasized inner-directed visualizations and, therefore, were not as mystical as the Kabbalists.  In the second century A.D., Rabbi Shimon Bar Yochai wrote an important Kabbalist text called the Zohar (translated as the Book of Splendor or the
(Show Source): You can put this solution on YOUR website!It turns out that anything raised to the one half power is equivalent to taking the square root. In other words So... So the solution is Linear-equations/193200: need to graph and find y intercept y=1/5x1 solutions Answer 145040 by jim_thompson5910(28550)   on 2009-04-26 16:44:22 (Show Source): You can put this solution on YOUR website! Looking at we can see that the equation is in slope-intercept form where the slope is and the y-intercept is note: really looks like Since this tells us that the y-intercept is .Remember the y-intercept is the point where the graph intersects with the y-axis So we have one point Now since the slope is comprised of the "rise" over the "run" this means Also, because the slope is , this means: which shows us that the rise is 1 and the run is 5. This means that to go from point to point, we can go up 1 and over 5 So starting at , go up 1 unit and to the right 5 units to get to the next point Now draw a line through these points to graph So this is the graph of through the points and Quadratic-relations-and-conic-sections/193197: How do i do question number 77 on 10.3 Write an equation of the line that is tangent to the circle at that point. 77) x2+ y2= 244; (-10, -12) Please explain the steps..1 solutions Answer 145039 by jim_thompson5910(28550)   on 2009-04-26 16:42:39 (Show Source): You can put this solution on YOUR website!To find the tangent line, we need the slope of the tangent line. To find that, we first need the first derivative of "y": ... Start with the given equation. ... Derive both sides with respect to "x" ... Derive the left and right sides. Note: remember, y is a function of "x", so use the chain rule. ... Subtract 2x from both sides. ... Divide both sides by 2y. ... Reduce So the slope of any tangent line at the point (x,y) (on the circle) is Now just plug in the values and to find the tangent slope at (-10,-12): ... Reduce So the slope of the tangent line is Now let's find the equation of the line that has a slope of and goes through (-10, -12): If you want to find the equation of line with a given a slope of which goes through the point (-10,-12), you can simply use the point-slope formula to find the equation: ---Point-Slope Formula--- where is the slope, and is the given point So lets use the Point-Slope Formula to find the equation of the line Plug in , , and (these values are given) Rewrite as Rewrite as Distribute Multiply and to get Subtract 12 from both sides to isolate y Combine like terms and to get ------------------------------------------------------------------------------------------------------------ Answer: So the equation of the tangent line is Graphs/193194: Can someone please help me. I am pulling my hair out. What does this question mean and how do you do it? What does this thing look like? Create a graph with precisely two odd vertices and two even vertices. State which of your vertices are even and which are odd. Use a dot and letter to mark each vertex. 1 solutions Answer 145037 by jim_thompson5910(28550)   on 2009-04-26 16:14:30 (Show Source): You can put this solution on YOUR website!Here's some terminology: Vertex: A point representing a location (city, town, etc...). In general, what the vertex represents isn't important. 
Edge: A line connecting two vertices (plural for vertex) Odd Vertex: A vertex with an odd number of edges connecting to it Even Vertex: A vertex with an even number of edges connecting to it So here's one way to draw the graph: Take note that... Vertex A: even vertex with 2 edges connecting to it Vertex B: odd vertex with 3 edges connecting to it Vertex C: even vertex with 2 edges connecting to it Vertex D: odd vertex with 3 edges connecting to it Pythagorean-theorem/193186: I looked at the solution for question #193175: A diagonal walk through a rectangular rose garden 18 meters by 24 meters can be built at $12 per linear meter. How much will the walk cost? But I was unable to figure out what ankor was talking about. I am not as versed in using a calculator as some are and did not understand. Can someone help please. 1 solutions Answer 145032 by jim_thompson5910(28550) on 2009-04-26 15:44:59 (Show Source): You can put this solution on YOUR website!First, we need to find the length of diagonal: If we cut the garden in half along the diagonal, we'll have this triangle set up (where "x" is the length of the diagonal): Remember, the Pythagorean Theorem is where "a" and "b" are the legs of a triangle and "c" is the hypotenuse. Since the legs are and this means that and Also, since the hypotenuse is , this means that . Start with the Pythagorean theorem. Plug in , , Square to get . Square to get . Combine like terms. Rearrange the equation. Take the square root of both sides. Note: only the positive square root is considered (since a negative length doesn't make sense). Take the square root of 900 to get 30. So the diagonal is 30 meters long. Now just multiply the length of the diagonal by the cost per linear meter to get: So it will cost$360 to build the diagonal walkway. Linear-systems/193189: Two angles are complimentary. The sum of the measure of the first angle and helf the second angle is 68 degrees. Find the measures of the angles.1 solutions Answer 145027 by jim_thompson5910(28550)   on 2009-04-26 15:34:21 (Show Source): You can put this solution on YOUR website!"Two angles are complimentary." translates to . Solve for "y" to get "The sum of the measure of the first angle and helf the second angle is 68 degrees." translates to Start with the second equation. Plug in Distribute Subtract 45 from both sides. Combine like terms. Multiply both sides by 2. Multiply ------------------------ Go back to the first equation Plug in Subtract So the solutions are and which means that the first and second angles are 46 and 44 degrees. Radicals/193166: sqrt(4x-5)=sqrt(x+9) 1 solutions Answer 145006 by jim_thompson5910(28550)   on 2009-04-26 12:50:06 (Show Source): You can put this solution on YOUR website! Start with the given equation. Square both sides Square the square roots to eliminate them Add to both sides. Subtract from both sides. Combine like terms on the left side. Combine like terms on the right side. Divide both sides by to isolate . ---------------------------------------------------------------------- Answer: So the answer is which approximates to . Pythagorean-theorem/193132: Liz is flying a kite. she let out 80 ft of string and attached the string to a stake in the ground, the kite is now directly above her brother mike who is 32 ft away from liz. find the height of the kite to the nearest foot. 1 solutions Answer 144974 by jim_thompson5910(28550)   on 2009-04-26 01:20:24 (Show Source): You can put this solution on YOUR website! 
First, let's draw a picture: We can see that a triangle forms where the legs are "x" and 32 ft with a hypotenuse of 80 ft. To find the unknown length of the leg "x", we need to use the Pythagorean Theorem. Remember, the Pythagorean Theorem is a^2 + b^2 = c^2, where "a" and "b" are the legs of a triangle and "c" is the hypotenuse. Since the legs are x and 32, this means that a = x and b = 32. Also, since the hypotenuse is 80, this means that c = 80. Start with the Pythagorean theorem: x^2 + 32^2 = 80^2. Square 32 to get 1024. Square 80 to get 6400. Subtract 1024 from both sides to get x^2 = 5376, so x = sqrt(5376) ≈ 73.3. To the nearest foot, the kite is about 73 ft above the ground.
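The arithmetic in these Pythagorean-theorem answers is easy to double-check programmatically. Below is a small Python sketch (not part of the original answers; the variable names are mine) that reproduces the garden-walkway cost and the kite-height calculation.

```python
import math

# Diagonal walk through an 18 m x 24 m rectangular garden at $12 per linear meter.
diagonal = math.hypot(18, 24)          # sqrt(18^2 + 24^2) = 30 m
walk_cost = 12 * diagonal              # 12 * 30 = $360
print(diagonal, walk_cost)

# Kite: 80 ft of string (hypotenuse), kite directly above a point 32 ft away (one leg).
height = math.sqrt(80**2 - 32**2)      # sqrt(6400 - 1024) = sqrt(5376)
print(round(height))                   # ~73 ft, to the nearest foot
```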
# Can interpolation and curve fitting be used interchangeably?

The main difference between the two is that in interpolation we need to exactly fit all the data points, whereas that is not the case in curve fitting. Spatial interpolation or temporal interpolation methods can be used for infilling missing data in any time series. Curve fitting also builds on what we last saw with linear key reduction, where we leveraged linear interpolation to remove keys that could easily be predicted. Cubic splines means a third-order polynomial is generated connecting the points rather than a straight line.

Interpolation is to connect discrete data points so that one can get reasonable estimates of data points between the given points. The purpose of curve fitting is to find a function f(x) in a function class Φ for the data (xi, yi) where i = 0, 1, 2, …, n−1; here it may be possible that not all the points pass through the curve. Curve fitting is to find a curve that could best indicate the trend of a given set of data, since it is often more convenient to model the data as a mathematical function. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data.

Linear interpolation on a set of data points (x0, y0), (x1, y1), ..., (xn, yn) is defined as the concatenation of linear interpolants between each pair of data points. This results in a continuous curve with a discontinuous derivative (in general), thus of differentiability class C0. Commonly, polynomials are used for the process of interpolation because they are much easier to evaluate, differentiate, and integrate; this is known as polynomial interpolation. Linear interpolation has been used since early antiquity for filling in unknown values in tables.

Consider a gardener who is a very curious person and would like to estimate how tall her tomato plant was on the fourth day. A linear pattern basically means that the points create a straight line; but what if the plant does not grow with a convenient linear pattern? Going back to the tomato plant example, the first set of values for day three is given as (3, 4), the second set of values for day five is given as (5, 8), and the value of x is 4, since we want to find the height of the tomato plant, y, on the fourth day. That is where the interpolation formula comes into the picture. To help remember what it means, think of the first part of the word, 'inter,' which means 'enter,' and which reminds us to look 'inside' the data we originally had. So the formula for interpolation is a method of curve fitting using linear polynomials to construct new data points within the range of a discrete set of known data points. The Interpolant fit category fits an interpolating curve or surface that passes through every data point. Interpolation can be defined as the process of finding a value between two points on a line or curve.
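As a concrete illustration of the linear interpolation formula discussed here, the following short Python sketch (illustrative, not from the original text) estimates the tomato plant's height on day four from the known points (3, 4) and (5, 8).

```python
def linear_interpolate(x, x1, y1, x2, y2):
    """Linear interpolation: y = y1 + (y2 - y1) / (x2 - x1) * (x - x1)."""
    return y1 + (y2 - y1) / (x2 - x1) * (x - x1)

# Height on day 3 is 4, on day 5 is 8; estimate the height on day 4.
print(linear_interpolate(4, 3, 4, 5, 8))  # -> 6.0
```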
Of course, the unknown Y values must be in the same units as the Y values you entered for the standard curve. In these coming blogs, I'll try to show some ways to do exactly this, i.e., find a curve that passes through your data. (Image to be added soon)But what if the plant does not grow with a convenient linear pattern? Linear interpolation as approximation. Fitting a standard curve and interpolating. Return to Figure. Solving this equation for y, which is the unknown value at x, gives which is the formula for linear interpolation in the interval . The curve can be 1 Polynomials of degree n 2 Trigonometric 3 Exponential Interpolation and Curve tting Spring 2019 10 / 19 Solutions – Definition, Examples, Properties and Types, Vedantu In the Curve Fitting app, select Interpolant from the model type list.. If you use outside resources or ideas that are not your own to help make your case, be sure that they are properly cited in the citation style of your choice. This method preserves the monotonicity and the shape of thegiven data. Curve fitting is to find a curve that could best indicate the trend of a given set of data. So, it can be understood that the formula for interpolation is a method of curve fitting using the linear polynomials and hence to construct new data points within the given range of a discrete set of known data points(the data points). Pro Lite, Vedantu Linear interpolation can be used since very early antiquity for filling the unknown values in any table. It introduces interpolation and curve fitting. It is for curves only. Learn more. Interpolation is a method of estimating values between known data points. The full text of this article hosted at iucr.org is unavailable due to technical difficulties. Interpolation is a useful mathematical and statistical tool that is used to estimate values between any two given points. Using the Curve Fitting app or the fit function, you can fit cubic spline interpolants, smoothing splines, and thin-plate splines. The instance of this class defines a __call__ method and can … Use interpolation to smooth observed data, fill in missing data, and make predictions. Interpolation is a method of estimating values between known data points. Nothing stops you from choosing the curve that perfectly fits to your data. Interpolation can really be thought of as a special case of curve fitting where the function is forced to pass through every data point. Consider a program involving either the use of interpolation or involving the use of curve fitting that could be used in your intended) field or that could be of use to you as a student. If you place the unknowns above the standard curve, Prism will not interpolate. Rational functions may also be used for interpolation. Curve Fitting • In the previous section we found interpolated points, i.e., we found values between the measured points using the interpolation technique. Gaussian Peak … Linear interpolation has been used since very early time antiquity for filling the unknown values in tables. The Shape-preservation method is also known as Piecewise cubic Hermite interpolation (PCHIP). In geometry, curve fitting is a curve y=f(x) that fits the data (xi, yi) where i=0, 1, 2,…, n–1. Here's an example which will illustrate the concept of interpolation and give you a better understanding of the concept of interpolation. The moral here is that cubic interpolation should really be used only if gaps between x points are roughly the same. Four analyses in Prism let you interpolate values from curves. 
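The distinction drawn here between interpolants that pass through every point and fitted curves that only approximate them can be seen in a few lines of code. The text refers to MATLAB's Curve Fitting Toolbox; as a rough Python analogue (my substitution, not the toolbox itself), SciPy provides cubic-spline and shape-preserving (PCHIP) interpolants, while a low-degree polynomial least-squares fit illustrates approximation. The sample points below are made up for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# A handful of sample points (invented for illustration).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.7])

cubic = CubicSpline(x, y)          # passes through every point, smooth derivative
pchip = PchipInterpolator(x, y)    # passes through every point, preserves monotonicity/shape

coeffs = np.polyfit(x, y, deg=2)   # least-squares fit: need not pass through any point
quad = np.poly1d(coeffs)

for xq in (0.5, 1.5, 2.5):
    print(xq, cubic(xq), pchip(xq), quad(xq))
```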
Extrapolation can be defined as guessing data points from beyond the range of your data set (beyond the data what you have been provided you with). Interpolation can be defined as an estimation of a value within two known values in a given sequence of values. 1. The interpolation formula can be written as -, y - $y_{1}$ = $\frac{y_{2}-y_{1}}{x_{2}-x_{1}}(x-x_{1})$. The concept of Interpolation is used to simplify complicated functions by sampling any given data points and interpolating these data points using a simpler function. Curve Fitting Toolbox™ functions allow you to perform
on structural scene representation~\cite{yi2018neural}, neuro-symbolic concept learner~\cite{mao2019neuro}. Besides, \cite{dong2019neural} proposed neural logic machines for relational reasoning and decision-making tasks. Inspired by this line of methods, we use Deep Neural Networks (DNN) to represent the activity primitives as symbols meanwhile borrow the power of symbolic reasoning to use logic rules to program these primitives. \section{Bottleneck of Canonical Direct Mapping} \label{sec:bottleneck-test} Firstly, we conduct an experiment to demonstrate that the canonical direct mapping method suffers from severe performance bottleneck problems. We choose a canonical action detection method TIN~\cite{interactiveness} as a representation of the direct mapping methods. Then, we train it with a different number of images from HAKE data and evaluate its performance on HICO-DET~\cite{hicodet} test set. For each run, the TIN model is trained for 30 epochs using an SGD optimizer with a learning rate (LR) of 0.001 and a momentum of 0.9. The ratio of positive and negative samples is 1:4. A late fusion strategy is adopted. For HICO-DET, we adopt the Default mAP metric as proposed in HICO-DET~\cite{hicodet}. The result is illustrated in Fig.~\ref{Fig:1}b. It shows that, in the beginning, the increasing amount of data indeed helps boost the performance of the direct mapping method in a nearly linear manner. However, as more and more data get involved, the performance gain gets less significant and rapidly saturates. This indicates that it is hard for canonical direct mapping methods to make the best use of the increasing data without the help of knowledge and reasoning. Even providing more labeled data, the direct mapping may not achieve the success of object detection on MS-COCO~\cite{coco} (CNN-based, 55+ mAP) given the same magnitude ($10^5$) of training images. Besides, activity images are also much more difficult to annotate than object images, considering the complex patterns and ambiguities of human activities. \section{Method} \label{sec:hake-pipeline} \subsection{Overview} As depicted in Fig.~\ref{Fig:3}, HAKE casts activity understanding into two sub-problems: (a) \textbf{knowledge base building}, (b) \textbf{neuro-symbolic reasoning}. \begin{figure*}[!ht] \begin{center} \includegraphics[width=\textwidth]{Fig3.pdf} \end{center} \vspace{-15px} \caption{HAKE Overview. \textbf{a.} We cast activity understanding into: \textbf{a(1)} Knowledge base construction: annotating large-scale activity-primitive labels to afford accurate primitive detection. \textbf{a(2)} Reasoning: Given detected primitives, adopting neuro-symbolic reasoning to program them into semantics. \textbf{b.} Detailed pipeline. \textbf{b(1)} Primitive detection and Activity2Vec. Given an image, we utilize the detectors~\cite{faster,fang2017rmpe} to locate the human/object and human body parts. Then we use a simple CNN model together with Bert~\cite{devlin2018bert} to extract the visual and linguistic features of primitives via primitive detection. \textbf{b(2)} Primitive-based logical reasoning. With the two kinds of representations from Activity2Vec, we operate logical reasoning in neuro-symbolic paradigm following the prior and auto-discovered logic rules. 
Here, $NOT(\cdot)$ and $OR(\cdot,\cdot)$ modules are shared by all events but \textbf{drawn separately} here for clarity.} \label{Fig:3} \vspace{-0.5cm} \end{figure*} \subsubsection{Building Human Activity Knowledge Base} Firstly, HAKE should detect primitives ``unconsciously'' like System 1~\cite{bengio-system}. Thus, we built a knowledge base including abundant activity-primitive labels. To discover and define primitives, we conduct a beforehand user study: given activity images/videos, participants should give decisive primitives for activities from an \textbf{initial primitive dictionary} including 200 human body Part States (PaSta~\cite{li2020pastanet}), 80 common objects~\cite{coco}, and 400 scenes~\cite{zhou2017places}. Each primitive is depicted as a phrase token, \textit{e.g.}, \textit{hand-touch-sth}, \textit{chair}, \textit{classroom}. This dictionary grows as the annotation progresses, and participants can supplement primitives to better explain their decisions. After annotating 122 K+ images and 234 K+ video frames~\cite{ava} covering 156 activities, we found that: 1) most participants believed body parts were the foremost activity semantic carriers. 2) PaSta classes were limited. After exhaustively checking 26.2 M+ manually labeled PaSta, only approximately 100 classes were prominent. 3) Object sometimes makes a small difference, \textit{e.g.}, in \textit{inspect-sth, touch-sth}. 4) Few participants believed that scene always matters, \textit{e.g.}, though rarely, we can \textit{play football} in \textit{living room}. In light of these, our dictionary would contain as many cues as possible and leaves the choice of use to reasoning policy. Though scalable, it already contains enough primitives to compose common activities, so we usually do not need to supplement primitives for new activities. In total, HAKE includes \textbf{357 K+} images/frames, \textbf{673 K+} persons, \textbf{220 K+} object primitives, and \textbf{26.4 M+} \textit{PaSta} primitives. With abundant annotations, a simple CNN-based detector detects primitives well and achieves \textbf{42.2} mAP (Sec.~\ref{sec:primitive_result}). \subsubsection{Constructing Logic Rule Base and Reasoning Engine} Second, to determine activities, a causal protocol consisting of a \textbf{logic rule base} and a \textbf{reasoning engine} is proposed. After detecting primitives, we use a DNN to extract visual and linguistic representations to represent primitives $Pri=\{P_i\}_{i=1}^N$ and activities $Act=\{A_m\}_{m=1}^M$. As interpretable symbolic reasoning can capture causal primitive-activity relations, we leverage it to program primitives following logic rules. A logic rule base is initialized to import common sense: participants are asked to describe the \textbf{causes} (primitives) of \textbf{effects} (activities). Each activity has \textit{initial} multi-rule from different participants to ensure diversity. For example (Fig.~\ref{Fig:3}b), $P_i, P_j, P_k$ represent \textit{head-read-sth}, \textit{hand-hold-sth}, \textit{newspaper} and $A_m$ indicates \textit{read newspaper}, a rule is expressed as $P_i\land P_j\land P_k\rightarrow A_m$ ($\land$: AND, $\rightarrow$: implication). $P_i,P_j,P_k,A_m$ are seen as \textbf{events} that are occurring/True or not/False. When $P_i,P_j,P_k$ are True simultaneously, $A_m$ is True. For simplicity, we turn $\rightarrow, \land$ into $\vee, \lnot$ via $x\rightarrow y \Leftrightarrow\lnot x\ \vee y$. 
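Before the learnable relaxation described next, it may help to see the purely logical form of such a rule. The following sketch (illustrative Python pseudocode, \textit{not} the implementation of this work) evaluates $P_i\land P_j\land P_k\rightarrow A_m$ after the rewriting above, treating $NOT(\cdot)$ and $OR(\cdot,\cdot)$ as fixed fuzzy operations on scores in $[0,1]$; in the actual framework these operations are shared MLP modules trained with logic-law objectives, and the primitive scores come from Activity2Vec.
\begin{verbatim}
# Illustrative sketch (not the actual implementation): the rule
#   P_i AND P_j AND P_k -> A_m
# rewritten with NOT/OR only, since x -> y  <=>  NOT x OR y:
#   NOT P_i OR NOT P_j OR NOT P_k OR A_m.

def NOT(p):
    return 1.0 - p

def OR(p, q):
    return p + q - p * q   # probabilistic sum as a soft OR

def rule_truth(p_i, p_j, p_k, a_m):
    """Truth degree of the rewritten rule; near 1 when the rule is satisfied."""
    return OR(OR(NOT(p_i), NOT(p_j)), OR(NOT(p_k), a_m))

# head-read-sth, hand-hold-sth, newspaper -> read newspaper (placeholder scores)
print(rule_truth(0.9, 0.8, 0.95, a_m=0.9))   # rule satisfied -> near 1
print(rule_truth(0.9, 0.8, 0.95, a_m=0.05))  # primitives fire but activity denied -> lower
\end{verbatim}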
$\lnot$ and $\vee$ are implemented as functions $NOT(\cdot)$, \ $OR(\cdot,\cdot)$ with Multi-Layer Perceptrons (MLPs) that are reusable for all events. We set logic laws (idempotence, complementation, \textit{etc}.) as optimized objectives~\cite{shi2020neural} imposed on all events to attain logical operations via backpropagation. Then, the expression output is fed into a discriminator to estimate the true probability of an event. Given a sample, multi-rule predictions of all activities are generated concurrently. We use voting to combine multi-decision into the final prediction via multi-head attention~\cite{attention}. Besides, to better capture the causal relations, we propose an \textbf{inductive-deductive policy} to make the rule base scalable instead of using rule templates or enumerating possible rules~\cite{rocktaschel2017end}. Although annotating rules is much easier than annotating images/videos, we can discover novel rules with the initial prior rules (Sec.~\ref{sec:rule_up_eval}): 1) Inductively concluding rules from observations. As activity-primitive annotations can be seen as rule instantiations, we randomly select them as rule candidates or heuristically generate candidates according to the human prior rule distribution. 2) Deductively evaluating rule candidates in practical training. Good candidates inducing minor losses are selected via backpropagation since they are more compatible and general with various contexts. They are updated into the rule base and in turn boost the performance. The bidirectional process can robustly discover rules and is less insensitive to the lack of activity annotation shortage than direct activity classification. Finally, we discover \textbf{4,090} rules for 156 activities. From the experiments, we find that HAKE requires only several rules for each activity to perform well. Next, we detail our framework in Sec.~\ref{sec:primitive_det}-\ref{sec:unified_infer}. \subsection{Primitive Detection} \label{sec:primitive_det} Given an image, detected body part boxes, and object boxes $\mathcal{I}, b_p, b_o$ (with object classification), we operate primitive detection following Fig.~\ref{Fig:3}b. We assume the number of body parts is $m$. In detail, we first get the \textbf{part} features $f^{(i)}_p$ and \textbf{object} feature $f_o$ via ROI pooling from a feature extractor $\mathcal{F}$ (Faster R-CNN~\cite{faster} pre-trained on COCO~\cite{coco} for image and 3D convolution backbone~\cite{Kinetics} pre-trained on Kinetics~\cite{Kinetics} for video). For body-only motions~\cite{ava} (\textit{e.g.}, \textit{walk}, \textit{run}) that are not involved with objects, $b_o$ is replaced with the whole image coordinates. That is, we input the whole image feature $f_c$ as $f_o$. Besides, $f_c$ can also represent the \textbf{scene} primitive. Then, we use these features to predict the \textbf{part relevance} indicating the contribution of a body part for recognizing an activity. For example, feet usually have weak correlations with \textit{drink with cup}. And in \textit{eat apple}, only hands and head are essential. For each part, we concatenate the part feature $f^{(i)}_p$ from $b^{(i)}_p$ and object feature $f_o$ from $b_o$ as the inputs. All features will be input to a part relevance predictor $\mathcal{P}_{pa}(\cdot)$, which contains FC layers and Sigmoids, getting $a=\{a_{i}\}^{m}_{i=1}$ ($a_{i} \in [0,1]$) following \begin{eqnarray} \label{eq:pa-recog} a & = & \mathcal{P}_{pa}(\{f^{(i)}_p\}_{i=1}^{m},
the non-integrable phase shift $\Delta \theta$. Because of this phase shift, the wave front of a plane wave is pulled and the plane wave changes directions. Fig.~\ref{magdisk}a shows the case of the magnetic field where the loop is all spatial. Likewise, Fig.~\ref{elecdisk}b shows the electric field is equal to parallel transporting the wave function or matter field around a closed loop on a part spatial and part temporal slice of space-time. As we delve into a review of the three schools, there will be a proliferation of notations for each of the schools and the past papers. To help clarify this, we provide a table in appendix \ref{AppendixVariableDefs} to help define the different variables as we use it to express the basis vectors in each school and the various index types. \subsection{The Kaluza-Klein School} \label{Sec2.1} Kaluza Klein theories unify classical electromagnetism with Einstein's general relativity \cite{Kaluza:1921ar} \cite{Klein:1926ar}. They posit extra spatial dimensions that are compactified within ordinary space-time along a very small radius $R$. All tensor quantities are independent of this fifth coordinate (the cylinder condition). In traditional Kaluza-Klein theory the line element of the five-dimensional space is\footnote{Throughout this paper we use the convention that lower-case Latin letters near the beginning of the alphabet $a,b,...$ will be gauge-theory color indices, Greek letters $\mu, \nu,...$ will be space-time coordinates, upper-case Latin letters $A,B,...$ will be used for Kaluza-Klein metric indices, and lower-case Latin letters towards the middle of the alphabet $i,j,...$ will be used for the variables corresponding to subspaces of space-time and the embedding dimensions, where context will keep them distinct. The Kaluza-Klein index values 0 through 3 are the usual space-time coordinates $t,x,y,z$ and the index value 5 is the fifth dimension coordinate $x^5$, which is used to parameterize the tiny compact dimension. The appendix provides a summary.} \begin{equation} ds^2 = g_{\mu\nu}dx^\mu dx^\nu + (R\, A_\mu \,dx^\mu + dx^5)^2, \end{equation} where we omit the dilaton field for simplicity of presentation. Here $g_{\mu\nu}$ is the familiar four-dimensional metric from general relativity, $A_\mu$ is the four-vector potential, $x^5$ is the fifth dimension's coordinate, and $R$ is the radius of the curled up fifth dimension. In Kaluza-Klein theory charge is explained as motion of a neutral particle along the fifth dimension, where the two directions it can go in $x^5$ explain the two different types of charge. Electric fields are four-dimensional manifestations of the inertial-dragging effect in the fifth dimension \cite{Gron85} \cite{Gron92} \cite{Gron:2005aw}. Furthermore coordinate transformations of the fifth dimension are shown to be $U(1)$ gauge transformations. One pitfall of the classical theory is that there are no measurable new predictions. Another pitfall occurs with quantum mechanics. The wave function around the fifth dimension gives particles a mass-spectrum tower of $m^2= (n/R)^2$, where $n$ is an arbitrary integer. For an $R$ near the Planck scale, particles would be either massless or have Planck-scale masses, which implies that the model must be modified to be used in new physical theories. Modified Kaluza-Klein theories play a large role in string theory. For a further review of Kaluza-Klein theory see references \cite{Salam:1981xd,Schwarz:1992zt} and the references therein. 
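For concreteness, expanding the quadratic term in the line element above (again suppressing the dilaton) shows explicitly how the four-vector potential sits inside the five-dimensional metric: writing $ds^2=\hat{g}_{AB}\,dx^A dx^B$ with $A,B=0,\dots,3,5$ gives
\begin{equation}
\hat{g}_{AB} =
\begin{pmatrix}
g_{\mu\nu} + R^2 A_\mu A_\nu & R\, A_\mu \\
R\, A_\nu & 1
\end{pmatrix},
\end{equation}
so a shift of the fifth coordinate, $x^5\rightarrow x^5 + R\,\lambda(x)$, leaves $ds^2$ invariant provided $A_\mu\rightarrow A_\mu-\partial_\mu\lambda$, which is the statement that coordinate transformations along $x^5$ act as $U(1)$ gauge transformations.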
\subsection{The Grassmannian School} \label{Sec2.2} \label{embedcamps} Grassmannian representations of gauge fields started in 1961 when Narasimhan and Ramanan showed that every $U(n)$ gauge theory could be represented by a section of a Grassmannian $Gr(n,N)$ fiber bundle \cite{Narasimhan1961,Narasimhan63}. A Grassmannian manifold $Gr(n,N)$ is the set of orientations an $n$-plane can take in a larger $N$-dimensional space with a fixed origin. Another way to view $Gr(n,N)$ fiber bundle is as a $n$-plane embedded into a higher-dimensional $N$-Euclidean space that is inserted into each point in space and time. The Grassmannian school is essentially a gauge theory version of the 1956 Nash embedding theorem which proved that every Riemannian manifold could be embedded in a higher-dimensional Euclidean space \cite{Nash56}. In the language of bundles, Narasimhan and Ramanan proved for any $U(n)$ gauge field and $d$ space-time dimensions, the gauge field can be constructed by inserting a $\mathbb{C}^n$ vector bundle into a trivial $\mathbb{C}^N$ vector bundle if $N \geq (d+1)(2d+1)n^3$. Narasimhan's condition guarantees us an embedding for this $N$, but we can sometimes represent the embedding for specific field configurations for smaller $N$ as we will do in section \ref{Sec5}. For an $O(n)$ gauge field $\mathbb{R}^n$ vector bundles are embedded in a trivial $\mathbb{R}^N$ vector-bundle. In the Grassmannian school, wave functions are sections of the $n$-dimensional vector bundle. That is, they are a vector on the $\mathbb{C}^n$ or $\mathbb{R}^n$ vector space. By definition the vector bundles have a fixed origin. All the embedding-school approaches have a set of $n$ orthonormal gauge basis vectors $\vec{e}_a$ that span the gauge fiber internal to each space-time point. The dual basis vectors $\vec{e}^{\;a}$ satisfy $\vec{e}^{\;a} \cdot \vec{e}_b = \delta^a_b$. There are $n$ vectors $\vec{e}_a$ in a real or complex Euclidean $N$-dimensional embedding space. The matter field (wave function) exists as a vector on the gauge fiber spanned by the gauge-fiber basis vectors: \begin{equation} \vec{\phi} = \phi^a \vec{e}_a. \end{equation} The projection operator is the outer product $P^j_k = {e}^j_a {e}^{\;a}_k$. The gauge field is then \begin{equation} \label{A} (A_{\mu})_{\;b} ^{a}= i\vec{e}^{\;a}\cdot \partial_\mu \vec{e}_b. \end{equation} Notice that if $n=N$ then $A_\mu$ is a pure gauge with a vanishing $F_{\mu\nu}$. In all the cases that we study here, $N>n$. Fig.~\ref{Standardpic} shows the $Gr(2,3)$ Grassmannian model visually. The bubbles show the $N=3$ trivial vector space inside each space-time point. The red vectors are the gauge-fiber basis vectors $\vec{e}_a$ which span the displayed disk. The gauge fields depend on all the ways one can orient the $R^2$ space within the trivial $R^3$ space. The wave function or matter field $\vec{\phi}$ is the black vector that lives on the disks. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{Standardpic} \caption{A graphical representation of the Grassmannian school. A set of two basis vectors span the internal vector space attached to every point in space-time. 
How they vary determines the electromagnetic field.} \label{Standardpic} \end{figure} A gauge transformation is a rotation of the basis vectors $\vec{e}_a$ accompanied by the inverse rotation on the matter vector coefficients $\phi^a$ that preserves their inner product and leaves the wave function $\vec{\phi} = \phi^a \vec{e}_a$ and the projection operator $P^j_k = e^j_a {e}^{\;a}_k$ invariant. It is very central to our argument to understand that gauge transformations leave two objects invariant: (1) the plane spanned by the basis vectors $\vec{e}_a$, and (2) the vector formed by the wave function $\vec{\phi}^a$. Global transformations on the embedding space do not affect Eq.(\ref{A}). A long list of notable physicists have employed the Grassmannian-model as a part of their gauge theory research. In the following review, we map the notation used in these previous approaches onto the notation introduced above. Atiyah in 1979 \cite{79Atiyah} defined the linear maps $u_x: \mathbb{R}^n \rightarrow \mathbb{R}^N$, whose image was in the trivial space $\mathbb{R}^N$. Atiyah's $u$'s play the role of the gauge-fiber basis vectors $\vec{e}_a$. The projection operator is written as $P = uu^*$, with $u^*u = 1$, and the gauge potential is $A_\mu = u^* \partial_\mu u$, where $u^*$ is the dual to $u$. Atiyah, Drineld, Hitchin, and Manin (ADHM) used the rectangular matrices of the Grassmannian school as one of the tools in their construction of self-dual instanton solutions in Euclidean Yang-Mills Theory \cite{Atiyah1978185}. Corrigan and followers \cite{Corrigan:1978ce,Alekseevsky:2002pi} used the embedding representation in finding Green's functions for self-dual gauge fields. Dubois-Violette \cite{Dubois-Violette:1979it} created a formulation of gauge theory using only globally defined complex $N \times n$ matrices $V$ (analogous to $e^j_{\,a}$) such that $V^\dagger V=I$ and $VV^\dagger=P$, and $A_\mu =V^\dagger (x) \partial_\mu V(x).$ An independent research line refers to the Grassmannian school as $CP^{N-1}$ models \cite{Eichenherr:1978qa,Gava:1979sp,1980NuPhB.174..397D,Balakrishna:1993ja,Palumbo:1993vu,PhysRevD.66.025022,Marsh:2007qp,2006JPhA...39.9187G,2010JMP....51j3509H}. In the $CP^{N-1}$ models a setup is created with $z^\dagger \cdot z =1$ where $z$, which is sometimes called a zweibein, is a complex $N$-vector. The gauge field $A_\mu = z^\dagger \partial_\mu \cdot z$ is discovered in the equations of motion. Here the complex vector $z$ plays the role of a gauge basis vector $e^j_a$ with complex dimensions $1 \times N$. Felsager, Leinaas, and Gliozzi \cite{Gliozzi:1978xe,Felsager:1979fq} had a similar approach. In a manner very similar to Fig.~\ref{Standardpic} and section \ref{Sec5},
continuous parameter space and then applying reference methodology (J.-M. Bernardo [J. Roy. Statist. Soc. Ser. B 41 (1979), no. 2, 113–147; MR0547240]). The main contribution here is to propose an all purpose objective prior based on the Kullback–Leibler (KL) divergence. More specifically, the prior $\pi(\theta)$ at any parameter value $\theta$ is obtained as follows: (i) compute the minimum KL divergence over $\theta'\neq \theta$ between models indexed by $\theta'$ and $\theta$; (ii) set $\pi(\theta)$ proportional to a sound transform of the minimum obtained in (i). A good property of the proposed approach is that it is not problem specific. This objective prior is derived in five models (including binomial and hypergeometric) and is compared to the priors known in the literature. The discussion suggests possible extension to the continuous parameter setting. A. Lijoi, R. H. Mena and I. Prünster [Biometrika 94 (2007), no. 4, 769–786; MR2416792] recently introduced a Bayesian nonparametric methodology for estimating the species variety featured by an additional unobserved sample of size $m$ given an initial observed sample. This methodology was further investigated by S. Favaro, Lijoi and Prünster [Biometrics 68 (2012), no. 4, 1188–1196; MR3040025; Ann. Appl. Probab. 23 (2013), no. 5, 1721–1754; MR3114915]. Although it led to explicit posterior distributions under the general framework of Gibbs-type priors [A. V. Gnedin and J. W. Pitman (2005), Teor. Predst. Din. Sist. Komb. i Algoritm. Metody. 12, 83–102, 244–245;MR2160320], there are situations of practical interest where $m$ is required to be very large and the computational burden for evaluating these posterior distributions makes impossible their concrete implementation. This paper presents a solution to this problem for a large class of Gibbs-type priors which encompasses the two parameter Poisson-Dirichlet prior and, among others, the normalized generalized Gamma prior. The solution relies on the study of the large $m$ asymptotic behaviour of the posterior distribution of the number of new species in the additional sample. In particular a simple characterization of the limiting posterior distribution is introduced in terms of a scale mixture with respect to a suitable latent random variable; this characterization, combined with the adaptive rejection sampling, leads to derive a large $m$ approximation of any feature of interest from the exact posterior distribution. The results are implemented through a simulation study and the analysis of a dataset in linguistics. A novel prior distribution is proposed for adaptive Bayesian estimation, meaning that the associated posterior distribution contracts to the truth with the exact optimal rate and at the same time is adaptive regardless of the unknown smoothness. The prior is termed \textit{block prior} and is defined on the Fourier coefficients $\{\theta_j\}$ of a curve $f$ by independently assigning 0-mean Gaussian distributions on blocks of coefficients $\{\theta_j\}_{j\in B_k}$ indexed by some $B_k$, with covariance matrix proportional to the identity matrix; the proportional coefficient is itself assigned a prior distribution $g_k$. Under conditions on $g_k$, it is shown that (i) the prior puts sufficient prior mass near the true signal and (ii) automatically concentrates on its effective dimension. The main result of the paper is a rate-optimal posterior contraction theorem obtained in a general framework for a modified version of a block prior. 
Compared to the closely related block spike and slab prior proposed by M. Hoffmann, J. Rousseau and J. Schmidt-Hieber [Ann. Statist. 43 (2015), no. 5, 2259–2295; MR3396985], which holds only for the white noise model, the present result can be applied in a wide range of models. This is illustrated through applications to five mainstream models: density estimation, white noise model, Gaussian sequence model, Gaussian regression and spectral density estimation. The results hold under Sobolev smoothness and their extension to more flexible Besov smoothness is discussed. The paper also provides a discussion on the absence of an extra log term in the posterior contraction rates (thus achieving the exact minimax rate) with a comparison to other priors commonly used in the literature. These include rescaled Gaussian processes [A. W. van der Vaart and H. van Zanten, Electron. J. Stat. 1 (2007), 433–448; MR2357712; Ann. Statist. 37 (2009), no. 5B, 2655–2675; MR2541442] and sieve priors [V. Rivoirard and J. Rousseau, Bayesian Anal. 7 (2012), no. 2, 311–333; MR2934953; J. Arbel, G. Gayraud and J. Rousseau, Scand. J. Stat. 40 (2013), no. 3, 549–570; MR3091697]. ## Collegio Carlo Alberto Posted in General by Julyan Arbel on 12 September 2016 The Collegio in the center. I have spent three years as a postdoc at the Collegio Carlo Alberto. This was a great time during which I have been able to interact with top colleagues and to prepare my applications in optimal conditions. Now that I have left for Inria Grenoble, here is a brief picture presentation of the Collegio. ## Coupling of particle filters: smoothing Posted in Statistics by Pierre Jacob on 20 July 2016 Two trajectories made for each other. Hi again! In this post, I’ll explain the new smoother introduced in our paper Coupling of Particle Filters with Fredrik Lindsten and Thomas B. Schön from Uppsala University. Smoothing refers to the task of estimating a latent process $x_{0:T} = (x_0,\ldots, x_T)$ of length $T$, given noisy measurements of it, $y_{1:T} = (y_1,\ldots, y_T)$; the smoothing distribution refers to $p(dx_{0:T}|y_{1:T})$. The setting is state-space models (what else?!), with a fixed parameter assumed to have been previously estimated. ## Coupling of particle filters: likelihood curves Posted in Statistics by Pierre Jacob on 19 July 2016 Hi! In this post, I’ll write about coupling particle filters, as proposed in our recent paper with Fredrik Lindsten and Thomas B. Schön from Uppsala University, available on arXiv; and also in this paper by colleagues at NUS. The paper is about a methodology with multiple direct consequences. In this first post, I’ll focus on correlated likelihood estimators; in a later post, I’ll describe a new smoothing algorithm. Both are described in detail in the article. We’ve been blessed to have been advertised by xi’an’s og, so glory is just around the corner. ## Back to blogging Posted in General by Pierre Jacob on 9 July 2016 My new desk. My last post dates back to May 2015… thanks to JB and Julyan for keeping the place busy! I’m not (quite) dead and intend to go back to posting stuff every now and then. And by the way, congrats to both for their new jobs! Last July, I also started a new job as an assistant professor in the Department of Statistics at Harvard University, after having spent two years in Oxford. At some point, I might post something on the cultural difference between the European, English and American communities of statisticians.
In the coming weeks, I’ll tell you all about a new paper entitled Coupling of Particle Filters,  co-written with Fredrik Lindsten and Thomas B. Schön from Uppsala University in Sweden. We are excited about this coupling idea because it’s simple and yet brings massive gains in many important aspects of inference for state space models (including both parameter inference and smoothing). I’ll be talking about it at the World Congress in Probability and Statistics in Toronto next week and at JSM in Chicago, early in August. I’ll also try to write about another exciting project, joint work with Christian Robert, Chris Holmes and Lawrence Murray, on modularization, cutting feedback, the infamous cut function of BUGS and all that funny stuff. I’ve talked about it in ISBA 2016, and intend to put the associated tech report on arXiv over the summer. Stay tuned! ## 3D density plot in R with Plotly Posted in General, R by Julyan Arbel on 30 June 2016 In Bayesian nonparametrics, many models address the problem of density regression, including covariate dependent processes. These were settled by the pioneering works by [current ISBA president] MacEachern (1999) who introduced the general class of dependent Dirichlet processes. The literature on dependent processes was developed in numerous models, such as nonparametric regression, time series data, meta-analysis, to cite but a few, and applied to a wealth of fields such as, e.g., epidemiology, bioassay problems, genomics, finance. For references, see for
The np.zeros Python function is used to create a matrix full of zeroes; numpy.zeros() in Python can be used, for example, when you initialize the weights during the first iteration in TensorFlow and other statistical tasks. What is numpy.zeros()? If you multiply a matrix by a matrix full of zeros, the result is straightforward: each entry of the product is the dot product of a row with a column of zeros, so the product is again a zero matrix. S = sparse(m,n) generates an m-by-n all-zero sparse matrix in MATLAB; MATLAB is an abbreviation for "matrix laboratory." Nevertheless, for large matrices, MATLAB programs may execute faster if the zeros function is used to set aside storage for a matrix whose elements are to be generated one at a time, or a row or column at a time. The Matrix class is the work-horse for all dense matrices and vectors within Eigen. A zero matrix has all its elements equal to zero; for example, $O$ is a zero matrix of order $2 \times 3$. A square matrix is a matrix with an equal number of rows and columns. A row matrix (row vector) is a matrix that contains only one row; more generally, vectors are matrices with one column and row-vectors are matrices with one row. A null matrix is a matrix having all elements zero. If most of the elements of a matrix have the value 0, it is called a sparse matrix. Special types of matrices include square matrices, diagonal matrices, upper and lower triangular matrices, identity matrices, and zero matrices. If you add the $m \times n$ zero matrix to another $m \times n$ matrix $A$, you get $A$: in symbols, if $0$ is a zero matrix and $A$ is a matrix of the same size, then $A + 0 = A$ and $0 + A = A$. A zero matrix is said to be an identity element for matrix addition; it also serves as the additive identity of the additive group of $m \times n$ matrices, and is denoted by the symbol $O$ or $0$, followed by subscripts corresponding to the dimension of the matrix as the context sees fit. If $A=\begin{bmatrix}5 & 2\\ 4 & 1\end{bmatrix}$ and $-A=\begin{bmatrix}-5 & -2\\ -4 & -1\end{bmatrix}$, then $A+(-A)=\begin{bmatrix}5+(-5) & 2+(-2)\\ 4+(-4) & 1+(-1)\end{bmatrix}=\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}=0$; the zero matrix is the additive identity for matrix addition. Zero matrices allow for simple solutions to algebraic equations involving matrices, and the zero and identity matrices play a role in matrix addition and multiplication similar to the role of 0 and 1 in the real numbers. Note, however, that matrix multiplication does not allow cancellation: if $AB = AC$, with $A \neq O$, it does not necessarily follow that $B = C$. Let $A=\begin{bmatrix}1 & 3\\ 7 & 8\end{bmatrix}$ and $B=\begin{bmatrix}3 & 2 & 4\\ -1 & 0 & 6\end{bmatrix}$; then $AB=\begin{bmatrix}1\times 3+3\times(-1) & 1\times 2+3\times 0 & 1\times 4+3\times 6\\ 7\times 3+8\times(-1) & 7\times 2+8\times 0 & 7\times 4+8\times 6\end{bmatrix}=\begin{bmatrix}0 & 2 & 22\\ 13 & 14 & 76\end{bmatrix}$. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them; for example, if $A$ is a 3-by-0 matrix and $B$ is a 0-by-3 matrix, then $AB$ is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space $V$ to itself, while $BA$ is a 0-by-0 matrix. The matrices $\begin{bmatrix}0 & 1 & 2\\ 1 & 3 & 4\\ 2 & 4 & 5\end{bmatrix}$ and $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$ are symmetric matrices, and it is easy to verify that a matrix all of whose entries are equal is symmetric: for all $i$ and $j$ in $\{1,2,3\}$, $a_{i,j} = a_{j,i}$, since every element is the same. All entries above the main diagonal of a skew-symmetric matrix are reflected into opposite entries below the diagonal. If $f(A)$ is a null matrix, then $A$ is called the zero or root of the matrix polynomial $f(A)$. In mathematics, more specifically in linear algebra and functional analysis, the kernel of a linear mapping, also known as the null space or nullspace, is the set of vectors in the domain of the mapping which are mapped to the zero vector: \begin{align} \quad \begin{bmatrix} 0\\ 0 \end{bmatrix} = \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} \end{align} A row having at least one non-zero element is called a non-zero row (so each row can have zero or one pivot). To find the rank of a matrix one may, for instance, consider a first-order minor $|-5| \neq 0$. In the block form where $A_1$ is $r \times r$ and $A_2$ is $(n-r) \times (n-r)$, $A_1$ contains the "good" eigenvalues and $A_2$ contains the "bad" eigenvalues. Note that the Weyl pair $(X, Z)$ can be deduced from the master matrix $V_a$, which shows a further interest of the matrix $V_a$. The non-diagonal matrix elements of the operator $2\hat{S}_z$ are. It is sometimes useful to know which linear combinations of parameter estimates have zero covariances. In this case (8.9) becomes the normal equations, whose solution is (8.26). We have previously used the result (8.27), which holds for any linear combination of the $y_i$, say $P^T Y$, with $P$ a constant vector; applying (8.27) to $\hat{\Theta}$ as given by (8.26) gives the variance matrix of the parameters, which is a quantity that appears in the solution (8.26) for the parameters themselves. Similarly, since $M$ is the variance matrix of $\hat{\Theta}$, an unbiased estimate for the variance matrix of $\hat{\Theta}$ follows from (8.32); equation (8.32) looks rather complicated, but $R^T W R$ can be calculated in a straightforward way using the measured and fitted values. As mentioned above, this cannot be done with only knowledge of the relative errors on the observations, but instead requires the absolute values of these quantities. For a random variable with mean $\mu$ and finite variance $\sigma^2$, since the characteristic function of $\sqrt{n}\,S_n$ converges to $e^{-t^2/2}$, the characteristic function of the standard normal, $\sqrt{n}\,S_n = \sqrt{n}(\hat{\mu}-\mu)/\sigma$ is asymptotically normally distributed with zero mean and unit variance. If $\vartheta_i = (\bar{X}_{(i)} - \mu_{(i)})$, then $(\vartheta_1, \ldots, \vartheta_k)$ converges in distribution to a multivariate normal distribution with mean vector zero and covariance matrix $\mathrm{diag}(\sigma_{(1)}^2/m_1, \ldots, \sigma_{(k)}^2/m_k)$, where $\sigma_{(i)}^2 = \int (x - \mu_{(i)})^2 \, dF_{(i)}(x)$ and $\mu_{(i)} = \int x \, dF_{(i)}(x)$. Two-sample tests are commonly used to determine whether the samples come from the same unknown distribution; specifically, we are concerned with the null hypothesis $H_0: \mu_x = \mu_y + \Delta$ versus $H_1: \mu_x \neq \mu_y + \Delta$, and the second sample can be generated using the same procedure. Hence it is necessary to have $m_r \ge 2$. The estimate of the variance for small sample sizes would be very inaccurate, suggesting that a pivotal statistic might be unreliable. A zero matrix serves many of the same functions in matrix arithmetic that 0 does in regular arithmetic.
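To make these zero-matrix facts concrete, here is a small NumPy sketch (an illustration added here, not taken from the quoted sources) that checks the additive-identity property and reproduces the matrix product worked out above:

```python
import numpy as np

# A zero matrix acts as the additive identity: A + 0 = A and A + (-A) = 0.
A = np.array([[5, 2],
              [4, 1]])
Z = np.zeros_like(A)          # np.zeros / np.zeros_like create matrices full of zeros
assert np.array_equal(A + Z, A)
assert np.array_equal(A + (-A), Z)

# The 2x2 times 2x3 product computed above.
A2 = np.array([[1, 3],
               [7, 8]])
B2 = np.array([[3, 2, 4],
               [-1, 0, 6]])
print(A2 @ B2)                # [[ 0  2 22]
                              #  [13 14 76]]

# A matrix whose entries are all equal is symmetric: a_ij = a_ji for all i, j.
C = np.full((3, 3), 7)
assert np.array_equal(C, C.T)
```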
The number of non-zero rows is 2; therefore the rank of $A$ is 2, i.e., $\rho(A) = 2$. A diagonal matrix is a square matrix in which all of the elements off the main diagonal are 0. The diagonal matrix elements of the magnetic moment for the two states |
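Since the rank fragment above does not include the matrix it refers to, the following short sketch uses a made-up $3 \times 3$ matrix with one redundant row to show the same computation with NumPy:

```python
import numpy as np

# Hypothetical matrix: the second row is twice the first, so only two rows are independent.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 5.0]])

print(np.linalg.matrix_rank(A))   # 2, matching "two non-zero rows" -> rank 2
```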
# Economy of Pollona

- Currency: Pollonan Koruna
- Fiscal year: April 1 - 31 March
- GDP: $2,435 bn (7th)
- GDP per capita: $34,008
- GDP by sector: agriculture 3.5%, industry 31%, services 65.5%
- Gini: 38.1
- HDI: .849
- Labour force: 46.1 million
- Unemployment: 6.2%
- Average salary: $2,350 monthly
- Main industries: Financial Services, Consumer Goods, Petrochemicals, Biotech, Metallurgy, Shipbuilding, Food Processing
- Exports: $750.5 Billion
- Export goods: Consumer Goods, Natural Gas, Coal, Refined Metals, Foodstuffs, Rare Earth Minerals, Machinery and Equipment
- Main export partners: Rochehaut 9.6%, Silgadin 7.3%, Borgosesia 6.6%, Berry 5.1%, Jungastia 4.1%
- Imports: $818.5 Billion
- Import goods: Machinery and Equipment, Automobiles, Oil, Aircraft, Plastics, Lumber, Industrial Supplies, Textiles
- Main import partners: Berry 9%, Rochehaut 8.5%, Prekonate 6.3%, Varnia 4.8%, Jungastia 4%
- Public debt: 61.5% of GDP (2015)
- Other reported figures: 0.4%; 3.6%; 13% (2014 est.); $900 Billion; -$68 Billion; $1,498 Billion; -$3 Billion (2015); $480 Billion (2015); $477 Billion (2015); AA; $215 billion

The economy of Pollona is the 7th largest national economy in Maredoratica and the 4th largest in Alisna (in PPP), with a GDP of $2.435 trillion ($2,435 billion). Pollona is one of several post-industrial economies in Maredoratica heavily interconnected in regional finance and trade. Its GDP per capita (GDPPC) of $34,000 is the 4th highest in Alisna and the 8th highest in Maredoratica. Approximately 8% of Maredoratica's total trade, around $1.5 trillion in goods and services, is exchanged within the country's borders. The service sector is the largest portion of the Pollonan economy, accounting for 65.5% of GDP. Financial services, expanding 20% per decade since 1975, play an important role in the Pollonan economy, having access to most Maredoratican markets. Consumer goods manufacturers are an increasing component of national exports due to greater foreign investment. Refining of industrial metals like steel and copper, as well as food processing, remain important domestic heavy industries. Recent discoveries of offshore natural gas have boosted productivity in the refining industries; potential reserves were valued at $150 billion in 2013. Pollona is heavily dependent on foreign trade, with its trade volume equaling 65% of GDP. Pollona's socio-economic policy is a hybrid of radical free-market economics and common workplace security. Pollona has a prosperous open economy, both internally and internationally. It is ranked in the top 5 nations in Maredoratica for economic freedom, regulatory environment, and ease of doing business. Since the 1980s, Pollona has kept a strict policy of free trade, with gradual retrenchment of the state from the private sector. Rather than direct intervention, the government provides social insurance schemes and supports co-determination policies between employers and labor organizations. The Pollonan government actively encourages cooperatives and other forms of industrial democracy within the confines of a market economy. Approximately 75% of all Pollonan businesses belong to the Podnik, a term for small and medium-sized enterprises, family-owned companies, start-ups, and local cooperatives. The Podnik culture is an important factor behind Pollona's large start-up scene. Larger Pollonan companies have made it onto the list of Maredoratica's 500 most profitable firms. Pollona is a member of the Brezier Group, the Group of Twenty, and the Maredoratic League. ## History Following the end of the Wars of Religion in 1601, the Moravian Empire experienced a century of steady economic growth. The massive depopulation of the countryside meant an extension of landownership to new tenants, increased wages for rural labourers, and the adoption of new technologies.
Imperial agriculture was one of the most efficient in Alisna in the 17th and 18th centuries; though accounting for less than 4% of the continent's land area, Moravia produced 10% of Alisna's agricultural output. These large surpluses were key to feeding the country's growing cities, as by 1750 Moravia's population exceeded pre-Reformation levels. Spurred by innovations in shipbuilding, trade mechanics, and capital financing, the Empire became a major trading power in the late 1600s. Moravia maintained this advantage through a system of free ports, allowing commerce to flow freely between Eastern Alisna and Wilassia. The Sazavou Entrepot, for example, hosted over 200,000 metric tonnes of shipping annually between 1650 and 1680, making it one of the continent's busiest ports. The tax revenue from the Moravian coastal cities exceeded the income from all other cities in 1670. Wealthier merchants, spurred by gains in trade, financed the expansion of the merchant marine. In 1682, at the request of city merchants, Emperor Peter II granted a charter for the Brno Stock Exchange, establishing the city's future in finance. In the mid-1700s, restricted by expensive wars, failed colonization attempts, and inefficient imperial monopolies, Moravia was soon eclipsed by countries such as Questers and Styria. The economic havoc wrought by the Fisheries Bubble of 1729 and the loss of the Vladzemi to Morieux in 1747 exacerbated Moravia's problems. Industrialization only really began in Moravia in the early 1830s, brought about by the introduction of the railroad. Previously, critical raw materials in the north and Highlands, like coal, were highly expensive to transport. Waterpower, though available, was not suited to most of Pollona's slower-moving rivers. Now, with the railroad, industrialists in southern and interior cities could cheaply access raw materials. In addition, Moravia's first commercial railway in 1831, linking the outer suburbs of Kralové, proved wildly popular. In less than 25 years more than 8,000 km of railroad track were constructed across the country. Industrial production doubled every 20 years in Pollona between 1830 and 1900, outstripping its neighbor Questers. On the eve of the Great Maredoratic War, 58% of GNP was devoted to industry. The Pollonan Revolution disrupted economic development in Pollona. The new republic improved working conditions, instituted land reform, and simplified commercial law. Pollona introduced social insurance programs including universal healthcare, compulsory education, disability insurance, and retirement pensions. The most successful of these, the government's education policies, channelled more people into professional services and white-collar industries; the degree-holding population increased from 5% in 1910 to 25% in 1940. The government's radical policies on trade unions actually inspired industrial peace and sizable cooperative industries. In 1930, 20% of Pollonan businesses were workers' cooperatives and over 25% of workers were unionized; this grew to 45% by 1960. Governments in the post-Red Spring period were far more interventionist, engaging in massive nationalizations of critical industries like steel, petrochemicals, and shipbuilding. Larger-scale state planning and tariff restrictions were instituted to protect heavy Pollonan industries, which were already declining by the late 1930s and early 1940s. These government controls continued, and indeed escalated, for the next three decades.
The period from 1960 to the 1980s featured long periods of labor unrest, stagflation, debt crises, and industrial decline. At its low point in 1972, Pollona was in the bottom 10 of Alisnan countries for GDPPC and productivity, scoring poorly on ease of doing business, tax complexity, and contract law. Government economic policy gradually shifted to a more neoliberal stance in 1980. Pollona founded the Maredoratic Trade Organization in the mid-1970s, an organization which helped lower international trade barriers across the region. Other domestic policies accelerated in the 1980s: state enterprises were sold, quotas and tariffs were abolished, and tax collection was simplified. The government promoted outside investments in technological firms, biotechnical companies, and emerging startups. It further supported mechanization and automation in its industrial and agricultural sectors to stay globally competitive, and encouraged the revitalization of Pollonan trade. These policies dramatically rebalanced the economy in favor of post-industrial sectors. For example, Pollona's financial services sector exploded: the banking industry doubled its activity every decade between 1980 and 2010. Trade as a % of GDP increased from 33% in 1975 to 61% in 2005. Much wider effects were felt from this "long boom" period. The country was one of the region's fastest growing economies in the 1980s and 1990s, averaging 3-4% growth per year. Unemployment fell from its record high of 12% in 1973 to 5% in 1995; inflation declined from 18% per year in 1977 to 4% in 1992. Growth waned in the early 2000s as Pollona caught up to Maredoratica's other advanced economies. Real growth since has largely mirrored the Brezier Group average, though it slumped in a recession in 2012,
Noether's Theorem - A Law of Physics. More precisely, it is a law of theoretical physics. ## Etymology The name of the theorem is etymologically related to the name of the physicist Emmy Noether. ## Statement For every continuous symmetry that governs a particular physical phenomenon (the corresponding Lagrangian being invariant), there exists a physical quantity which is conserved and which is given by the generator of the corresponding group. ## Synopsis Noether's theorem expresses the equivalence between conservation laws and the invariance of physical laws under certain transformations (usually referred to as "symmetries"). Conservation laws are therefore both a consequence of and a foundation for our understanding of spacetime as we know it today, and they hold at least as far as the accuracy of our experiments can reach. ## Analysis Noether's (first) theorem states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. The theorem was proved by German mathematician Emmy Noether in 1915 and published in 1918.[1] The action of a physical system is the integral over time of a Lagrangian function (which may or may not be an integral over space of a Lagrangian density function), from which the system's behavior can be determined by the principle of least action. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalization of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. For illustration, if a physical system behaves the same regardless of how it is oriented in space, its Lagrangian is rotationally symmetric; from this symmetry, Noether's theorem shows the angular momentum of the system must be conserved. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry – it is the laws of motion that are symmetric. As another example, if a physical experiment has the same outcome regardless of place or time (having the same outcome, say, somewhere in Asia on a Tuesday or in America on a Wednesday), then its Lagrangian is symmetric under continuous translations in space and time; by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively. (These examples are just for illustration; in the first one, Noether's theorem added nothing new – the results were known to follow from Lagrange's equations and from Hamilton's equations.) Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows researchers to determine the conserved quantities from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians to describe a physical system. For illustration, suppose that a new field is discovered that conserves a quantity X. Using Noether's theorem, the types of Lagrangians that conserve X because of a continuous symmetry can be determined, and then their fitness judged by other criteria. There are numerous different versions of Noether's theorem, with varying degrees of generality.
The original version only applied to ordinary differential equations (particles) and not partial differential equations (fields). The original versions also assume that the Lagrangian only depends upon the first derivative, while later versions generalize the theorem to Lagrangians depending on the nth derivative. There is also a quantum version of this theorem, known as the Ward–Takahashi identity. Generalizations of Noether's theorem to superspaces also exist. ## Informal statement of the theorem All fine technical points aside, Noether's theorem can be stated informally If a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.[2] A more sophisticated version of the theorem states that: To every differentiable symmetry generated by local actions, there corresponds a conserved current. The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. The conservation law of a physical quantity is usually expressed as a continuity equation. The formal proof of the theorem uses only the condition of invariance to derive an expression for a current associated with a conserved physical quantity. The conserved quantity is called the Noether charge and the flow carrying that 'charge' is called the Noether current. The Noether current is defined up to a solenoidal vector field. ## Historical context A conservation law states that some quantity X describing a system remains constant throughout its motion; expressed mathematically, the rate of change of X (its derivative with respect to time) is zero: Such quantities are said to be conserved; they are often called constants of motion, although motion per se need not be involved, just evolution in time. For example, if the energy of a system is conserved, its energy is constant at all times, which imposes a constraint on the system's motion and may help to solve for it. Aside from the insight that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the necessary conservation laws. The earliest constants of motion discovered were momentum and energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's third law; interestingly, conservation of momentum still holds even in situations when Newton's third law is incorrect. Modern physics has revealed that the conservation laws of momentum and energy are only approximately true, but their modern refinements – the conservation of four-momentum in special relativity and the zero covariant divergence of the stress-energy tensor in general relativity – are rigorously true within the limits of those theories. The conservation of angular momentum, a generalization to rotating rigid bodies, likewise holds in modern physics. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, was the Laplace-Runge-Lenz vector. In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering conserved quantities. 
A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L, $I = \int L(\mathbf{q}, \dot{\mathbf{q}}, t)\, dt,$ where the dot over q signifies the rate of change of the coordinates q. Hamilton's principle states that the physical path q(t) (the one truly taken by the system) is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{\mathbf{q}}}\right) = \frac{\partial L}{\partial \mathbf{q}}.$ Thus, if one of the coordinates, say $q_k$, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side shows that $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_k}\right) = \frac{dp_k}{dt} = 0,$ where the conserved momentum $p_k = \partial L/\partial \dot{q}_k$ is defined as the quantity in parentheses. The absence of the coordinate $q_k$ from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of $q_k$; the Lagrangian is invariant, and is said to exhibit a kind of symmetry. This is the seed idea from which Noether's theorem was born. Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory
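As a small illustration of the cyclic-coordinate argument above, here is a Python/SymPy sketch (my own illustration, not from the article) that takes a free-particle Lagrangian with no dependence on x and checks that the Euler–Lagrange equation reduces to conservation of the conjugate momentum:

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

# Lagrangian of a free particle: L depends on xdot but not on x (x is a cyclic coordinate).
L = sp.Rational(1, 2) * m * xdot**2

p = sp.diff(L, xdot)                 # conjugate momentum p = dL/d(xdot)
el = sp.diff(p, t) - sp.diff(L, x)   # Euler-Lagrange expression d/dt(dL/dxdot) - dL/dx

print(p)                             # m*Derivative(x(t), t)
print(sp.simplify(el))               # m*Derivative(x(t), (t, 2)); setting it to zero gives dp/dt = 0
```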
U^H_{\triangle X'_{11}}$} where, {\small$\Lambda_{\triangle X'_{11}}$} denotes the eigen value matrix whose eigen values in non-increasing order are given by $\lambda_j\left(\triangle X'_{11}\right)$, $j=1,2,\cdots,M$. Note that $\lambda_M\left(\triangle X'_{11}\right)>0$ as it is assumed that $\triangle X'_{11}$ is full rank. Now, substitution of this eigen decomposition in (\ref{eqn-div14}) gives (\ref{eqn-div15}). The inequality (\ref{eqn-div16}) follows from the fact that $\lambda_M\left(\triangle X'_{11}\right)$ is the minimum eigen value of $\triangle X'_{11}$, and (\ref{eqn-div17}) follows from {\small$V_{11}$} being equal to {\small$\frac{H^{-1}_{12}}{\sqrt{\text{tr}\left(H^{-1}_{12}H^{-H}_{12}\right)}}$} and the fact that the distribution of $V_{11}$ is invariant to multiplication by the unitary matrix $U_{\triangle X'_{11}}$ (because $H_{12}$ is Gaussian distributed). Using the eigen decomposition of $\left(V^T_{11}\right)^H V_{11}=U_{V_{11}} \Lambda_{V_{11}} U_{V_{11}}$ and some straight-forward techniques involved in evaluating diversity as in \cite{TSC}, we obtain (\ref{eqn-div21})$(a)$. Now, note that the eigen values of $V_{11}$ are given by \begin{align*} \lambda_j\left(V_{11}\right)=\frac{\frac{1}{\lambda_{M+1-j}\left(H_{12}\right)}}{\sum_{j=1}^{M}\frac{1}{\lambda_j\left(H_{12}\right)}} \end{align*}where, $\lambda_j\left(H_{12}\right)$ denote the eigen values of $H_{12}H^H_{12}$ in non-increasing order from $j=1$ to $j=M$. Thus, $\lambda_j\left(V_{11}\right)$ can be lower bounded as \begin{align*} &\lambda_j\left(V_{11}\right) \geq \frac{\frac{1}{\lambda_{M+1-j}\left(H_{12}\right)}}{\sum_{j=1}^{M}\frac{1}{\lambda_M\left(H_{12}\right)}} = \frac{\lambda_M\left(H_{12}\right)}{M\lambda_{M+1-j}\left(H_{12}\right)}. \end{align*}For $j=1$, the above lowerbound is equal to $\frac{1}{M}$, and for $j=2,3,\cdots, M$ the above lowerbound is in turn trivially lowerbounded by $0$. Hence, we obtain the inequality in (\ref{eqn-div21})$(b)$, and the approximation in (\ref{eqn-div21})$(c)$ holds good at high values of $P$, where the constant $c=\frac{(2\sigma^2M)^M}{c_1^M\lambda^M_M\left(\triangle X'_{11}\right)}$. \end{IEEEproof} \section{Proof of Lemma \ref{lem1}} \label{appen_lem1} \begin{IEEEproof} We prove that for every difference matrix there exists atmost a finite number of values of $\theta$ for which it is not full-rank. Since there are infinite possible values of $\theta$, there always exists $\theta$ such that all the difference matrices are full-rank. Without loss of generality, we consider the difference matrix $\triangle {X'^{k_1,k_2}_{11}}$ for some $k_1\neq k_2$. Let the entries of the difference matrix be given by (\ref{eqn-diff_mat}) (at the top of the next page). 
\begin{figure*} \begin{align}\label{eqn-diff_mat} \triangle {X'^{k_1,k_2}_{11}}= \begin{bmatrix} \triangle x^{1R}_{11}+j\triangle x^{3I}_{11} & -\triangle x^{2R}_{11}+j\triangle x^{4I}_{11} &e^{j\theta}\left(\triangle x^{5R}_{11}+j\triangle x^{6I}_{11}\right) & e^{j\theta}\left(-\triangle x^{3R}_{11}+j\triangle x^{1I}_{11}\right) \\ \triangle x^{2R}_{11}+j\triangle x^{4I}_{11} & \triangle x^{1R}_{11}-j\triangle x^{3I}_{11} & e^{j\theta}\left(\triangle x^{4R}_{11}+j\triangle x^{2I}_{11}\right) & e^{j\theta}\left(\triangle x^{5R}_{11}-j\triangle x^{6I}_{11}\right) \\ e^{j\theta}\left(\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right) & e^{j\theta}\left(-\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right) & \triangle x^{3R}_{11}+j\triangle x^{1I}_{11} & -\triangle x^{4R}_{11}+j\triangle x^{2I}_{11}\\ \end{bmatrix} \end{align} \hrule \end{figure*}Consider the matrices $A,B \in \mathbb C^{3\times 3}$ comprised of the first three columns and the last three columns of $\triangle {X'^{k_1,k_2}_{11}}$ respectively. Expanding along the last column, the determinant of the matrix $A$ is given by (\ref{eqn-det_A}). \begin{figure*} \small \begin{align} \nonumber |A|=&e^{2j\theta}\left(\triangle x^{5R}_{11}+j\triangle x^{6I}_{11}\right)\left(\left(\triangle x^{2R}_{11}+j\triangle x^{4I}_{11} \right)\left(-\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right) -\left(\triangle x^{1R}_{11}-j\triangle x^{3I}_{11}\right) \left(\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right)\right)\\ \label{eqn-det_A} &-e^{2j\theta}\left(\triangle x^{4R}_{11}+j\triangle x^{2I}_{11}\right)\left(\left(\triangle x^{1R}_{11}+j\triangle x^{3I}_{11} \right)\left(-\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right) -\left(-\triangle x^{2R}_{11}+j\triangle x^{4I}_{11}\right) \left(\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right)\right)\\ \nonumber &+\left(\triangle x^{3R}_{11}+j\triangle x^{1I}_{11}\right)\left(\triangle {x^{1R}_{11}}^2+{\triangle x^{3I}_{11}}^2+{\triangle x^{2R}_{11}}^2+{\triangle x^{4I}_{11}}^2\right)\\ \nonumber |B|=&e^{j\theta}\left(-\triangle x^{2R}_{11}+j\triangle x^{4I}_{11}\right)\left(\left(\triangle x^{4R}_{11}+j\triangle x^{2I}_{11} \right)\left(-\triangle x^{4R}_{11}+j\triangle x^{2I}_{11}\right) -\left(\triangle x^{5R}_{11}-j\triangle x^{6I}_{11}\right) \left(\triangle x^{3R}_{11}+j\triangle x^{1I}_{11}\right)\right)\\ \label{eqn-det_B} &-e^{j\theta}\left(\triangle x^{1R}_{11}-j\triangle x^{3I}_{11}\right)\left(\left(\triangle x^{5R}_{11}+j\triangle x^{6I}_{11} \right)\left(-\triangle x^{4R}_{11}+j\triangle x^{2I}_{11}\right) -\left(-\triangle x^{3R}_{11}+j\triangle x^{1I}_{11}\right) \left(\triangle x^{3R}_{11}+j\triangle x^{1I}_{11}\right)\right)\\ \nonumber &+e^{2j\theta}\left(-\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right)\left(\left(\triangle {x^{5R}_{11}}+j\triangle {x^{6I}_{11}}\right)\left(\triangle {x^{5R}_{11}}-j\triangle {x^{6I}_{11}}\right)-\left(-\triangle {x^{3R}_{11}}+j\triangle {x^{1I}_{11}}\right)\left(\triangle {x^{4R}_{11}}+j\triangle {x^{2I}_{11}}\right)\right) \end{align} \hrule \end{figure*} Expanding along the first column, the determinant of the matrix $B$ is given by (\ref{eqn-det_B}). Since it is assumed that the CPD of the constellation involved is non-zero, $\triangle x^{iR}_{11}$ and $\triangle x^{iI}_{11}$ are either both zero or both non-zero, for some $i$. Now, consider the following cases. \textbf{Case 1:} {\small$\left(\triangle x^{1R}_{11},\triangle x^{3R}_{11}\right)$$=$$(0,0)$} and {\small$\left(\triangle x^{5R}_{11},\triangle x^{6R}_{11}\right)$$=$$(0,0)$}. 
Here, the determinant of the matrix $B$ is given by \begin{align*} |B|=e^{j\theta}\left(-\triangle x^{2R}_{11}+j\triangle x^{4I}_{11}\right)\left(-{\triangle x^{4R}_{11}}^2-{\triangle x^{2I}_{11}}^2\right). \end{align*} Since $k_1\neq k_2$, either $\triangle x^{2R}_{11}$ or $\triangle x^{4I}_{11}$ or both of them are non-zero. Hence, $|B|\neq 0$ and $\triangle {X'^{k_1,k_2}_{11}}$ is of rank $3$. \textbf{Case 2:} {\small$\left(\triangle x^{1R}_{11},\triangle x^{3R}_{11}\right)$$\neq$$(0,0)$} and {\small$\left(\triangle x^{5R}_{11},\triangle x^{6R}_{11}\right)$$=$$(0,0)$}. The determinant of the matrix $A$ is given by {\small \begin{align*} |A|=\left(\triangle x^{3R}_{11}+j\triangle x^{1I}_{11}\right)\left(\triangle {x^{1R}_{11}}^2+{\triangle x^{3I}_{11}}^2+{\triangle x^{2R}_{11}}^2+{\triangle x^{4I}_{11}}^2\right). \end{align*}}Since $\triangle x^{3R}_{11}$ or $\triangle x^{1I}_{11}$ or both are non-zero, $|A| \neq 0$ for this case. Hence, $\triangle {X'^{k_1,k_2}_{11}}$ is of rank $3$. \textbf{Case 3:} {\small$\left(\triangle x^{1R}_{11},\triangle x^{3R}_{11}\right)$$=$$(0,0)$} and {\small$\left(\triangle x^{5R}_{11},\triangle x^{6R}_{11}\right)$$\neq$$(0,0)$}. In this case, the coefficient of $e^{2j\theta}$ in the determinant of the matrix $B$ is given by {\small$\left(-\triangle x^{6R}_{11}+j\triangle x^{5I}_{11}\right)\left({\triangle x^{5R}_{11}}^2+{\triangle x^{6I}_{11}}^2\right) \neq 0$}. Now, $|B|$ is a quadratic polynomial in $e^{j\theta}$ which can have atmost two roots for $e^{j\theta}$ and hence, atmost a finite number of values of $\theta$ for which $|B|=0$. Therefore, there exists infinite values of $\theta$ for which $|B| \neq 0$ in this case. \textbf{Case 4:} {\small$\left(\triangle x^{1R}_{11},\triangle x^{3R}_{11}\right)$$\neq$$(0,0)$} and {\small$\left(\triangle x^{5R}_{11},\triangle x^{6R}_{11}\right)$$\neq$$(0,0)$}. If the first two terms of $|A|$ given in (\ref{eqn-det_A}) do not sum to zero then, $|A|$ is clearly a quadratic polynomial in $e^{j\theta}$. Thus, there exist infinite values of $\theta$ for which $|A|$ is non-zero. If the first two terms of $|A|$ sum to zero then, $|A| \neq 0$ for the same reason as in Case $2$. Hence, $\triangle {X'^{k_1,k_2}_{11}}$ is of rank $3$ in this case also. \end{IEEEproof} \section{Proof of Theorem \ref{thm2}} \label{appen_thm2} \begin{IEEEproof} Consider a modified system where a Gaussian noise matrix is added to $Y'_1$ so that the entries of the effective noise matrix in (\ref{eqn-after_int_cal}) are distributed as i.i.d. ${\cal CN}(0,2)$. Define the matrices $H$ and $G$ by $H=H_{11}V_{11}$ and $G=H_{21}V_{21}$. The entries of the matrices $H$ and $G$ are denoted by $h_{ij}$ and $g_{ij}$ respectively, for $i,j=1,2,3$. Let the symbols $x^{k}_{i1}$ take values from the distribution ${\cal CN}(0,1)$. 
A processed and vectorized version of the first two columns of $Y'_1$ is given by (\ref{eqn-vec_y'_1}) \begin{figure*} \begin{align}\label{eqn-vec_y'_1} \begin{bmatrix} y'_{1_{11}}\\ \overline{y'_{1_{12}}}\\ y'_{1_{21}}\\ \overline{y'_{1_{22}}}\\ y'_{1_{31}}\\ \overline{y'_{1_{32}}} \end{bmatrix}=\underbrace{ \begin{bmatrix} h_{11} & h_{12} & e^{j\theta}h_{13} & g_{11} & g_{12} & e^{j\theta}g_{13}\\ \overline{h_{12}} & -\overline{h_{11}} & -e^{-j\theta}\overline{h_{13}} & \overline{g_{12}} & -\overline{g_{11}} & -e^{-j\theta}\overline{g_{13}}\\ h_{21} & h_{22} & e^{j\theta}h_{23} & g_{21} & g_{22} & e^{j\theta}g_{23}\\ \overline{h_{22}} & -\overline{h_{21}} & -e^{-j\theta}\overline{h_{23}} & \overline{g_{22}} & -\overline{g_{21}} & -e^{-j\theta}\overline{g_{23}}\\ h_{31} & h_{32} & e^{j\theta}h_{33} & g_{31} & g_{32} & e^{j\theta}g_{33}\\ \overline{h_{32}} & -\overline{h_{31}} & -e^{-j\theta}\overline{h_{33}} & \overline{g_{32}} & -\overline{g_{31}} & -e^{-j\theta}\overline{g_{33}} \end{bmatrix}}_{R}\begin{bmatrix} p^1_{11}\\ p^2_{11}\\ p^3_{11}\\ p^1_{21}\\ p^2_{21}\\ p^3_{21} \end{bmatrix} + N''_1 \end{align} \hrule \end{figure*}where, $p^1_{i1}=x^{1R}_{i1}+jx^{3I}_{i1}$, $p^2_{i1}=x^{2R}_{i1}+jx^{4I}_{i1}$, $p^3_{i1}=x^{6R}_{i1}+jx^{5I}_{i1}$, and $y'_{1_{ij}}$ denotes the $i^{\text{th}}$ row, $j^{\text{th}}$ column element of $Y'_1$. Define the sub-matrices of the effective transfer matrix $R$ defined in (\ref{eqn-vec_y'_1}) by {\footnotesize\begin{align*} &A_1= \begin{bmatrix} h_{11} & h_{12} & e^{j\theta}h_{13}\\ \overline{h_{12}} & -\overline{h_{11}} & -e^{-j\theta}\overline{h_{13}}\\ h_{21} & h_{22} & e^{j\theta}h_{23} \end{bmatrix}, B_1=\begin{bmatrix} g_{11} & g_{12} & e^{j\theta}g_{13}\\ \overline{g_{12}} & -\overline{g_{11}} & -e^{-j\theta}\overline{g_{13}}\\ g_{21} & g_{22} & e^{j\theta}g_{23} \end{bmatrix}\\ &C_1= \begin{bmatrix} \overline{h_{22}} & -\overline{h_{21}} & -e^{-j\theta}\overline{h_{23}}\\ h_{31} & h_{32} & e^{j\theta}h_{33}\\ \overline{h_{32}} & -\overline{h_{31}} & -e^{-j\theta}\overline{h_{33}} \end{bmatrix},D_1= \begin{bmatrix} \overline{g_{22}} & -\overline{g_{21}} & -e^{-j\theta}\overline{g_{23}}\\ g_{31} & g_{32} & e^{j\theta}g_{33}\\ \overline{g_{32}} & -\overline{g_{31}} & -e^{-j\theta}\overline{g_{33}} \end{bmatrix}. \end{align*}}If it is shown that the matrix $R$ is almost surely full-rank for any value of $\theta$ then, the symbols $p^k_{i1}$, for $k=1,2,3$, can be decoded symbol-by-symbol, by zero-forcing the rest of the symbols almost surely. If it is proven that the determinant of $R$ is a non-zero polynomial in $h_{ij}$ and $g_{ij}$, $i,j=1,2,3$, for any value of $\theta$ then, the determinant is non-zero almost surely. This is because $h_{ij}$ and $g_{ij}$ are non-zero rational polynomial functions in the entries of the matrices $H_{ij}$, for $i,j=1,2$, which are continuously distributed. We now prove this by showing that, for any value of $\theta$, there exists an assignment of values to $h_{ij}$ and $g_{ij}$ such that the determinant of $R$ is non-zero\footnote{If the determinant of $R$ is a zero polynomial in $h_{ij}$ and $g_{ij}$, $i,j=1,2,3$, for some value of $\theta$ then, for any assignment to $h_{ij}$ and $g_{ij}$ the determinant would be equal to zero for that value of $\theta$.}. Consider the following assignment of values to $h_{ij}$ and $g_{ij}$. \begin{align*} H=\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 1\\ 1 & 0 & 1 \end{bmatrix},~G=\begin{bmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 1 & -e^{2j\theta} & 1 \end{bmatrix}. 
\end{align*}The determinant of the matrix $R$ can be evaluated to be \begin{align*} |R|=|A||D-CA^{-1}B|=-2 \neq 0. \end{align*}Thus, the symbols $p^k_{i1}$, for $k=1,2,3$, can be decoded symbol-by-symbol almost surely, for any value of $\theta$. We still need to decode the symbols $p^k_{i1}$, $k=4,5,6$, given by $p^4_{i1}=x^{5R}_{i1}+jx^{6I}_{i1}$, $p^5_{i1}=x^{3R}_{i1}+jx^{1I}_{i1}$, and $p^6_{i1}=x^{4R}_{i1}+jx^{2I}_{i1}$. Consider a processed and vectorized version of the last two columns of $Y'_1$, given by (\ref{eqn-vec_y'_1_last_sym}). \begin{figure*} \begin{align}\label{eqn-vec_y'_1_last_sym} e^{-j\theta} \begin{bmatrix} y'_{1_{13}}\\ \overline{y'_{1_{14}}}\\ y'_{1_{23}}\\ \overline{y'_{1_{24}}}\\ y'_{1_{33}}\\ \overline{y'_{1_{34}}} \end{bmatrix}=\underbrace{ \begin{bmatrix} h_{11} & e^{-j\theta}h_{13} & h_{12} & g_{11} & e^{-j\theta}g_{13} & g_{12}\\ \overline{h_{12}} &
only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write $\int_a^b (c_1f+c_2g) = c_1\int_a^b f + c_2\int_a^b g$ to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof. # Interpretations Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation. For example, to find the area of the region bounded by the graph of the function $f(x)=\sqrt{x}$ between $x=0$ and $x=1$, one can cross the interval in five steps ($0, \tfrac{1}{5}, \tfrac{2}{5}, \ldots, 1$), then fill a rectangle using the right end height of each piece (thus $\sqrt{\tfrac{1}{5}}, \sqrt{\tfrac{2}{5}}, \ldots, \sqrt{1}$) and sum their areas to get an approximation of :$\textstyle \sqrt{\tfrac{1}{5}}\left(\tfrac{1}{5}-0\right)+\sqrt{\tfrac{2}{5}}\left(\tfrac{2}{5}-\tfrac{1}{5}\right)+\cdots+\sqrt{\tfrac{5}{5}}\left(1-\tfrac{4}{5}\right)\approx 0.7497,$ which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, $\tfrac{2}{3}$). One writes :$\int_{0}^{1} \sqrt{x} \,dx = \tfrac{2}{3},$ which means $\tfrac{2}{3}$ is the result of a weighted sum of function values, $\sqrt{x}$, multiplied by the infinitesimal step widths, denoted by $dx$, on the interval $[0,1]$.
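As a quick numerical check of the approximations just quoted (0.7497 from five right-endpoint rectangles, 0.6203 from twelve left-endpoint rectangles, exact value 2/3), here is a short Python sketch; it is only an added illustration, not part of the original article:

```python
import numpy as np

def riemann_sum(f, a, b, n, endpoint="right"):
    """Approximate the integral of f on [a, b] with n equal subintervals,
    using the right or left endpoint of each piece as the rectangle height."""
    x = np.linspace(a, b, n + 1)
    heights = f(x[1:]) if endpoint == "right" else f(x[:-1])
    return np.sum(heights * (b - a) / n)

f = np.sqrt
print(riemann_sum(f, 0.0, 1.0, 5, "right"))   # ~0.7497, an overestimate
print(riemann_sum(f, 0.0, 1.0, 12, "left"))   # ~0.6203, an underestimate
print(riemann_sum(f, 0.0, 1.0, 100000))       # approaches 2/3 as the mesh shrinks
```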
# Formal definitions There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals. ## Riemann integral The Riemann integral is defined in terms of Riemann sums of functions with respect to ''tagged partitions'' of an interval. A tagged partition of a closed interval $[a,b]$ on the real line is a finite sequence : $a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\!$ This partitions the interval $[a,b]$ into $n$ sub-intervals $[x_{i-1}, x_i]$ indexed by $i$, each of which is "tagged" with a distinguished point $t_i \in [x_{i-1}, x_i]$. A ''Riemann sum'' of a function with respect to such a tagged partition is defined as : $\sum_{i=1}^n f\left(t_i\right) \, \Delta_i ;$ thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the width of sub-interval, $\Delta_i = x_i - x_{i-1}$. The ''mesh'' of such a tagged partition is the width of the largest sub-interval formed by the partition, $\max_{i=1,\ldots,n} \Delta_i$. The ''Riemann integral'' of a function over the interval $[a,b]$ is equal to $S$ if: : For all $\varepsilon > 0$ there exists $\delta > 0$ such that, for any tagged partition of $[a,b]$ with mesh less than $\delta$, : $\left| S - \sum_{i=1}^n f\left(t_i\right) \, \Delta_i \right| < \varepsilon.$ When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral. ## Lebesgue integral It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated. Such an integral is the Lebesgue integral, which exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral in a letter to Paul Montel. As Folland puts it, "To compute the Riemann integral of $f$, one partitions the domain $[a,b]$ into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of $f$". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure
of an interval is its width, $b - a$, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals. Using the "partitioning the range of $f$" philosophy, the integral of a non-negative function $f$ should be the sum over $t$ of the areas between a thin horizontal strip between $y = t$ and $y = t + dt$. This area is just $\mu\{x : f(x) > t\}\,dt$. Let $f^*(t) = \mu\{x : f(x) > t\}$. The Lebesgue integral of $f$ is then defined by : $\int f = \int_0^\infty f^*\left(t\right)\,dt$ where the integral on the right is an ordinary improper Riemann integral ($f^*$ is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral. A general
concentration as described in Fig. 2a. In this setup, the system is initially in a non-equilibrium stationary state (for t < 0), and the signal change at t = 0 drives the system to a different non-equilibrium stationary state. We show the results of the estimation of the entropy production rate and the thermodynamic force in Fig. 2b, c, respectively. Because of the perturbation at t = 0, the non-equilibrium properties change sharply at the beginning. Nonetheless, the model function $d(x, t; \theta^*)$ estimates the thermodynamic force well for the whole time interval (Fig. 2c), and thus the entropy production rate as well (Fig. 2b). In particular, we plot the result of a single trial in Fig. 2b, which means that the statistical error is negligible with only $10^4$ trajectories. We note that the entropy production rate is orders of magnitude higher than that of the breathing parabola model. The results of Figs. 1 and 2 demonstrate the effectiveness of our method in estimating a wide range of entropy production values accurately. In the numerical experiments, we have used Δt = $10^{-4}$ s. We note that sampling resolutions in the range Δt = $10^{-6}$ s to $10^{-3}$ s have been shown to be feasible in realistic biological experiments67. We also note that an order of $10^3$ realizations are typical in DNA pulling experiments68. The thermodynamic force in Fig. 2c has information about the spatial trend of the dynamics as well as the associated dissipation, since it is proportional to the mean local velocity, $F(x, t) \propto j(x, t)/p(x, t)$, when the diffusion constant is homogeneous in space. At the beginning of the dynamics (t = 0), the state of the system tends to expand outward, reflecting the sudden increase of the noise intensity Δa. Then, the stationary current around the distribution gradually emerges as the system relaxes to the new stationary state. Interestingly, the thermodynamic force aligns along the m-axis at t = 0.01, and thus the dynamics of a becomes dissipationless. The dissipation associated with the jumps of a tends to be small for the whole time interval, which might have some biological implications as discussed in Refs. 21,66. So far, we have shown that our inference scheme estimates the entropy production very well in ideal data sets. Next, we demonstrate the practical effectiveness of our algorithm by considering the dependence of the inference scheme on (i) the sampling interval, (ii) the number of trajectories, (iii) measurement noise, and (iv) time-synchronization error. The analysis is carried out in the adaptation model, for times t = 0 and t = 0.009, at which the degrees of non-stationarity are different. The results are summarized in Fig. 3. In most cases, we find that the estimation error defined by $\left|\widehat{\sigma}(t)-\sigma(t)\right|/\sigma(t)$ is higher at t = 0, when the system is highly non-stationary. In Fig. 3a, b, we demonstrate the effect of the sampling interval Δt on the estimation. For both the t values, we find that the estimation error does not significantly depend on the sampling interval Δt in the range $10^{-5}$ to $10^{-3}$, which demonstrates the robustness of our method against Δt. In Fig. 3c, d, we consider the dependence of the estimated entropy production rate on N, the number of trajectories used for the estimation. We find that roughly $10^3$ trajectories are required to get an estimate that is within 0.25 error of the true value for t = 0.009. On the other hand, we need at least $10^4$ trajectories at t = 0 to get an estimate within the same accuracy.
This is because the system is highly non-stationary at t = 0 and thus the benefit of the continuous-time inference decreases. In Fig. 3e, f, the effect of measurement noise is studied. Here, the measurement noise is added to trajectory data as follows: $$\boldsymbol{y}_{j\Delta t} = \boldsymbol{x}_{j\Delta t} + \sqrt{\Lambda}\,\boldsymbol{\eta}^{j},$$ (19) where Λ is the strength of the noise, and η is a Gaussian white noise satisfying $\langle \eta_a^i \eta_b^j \rangle = \delta_{a,b}\delta_{i,j}$. The strength Λ is compared to $\Lambda_0$ = 0.03, which is around the standard deviation of the variable m in the stationary state at t > 0. We find that the estimate becomes lower in value as the strength Λ increases, while a larger time interval for the generalized current can mitigate this effect. This result can be explained by the fact that the measurement noise effectively increases the diffusion matrix, and its effect becomes small as Δt increases since the Langevin noise scales as $\propto \sqrt{\Delta t}$ while the contribution from the measurement noise is independent of Δt. Since the bias in $\widehat{\mathrm{Var}(J_{\boldsymbol{d}})}$ is the major source of the estimation error, we expect that the use of a bias-corrected estimator31,69 will reduce this error. Indeed, we do find that the bias-corrected estimator, star symbols in Fig. 3e, f, significantly reduces the estimation error (see Supplementary Note 1 for the details). Finally, in Fig. 3g, h, the effect of synchronization error is studied. We introduce the synchronization error by starting the sampling of each trajectory at $\tilde{t}$ and regarding the sampled trajectories as the states at t = 0, Δt, 2Δt, . . . (the actual time series is $t=\tilde{t},\tilde{t}+{\Delta}t,...$). Here, $\tilde{t}$ is a stochastic variable defined by $$\tilde{t} = \left\lfloor \frac{\mathrm{uni}(0,\Pi)}{\Delta t'} \right\rfloor \Delta t',$$ (20) where uni(0, Π) returns the value x uniformly randomly from 0 < x < Π, the brackets are the floor function, and $\Delta t' = 10^{-4}$ is used independent of Δt. The strength Π is compared to $\Pi_0$, which approximately satisfies $\sigma(\Pi_0) \approx \sigma(0)/2$. We find that the estimate becomes an averaged value in the time direction, and the time interval dependence is small in this case. In conclusion, we find that our inference scheme is robust to deviations from an ideal dataset for experimentally feasible parameter values and even steep rates of change of the entropy production over short-time intervals.
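For concreteness, here is a minimal Python sketch of the two data corruptions described above, Eqs. (19) and (20); the array shapes and parameter values are placeholders, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(x, Lam):
    # Eq. (19): y_{j*dt} = x_{j*dt} + sqrt(Lambda) * eta^j, with eta i.i.d. standard Gaussian.
    return x + np.sqrt(Lam) * rng.standard_normal(x.shape)

def random_start_offset(Pi, dt_prime=1e-4):
    # Eq. (20): t~ = floor(uni(0, Pi) / dt') * dt', a random synchronization offset.
    return np.floor(rng.uniform(0.0, Pi) / dt_prime) * dt_prime

# Toy usage: N trajectories of length T in d dimensions (placeholder values).
N, T, d = 1000, 100, 2
x = rng.standard_normal((N, T, d))            # stand-in for sampled Langevin trajectories
y = add_measurement_noise(x, Lam=0.03)        # Lambda on the order of Lambda_0 = 0.03 from the text
offsets = np.array([random_start_offset(Pi=0.01) for _ in range(N)])
```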
In addition, we find that the variance-based estimator of the entropy production rate performs significantly better than other estimators when our algorithm is optimized to take full advantage of the continuous-time inference. We expect that this property will be of practical use in estimating entropy production for non-stationary systems. The continuous-time inference is enabled by the representation ability of the neural network and can be implemented without any prior assumptions on the functional form of the thermodynamic force F(x, t). Our work shows that the neural network can effectively learn the field even if it is time-dependent, thus opening up possibilities for future applications to non-stationary systems. Our studies regarding the practical effectiveness of our scheme, when applied to data that might conceivably contain one of several sources of noise, indicate that these tools could also be applied to the study of biological19 or active matter systems70. It will also be interesting to test whether these results can be used to infer new information from existing empirical data from molecular motors such as kinesin71 or F1-ATPase72,73. The thermodynamics of cooling or warming up in classical systems74 and the study of quantum systems being monitored by a sequence of measurements75,76,77,78 are other promising areas to which these results can be applied.

## Data availability

Trajectory data and the estimation results can be accessed online at https://doi.org/10.5281/zenodo.571699579.

## Code availability

Computer codes implementing our algorithm and interactive demo programs are available online at https://github.com/tsuboshun/LearnEntropy.

## References

1. Harada, T. & Sasa, S.-i. Equality connecting energy dissipation with a
measurements from such a large-scale power plant. The scheme is based on a Karhunen-Lo\`{e}ve analysis of the data from the plant. The proposed scheme is subsequently tested on two sets of data: a set of synthetic data and a set of data from a coal-fired power plant. In both cases the scheme detects the beginning of the oscillation within only a few samples. In addition, the oscillation localization has also shown its potential by localizing the oscillations in both data sets.}, BookTitle = {Proceedings of the {A}merican {C}ontrol {C}onference 2007}, Publisher = {American Automatic Control Council ({AACC})}, Pages = {5893--5898}, Month = {11--13 July}, Year = {2007})

2006

@Article(osawm:fbhsfcdp, Author = {Peter Fogh Odgaard and Jakob Stoustrup and Palle Andersen and Mladen Victor Wickerhauser and Henrik Fl\o e Mikkelsen}, Title = {Feature Based Handling of Surface Faults in Compact Disc Players}, URL = {http://www.math.wustl.edu/~victor/papers/osawm.pdf}, DOI = {http://dx.doi.org/10.1016/j.conengprac.2006.01.002}, Abstract = {Compact Disc Players have been on the market for more than two decades and a majority of the control problems involved have been solved. However, there are still problems with playing Compact Discs related to surface faults like scratches and fingerprints. Two servo control loops are used to keep the Optical Pick-up Unit focused on the information track of the Compact Disc. The problem is to design servo controllers which are well suited for handling surface faults that disturb position measurements, yet still react sufficiently against normal disturbances like mechanical shocks. In this paper a novel method called feature based control is presented. The method is based on a fault tolerant control scheme, which uses extracted features of the surface faults to remove those from the detector signals used for control during the occurrence of surface faults. The extracted features are Karhunen--Lo\`{e}ve approximations of the surface faults. The performance of the feature based control scheme is validated by experimental work with Compact Discs having known surface defects.}, Journal = {Control Engineering Practice}, Volume = {14}, Number = {12}, Pages = {1495--1509}, Institution = {Washington University}, Month = {December}, Year = {2006})

@InProceedings(ow:fpccdp, Author = {Peter Fogh Odgaard and Mladen Victor Wickerhauser}, Title = {Fault Predictive Control of Compact Disk Players}, URL = {http://www.math.wustl.edu/~victor/papers/fpccdp.pdf}, Abstract = {Optical disc players such as CD-players have problems playing certain discs with surface faults like scratches and fingerprints. The problem is to be found in the servo controller which positions the optical pick-up, such that the laser beam is focused on the information track. A scheme handling this problem, called feature based control, has been presented in other publications of the first author. This scheme is based on an assumption that the surface faults do not change from encounter to encounter. This assumption is unfortunately not entirely true. This paper proposes an improvement of the feature based control scheme, such that a prediction step is included. The proposed scheme is compared with the feature based control scheme, in the perspective of handling surface faults, by simulations.
These simulations show the improvements given by the proposed algorithm.}, BookTitle = {Proceedings of 6th {IFAC} Symposium on Fault Detection, Supervision and Safety of Technical Processes.}, Publisher = {IFAC}, Month = {30 August to 1 September}, Pages = {1063--1068}, Year = {2006})

@InProceedings(osw:wpbdsfcd, Author = {Peter Fogh Odgaard and Jakob Stoustrup}, Title = {Wavelet Packet based Detection of Surface Faults on Compact Discs}, URL = {http://www.math.wustl.edu/~victor/papers/wpbdsfcd.pdf}, Abstract = {In this paper the detection of faults on the surface of a compact disc is addressed. Surface faults like scratches and fingerprints disturb the on-line measurement of the pick-up position relative to the track. This is critical since the pick-up is focused on and tracked at the information track based on these measurements. A precise detection of the surface fault is a prerequisite to a correct handling of the faults in order to protect the pick-up of the compact disc player from audible track losses. The actual fault handling, which is addressed in other publications, can be carried out by the use of dedicated filters adapted to remove the faults from the measurements. In this paper detection using wavelet packet filters is demonstrated. The filters are designed using the joint best basis method. Detection using these filters shows a distinct improvement compared to detection using ordinary threshold methods.}, BookTitle = {Proceedings of 6th {IFAC} Symposium on Fault Detection, Supervision and Safety of Technical Processes.}, Publisher = {IFAC}, Month = {30 August to 1 September}, Pages = {1165--1170}, Year = {2006})

2005

@Article(elnetal:cmcdcdt, Title = {A Comparison of {M}onte {C}arlo Dose Calculation Denoising Techniques}, Author = {I. El Naqa and I. Kawrakow and M. Fippel and J. V. Siebers and P. E. Lindsay and M. V. Wickerhauser and M. Vicic and K. Zakarian and N. Kauffmann and J. O. Deasy}, URL = {http://stacks.iop.org/0031-9155/50/909}, DOI = {doi:10.1088/0031-9155/50/5/014}, Abstract = {Recent studies have demonstrated that Monte Carlo (MC) denoising techniques can reduce MC radiotherapy dose computation time significantly by preferentially eliminating statistical fluctuations (`noise') through smoothing. In this study, we compare new and previously published approaches to MC denoising, including 3D wavelet threshold denoising with sub-band adaptive curve-fitting (LASG), anisotropic diffusion (AD) and an iterative reduction of noise (IRON) method formulated as an optimization problem. Several challenging phantom and computed-tomography-based MC dose distributions with varying levels of noise formed the test set. Denoising effectiveness was measured in three ways: by improvements in the mean-square-error (MSE) with respect to a reference (low noise) dose distribution; by the maximum difference from the reference distribution and by the `Van Dyk' pass/fail criteria of either adequate agreement with the reference image in low-gradient regions (within 2 percent in our case) or, in high-gradient regions, a distance-to-agreement-within-2-percent of less than 2 mm. Results varied significantly based on the dose test case: greater reductions in MSE were observed for the relatively smoother phantom-based dose distribution (up to a factor of 16 for the LASG algorithm); smaller reductions were seen for the intensity modulated (IMRT) head and neck geometry (factors of 2.4).
Although several algorithms reduced statistical noise for all test geometries, the LASG method had the best MSE reduction for three of the four test geometries, and performed the best for the Van Dyk criteria. However, the wavelet thresholding method performed better for the head and neck IMRT geometry and also decreased the maximum error more effectively than LASG. In almost all cases, the evaluated methods provided acceleration of MC results towards statistically more accurate results.}, Journal = {Physics in Medicine and Biology}, Volume = {50}, Year = {2005}, Pages = {909--922})

@Article(cmwj:fwewb, Author = {Elvir \v{C}au\v{s}evi\'c and Robert E. Morley and M. Victor Wickerhauser and Arnaud E. Jacquin}, Title = {Fast Wavelet Estimation of Weak Biosignals}, URL = {http://www.math.wustl.edu/~victor/papers/cmwj.pdf}, Abstract = {Wavelet based signal processing has become commonplace in the signal processing community over the past decade and wavelet based software tools and integrated circuits are now commercially available. One of the most important applications of wavelets is in removal of noise from signals, called denoising, accomplished by thresholding wavelet coefficients in order to separate signal from noise. Substantial work in this area was summarized by Donoho and colleagues at Stanford University, who developed a variety of algorithms for conventional denoising. However, conventional denoising fails for signals with low signal-to-noise ratio (SNR). Electrical signals acquired from the human body, called biosignals, commonly have below 0 dB SNR. Synchronous linear averaging of a large number of acquired data frames is universally used to increase the SNR of weak biosignals. A novel wavelet-based estimator is presented for fast estimation of such signals. The new estimation algorithm provides a faster rate of convergence to the underlying signal than linear averaging. The algorithm is implemented for processing of auditory brainstem response (ABR) and of auditory middle latency evoked potential response (AMLR) signals. Experimental results with both simulated data and human subjects demonstrate that the novel wavelet estimator achieves superior performance to that of linear averaging.}, Journal = {IEEE Transactions on Biomedical Engineering}, Month = {June}, Volume = {52}, Number = {6}, Pages = {19}, Year = {2005})

2004

@TechReport(gw:ptcnii, Author = {William F. Gossling and Mladen Victor Wickerhauser}, Title = {Prices, the Trade Cycle, and the Nature of Industrial Interdependence}, Abstract = {The continuance of a rise in prices in a Western economy well after a downturn in final demands, termed ``Stagflation,'' has been a puzzle to economists in the 20th century: we hope, from the results set out in this paper, that any stagflation encountered in the 21st century will have been understood and even anticipated (in the proper sense of that word) by appropriate economic policies which include the use of input-output (or interindustry) tables. In conclusion, at the end of this paper, attention is drawn to the computability of projections of industrial price-levels and rates of return: the ``duals'' to the familiar ``primal'' projections (embracing industrial outputs and growth rates) into the future. This leads one to a ``conjoint forward view'' (see Gielnik, 1980, in ``Input, Output, and Marketing,'' London, I.-O. P. C.)
which should be applicable at least to most economies which have enough of the required data.}, URL = {http://www.math.wustl.edu/~victor/papers/gw.pdf}, Software = {http://www.math.wustl.edu/~victor/software/cycles/index.html}, Pages = {13}, Institution = {Washington University}, Year = {2004}) @TechReport(osawm:smfrs, Author = {Peter Fogh Odgaard and Jakob Stoustrup and Palle Andersen and Mladen Victor Wickerhauser and Henrik Fl\o e Mikkelsen}, Title = {A Simulation Model of Focus and Radial Servos in {C}ompact {D}isc players with Disc Surface Defects}, URL = {http://www.math.wustl.edu/~victor/papers/cdsim.pdf}, Abstract = {Compact Disc players have
plate 3', None)] input_df = pandas.DataFrame(input_vals, columns=[plate_col_name, "blue"], index=input_indexes) with self.assertRaises(AssertionError): Sheet._set_control_values_to_plate_value( input_df, plate_col_name, projname_col_name) def test_format_sample_sheet(self): tester2 = SequencingProcess(2) tester2_date = datetime.strftime( tester2.date, Process.get_date_format()) # Note: cannot hard-code the date in the below known-good text # because date string representation is specific to time-zone in # which system running the tests is located! exp2 = ( '# PI,Dude,[email protected]', '# Contact,Demo,Shared', '# Contact emails,[email protected],[email protected]', '[Header]', 'IEMFileVersion\t4', 'Investigator Name\tDude', 'Experiment Name\tTestExperimentShotgun1', 'Date\t' + tester2_date, 'Workflow\tGenerateFASTQ', 'Application\tFASTQ Only', 'Assay\tMetagenomics', 'Description\t', 'Chemistry\tDefault', '', '[Reads]', '151', '151', '', '[Settings]', 'ReverseComplement\t0', '', '[Data]\n' 'Sample_ID\tSample_Name\tSample_Plate\tSample_Well' '\tI7_Index_ID\tindex\tI5_Index_ID\tindex2\tSample_Project' '\tWell_Description', 'sam1\tsam1\texample\tA1\tiTru7_101_01\tACGTTACC\tiTru5_01_A' '\tACCGACAA\texample_proj\t', 'sam2\tsam2\texample\tA2\tiTru7_101_02\tCTGTGTTG\tiTru5_01_B' '\tAGTGGCAA\texample_proj\t', 'blank1\tblank1\texample\tB1\tiTru7_101_03\tTGAGGTGT\t' 'iTru5_01_C\tCACAGACT\texample_proj\t', 'sam3\tsam3\texample\tB2\tiTru7_101_04\tGATCCATG\tiTru5_01_D' '\tCGACACTT\texample_proj\t') data = ( 'Sample_ID\tSample_Name\tSample_Plate\tSample_Well\t' 'I7_Index_ID\tindex\tI5_Index_ID\tindex2\tSample_Project\t' 'Well_Description\n' 'sam1\tsam1\texample\tA1\tiTru7_101_01\tACGTTACC\t' 'iTru5_01_A\tACCGACAA\texample_proj\t\n' 'sam2\tsam2\texample\tA2\tiTru7_101_02\tCTGTGTTG\t' 'iTru5_01_B\tAGTGGCAA\texample_proj\t\n' 'blank1\tblank1\texample\tB1\tiTru7_101_03\tTGAGGTGT\t' 'iTru5_01_C\tCACAGACT\texample_proj\t\n' 'sam3\tsam3\texample\tB2\tiTru7_101_04\tGATCCATG\t' 'iTru5_01_D\tCGACACTT\texample_proj\t' ) exp_sample_sheet = "\n".join(exp2) sp_id = tester2.id assay_t = PoolComposition.get_assay_type_for_sequencing_process(sp_id) params = {'include_lane': tester2.include_lane, 'pools': tester2.pools, 'principal_investigator': tester2.principal_investigator, 'contacts': tester2.contacts, 'experiment': tester2.experiment, 'date': tester2.date, 'fwd_cycles': tester2.fwd_cycles, 'rev_cycles': tester2.rev_cycles, 'run_name': tester2.run_name, 'sequencer': tester2.sequencer, 'assay_type': assay_t, 'sequencing_process_id': sp_id} sheet = SampleSheet.factory(**params) obs_sample_sheet = sheet._format_sample_sheet(data, sep='\t') self.assertEqual(exp_sample_sheet, obs_sample_sheet) def test_generate_sample_sheet_amplicon_single_lane(self): # Amplicon run, single lane tester = SequencingProcess(1) tester_date = datetime.strftime(tester.date, Process.get_date_format()) # Note: cannot hard-code the date in the below known-good text # because date string representation is specific to time-zone in # which system running the tests is located! 
obs = tester.generate_sample_sheet() exp = ('# PI,Dude,[email protected]\n' '# Contact,Admin,Demo,Shared\n' '# Contact emails,[email protected],[email protected],' '[email protected]\n' '[Header]\n' 'IEMFileVersion,4\n' 'Investigator Name,Dude\n' 'Experiment Name,TestExperiment1\n' 'Date,' + tester_date + '\n' 'Workflow,GenerateFASTQ\n' 'Application,FASTQ Only\n' 'Assay,TruSeq HT\n' 'Description,\n' 'Chemistry,Amplicon\n\n' '[Reads]\n' '151\n' '151\n\n' '[Settings]\n' 'ReverseComplement,0\n' 'Adapter,AGATCGGAAGAGCACACGTCTGAACTCCAGTCA\n' 'AdapterRead2,AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT\n\n' '[Data]\n' 'Sample_ID,Sample_Name,Sample_Plate,Sample_Well,I7_Index_ID,' 'index,I5_Index_ID,index2,Sample_Project,Well_Description,,\n' 'Test_sequencing_pool_1,,,,,NNNNNNNNNNNN,,,,3080,,,') self.assertEqual(obs, exp) def test_generate_sample_sheet_amplicon_multiple_lane(self): # Amplicon run, multiple lane user = User('[email protected]') tester = SequencingProcess.create( user, [PoolComposition(1), PoolComposition(2)], 'TestRun2', 'TestExperiment2', Equipment(19), 151, 151, user, contacts=[User('[email protected]')]) tester_date = datetime.strftime(tester.date, Process.get_date_format()) obs = tester.generate_sample_sheet() exp = ('# PI,Dude,[email protected]\n' '# Contact,Shared\n' '# Contact emails,[email protected]\n' '[Header]\n' 'IEMFileVersion,4\n' 'Investigator Name,Dude\n' 'Experiment Name,TestExperiment2\n' 'Date,' + tester_date + '\n' 'Workflow,GenerateFASTQ\n' 'Application,FASTQ Only\n' 'Assay,TruSeq HT\n' 'Description,\n' 'Chemistry,Amplicon\n\n' '[Reads]\n' '151\n' '151\n\n' '[Settings]\n' 'ReverseComplement,0\n' 'Adapter,AGATCGGAAGAGCACACGTCTGAACTCCAGTCA\n' 'AdapterRead2,AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT\n\n' '[Data]\n' 'Lane,Sample_ID,Sample_Name,Sample_Plate,Sample_Well,I7_Index_' 'ID,index,I5_Index_ID,index2,Sample_Project,Well_Description,' ',\n1,Test_Pool_from_Plate_1,,,,,NNNNNNNNNNNN,,,,3079,,,\n' '2,Test_sequencing_pool_1,,,,,NNNNNNNNNNNN,,,,3080,,,') self.assertEqual(obs, exp) def test_generate_sample_sheet_shotgun(self): # Shotgun run tester = SequencingProcess(2) tester_date = datetime.strftime(tester.date, Process.get_date_format()) obs = tester.generate_sample_sheet() exp = SHOTGUN_SAMPLE_SHEET.format(date=tester_date) self.assertEqual(obs, exp) def test_generate_amplicon_prep_information(self): # Sequencing run tester = SequencingProcess(1) obs = tester.generate_prep_information() exp_key = 'Test Run.1' exp = {exp_key: COMBINED_SAMPLES_AMPLICON_PREP_EXAMPLE} self.assertEqual(len(obs), len(exp)) self.assertEqual(obs[exp_key], exp[exp_key]) def test_generate_metagenomics_prep_information(self): tester = SequencingProcess(2) obs = tester.generate_prep_information() exp_key = 'TestShotgunRun1' exp = {exp_key: COMBINED_SAMPLES_METAGENOMICS_PREP_EXAMPLE} # extract encoded TSV from dictionaries obs = obs['TestShotgunRun1'] exp = exp['TestShotgunRun1'] # convert encoded TSVs into lists of rows obs = obs.split('\n') exp = exp.split('\n') # the row order of the expected output is fixed, but the order of the # observed output is random. Sorting both lists in place will allow # the two outputs to be compared for equality. obs.sort() exp.sort() self.assertListEqual(obs, exp) # The ordering of positions in this test case recapitulates that provided by # the wet-lab in known-good examples for plate compression and shotgun library # prep primer assignment, following an interleaved pattern. See the docstring # for get_interleaved_quarters_position_generator for more information. 
INTERLEAVED_POSITIONS = [ GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=0, input_plate_order_index=0, input_row_index=0, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=0, input_plate_order_index=0, input_row_index=1, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=0, input_plate_order_index=0, input_row_index=2, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=0, input_plate_order_index=0, input_row_index=3, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=0, input_plate_order_index=0, input_row_index=4, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=0, input_plate_order_index=0, input_row_index=5, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=0, input_plate_order_index=0, input_row_index=6, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=0, input_plate_order_index=0, input_row_index=7, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=2, input_plate_order_index=0, input_row_index=0, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=2, input_plate_order_index=0, input_row_index=1, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=2, input_plate_order_index=0, input_row_index=2, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=2, input_plate_order_index=0, input_row_index=3, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=2, input_plate_order_index=0, input_row_index=4, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=2, input_plate_order_index=0, input_row_index=5, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=2, input_plate_order_index=0, input_row_index=6, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=2, input_plate_order_index=0, input_row_index=7, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=4, input_plate_order_index=0, input_row_index=0, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=4, input_plate_order_index=0, input_row_index=1, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=4, input_plate_order_index=0, input_row_index=2, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=4, input_plate_order_index=0, input_row_index=3, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=4, input_plate_order_index=0, input_row_index=4, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=4, input_plate_order_index=0, input_row_index=5, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=4, input_plate_order_index=0, 
input_row_index=6, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=4, input_plate_order_index=0, input_row_index=7, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=6, input_plate_order_index=0, input_row_index=0, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=6, input_plate_order_index=0, input_row_index=1, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=6, input_plate_order_index=0, input_row_index=2, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=6, input_plate_order_index=0, input_row_index=3, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=6, input_plate_order_index=0, input_row_index=4, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=6, input_plate_order_index=0, input_row_index=5, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=6, input_plate_order_index=0, input_row_index=6, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=6, input_plate_order_index=0, input_row_index=7, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=8, input_plate_order_index=0, input_row_index=0, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=8, input_plate_order_index=0, input_row_index=1, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=8, input_plate_order_index=0, input_row_index=2, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=8, input_plate_order_index=0, input_row_index=3, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=8, input_plate_order_index=0, input_row_index=4, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=8, input_plate_order_index=0, input_row_index=5, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=8, input_plate_order_index=0, input_row_index=6, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=8, input_plate_order_index=0, input_row_index=7, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=10, input_plate_order_index=0, input_row_index=0, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=10, input_plate_order_index=0, input_row_index=1, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=10, input_plate_order_index=0, input_row_index=2, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=10, input_plate_order_index=0, input_row_index=3, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=10, input_plate_order_index=0, input_row_index=4, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=10, 
input_plate_order_index=0, input_row_index=5, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=10, input_plate_order_index=0, input_row_index=6, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=10, input_plate_order_index=0, input_row_index=7, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=12, input_plate_order_index=0, input_row_index=0, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=12, input_plate_order_index=0, input_row_index=1, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=12, input_plate_order_index=0, input_row_index=2, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=12, input_plate_order_index=0, input_row_index=3, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=12, input_plate_order_index=0, input_row_index=4, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=12, input_plate_order_index=0, input_row_index=5, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=12, input_plate_order_index=0, input_row_index=6, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=12, input_plate_order_index=0, input_row_index=7, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=14, input_plate_order_index=0, input_row_index=0, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=14, input_plate_order_index=0, input_row_index=1, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=14, input_plate_order_index=0, input_row_index=2, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=14, input_plate_order_index=0, input_row_index=3, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=14, input_plate_order_index=0, input_row_index=4, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=14, input_plate_order_index=0, input_row_index=5, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=14, input_plate_order_index=0, input_row_index=6, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=14, input_plate_order_index=0, input_row_index=7, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=16, input_plate_order_index=0, input_row_index=0, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=16, input_plate_order_index=0, input_row_index=1, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=16, input_plate_order_index=0, input_row_index=2, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=16, input_plate_order_index=0, input_row_index=3, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, 
output_col_index=16, input_plate_order_index=0, input_row_index=4, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=16, input_plate_order_index=0, input_row_index=5, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=16, input_plate_order_index=0, input_row_index=6, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=16, input_plate_order_index=0, input_row_index=7, input_col_index=8), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=18, input_plate_order_index=0, input_row_index=0, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=18, input_plate_order_index=0, input_row_index=1, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=18, input_plate_order_index=0, input_row_index=2, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=18, input_plate_order_index=0, input_row_index=3, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=18, input_plate_order_index=0, input_row_index=4, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=18, input_plate_order_index=0, input_row_index=5, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=18, input_plate_order_index=0, input_row_index=6, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=18, input_plate_order_index=0, input_row_index=7, input_col_index=9), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=20, input_plate_order_index=0, input_row_index=0, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=20, input_plate_order_index=0, input_row_index=1, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=20, input_plate_order_index=0, input_row_index=2, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=20, input_plate_order_index=0, input_row_index=3, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=20, input_plate_order_index=0, input_row_index=4, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=20, input_plate_order_index=0, input_row_index=5, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=20, input_plate_order_index=0, input_row_index=6, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=20, input_plate_order_index=0, input_row_index=7, input_col_index=10), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=22, input_plate_order_index=0, input_row_index=0, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=22, input_plate_order_index=0, input_row_index=1, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=22, input_plate_order_index=0, input_row_index=2, input_col_index=11), 
GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=22, input_plate_order_index=0, input_row_index=3, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=22, input_plate_order_index=0, input_row_index=4, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=22, input_plate_order_index=0, input_row_index=5, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=22, input_plate_order_index=0, input_row_index=6, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=22, input_plate_order_index=0, input_row_index=7, input_col_index=11), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=1, input_plate_order_index=1, input_row_index=0, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=1, input_plate_order_index=1, input_row_index=1, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=1, input_plate_order_index=1, input_row_index=2, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=1, input_plate_order_index=1, input_row_index=3, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=1, input_plate_order_index=1, input_row_index=4, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=1, input_plate_order_index=1, input_row_index=5, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=1, input_plate_order_index=1, input_row_index=6, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=1, input_plate_order_index=1, input_row_index=7, input_col_index=0), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=3, input_plate_order_index=1, input_row_index=0, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=3, input_plate_order_index=1, input_row_index=1, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=3, input_plate_order_index=1, input_row_index=2, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=3, input_plate_order_index=1, input_row_index=3, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=3, input_plate_order_index=1, input_row_index=4, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=3, input_plate_order_index=1, input_row_index=5, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=3, input_plate_order_index=1, input_row_index=6, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=3, input_plate_order_index=1, input_row_index=7, input_col_index=1), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=5, input_plate_order_index=1, input_row_index=0, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=5, input_plate_order_index=1, input_row_index=1, 
input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=5, input_plate_order_index=1, input_row_index=2, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=5, input_plate_order_index=1, input_row_index=3, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=5, input_plate_order_index=1, input_row_index=4, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=5, input_plate_order_index=1, input_row_index=5, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=5, input_plate_order_index=1, input_row_index=6, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=5, input_plate_order_index=1, input_row_index=7, input_col_index=2), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=7, input_plate_order_index=1, input_row_index=0, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=7, input_plate_order_index=1, input_row_index=1, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=7, input_plate_order_index=1, input_row_index=2, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=7, input_plate_order_index=1, input_row_index=3, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=7, input_plate_order_index=1, input_row_index=4, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=7, input_plate_order_index=1, input_row_index=5, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=7, input_plate_order_index=1, input_row_index=6, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=7, input_plate_order_index=1, input_row_index=7, input_col_index=3), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=9, input_plate_order_index=1, input_row_index=0, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=9, input_plate_order_index=1, input_row_index=1, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=9, input_plate_order_index=1, input_row_index=2, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=9, input_plate_order_index=1, input_row_index=3, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=9, input_plate_order_index=1, input_row_index=4, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=9, input_plate_order_index=1, input_row_index=5, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=9, input_plate_order_index=1, input_row_index=6, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=9, input_plate_order_index=1, input_row_index=7, input_col_index=4), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=11, input_plate_order_index=1, input_row_index=0, 
input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=11, input_plate_order_index=1, input_row_index=1, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=11, input_plate_order_index=1, input_row_index=2, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=11, input_plate_order_index=1, input_row_index=3, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=11, input_plate_order_index=1, input_row_index=4, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=11, input_plate_order_index=1, input_row_index=5, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=11, input_plate_order_index=1, input_row_index=6, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=11, input_plate_order_index=1, input_row_index=7, input_col_index=5), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=13, input_plate_order_index=1, input_row_index=0, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=13, input_plate_order_index=1, input_row_index=1, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=13, input_plate_order_index=1, input_row_index=2, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=13, input_plate_order_index=1, input_row_index=3, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=13, input_plate_order_index=1, input_row_index=4, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=13, input_plate_order_index=1, input_row_index=5, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=13, input_plate_order_index=1, input_row_index=6, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=13, input_plate_order_index=1, input_row_index=7, input_col_index=6), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=0, output_col_index=15, input_plate_order_index=1, input_row_index=0, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=2, output_col_index=15, input_plate_order_index=1, input_row_index=1, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=4, output_col_index=15, input_plate_order_index=1, input_row_index=2, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=6, output_col_index=15, input_plate_order_index=1, input_row_index=3, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=8, output_col_index=15, input_plate_order_index=1, input_row_index=4, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=10, output_col_index=15, input_plate_order_index=1, input_row_index=5, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=12, output_col_index=15, input_plate_order_index=1, input_row_index=6, input_col_index=7), GDNAPlateCompressionProcess.InterleavedPosition(output_row_index=14, output_col_index=15, input_plate_order_index=1, 
input_row_index=7, input_col_index=7),
both the necessary dynamical systems structure and elements that are chosen randomly. The tensors were constructed as follows. First, vectors $\{ \bar{\textbf{v}}_0,...,\bar{\textbf{v}}_{R - 1}\}$, with $\bar{\textbf{v}}_r \in (0, 1)^{n}$ sampled uniformly, were generated and then made to be orthonormal. Next, exponential functions $\{\exp(2\pi i \alpha_0/10 + \beta_0/10), ...,\exp(2\pi i \alpha_{R-1}/10 + \beta_{R - 1}/10) \}$, with $\alpha_r, \beta_r \in (-1,1)$ sampled uniformly, were generated. Vectors $\bar{\textbf{s}}_r^T = [\exp(2\pi i \alpha_r/10 + \beta_r/10),...,\exp(2\pi i N \alpha_r/10 + \beta_r/10)]$, for a chosen $N \in \mathbb{Z}^+$, were created. Finally, random polynomial functions $\{\bar{\phi}_0,...,\bar{\phi}_{R-1} \}$, with $\bar{\phi}_r(x) = x^{\gamma_r}$ and $\gamma_r \in (0, K)$ sampled uniformly, were generated. Vectors $\bar{\bm{\varphi}}_r^T = [\bar{\phi}_r(0.1),...,\bar{\phi}_r(1)]$ were then created. Letting $\bar{\textbf{s}}'_r$ be the time-shifted version of $\bar{\textbf{s}}_r$, i.e. $\bar{\textbf{s}}^{'T}_r = \exp(2\pi i \alpha_r/10 + \beta_r/10) \bar{\textbf{s}}_r^T$, the tensors \begin{equation*} \begin{split} \pmb{\mathscr{X}} = \sum_{r = 0}^{R-1}\bar{\textbf{v}}_r \otimes \bar{\textbf{s}}_r \otimes \bar{{\bm \varphi}}_r \\ \pmb{\mathscr{Y}} = \sum_{r = 0}^{R-1}\bar{\textbf{v}}_r \otimes \bar{\textbf{s}}'_r \otimes \bar{{\bm \varphi}}_r \end{split} \end{equation*} were constructed (a minimal NumPy sketch of this construction is given below). We note here that this data can be interpreted as coming from a linear dynamical system, where the $\bar{\textbf{v}}_i$ are the eigenvectors of the dynamical map, the $\bar{\textbf{s}}_i$ are the time evolution of each mode, and the $\bar{\phi}_i$ are the dependence of the amplitude of each mode on initial condition. Note that the bars have been used to distinguish these sources from the computed Exact DMD modes (although we expect, and do indeed find, that they should be the same). The TCA modes were computed using Tensorlab 3.0\cite{tensorlab}, a MATLAB package that implements various possible TCA decomposition algorithms. We used the nonlinear least squares based method. The Exact DMD modes were computed using Algorithm 2 and Appendix 1 from Tu et al. 2014\cite{tu14}. The code that generated these examples has been made freely available online \cite{wtr_git}. The computed modes from one instance of the randomly generated data using $R = 2$, $n = 2$, $N = 100$, and $K = 3$ are shown in Fig. \ref{r2_lin_sys}. Here $\textbf{v}_0^T = [-3.1530, 2.0160]$, $\textbf{v}_1^T = [1.1704, 1.8305]$, $\alpha_0 = 0.7418$, $\alpha_1 = -0.2043$, $\beta_0 = 0.322$, $\beta_1 = -0.7883$, $\gamma_0 = 0.3217$, and $\gamma_1 = 1.4534$. For both modes, the TCA and Exact DMD triplets were nearly identical (they have been separated from each other for clarity of display), and lie on top of the true sources. Thus, both methods correctly reveal that the first mode is less sensitive to initial condition than the second (sublinear vs. superlinear), and that the first mode grows in time, whereas the second decays. We generated 100 data tensors (again using $R = 2$, $n = 2$, $N = 100$, and $K = 3$) and computed the mean normed difference between the Exact DMD and the TCA modes, normalizing by the norm of the TCA modes. The median of these 100 values was $2.43\times10^{-12}$. As this suggests, for many of the 100 tensors, the two methods produced very similar results.
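As noted above, the two decompositions typically agree closely on such synthetic data. The following is a minimal NumPy sketch of the tensor construction described above; it is not the code used in the paper (which relies on Tensorlab/MATLAB), the function name `build_tensors` and the random seed are our own, and we adopt the reading that the $k$-th entry of $\bar{\textbf{s}}_r$ is $\lambda_r^k$ with $\lambda_r = \exp(2\pi i\alpha_r/10 + \beta_r/10)$.

```python
# Minimal sketch (assumptions noted above): build rank-R tensors
# X = sum_r v_r ⊗ s_r ⊗ phi_r and Y = sum_r v_r ⊗ s'_r ⊗ phi_r,
# where s'_r is s_r advanced by one time step (multiplied by lambda_r).
import numpy as np

def build_tensors(R=2, n=2, N=100, K=3, seed=0):
    rng = np.random.default_rng(seed)

    # Orthonormal spatial modes v_r: uniform random vectors orthonormalized via QR.
    V, _ = np.linalg.qr(rng.uniform(0.0, 1.0, size=(n, R)))

    # Complex eigenvalues lambda_r = exp(2*pi*i*alpha_r/10 + beta_r/10).
    alpha = rng.uniform(-1.0, 1.0, size=R)
    beta = rng.uniform(-1.0, 1.0, size=R)
    lam = np.exp(2j * np.pi * alpha / 10.0 + beta / 10.0)

    # Time evolution s_r = [lambda_r^1, ..., lambda_r^N] and its one-step shift.
    steps = np.arange(1, N + 1)
    S = lam[None, :] ** steps[:, None]      # shape (N, R)
    S_shift = lam[None, :] * S              # each mode multiplied by lambda_r

    # Initial-condition dependence phi_r(x) = x^gamma_r on x = 0.1, 0.2, ..., 1.0.
    gamma = rng.uniform(0.0, K, size=R)
    x = np.linspace(0.1, 1.0, 10)
    Phi = x[:, None] ** gamma[None, :]      # shape (10, R)

    # Assemble the rank-R tensors as sums of outer products over the modes.
    X = np.einsum('ir,jr,kr->ijk', V, S, Phi)
    Y = np.einsum('ir,jr,kr->ijk', V, S_shift, Phi)
    return X, Y

X, Y = build_tensors()
print(X.shape, Y.shape)   # (2, 100, 10) (2, 100, 10)
```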
However, we did find a few cases where the difference between the two was quite large (for these cases, the difference between the Exact DMD modes and the true values remained very small). The extent of this appeared to be related to how ``long'' the experiment was, as increasing $N$ led to more discrepancies and decreasing $N$ led to more overlap. We do not believe this was (primarily) caused by a change in $R^*$, as the error associated with using only one mode remained far larger (many orders of magnitude) than the error associated with using two. A reason for these results may be that, by increasing $N$ (and thus, the size of the matrix $\textbf{B}$), more local minima emerge that the optimization algorithm could get trapped in. \begin{figure} \centering \includegraphics[width = 0.49\textwidth]{R2_lin_sys_example.png} \caption{\textbf{Numerical example of TCA and DMD modes being equivalent.} (a) The two components of the vector $\textbf{v}_0$ as computed by Exact DMD (light blue) and TCA (red), along with their true values (black). The Exact DMD values are the mean of those computed from the data matrices corresponding to the $10$ different initial conditions. (c) The real part of each mode's time evolution $\textbf{s}'_0$, as computed by the two methods, along with their true value. The Exact DMD values are again the mean over the $10$ different computed values. Points are offset from each other and ``sparsified'' for display purposes (they would otherwise lie on top of each other). (e) The dependence on initial condition $\phi_0(x)$, as computed by the two methods, along with their true values. The points are again separated for purpose of display. (b), (d), (f) are the same as (a), (c), (e), respectively, for the other mode. } \label{r2_lin_sys} \end{figure} \section{Discussion} In this paper, we examined tensor component analysis (TCA -- also known as CANDECOMP/PARAFAC or canonical polyadic decomposition) in relation to Koopman mode decomposition (KMD). This was motivated by the fact that both methods have become popular ways to discover, in an unsupervised manner, the relevant features and/or dynamics of a given dynamical system. Despite their joint aim, the two methods have largely occupied disjoint scientific realms. Therefore, it became our goal to examine the two methods together and see what, if any, connections existed between them in an effort to ``bridge'' the different communities. While previous work has compared principal component analysis (PCA) with KOT methods both directly\cite{mez05, row09, bru16, klu18a, lu20} and indirectly\cite{bra17, red20}, and approaches to do KMD on tensors have been developed \cite{klu18b, gel19}, little work has been done on comparing KMD to TCA \cite{lus19}. We considered dynamic mode decomposition (DMD) \cite{row09, sch10, che12, tu14, wil15}, a popular approach for performing KMD, on a data three-tensor, with one dimension being the elements of the state space, one being time, and one being the initial conditions. We proved, in Lemma 2, that when the data is linearly consistent and the Koopman operator, approximated via Exact DMD \cite{tu14}, has a full set of eigenvectors and rank R, the DMD modes are an R--optimal TCA decomposition of the data. Motivated by this, we then formulated a correspondence between the TCA and DMD modes (Eq. 17). We proved, in Lemma 3, that this correspondence was exact, up to scaling and permutation of labels of the modes, when R is equal to the tensor rank of the data and when a certain inequality (Eq.
\ref{Kruskal uniqueness}), known to be sufficient for TCA uniqueness, is satisfied. When the modes of the two methods are equivalent, there is a strong implication about the dynamics of the data. Namely, the data comes from distinct sources with single exponential growth/decay and/or oscillatory time dynamics. On a simple example, we showed that modes computed via a numerical implementation of TCA\cite{tensorlab} very nearly matched the modes computed via Exact DMD \cite{tu14} (Fig. \ref{r2_lin_sys}). \subsection{Limitations} Our present analysis is limited by the fact that we used Exact DMD to make the connection between TCA and KMD. This requires the approximated Koopman operator to have a full set of eigenvectors and the data to be linearly consistent\cite{tu14}. Because these assumptions are not always met \cite{bag13}, and because there are other DMD and KMD approaches that still hold in such cases, future work will need to look at connecting TCA and KMD in less restricted settings. In addition, DMD is ultimately limited by the fact that it is an approximation to the true KMD. This approximation can break down when the data's dynamics are nonlinear, which has motivated the development of more sophisticated KOT methods \cite{wil15, kam20, tak17, li17, lus18, ott19, yeu19}. Understanding whether and, if so, how such techniques can be connected to TCA is a major open question, whose answer(s) will greatly extend our work. \subsection{TCA vs. DMD} A large, and growing, body of applied research makes it clear that TCA and DMD are powerful tools, especially when the equations underlying
### Signal and image processing

How many samples are needed to reconstruct a sparse signal? Well, there are many, many results around, some of which you probably know (at least if you are following this blog or this one). Today I write about a neat result, which I found quite some time ago, on reconstruction of nonnegative sparse signals from a semi-continuous perspective.

1. From discrete sparse reconstruction/compressed sensing to semi-continuous

The basic sparse reconstruction problem asks the following: Say we have a vector ${x\in{\mathbb R}^m}$ which only has ${s}$ non-zero entries and a fat matrix ${A\in{\mathbb R}^{n\times m}}$ (i.e. ${n<m}$), and consider that we are given measurements ${b=Ax}$. Of course, the system ${Ax=b}$ is underdetermined. However, we may add a little more prior knowledge on the solution and ask: Is it possible to reconstruct ${x}$ from ${b}$ if we know that the vector ${x}$ is sparse? If yes: How? Under what conditions on ${m}$, ${s}$, ${n}$ and ${A}$? This question created the expanding universe of compressed sensing recently (and this universe is expanding so fast that for sure there has to be some dark energy in it). As a matter of fact, a powerful method to obtain sparse solutions to underdetermined systems is ${\ell^1}$-minimization, a.k.a. Basis Pursuit, on which I blogged recently: Solve $\displaystyle \min_x \|x\|_1\ \text{s.t.}\ Ax=b$ and the important ingredient here is the ${\ell^1}$-norm of the vector in the objective function. In this post I’ll formulate semi-continuous sparse reconstruction. We move from an ${m}$-vector ${x}$ to a finite signed measure ${\mu}$ on a closed interval (which we assume to be ${I=[-1,1]}$ for simplicity). We may embed the ${m}$-vectors into the space of finite signed measures by choosing ${m}$ points ${t_i}$, ${i=1,\dots, m}$ from the interval ${I}$ and building ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ with the point-masses (or Dirac measures) ${\delta_{t_i}}$. To be a bit more precise, we speak about the space ${\mathfrak{M}}$ of Radon measures on ${I}$, which are defined on the Borel ${\sigma}$-algebra of ${I}$ and are finite. Radon measures are not very scary objects and an intuitive way to think of them is to use Riesz representation: Every Radon measure arises as a continuous linear functional on a space of continuous functions, namely the space ${C_0(I)}$ which is the closure of the continuous functions with compact support in ${]-1,1[}$ with respect to the supremum norm. Hence, Radon measures act on these functions as ${\int_I f d\mu}$. It is also natural to speak of the support ${\text{supp}(\mu)}$ of a Radon measure ${\mu}$, and it holds for any continuous function ${f}$ that $\displaystyle \int_I f d\mu = \int_{\text{supp}(\mu)}f d\mu.$ An important tool for Radon measures is the Hahn-Jordan decomposition which decomposes ${\mu}$ into a positive part ${\mu^+}$ and a negative part ${\mu^-}$, i.e. ${\mu^+}$ and ${\mu^-}$ are non-negative and ${\mu = \mu^+-\mu^-}$. Finally, the variation of a measure, which is $\displaystyle \|\mu\| = \mu^+(I) + \mu^-(I),$ provides a norm on the space of Radon measures.

Example 1. For the measure ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ one readily calculates that $\displaystyle \mu^+ = \sum_i \max(0,x_i)\delta_{t_i},\quad \mu^- = \sum_i \max(0,-x_i)\delta_{t_i}$ and hence $\displaystyle \|\mu\| = \sum_i |x_i| = \|x\|_1.$ In this sense, the space of Radon measures provides a generalization of ${\ell^1}$.
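Before moving to the semi-continuous setting, here is a minimal sketch of the discrete Basis Pursuit problem above, recast as a linear program by splitting ${x = u - v}$ with ${u, v \geq 0}$, so that ${\|x\|_1 = \sum_i (u_i + v_i)}$ at the optimum. The helper name `basis_pursuit`, the use of `scipy.optimize.linprog`, and the demo sizes are our own choices, not part of the referenced work.

```python
# Minimal sketch: discrete Basis Pursuit  min ||x||_1  s.t.  Ax = b,
# solved as a linear program over (u, v) with x = u - v and u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    n, m = A.shape
    c = np.ones(2 * m)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # equality constraint: A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * m), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

# Small demo (hypothetical sizes): a 5-sparse vector in R^100
# recovered from n = 40 random Gaussian measurements.
rng = np.random.default_rng(1)
m, n, s = 100, 40, 5
x_true = np.zeros(m)
x_true[rng.choice(m, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((n, m)) / np.sqrt(n)
b = A @ x_true
x_hat = basis_pursuit(A, b)
print(np.linalg.norm(x_hat - x_true))        # typically near zero (exact recovery)
```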
We may sample a Radon measure ${\mu}$ with ${n+1}$ linear functionals, and these can be encoded by ${n+1}$ continuous functions ${u_0,\dots,u_n}$ as $\displaystyle b_k = \int_I u_k d\mu.$ This sampling gives a bounded linear operator ${K:\mathfrak{M}\rightarrow {\mathbb R}^{n+1}}$. The generalization of Basis Pursuit is then given by $\displaystyle \min_{\mu\in\mathfrak{M}} \|\mu\|\ \text{s.t.}\ K\mu = b.$ This was introduced and called “Support Pursuit” in the preprint Exact Reconstruction using Support Pursuit by Yohann de Castro and Fabrice Gamboa. More on the motivation and the use of Radon measures for sparsity can be found in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.

2. Exact reconstruction of sparse nonnegative Radon measures

Before I talk about the results, we may count the degrees of freedom a sparse Radon measure has: If ${\mu = \sum_{i=1}^s x_i \delta_{t_i}}$ with some ${s}$, then ${\mu}$ is defined by the ${s}$ weights ${x_i}$ and the ${s}$ positions ${t_i}$. Hence, we expect that at least ${2s}$ linear measurements should be necessary to reconstruct ${\mu}$. Surprisingly, this is almost enough if we know that the measure is nonnegative! We only need one more measurement, that is, ${2s+1}$, and moreover, we can take fairly simple measurements, namely the monomials: ${u_i(t) = t^i}$, ${i=0,\dots, n}$ (with the convention that ${u_0(t)\equiv 1}$). This is shown in the following theorem by de Castro and Gamboa.

Theorem 1. Let ${\mu = \sum_{i=1}^s x_i\delta_{t_i}}$ with ${x_i\geq 0}$, ${n=2s}$, and let ${u_i}$, ${i=0,\dots, n}$ be the monomials as above. Define ${b_i = \int_I u_i(t)d\mu}$. Then ${\mu}$ is the unique solution of the support pursuit problem, that is, of $\displaystyle \min \|\nu\|\ \text{s.t.}\ K\nu = b.\qquad \textup{(SP)}$

Proof: The following polynomial will be of importance: For a constant ${c>0}$ define $\displaystyle P(t) = 1 - c \prod_{i=1}^s (t-t_i)^2.$ The following properties of ${P}$ will be used:
1. ${P(t_i) = 1}$ for ${i=1,\dots,s}$.
2. ${P}$ has degree ${n=2s}$ and hence is a linear combination of the ${u_i}$, ${i=0,\dots,n}$, i.e. ${P = \sum_{k=0}^n a_k u_k}$.
3. For ${c}$ small enough it holds for ${t\neq t_i}$ that ${|P(t)|<1}$.
Now let ${\sigma}$ be a solution of (SP). We have to show that ${\|\mu\|\leq \|\sigma\|}$. Due to property 2 we know that $\displaystyle \int_I u_k d\sigma = (K\sigma)_k = b_k = \int_I u_k d\mu.$ Due to property 1 and non-negativity of ${\mu}$ we conclude that $\displaystyle \begin{array}{rcl} \|\mu\| & = & \sum_{i=1}^s x_i = \int_I P d\mu\\ & = & \int_I \sum_{k=0}^n a_k u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\sigma\\ & = & \int_I P d\sigma. \end{array}$ Moreover, by Lebesgue’s decomposition we can decompose ${\sigma}$ with respect to ${\mu}$ such that $\displaystyle \sigma = \underbrace{\sum_{i=1}^s y_i\delta_{t_i}}_{=\sigma_1} + \sigma_2$ and ${\sigma_2}$ is singular with respect to ${\mu}$. We get $\displaystyle \begin{array}{rcl} \int_I P d\sigma = \sum_{i=1}^s y_i + \int P d\sigma_2 \leq \|\sigma_1\| + \|\sigma_2\|=\|\sigma\| \end{array}$ and we conclude that ${\|\sigma\| = \|\mu\|}$ and especially ${\int_I P d\sigma_2 = \|\sigma_2\|}$. This shows that ${\mu}$ is a solution to ${(SP)}$. It remains to show uniqueness. We show the following: If there is a ${\nu\in\mathfrak{M}}$ with support in ${I\setminus\{t_1,\dots,t_s\}}$ such that ${\int_I Pd\nu = \|\nu\|}$, then ${\nu=0}$.
To see this, we build, for any ${r>0}$, the sets

$\displaystyle \Omega_r = [-1,1]\setminus \bigcup_{i=1}^s ]t_i-r,t_i+r[$

and assume that there exists ${r>0}$ such that ${\|\nu|_{\Omega_r}\|\neq 0}$ (${\nu|_{\Omega_r}}$ denoting the restriction of ${\nu}$ to ${\Omega_r}$). However, it holds by property 3 of ${P}$ that

$\displaystyle \int_{\Omega_r} P d\nu < \|\nu|_{\Omega_r}\|$

and consequently

$\displaystyle \begin{array}{rcl} \|\nu\| &=& \int Pd\nu = \int_{\Omega_r} Pd\nu + \int_{\Omega_r^C} P d\nu\\ &<& \|\nu|_{\Omega_r}\| + \|\nu|_{\Omega_r^C}\| = \|\nu\| \end{array}$

which is a contradiction. Hence, ${\nu|_{\Omega_r}=0}$ for all ${r}$ and this implies ${\nu=0}$.

Since ${\sigma_2}$ has its support in ${I\setminus\{t_1,\dots,t_s\}}$ we conclude that ${\sigma_2=0}$. Hence, the support of ${\sigma}$ is contained in ${\{t_1,\dots,t_s\}}$, i.e. ${\sigma = \sum_{i=1}^s y_i\delta_{t_i}}$. Since ${K\sigma = b = K\mu}$, we have ${K(\sigma-\mu) = 0}$. This can be written as a Vandermonde system

$\displaystyle \begin{pmatrix} u_0(t_1)& \dots &u_0(t_s)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_s) \end{pmatrix} \begin{pmatrix} y_1 - x_1\\ \vdots\\ y_s - x_s \end{pmatrix} = 0$

which, since the ${t_i}$ are distinct and ${n\geq s}$, only has the zero solution, giving ${y_i=x_i}$. $\Box$

3. Generalization to other measurements

The measurement by monomials may sound a bit unusual. However, de Castro and Gamboa show more. What really matters here is that the monomials form a so-called Chebyshev system (or Tchebyscheff-system or T-system – by the way, have you ever tried to google for a T-system?). This is explained, for example, in the book “Tchebycheff Systems: With Applications in Analysis and Statistics” by Karlin and Studden. A T-system on ${I}$ is simply a set of ${n+1}$ functions ${\{u_0,\dots, u_n\}}$ such that any linear combination of these functions has at most ${n}$ zeros. These systems are named after Tchebyscheff since they obey many of the helpful properties of the Tchebyscheff polynomials. What is helpful in
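Here is the promised numerical illustration of the support pursuit problem (SP) (the grid, spike positions and weights are made-up illustration values): discretizing ${I=[-1,1]}$ turns (SP) with monomial measurements into a small linear program, which should recover the spikes exactly when they lie on the grid, in line with Theorem 1.

```python
import numpy as np
from scipy.optimize import linprog

# Discretize I = [-1, 1]; the true nonnegative measure has s spikes on the grid.
grid = np.linspace(-1, 1, 201)
t_true = np.array([-0.5, 0.1, 0.7])      # spike positions (illustration values)
x_true = np.array([1.0, 0.5, 2.0])       # nonnegative weights (illustration values)
s = len(t_true)

# 2s + 1 monomial measurements b_k = sum_i x_i * t_i**k, k = 0, ..., 2s.
n = 2 * s
K = np.vander(grid, N=n + 1, increasing=True).T          # (n+1) x len(grid) moment matrix
b = np.vander(t_true, N=n + 1, increasing=True).T @ x_true

# Support pursuit on the grid: minimize the total mass subject to matching all moments.
res = linprog(np.ones(grid.size), A_eq=K, b_eq=b,
              bounds=[(0, None)] * grid.size, method="highs")
print("recovered spikes:", [(t, w) for t, w in zip(grid, res.x) if w > 1e-6])
```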
of groupoids \[ \sdot^{\mathrm{Wald}}(\cA)\stackrel{\simeq}{\longrightarrow}\sdot^e(\cA). \] \end{prop} \begin{proof} We build a canonical map $\sdot^{\mathrm{Wald}}\cA\to\sdot^e\cA$ that is a levelwise equivalence. If $A(\cA)$ denotes the groupoid of zero objects of $\cA$ and $0$ is a fixed zero object, there is an isomorphism of groupoids \begin{equation} \label{waldsn} \sdotwn{n}(\cA)\cong\{0\}^{n+1}\ttimes{A(\cA)^{n+1}}\sdoten{n}(\cA). \end{equation} For each $0\leq \inda \leq n$ there is a map $\sdoten{n}(\cA)\to A(\cA)$, given by evaluating a diagram of shape $\operatorname{Ar}[n]$ at the object $(\inda,\inda)$, which is an isofibration. It follows that the pullback \eqref{waldsn} is a 2-pullback, and the map \[ \sdotwn{n}(\cA)\cong\{0\}^{n+1}\ttimes{A(\cA)^{n+1}}\sdoten{n}(\cA)\longrightarrow A(\cA)^{n+1}\ttimes{A(\cA)^{n+1}}\sdoten{n}(\cA)\cong \sdoten{n}(\cA) \] induced by the categorical equivalence $\{0\}\hookrightarrow A(\cA)$ is an equivalence of groupoids, as desired. \end{proof} The following theorem shows that the generalized $S_{\bullet}$-construction recovers the previous constructions for exact categories. \begin{thm} \label{comparisonsdotexact} Let $\cA$ be an exact category. There is an isomorphism of groupoids $$\sdot^e(\cA)\cong S_{\bullet}(N^e\cA).$$ \end{thm} \begin{rmk} \label{arnvswn} The difference between $\Ar[n]$ and $\wW{n}$ (depicted in \eqref{PictureW4}), which have essentially the same shape, comes to the fore in the course of proving this theorem, so it is important to distinguish the different contexts in which they live. An important heuristic difference between $\operatorname{Ar}[n]$ and $\wW{n}$ is the following. In the former, we consider the diagram as a category, with all arrows being morphisms in a common category, and the depiction merely helps to organize the data. In the latter, there are two distinct kinds of morphisms, horizontal and vertical, for which the depiction as such is an essential feature; for example, a horizontal and a vertical morphism cannot be composed with one another. In particular, the nerve $N\operatorname{Ar}[n]$ has the structure of a (discrete) simplicial space, whereas $\wW{n}$, as given in \cref{nerveofWn}, is a (discrete) preaugmented bisimplicial space. The difference becomes key in our comparison here, since $\sdoten{n}(\cA)$ is given by diagrams indexed by the category $\operatorname{Ar}[n]$, whereas $\wW{n}$ is more closely related to the $S_{\bullet}$-construction of an augmented stable double Segal object. The next lemma identifies the correct framework in which to compare the two indexing shapes $\operatorname{Ar}[n]$ and~$\wW{n}$. \end{rmk} The following technical lemma is the main tool in the argument of the proof of \cref{comparisonsdotexact}, and is formulated in the setting of $\mathit{s}\set$ endowed with the Joyal model structure, recalled in \cref{joyalmodel}. We denote by $\tau_1\colon\mathit{s}\set\to\cC\!\mathit{at}$ the left adjoint to the nerve functor, often called the \emph{fundamental category functor} \cite[\S 1]{joyalquasicategories}. We further explain the intuition behind the functor $T$ in \cref{TNC}. \begin{lem} \label{midanodyne} Consider the functor $T\colon\mathit{s}\set\to \sasset$ defined by \[ (TX)_{\inda,r}:= \Map_{\mathit{s}\set}(\Delta[\inda]\times \Delta[r], X)\text{ and } (TX)_{-1}:= \Map_{\mathit{s}\set}(\Delta[0], X). \] \begin{enumerate} \item[(a)] The functor $T$ is part of a simplicial adjoint pair \[ L\colon \sasset \leftrightarrows \mathit{s}\set\colon T.
\] \item[(b)] There is a natural acyclic cofibration \[ L\wW{n}\longrightarrow N\operatorname{Ar}[n]. \] \item[(c)] The acyclic cofibration from (b) induces an isomorphism of categories \[ \tau_1(L\wW{n})\longrightarrow\tau_1(N\operatorname{Ar}[n])\cong \operatorname{Ar}[n]. \] \item[(d)] If $f \colon X\longrightarrow Y$ is a map of simplicial sets inducing an isomorphism of categories $\tau_1(f)\colon \tau_1(X)\to\tau_1(Y)$, then for any category $\mathcal{D}$ the map $f$ induces an isomorphism of mapping spaces \[ \Map_{\mathit{s}\set}(Y,N\mathcal{D})\to \Map_{\mathit{s}\set}(X,N\mathcal{D}). \] \end{enumerate} \end{lem} We postpone the proof to the appendix, where we also recall more of the main ingredients involved, including background on quasi-categories. \begin{rmk} \label{TNC} Applying the functor $T$ to the ordinary nerve $N\mathcal{D}$ of a category $\mathcal{D}$, one can verify that $TN\mathcal{D}$ is the preaugmented bisimplicial simplicial set described as follows. \begin{enumerate} \item The augmentation is simply the nerve of $\mathcal{D}$: \[ (TN\mathcal{D})_{-1}=N\mathcal{D}. \] \item \label{GridsBisimplicesTNC} For each $(\inda,r)$, we recover the nerve of the category of $(\inda\timesr)$-grids in $\mathcal{D}$: \[ (TN\mathcal{D})_{\inda, r}=N\Fun([\inda]\times[r], \mathcal{D}). \] \end{enumerate} The bisimplicial structure is induced by the bi-cosimplicial structure on the collection of categories $[\inda]\times[r]$ for $\inda, r \geq 0$. The additional map $(TN\mathcal{D})_{-1}\to (TN\mathcal{D})_{0,0}$ is the identity map. \end{rmk} We now establish our desired comparison between $\sdot^e(\cA)$ and~$S_{\bullet}(N^e\cA)$. \begin{proof}[Proof of \cref{comparisonsdotexact}] The isomorphism of categories \[ \tau_1(L\wW{n})\xrightarrow{\cong}\tau_1(N\operatorname{Ar}[n])\cong \operatorname{Ar}[n] \] from \cref{midanodyne}(c) induces, via \cref{midanodyne}(d) and \cref{midanodyne}(a), isomorphisms of simplicial sets \begin{gather}\label{largeridentification} \begin{aligned} N\Fun(\operatorname{Ar}[n],\cA)&\cong \Map_{\mathit{s}\set}(N\operatorname{Ar}[n], N\cA)\\ &\cong \Map_{\mathit{s}\set}(L\wW{n}, N\cA)\\ &\cong \Map_{\sasset}(\wW{n}, TN\cA). \end{aligned} \end{gather} In particular, because $N\Fun(\operatorname{Ar}[n],\cA)$ is a nerve of a category and hence a quasi-category, we can conclude that the simplicial set $\Map_{\sasset}(\wW{n}, TN\cA)$ is a quasi-category. We observe that for every $n$ the nerve of the groupoid $\sdoten{n}(\cA)$ is the maximal Kan complex contained in $N\Fun(\operatorname{Ar}[n],\cA)$ spanned by the objects of $\sdoten{n}(\cA)$. On the other hand, taking advantage of the explicit description of $TN\cA$ from \cref{TNC}, one can identify $\sdotn{n}(N^e(\cA))$ with its nerve, which is the maximal Kan complex contained in $\Map_{\sasset}(\wW{n}, TN\cA)$ spanned by the objects of $\sdotn{n}(N^e\cA)$. By comparing the object sets of $\sdoten{n}(\cA)$ and $\sdotn{n}(N^e\cA)$, and implicitly applying the nerve functor to these groupoids, we see that the isomorphism from (\ref{largeridentification}), restricts along the inclusions \[ \sdoten{n}({\cA})\subset N\Fun(\operatorname{Ar}[n],\cA)\text{ and }\sdotn n(N^e(\cA))\subset\Map_{\sasset}(\wW{n}, TN\cA) \] to an isomorphism of simplicial sets \[ \begin{tikzcd} \sdoten{n}(\cA) \arrow[d, hook] \arrow[r, "\cong"] & \sdotn{n}(N^e\cA) \arrow[d, hook]\\ N\Fun(\operatorname{Ar}[n],\cA) \arrow[r, "\cong"]& \Map_{\sasset}(\wW{n}, TN\cA) \end{tikzcd} \] as desired. 
\end{proof} \section{The $S_{\bullet}$-construction for stable $(\infty,1)$-categories} \label{stableinfinitycategories} We now turn to variants of the $S_{\bullet}$-construction whose input is some kind of $(\infty,1)$-category; in this section we consider stable $(\infty,1)$-categories. We use the quasi-category model for $(\infty,1)$-categories, for which a complete account can be found in \cite{Joyal} or \cite{htt}. For any category $D$ and quasi-category $\mathcal{Q}$ there exists a quasi-category $\mathcal{Q}^{D}$ of $D$-shaped diagrams in $\mathcal{Q}$ (see \cref{mappingqcat}) and a notion of \emph{limit} and \emph{colimit} for any of such diagrams \cite[1.2.13.4]{htt}. In particular, it makes sense to say when a square in $\mathcal{Q}$ is \emph{cartesian} or \emph{cocartesian}, and when an object of $\mathcal{Q}$ is \emph{initial} or \emph{terminal}. In particular, as for ordinary categories, a \emph{zero object} is one which is both initial and terminal. The theory of stable quasi-categories is developed by Lurie in \cite{LurieHA}, and we follow the definitions given there. We are using the reformulation of the definition for stability from \cite[Proposition 1.1.3.4]{LurieHA}. \begin{defn} A quasi-category $\mathcal{Q}$ is \emph{stable} if it has a zero object, all finite limits and colimits exist, and a square is cartesian if and only if it is cocartesian. There is a natural notion of functor between stable quasi-categories which preserves this structure, called an \emph{exact functor} \cite[\S 1.1.4]{LurieHA}; we thus have a category of stable quasi-categories which we denote by $\stqcat$. \end{defn} \begin{ex} Let $\mathcal{M}$ be a stable combinatorial simplicial model category, such as the category of spectra in the sense of stable homotopy theory or the category of chain complexes of modules over a ring \cite{ss}. The underlying quasi-category associated to $\mathcal M$ as in \cite[Definition 1.3.4.15]{LurieHA} has the structure of a stable quasi-category, by means of \cite[Theorem 4.2.4.1 and Corollary 4.2.4.8]{htt}. An alternate but equivalent approach is given by Lenz \cite{lenz}. \end{ex} \subsection{The stable nerve} As for exact categories, our first goal is to define a nerve functor whose input is a stable quasi-category and whose output is a preaugmented bisimplicial space; we want to show that this output is in fact an augmented stable double Segal space. We begin with the definition of the stable nerve. For any quasi-category $\mathcal{Q}$, we denote by $J(\mathcal{Q})$ the maximal Kan complex spanned by its vertices, as in \cref{jq}. \begin{defn} The \emph{stable nerve} $N^s\mathcal{Q}$ of a stable quasi-category $\mathcal{Q}$ is the preaugmented bisimplicial space $N^s\mathcal{Q}\colon \Sigma^{\op}\to \mathit{s}\set$ defined as follows. 
\begin{enumerate} \item The space in degree $(\inda,r)$ is the maximal Kan complex \[ (N^s\mathcal{Q})_{\inda,r}\subset J(\mathcal{Q}^{[\inda]\times[r]}) \] spanned by $(\inda\times r)$-grids in $\mathcal{Q}$ with all squares bicartesian, i.e., diagrams $F\colon[\inda]\times[r]\to \mathcal{Q}$ such that for all $0\leq i\leq k\leq \inda$ and $0\leq j\leq l\leq r$, the square \begin{center} \begin{tikzpicture} \def1cm{1cm} \begin{scope} \draw (0,0) node (a00){$F(i,j)$}; \draw (2.5*1cm,0) node (a01){$F(i,\ell)$}; \draw (0,-1cm) node (a10){$F(k,j)$}; \draw (2.5*1cm,-1cm) node (a11){$F(k,\ell)$}; \draw[->] (a00)--(a01); \draw[->] (a01)--(a11); \draw[->] (a10)--(a11); \draw[->] (a00)--(a10); \end{scope} \end{tikzpicture} \end{center} is bicartesian. \item The augmentation space is the maximal Kan complex \[ (N^s\mathcal{Q})_{-1}\subset J(\mathcal{Q}) \] spanned by the zero objects of $\mathcal{Q}$. \end{enumerate} The bisimplicial structure is induced by the bi-cosimplicial structure of the collection of categories of the form $[\inda]\times[r]$ for $\inda, r \geq 0$. The additional map $N^s\mathcal{Q}_{-1}\to N^s\mathcal{Q}_{0,0}$ is the canonical inclusion of the Kan complex of zero objects into $J(\mathcal{Q})$. It is straightforward to check that the stable nerve can be defined on exact functors and hence defines a functor \[ N^s\colon\stqcat\longrightarrow \sasset. \] \end{defn} \begin{rmk} In light of the construction in the previous section, it is worth noting what is perhaps a conspicuous absence
Take two arbitrary points A and B in the Cartesian plane and suppose you want to find the distance between them. Our first step is to develop a formula to find distances between points on the rectangular coordinate system. On a grid you cannot simply count boxes, because a slanted segment cuts across those tiny boxes; instead, plot the points, construct the vertical and horizontal line segments through the given points so that they meet at a 90-degree angle, and treat the segment joining A and B as the hypotenuse of the resulting right triangle, much as we did when we found slope in Graphs and Functions. The Distance Formula is then a special application of the Pythagorean Theorem:

$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2},$

where $(x_1, y_1)$ and $(x_2, y_2)$ are the coordinates of the two points. You can use this formula to get precise measurements of line segments on the grid; just be sure to put the coordinates in very carefully. Example: what is the distance between the points (–1, –1) and (4, –5)? Here $x_2 - x_1 = 5$ and $y_2 - y_1 = -4$, so $d = \sqrt{25 + 16} = \sqrt{41} \approx 6.4$. The midpoint of the segment is $\left(\frac{x_1+x_2}{2}, \frac{y_1+y_2}{2}\right)$, and the midpoint creates two congruent segments. Every line contains infinitely many points, is represented by a straight line with two arrow heads, and has zero width; a line segment, by contrast, has a finite length denoted by its endpoints, and that length is what we measure.

Distance also comes up away from the coordinate plane. Distance traveled is average speed times time: using 120 mph as our value for average speed and 0.5 hours as our value for time gives 60 miles. Step length can be measured by counting steps on a course of known distance (measured path, athletic field, etc.): 70 steps on a 50 meter course would reveal a step of 5000 cm / 70 steps ≈ 71.43 cm. Once you have the stride length, multiply it by the number of steps taken to get the distance covered, and divide the number of inches by 63,360 to convert to miles (this is the kind of estimate Apple's Health app tracks). In statistics and probability theory, the coefficient of variation is a normalized measure of the dispersion of a probability distribution. In computer networking, each router shares its distance vector with its neighboring routers and prepares a new routing table using the distance vectors it has obtained from its neighbors; this step is repeated (n – 2) times when there are n routers in the network.
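As a quick sketch of the distance and midpoint formulas in code (the function names are chosen for illustration, not taken from any particular source):

```python
import math

def distance(p, q):
    """Euclidean distance between points p = (x1, y1) and q = (x2, y2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Midpoint of the segment joining p and q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

print(distance((-1, -1), (4, -5)))   # sqrt(41) ≈ 6.4031
print(midpoint((-1, -1), (4, -5)))   # (1.5, -3.0)
```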
Group II hydroxides become more soluble down the group. Going down the group, the solutions formed from the reaction of Group 2 oxides with water become more alkaline. When the oxides are dissolved in water, the following ionic reaction takes place:

$$\ce{O^{2-}(aq) + H2O(l) -> 2OH^{-}(aq)}$$

The higher the concentration of OH– ions formed, the more alkaline the solution; it is the hydroxide ion concentration that determines the alkalinity.

Do hydroxides form precipitates? Not all metal hydroxides behave the same way – that is, precipitate as hydroxide solids. Mg(OH)2 is insoluble, Ca(OH)2 is sparingly soluble, and Sr(OH)2 and Ba(OH)2 are soluble. Calcium hydroxide is reasonably soluble in water; it is used in agriculture to neutralise acidic soils and can be used to remove sulfur dioxide from flue gases. Metal hydroxides such as $\ce{Fe(OH)3}$ and $\ce{Al(OH)3}$ react with both acids and bases, and they are called amphoteric hydroxides. In reality, $\ce{Al(OH)3}$ should be formulated as $\ce{Al(H2O)3(OH)3}$, and this neutral substance has a very low solubility. (Note: I have no real idea of how to describe aluminium hydroxide on the ionic-covalent spectrum. Part of the problem is that there are several forms of aluminium hydroxide.)

Progressing down group 2, the atomic radius increases due to the extra shell of electrons for each element. Group 2 elements (beryllium, magnesium, calcium, strontium and barium) react with oxygen to generate metal oxides, and Group 2 oxides react with water to form a solution of the metal hydroxide. Such a reaction is:

$$MgO_{(s)} + H_{2}O_{(l)} \rightarrow Mg(OH)_{2(aq)}$$

They are called s-block elements because their highest energy electrons appear in the s subshell. Why does BaO give a more basic solution than MgO when added to water? Because Ba(OH)2 is far more soluble than Mg(OH)2, so a much higher concentration of OH– ions is produced.

This page looks at the solubility in water of the hydroxides, sulphates and carbonates of the Group 2 elements – beryllium, magnesium, calcium, strontium and barium; here we will be talking about oxides, hydroxides, carbonates, nitrates and sulfates. Group 2 oxides are white ionic solids and all are basic oxides except BeO, which is amphoteric (the small Be2+ …). Although it describes the trends, there isn't any attempt to explain them on this page – for reasons discussed later.

Solubility is the maximum amount of a substance that will dissolve in a given solvent, usually quoted in grams or moles per 100 g of water. To decide solubility we have to look at the solubility product or solubility data from books or any other source; a compilation of solubility products of some metal hydroxides serves for such calculations, and a worked sketch of this kind of calculation follows below. Figure 1: plots of the logarithm of the solubility S as a function of pH for Mg(OH)2, Ca(OH)2 and Ba(OH)2. (See references 23, 24 and 25 for specific stoichiometries of compounds.)

The solubility of the Group II hydroxides increases on descending the group, which is why the solubility of alkaline earth metal hydroxides in water increases down the group and why their solutions become more alkaline. The solubility of alkali metal hydroxides likewise increases from top to bottom; the alkali metal hydroxides are strong bases, and the basicity of the hydroxide increases with increase in size of the cation. These trends can be explained by changes in the lattice enthalpy and the hydration enthalpy. The sulfates show the opposite trend: arranging the sulphates of group 2 in decreasing order of solubility in water gives MgSO4 > CaSO4 > SrSO4 > BaSO4. But what is the explanation for the following discrepancies? The fluorides run

$$\ce{BeF2 > MgF2 = CaF2 < SrF2 < BaF2}$$

because of the very large hydration energy of the small Be2+ ion, while the other fluorides (MgF2, CaF2, SrF2 and BaF2) are almost insoluble in water, and

$$\ce{MF2 < MCl2 < MBr2 < MI2},$$

where $\ce{M = Mg, Ca, Sr, Ba},\dots$ due to large decreases in lattice enthalpy.
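As the promised worked sketch (the molar solubility used here is a made-up illustration value, not a measured datum): for a Group 2 hydroxide $\ce{M(OH)2}$ that dissolves with molar solubility $S$, each formula unit releases two hydroxide ions, so

$$[\mathrm{OH^-}] = 2S, \qquad \mathrm{pOH} = -\log_{10}[\mathrm{OH^-}], \qquad \mathrm{pH} = 14 - \mathrm{pOH} \;\; \text{(at 25 °C)}.$$

Taking, say, $S = 1\times10^{-2}\ \mathrm{mol\,dm^{-3}}$ gives $[\mathrm{OH^-}] = 2\times10^{-2}\ \mathrm{mol\,dm^{-3}}$, $\mathrm{pOH}\approx 1.7$ and $\mathrm{pH}\approx 12.3$; the more soluble hydroxides further down the group therefore give more alkaline solutions, in line with the trend above.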
High Redshift Sources} The detection of a single bright source from the epoch, $1.01 (1+z_{\rm reion}) < (1+z_{\rm s}) < 1.17 (1+z_{\rm reion})$, could in principle provide an unambiguous measurement of $z_{\rm reion}$. How likely is it to find such sources? The best candidates are high--redshift quasars, since in analogy with their $z<5$ counterparts, they are expected to have a hard spectrum extending into the far UV. Alternative sources are Gamma-Ray Burst (GRB) afterglows, supernovae, or primeval galaxies. The advantage of GRB afterglows is that they are bright and have featureless power--law spectra; however, their abundance at high redshifts might be small. Supernovae and galaxies may be more abundant than either quasars or GRBs, but are likely to be faint and possess soft spectra in the relevant UV range. As the abundance of quasars at $z>5$ is currently unknown, we must resort to an extrapolation of the observed luminosity function (LF) of quasars at $z\lsim5$. Such an extrapolation has been carried out by HL98 using a simple model based on the Press--Schechter (1974) formalism. In the model of HL98, the evolution of the quasar LF at faint luminosities and high redshifts is derived from three assumptions: (i) the formation of dark--matter halos follows the Press--Schechter theory, (ii) each dark halo forms a central black hole with the same efficiency, and (iii) the light--curve of all quasars, in Eddington units, is a universal function of time. As shown in HL98, these assumptions provide an excellent fit to the observed quasar LF at redshifts $2.6<z<4.5$ for an exponentially decaying lightcurve with a time--constant of $\sim10^6$ years. The model provides a simple and natural extrapolation of the quasar LF to high redshifts and faint luminosities. At the faint end of the LF, the number counts of quasars are expected to be reduced because of feedback due to photoionization heating that prevents gas from reaching the central regions of shallow potential wells. To be consistent with the lack of faint point--sources in the Hubble Deep Field, we imposed a lower limit of $\sim75~{\rm km~s^{-1}}$ for the circular velocities of halos harboring central black holes (Haiman, Madau \& Loeb 1998). Figure~\ref{fig:sources} shows the total number $N(z,F)$ of quasars brighter than the minimum flux $F$, located within the redshift interval $1.01(1+z) < (1+z_{\rm s}) < 1.17(1+z)$, in a $16^\prime\times16^\prime$ field (16 times the field of view of {\it NGST}). How many suitable sources would {\it NGST} detect? For $z_{\rm reion}=7$, the average transmitted flux shortward of the Lyman $\alpha$ and $\beta$ GP troughs is a fraction $\exp(-\tau)\approx1\%$ of the continuum. Around $z\sim7$, we expect roughly a single $\sim10^3$ nJy source per $16^\prime\times16^\prime$ field, based on Figure~\ref{fig:sources}. Such a source has an average flux of $\sim$10 nJy blueward of $\lambda_\alpha(1+z_{\rm s})$ due to absorption by the Ly$\alpha$~ jungle. According to the method depicted in Figure~\ref{fig:spect1}, a determination of $z_{\rm reion}$ to $\sim1\%$ accuracy would require spectroscopy with a resolution of $R$=100. Requiring a signal-to-noise of S/N=3, the necessary integration time for a 10 nJy source with {\it NGST} would be roughly one hour\footnote{This result was obtained using the {\it NGST} Exposure Time Calculator, available at http://augusta.stsci.edu. We assumed a mirror diameter of 8m.}.
This is a conservative estimate for the detection of flux, since it assumes that the flux is uniformly distributed across the spectrum, while in reality it would be concentrated in transmission features that cover narrow wavelength bins and yield a higher S/N ratio in these bins. We therefore conclude that {\it NGST} will be able to measure the reionization redshift to $\sim1\%$ accuracy up to $z_{\rm reion}=7$. The accuracy would degrade considerably with redshift for $z_{\rm reion}\gsim7$. The total number of sources per $16^\prime\times16^\prime$ field, irrespective of their redshifts, is $\sim3\times 10^3$, $\sim600$, and $\sim90$ for a minimum intrinsic flux of $F_{0}$=10, 100, and 1000 nJy, respectively. In the most optimistic case, for which the reionization redshift is close to the peak of the relevant curve in Figure~\ref{fig:sources}, up to $\sim 25\%$ of all detected sources in each image would lie in the redshift range $1.01(1+z) < (1+z_{\rm s}) < 1.17(1+z)$, and reveal the features depicted in Figure~\ref{fig:template2}. However, since only a small fraction of all baryons need to be incorporated in collapsed objects in order to ionize the IGM, reionization is likely to occur before the redshift at which the source number-count peaks ($z_{\rm peak}=3.5, 4.5$, or $6$ at sensitivities of $10^3, 10^2$, or 10 nJy). Pre--selection of sources in particular redshift bins could be achieved photometrically by bracketing the associated location of their Ly$\alpha$~ troughs with two narrow-band filters. \section{Conclusions} We have shown that the Gunn--Peterson effect breaks into individual Lyman series troughs in the spectra of sources located in the redshift interval $1 < (1+z_{\rm s})/(1+z_{\rm reion}) < 32/27$. Although the transmitted flux in between these troughs is heavily absorbed by the Ly$\alpha$ jungle after reionization, the residual features it shows allow a determination of the reionization redshift, $z_{\rm reion}$, for sufficiently bright sources. For a single source, a reionization redshift $z_{\rm reion}\sim7$ could be determined to a precision of $\sim1\%$ with a spectroscopic sensitivity of $\sim1\%$ of the continuum flux. A simple model for the abundance of high--redshift quasars predicts that a single source per $16^\prime\times 16^\prime$ field could allow this detection at the $\sim10$ nJy spectroscopic sensitivity of {\it NGST}. It may also be feasible to probe reionization with ground--based telescopes. Using the Keck telescope, Dey et al. (1998) have recently detected a source at $z$=5.34, and found no continuum emission below the rest--frame Ly$\alpha$~ wavelength. The authors were able to infer a lower limit of $\tau=1.2$ on the optical depth of the Ly$\alpha$~ forest between 1050 and 1170 \AA~to $z$=5.34, which falls just short of the $1.3<\tau<1.9$ implied by equation~(\ref{eq:npress}) for this redshift and wavelength range. Therefore, a somewhat more sensitive spectrum of this object could potentially constrain reionization at $z<5.34$. The signal-to-noise ratio in the determination of $z_{\rm reion}$ might be improved by co--adding the spectra of several sources. In the absence of Ly$\alpha$~ jungle absorption, the cumulative spectrum would approach the sawtooth template spectrum derived by Haiman, Rees, \& Loeb (1998, Fig. 1) and would be isotropic across the sky.
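For reference, the factor of $32/27$ quoted above can be recovered from the Rydberg formula for the hydrogen Lyman series,
\[
\frac{1}{\lambda_n} \propto 1-\frac{1}{n^2}
\quad\Longrightarrow\quad
\frac{\lambda_\alpha}{\lambda_\beta} \;=\; \frac{1-1/3^2}{1-1/2^2} \;=\; \frac{8/9}{3/4} \;=\; \frac{32}{27} \;\approx\; 1.185 ;
\]
in other words, the Ly$\beta$ trough of a source at $z_{\rm s}$, which extends up to $\lambda_\beta(1+z_{\rm s})$ in the observed frame, remains blueward of the onset of the Ly$\alpha$ trough at $\lambda_\alpha(1+z_{\rm reion})$, so that the two troughs stay separated, precisely when $(1+z_{\rm s})/(1+z_{\rm reion}) < \lambda_\alpha/\lambda_\beta = 32/27$.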
An alternative signature of reionization should appear at the soft X-ray region of the spectrum ($E\approx0.1$ keV), where the continuum optical depth to H and He ionization drops below unity. {\it AXAF} could detect such a signature in the spectra of only exceptionally bright objects. In order to enable a measurement of $z_{\rm reion}$ at redshifts as low as 6, it is essential that the wavelength coverage of {\it NGST} would extend down to $\sim7\lambda_\beta=0.7\mu$m. Detection of the transmitted flux features requires progressively higher sensitivities as $z_{\rm reion}$ increases because of the likely increase in the optical depth of the Ly$\alpha$~ jungle with redshift. The nominal integration time of about one hour for a $\sim10$ nJy sensitivity (with $R$=100 and S/N=3) would be sufficient to determine $z_{\rm reion}$ up to a redshift of $\sim7$ with a precision of $\sim1\%$. Following the extrapolation of equation~(\ref{eq:npress}) to higher redshifts, the required sensitivity needs to be one or two orders of magnitude higher if $z_{\rm reion}\sim8$ or $\sim9$. Note, however, that if $z_{\rm reion}\gsim10$, then it will be more easily measurable through the damping of the CMB anisotropies on small angular scales (HL98). Near the planned spectroscopic sensitivity of {\it NGST}, most sources are expected to have redshifts lower than $z_{\rm reion}$. However, suitable high-redshift sources could be pre-selected photometrically in analogy with the UV dropout technique (Steidel~et~al~1996), by searching for the expected drop in the flux at the blue edge ($\lambda_\alpha[1+z_{\rm reion}]$) of the Ly$\alpha$~ GP trough. \acknowledgements This work was supported in part by the NASA ATP grants NAG5-3085, NAG5-7039, and the Harvard Milton fund. We thank George Rybicki for advice on the statistics of the Ly$\alpha$ clouds, and Peter Stockman for useful information about NGST. \section*{REFERENCES} { \StartRef \hangindent=20pt \hangafter=1 \noindent Arons, J., \& Wingert, D. W. 1972, ApJ, 177, 1 \hangindent=20pt \hangafter=1 \noindent Dey, A., Spinrad, H., Stern, D., Graham, J. R., \& Chaffee, F. H. 1998, ApJL, in press, astro-ph/9803137 \hangindent=20pt \hangafter=1 \noindent Franx, M. Illingworth, G. D., Kelson, D. D., Van Dokkum, P. G., \& Tran, K-V. 1997, ApJ, 486, L75 \hangindent=20pt \hangafter=1 \noindent Gnedin, N. Y., \& Ostriker, J. P. 1997, ApJ, 486, 581 \hangindent=20pt \hangafter=1 \noindent Gunn, J. E., \& Peterson, B. A., 1965, ApJ, 142, 1633 \hangindent=20pt \hangafter=1 \noindent Haiman, Z., \& Loeb, A. 1997, ApJ, 483, 21 \hangindent=20pt \hangafter=1 \noindent Haiman, Z., \& Loeb, A. 1998, ApJ, in press, astro-ph/9710208 (HL98) \hangindent=20pt \hangafter=1 \noindent
all $i$ such that $x_i(\bfb) = 1$. Additionally, $p(\bfb') = p(\bfb^{(m)})$, and $\mu(\bfb') = \mu(\bfb^{(m)})$. Let $t$ be the number of confirmed users in $\bfb^{(m)}$. Imagine the real bid vector is $(\bfb^{(m)}_{-1}, p(\bfb^{(m)}) - \epsilon)$. Since user $1$'s bid is lower than $p(\bfb')$, it is unconfirmed and pays nothing. In this case, user $1$'s utility is zero, and miner's utility is at most $(t-1)(p(\bfb') + \epsilon)$. The miner can sign a contract with user $1$, and ask it to bid $p(\bfb')$ instead. In this case, user $1$'s utility becomes $-\epsilon$, while miner's utility becomes $\mu(\bfb') = t\cdot p(\bfb')$. It violates $2$-weak-SCP. } Let $i^*$ be the smallest integer $i \in \{1,\ldots,m\}$ such that $\mu(\bfb^{(i)}) = 0$. Then, we have $\mu(\bfb^{(i^* - 1)}) > 0$ and $\mu(\bfb^{(i^*)}) = 0$. By Lemma~\ref{lemma:confirmInvariant}, increasing a confirmed user's bid does not change miner revenue, so user $i^*$ must be unconfirmed in $\bfb^{(i^* - 1)}$. Imagine the real bid vector is $\bfb^{(i^* - 1)}$ which also represents everyone's true value. In this case, the miner's revenue is $\mu(\bfb^{(i^* - 1)})$, user $1$'s utility is $\Gamma - p(\bfb^{(i^* - 1)})$, and user $i^*$'s utility is zero. The miner can collude with user $1$ and user $i^*$, and ask user $i^*$ to bid $\Gamma$ instead. The coalition now prepares a bid vector $\bfb^{(i^*)}$ where the second coordinate $\Gamma$ actually comes from user $i^*$ and $b_{i^*} = 0$ is a fake bid injected by the miner. Since there is no burning and $\mu(\bfb^{(i^*)}) = 0$, the payment must be zero. Therefore, the miner's revenue becomes zero, while user $1$'s utility becomes $\Gamma$, and user $i^*$'s utility becomes $b_{i^*}$. By Lemma~\ref{lem:minerutilkto0}, we have $b_{i^*} \geq \mu(\bfb^{(i^* - 1)})$, and thus the coalition strictly gains from this deviation, which violates $2$-weak-SCP. \ignore{ \begin{lemma}\label{lemma:unconfirmInvariant} Let $(\bfx, \bfp,\mu)$ be a deterministic mechanism which is UIC and $2$-weak-SCP. Let $\bfb = (b_1, \ldots, b_m)$ be an arbitrary bid vector, where there exists a user $i$ having an unconfirmed bid, i.e., $x_i(\bfb) = 0$. Then, for any bid vector $\bfb' = (\bfb_{-i}, b_i')$ such that $b_i' \geq 0$ and $\Delta := b_i - b'_i > 0$, the following holds. \begin{enumerate} \item Miner's revenue is constrained by $\mu(\bfb) - \Delta \leq \mu(\bfb') \leq \mu(\bfb)$. \item For any user $j$, if $x_j(\bfb) = 1$ and $b_j > p_j(\bfb) + \Delta$, it must be that $x_j(\bfb') = 1$ and $p_j(\bfb') \leq p_j(\bfb) + \Delta$. \end{enumerate} \end{lemma} \begin{proof} First, we show that miner's revenue is constrained by $\mu(\bfb) - \Delta \leq \mu(\bfb')$. For the sake of reaching a contradiction, suppose that $\mu(\bfb) - \Delta > \mu(\bfb')$. Imagine that the real bid vector is $\bfb'$. In this case, the miner can sign a contract to ask user $i$ to bid $b_i$ instead. User $i$'s utility decreases by $\Delta$. However, miner's utility changes from $\mu(\bfb')$ to $\mu(\bfb)$, which increases by more than $\Delta$. That means the joint utility of miner and user $i$ would increase by signing the contract, which violates $2$-weak-SCP. Next, we show that miner's revenue is constrained by $\mu(\bfb') \leq \mu(\bfb)$. For the sake of reaching a contradiction, suppose that $\mu(\bfb') > \mu(\bfb)$. Imagine that the real bid vector is $\bfb$. In this case, the miner can sign a contract to ask user $i$ to bid $b'_i$ instead. Notice that user $i$'s utility does not change, because it underbids.
However, miner's utility changes from $\mu(\bfb)$ to $\mu(\bfb')$, which increases. That means the joint utility of miner and user $i$ would increase by signing the contract, which violates $2$-weak-SCP. Finally, we show that user $j$'s bid must still be confirmed and its payment never increases by more than $\Delta$. For the sake of reaching a contradiction, suppose that $x_j(\bfb') = 0$ or $p_j(\bfb') > p_j(\bfb) + \Delta$. Because $(\bfx, \bfp,\mu)$ is UIC, user $j$'s true value is $b_j$. When user $i$ bids $b_i$, user $j$'s utility is $b_j - p_j(\bfb)$; when user $i$ bids $b'_i$ and $x_j(\bfb') = 0$, user $j$'s utility is $0 < b_j - p_j(\bfb) - \Delta$; when user $i$ bids $b'_i$ and $x_j(\bfb') = 1$, user $j$'s utility is $b_j - p_j(\bfb') < b_j - p_j(\bfb) - \Delta$. Consequently, if user $i$ bids $b_i$ instead of $b'_i$, user $j$'s utility always increases by more than $\Delta$. Now, imagine that the real bid vector is $\bfb'$. The miner can sign a contract to ask user $i$ to bid $b_i$ instead. In this case, user $i$'s utility decreases by $\Delta$, miner's utility never decreases, and user $j$'s utility increases by more than $\Delta$. Therefore, their joint utility increases, which violates $2$-weak-SCP. \end{proof} \begin{mdframed} \underline{{\bf Procedure} ${\sf ReduceBids}$:} \vspace{5pt} \noindent \textbf{Termination condition:} The procedure terminates if one of the following holds. \begin{itemize} \item The miner's revenue $\mu(\bfb) = 0$. \item $b_i = 0$ for all $i \in S^\complement$ ($S^\complement = [m]\setminus S$). \end{itemize} \noindent Let $\Delta = \mu(\bfb)/m$ and $\bfb := \bfb^{(0)}$. Repeat the following until the termination condition holds: \begin{enumerate}[leftmargin=5mm,itemsep=2pt] \item \label{step:confirmed} While there exists a user $j$ such that $x_j(\bfb) = 1$ but $b_j \neq p_j(\bfb) + 2 \Delta$, do the following: \begin{itemize} \item For all $i \in [m]$, if $x_i(\bfb) = 1$, set $b_i := p_i(\bfb) + 2 \Delta$ and add $i$ to $S$. \end{itemize} \item \label{step:unconfirmed} For all $i \in [m]$: \begin{itemize} \item If $x_i(\bfb) = 0$ and $b_i > 0$, set $b_i := b_i - \min(\Delta, b_i)$. \item Go to Step \ref{step:confirmed}. \end{itemize} \end{enumerate} \end{mdframed} First, we show that the procedure always terminates in finitely many steps. According to Lemma \ref{lemma:confirmInvariant}, for any user $i$, if $b_i = p_i(\bfb) + 2\Delta$ for some $\bfb$ at Step \ref{step:confirmed}, its payment does not change during the subsequent updates in Step \ref{step:confirmed}. Thus, there are at most $m$ iterations before the procedure moves on to Step \ref{step:unconfirmed}. By Lemma \ref{lemma:confirmInvariant} and Lemma \ref{lemma:unconfirmInvariant}, once a user $i$ is added to $S$, it stays confirmed throughout the entire procedure. Besides, a bid can increase only at Step \ref{step:confirmed}, and the corresponding user must be added to $S$ after increasing. In other words, if a user $i$ is in $S^\complement$ when the procedure terminates, $b_i$ must be non-increasing throughout the procedure. At Step \ref{step:unconfirmed}, the bid of a member of $S^\complement$ either decreases by $\Delta$ or becomes zero. Thus, after a finite number of executions of Step \ref{step:unconfirmed}, the value $\sum_{i \in S^\complement} b_i$ must become zero. Notice that this is exactly one of the termination conditions. Therefore, the procedure terminates in finitely many steps. Next, suppose that the procedure terminates because $\mu(\bfb) = 0$.
According to Lemma \ref{lemma:confirmInvariant}, Step \ref{step:confirmed} never changes miner's revenue. Thus, the procedure must terminate because of Step \ref{step:unconfirmed}. Let $k$ be the user whose bid reduction at Step \ref{step:unconfirmed} makes $\mu(\bfb) = 0$. Let $\bfc = (c_1,\ldots, c_m)$ be the bid vector just before the last Step \ref{step:unconfirmed}; that is, $\bfc$ is the vector such that $\bfb = (\bfc_{-k}, c_{k} - \min(\Delta, c_{k}))$. Additionally, because the mechanism is without burning and $\mu(\bfb)$ is already zero, every user's payment must still be zero once we reduce $b_k$ further. \Hao{Here we use the fact that the mechanism is without burning.} By Lemma \ref{lemma:unconfirmInvariant}, we can reduce $b_k$ by $\Delta$ repeatedly while preserving the confirmation of the other bids. Consequently, we reach another bid vector $\bfd = (\bfb_{-k}, 0)$ such that $x_i(\bfb) = 1$ implies $x_i(\bfd) = 1$ for all $i$. Similarly, we also have that $x_i(\bfc) = 1$ implies $x_i(\bfb) = 1$ for all $i$ by Lemma \ref{lemma:unconfirmInvariant}. Thus, we conclude that $x_i(\bfc) = 1$ implies $x_i(\bfd) = 1$ for all $i$. Because $\mu(\bfc) > 0$, there exists a user $l$ whose bid is confirmed in $\bfc$ such that $p_l(\bfc) > 0$. Besides, users $1$ and $2$ must have been added to $S$, so we have $|S| \geq 2$ when the procedure terminates. Thus, there exists another user $t$ such that $x_t(\bfd) = 1$ and $t \neq l$. Now, imagine that the real bid vector is $\bfc$. In this case, miner's utility is $\mu(\bfc) > 0$, user $l$'s utility is $b_l - p_l(\bfc)$, and user $k$'s utility is zero since it is unconfirmed. Because of UIC, user $k$'s true value is $b_k$. Besides, by Lemma \ref{lemma:unconfirmInvariant}, we know that $b_k \geq \mu(\bfc) - \mu(\bfb) = \mu(\bfc)$. The miner can sign a contract with user $l$ and user $k$ to ask user $k$ to bid $b_t$ instead. In this case, the coalition prepares a bid vector
Mather, John N. Compute Distance To: Author ID: mather.john-n Published as: Mather, John N.; Mather, J. N.; Mather, J.; Mather, John more...less External Links: MGP · Wikidata · IdRef · theses.fr Documents Indexed: 80 Publications since 1965 2 Further Contributions Co-Authors: 13 Co-Authors with 10 Joint Publications 556 Co-Co-Authors all top 5 Co-Authors 70 single-authored 2 Yau, Stephen Shing-Toung 1 Bott, Raoul Harry 1 Chaperon, Marc 1 Fathi, Albert 1 Fell, Harriet J. 1 Forni, Giovanni 1 Kaloshin, Vadim Yu. 1 Laudenbach, François 1 McGehee, Richard P. 1 McKean, Henry P. jun. 1 Moser, Jürgen K. 1 Nirenberg, Louis 1 Rabinowitz, Paul Henry 1 Smale, Steve 1 Valdinoci, Enrico all top 5 Serials 9 Commentarii Mathematici Helvetici 5 Uspekhi Matematicheskikh Nauk [N. S.] 5 Ergodic Theory and Dynamical Systems 3 Publications Mathématiques 3 Topology 3 Annals of Mathematics. Second Series 3 Bulletin of the American Mathematical Society 2 Communications in Mathematical Physics 2 Advances in Mathematics 2 Annales de l’Institut Fourier 1 American Mathematical Monthly 1 Communications on Pure and Applied Mathematics 1 Bulletin de la Société Mathématique de France 1 Gazette des Mathématiciens 1 Inventiones Mathematicae 1 Mathematische Zeitschrift 1 Proceedings of the American Mathematical Society 1 Journal of the American Mathematical Society 1 Proceedings of the National Academy of Sciences of the United States of America 1 Bulletin of the American Mathematical Society. New Series 1 Notices of the American Mathematical Society 1 Boletim da Sociedade Brasileira de Matemática. Nova Série 1 Journal of Mathematical Sciences (New York) 1 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A 1 Nederlandse Akademie van Wetenschappen. Proceedings. Series A. Indagationes Mathematicae all top 5 Fields 32 Dynamical systems and ergodic theory (37-XX) 29 Manifolds and cell complexes (57-XX) 29 Global analysis, analysis on manifolds (58-XX) 7 Mechanics of particles and systems (70-XX) 5 Algebraic geometry (14-XX) 5 Measure and integration (28-XX) 5 Algebraic topology (55-XX) 3 Several complex variables and analytic spaces (32-XX) 3 Differential geometry (53-XX) 3 General topology (54-XX) 2 History and biography (01-XX) 2 Group theory and generalizations (20-XX) 1 General and overarching topics; collections (00-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Partial differential equations (35-XX) 1 Functional analysis (46-XX) 1 Calculus of variations and optimal control; optimization (49-XX) Citations contained in zbMATH Open 69 Publications have been cited 3,195 times in 1,356 Documents Cited by Year Differentiable dynamical systems. With an appendix to the first part of the paper: “Anosov diffeomorphisms” by John Mather. Zbl 0202.55202 Smale, S. 1967 Action minimizing invariant measures for positive definite Lagrangian systems. Zbl 0696.58027 Mather, John N. 1991 Existence of quasi-periodic orbits for twist homeomorphisms of the annulus. Zbl 0506.58032 Mather, John N. 1982 Variational construction of connecting orbits. Zbl 0803.58019 Mather, John N. 1993 Stability of $$C^ \infty$$ mappings. III: Finitely determined map germs. Zbl 0159.25001 Mather, J. N. 1968 Stability of $$C^ \infty$$ mappings. V: Transversality. Zbl 0207.54303 Mather, J. N. 1970 Stability of $$C^\infty$$ mappings. II: Infinitesimal stability implies stability. Zbl 0177.26002 Mather, J. N. 1969 Stability of $$C^ \infty$$ mappings. IV: Classification of stable germs by R-algebras. 
PanSTARRS\footnote{The Panoramic Survey Telescope \& Rapid Response System; see \texttt{http://pan-starrs.ifa.hawaii.edu/public/}.}, VST\footnote{The VLT Survey Telescope; \texttt{http://www.eso.org/sci/observing/policies/PublicSurveys/sciencePublicSurveys.html\#VST}.} or LSST\footnote{The Large Synoptic Survey Telescope; \texttt{http://www.lsst.org/lsst}.}, classified via a future Galaxy Zoo project, would enable us to better separate them into early- and late-types.

\subsection{The Black Hole at the Center of the Milky Way Should be Active!}
\label{sec:mw}

Figure \ref{fig:sy_frac} shows that the duty cycle for AGN in late-type galaxies is very strongly peaked at a particular stellar mass and color. The Milky Way most likely occupies this sweet spot for late-type AGN. Estimates of the stellar mass of the Milky Way vary, but are around $\sim10^{11}$$M_{\odot}$, while the current star formation rate is approximately 3 $M_{\odot}yr^{-1}$\ \citep[e.g.,][]{1986FCPh...11....1S,2006Natur.439...45D,2009eimw.confE...8C}. These parameters place the Milky Way right where the AGN fraction for late-type galaxies is highest, between 5 and 10\%, and therefore where the duty cycle is highest. However, this also implies that the black holes in galaxies like the Milky Way are \textit{not} active $\sim92-95\%$ of the time at the luminosity of the typical Seyfert in the SDSS Universe ($\sim10^{40}$$~\rm ergs^{-1}$, see Sections \ref{sec:complete} and \ref{sec:sy_prop}). The black hole at the center of the Milky Way, Sagittarius A$^*$ (Sgr A$^*$), is, apart from occasional weak flares, remarkably quiet, with an extremely sub-Eddington accretion rate \citep[e.g.,][]{2006ApJ...640..308M,2008ApJ...682..373M}. However, there is evidence that Sgr A$^*$ was substantially more active in the very recent past. Hard X-ray observations with \textit{ASCA} showed Fe K$\alpha$ lines around the Galactic Center emitted by the molecular cloud complex surrounding the black hole. Follow-up observations with more recent hard X-ray facilities indicate that Sgr A$^*$ was orders of magnitude, perhaps $10^6$ times, more luminous in the last few hundred years than it is currently. \textit{Suzaku} observations by \cite{2008PASJ...60S.191N} detect a K$\alpha$ luminosity consistent with Sgr A$^*$ having been at $10^{38-39}$$~\rm ergs^{-1}$\ a mere 300 yrs ago, while \cite{2004A&A...425L..49R} used \textit{INTEGRAL} data to argue that Sgr A$^*$ was at a 2--200 keV luminosity of $L\sim 1.5 \times 10^{39}$$~\rm ergs^{-1}$\ just 300 to 400 years ago. This is not quite the luminosity necessary to excite a narrow-line region sufficiently luminous to be detected by an SDSS fibre at $z \sim 0.02$, but these observations clearly imply that Sgr A$^*$ is active at slightly lower luminosities on short timescales. Together with the duty cycle for Milky Way-like galaxies derived here, these observations may give a first hint of the lifetime of AGN in such galaxies. A caveat to this statement is the fact that the luminosity of this very recent flare is still below the AGN luminosity limit of our sample. Under what conditions would the black hole at the center of the Milky Way reach a sufficient luminosity so that it would be included in our AGN sample? To reach a typical luminosity of a few percent Eddington $(\sim \rm{few} \times 10^{42}$$~\rm ergs^{-1}$\ bolometric luminosity) requires a mass-accretion rate of only $10^{-3}$$M_{\odot}yr^{-1}$.
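This back-of-the-envelope conversion is easy to check. The sketch below is a minimal illustration, assuming a standard radiative efficiency of $\eta \approx 0.1$ (a value not stated above) and textbook constants; it converts the quoted accretion rate into a bolometric luminosity via $L = \eta \dot{M} c^2$ and compares it to the Eddington luminosity of a $4.1 \times 10^{6}$$M_{\odot}$\ black hole.

```python
# Minimal sanity check of the quoted numbers (assumes eta ~ 0.1, cgs units).
M_sun = 1.989e33          # g
yr    = 3.156e7           # s
c     = 2.998e10          # cm / s
eta   = 0.1               # assumed radiative efficiency

Mdot  = 1e-3 * M_sun / yr             # 1e-3 M_sun / yr in g / s
L_bol = eta * Mdot * c**2             # erg / s
L_edd = 1.26e38 * 4.1e6               # Eddington luminosity of a 4.1e6 M_sun black hole

print(f"L_bol   ~ {L_bol:.1e} erg/s")       # a few times 10^42 erg/s
print(f"L_Edd   ~ {L_edd:.1e} erg/s")
print(f"L/L_Edd ~ {L_bol / L_edd:.2%}")     # of order a percent of Eddington
```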
This is only a factor of 100 higher than the Bondi accretion rate inferred from simulations, and there are $10^{4}$$M_{\odot}$\ of molecular gas within 2 pc of the central black hole \citep{2002ApJ...579L..83H}, with large clouds at greater distances. It would not necessarily be surprising if dynamical friction or a minor merger, e.g., the swallowing of the Sagittarius dwarf galaxy, drove some of these clouds inward, temporarily re-igniting the black hole at the center of the Milky Way. All the ingredients thus appear to be on hand for turning our galaxy into a Seyfert AGN. Indeed, the mystery may rather be why our central black hole is so dark and why gas accretion is so infrequent. The implication of this work is that the Milky Way is in a location on the color-mass diagram where late-type galaxies have the highest duty cycle for AGN phases. The mass of the Milky Way black hole has been inferred to be $4.1 \times 10^{6}$$M_{\odot}$\ \citep{2008ApJ...689.1044G}, which places it in the typical range of black hole masses in late-type galaxies that are currently growing in the local Universe (Figures \ref{fig:sy_mbh} and \ref{fig:sy_fmbh}). Thus, a 5--10\% duty cycle for galaxies like the Milky Way places the current quiescence of Sgr A$^*$ into context, together with the evidence for accretion in the recent past and the apparently favorable conditions for a resumption of accretion.

\section{Summary}
\label{sec:summary}

In order to understand the co-evolution of galaxies and the supermassive black holes at their centers, we have investigated which galaxies are more likely to host active black holes and how, if at all, they differ systematically from their normal counterparts. We use data from the Sloan Digital Sky Survey and visual classifications of morphology from the Galaxy Zoo project to analyze black hole growth in the nearby Universe ($ z < 0.05$) and dissect the AGN host galaxy population by color, stellar mass and morphology. We show for the first time the importance of host galaxy morphology for black hole growth and relate it to our understanding of the way in which galaxies and their supermassive black holes co-evolve. In summary:

\begin{enumerate}
\item We confirm that the selection of AGN via emission line diagnostic diagrams may miss low-luminosity AGN ($L[\mbox{O\,{\sc iii}}]$\ $< 10^{40}$$~\rm ergs^{-1}$) in star-forming galaxies.
\item AGN host galaxies as a population have high stellar masses around $10^{11}$$M_{\odot}$, reside in the green valley, and have median black hole masses around $M_{\rm BH}$\ $\sim 10^{6.5}$$M_{\odot}$.
\item When we divide the AGN host galaxy population into early- and late-types, key differences appear. While both early- and late-type AGN host galaxies have similar typical black hole masses and Eddington ratios, their stellar masses are different, with the early-type hosts being significantly less massive ($\sim10^{10}$$M_{\odot}$) than the late-type hosts ($\sim10^{11}$$M_{\odot}$). The late-type hosts nevertheless have the same characteristic bulge masses, and therefore black hole masses, as the early-types. The difference in stellar mass is due to the presence of a massive stellar disk in the late-types.
\item We consider the fraction of galaxies hosting an AGN as a function of black hole mass. Dividing by morphology into early- and late-type galaxies, we find that in early-type galaxies, it is preferentially the galaxies with the \textit{least massive} black holes that are active.
In late-type galaxies, it is preferentially the \textit{most massive} black holes that are active, with a potential drop in the active fraction above $M_{\rm BH}$\ $\sim10^{7.5}$$M_{\odot}$.
\item We estimate the duty cycle of AGN on the color-mass diagram, split by morphology. While some early-type galaxies on the red sequence host AGN, their duty cycle is negligible. The duty cycle of AGN in early-type galaxies is strongly peaked in the green valley, below the low-mass end of the red sequence and above the blue cloud. The duty cycle of AGN in late-types, on the other hand, is high in massive green and red late-types. When we impose a minimum Eddington ratio cut, we find that only the green valley early-type AGN still have a substantial duty cycle. No part of the late-type population has a high duty cycle of high-Eddington ratio accretion.
\item We discuss the implications of these results for our understanding of the role of AGN in the evolution of galaxies. We conclude, in particular, that \textit{there are two fundamentally different modes of black hole growth at work in early- and late-type galaxies}. We now have a good understanding of the role of AGN in early-type galaxies: the low-mass early-type AGN hosts are post-starburst objects moving towards the low-mass end of the red sequence. They may represent a low-mass, low-luminosity or \textit{downsized} version of the mode of evolution that was involved in the formation of more massive early-type galaxies at high redshift. The role of AGN in the evolution of the late-type galaxies is less clear and we offer some possible scenarios. The high host stellar masses and grand-design stellar disks make it implausible
that S\,3 (A new design) is probably the best strategy for further development.
+
+But this result respects only the view on requirements and their relevance. Other factors like development effort and risks are important to think about too. These issues are discussed in the following sections, comparing S\,3 against the combination S\,1+2.

@@ -487,13 +492,14 @@

 \subsubsection*{Quality improvements}

-Most quality properties can hardly be added to a software afterwards. Hence, if reliability, extendability, or maintainability shall be improved, a redesign of \masqmail\ is the best way to take. The wish to improve quality inevitably point towards a modular architecture. Modularity with internal and external interfaces is highly preferred from the architectural point of view (see section~\ref{sec:discussion-mta-arch}). The need for further features, especially ones that require changes in \masqmail's structure, support the decision for a new design too. Hence a rewrite is favored if \masqmail\ should become a modern \MTA, with good quality properties.
+Most quality properties can hardly be added afterwards. Hence, if reliability, extendability, or maintainability shall be improved, a redesign of \masqmail\ is the best way to take. The wish to improve quality inevitably point towards a modular architecture. Modularity with internal and external interfaces is highly preferred from the architectural point of view (see section~\ref{sec:discussion-mta-arch}). The need for further features, especially ones that require changes in \masqmail's structure, support the decision for a new design, too. Hence a rewrite is favored if \masqmail\ should become a modern \MTA\ with good quality properties.

 \subsubsection*{Security}

 Similar is the situation for security. Security comes from good design, explain \person{Graff} and \person{van Wyk}:
+
 \begin{quote}
 Good design is the sword and shield of the security-conscious developer. Sound design defends your application from subversion or misuse, protecting your network and the information on it from internal and external attacks alike. It also provides a safe foundation for future extensions and maintenance of the software.
 %
@@ -503,7 +509,7 @@

 They also suggest to add wrappers and interposition filters \emph{around} applications, but more as repair techniques if it is not possible to design security \emph{into} a software the first way \cite[pages~71--72]{graff03}.

-\person{Hafiz} adds: ``The major idea is that security cannot be retrofitted \emph{into} an architecture.'' \cite[page 64]{hafiz05} (emphasis added).
+\person{Hafiz} adds: ``The major idea is that security cannot be retrofitted \emph{into} an architecture.'' \cite[page 64, (emphasis added)]{hafiz05}.

@@ -518,18 +524,18 @@

 The development costs in money are not relevant for a \freesw\ project with volunteer developers, but the development time is. About 24 man-months are estimated. The current code base was written almost completely by \person{Oliver Kurth} within four years in his spare time. This means he needed around twice as much time. Of course, he programmed as a volunteer developer not as an employee with eight work-hours per day.
-Given the assumptions that (1) an equal amount of code needs to be produced for a new designed \masqmail, (2) a third of existing code can be reused plus concepts and knowledge, and (3) development speed is like \person{Kurth}'s. Then it would take between two and three years for one programmer to produce a redesigned new \masqmail\ with the same features that \masqmail\ now has. Less time would be needed if a simpler architecture allows faster development, better testing, and less bugs. Of course more developers would speed it up too.
+Given the assumptions that (1) an equal amount of code needs to be produced for a new designed \masqmail, (2) a third of the existing code can be reused plus concepts and knowledge, and (3) development speed is like \person{Kurth}'s, then it would take between two and three years for one programmer to produce a redesigned new \masqmail\ with the same features that \masqmail\ now has. Less time would be needed if a simpler architecture allows faster development, better testing, and less bugs. Of course more developers would speed it up too.

 \subsubsection*{Risks}

-The gained result might still outweighs the development effort. But risks are something more to consider.
+The gained result of a new design might still outweigh the development effort. But risks are something more to consider.

 A redesign and rewrite of software from scratch is hard. It takes time to design a new architecture, which then must prove that it is as good as expected. As well is much time and work needed to implement the design, test it, fix bugs, and so on. If flaws in the design appear during prototype implementation, it is necessary to start again.

-Such a redesign can fail at many points and it is for long unclear if the result is really better than the code that is already existent. Even if the new code is working like expected, it is still not matured.
+Such a redesign can fail at many points and it is long time unclear if the result is really better than the code that already exists. Even if the new code is working like expected, it is still not matured.

 One thing is clear: Doing a redesign and rebuild \emph{is} a risky decision.

@@ -539,9 +545,9 @@

 If a new design needs much effort and additionally is a risk, what about the existing code base then?

-Adding new functionality to an existing code base seems to be a secure and cheap strategy. The existing code is known to work and features can often be added in small increments. Risks like wasted effort if a new design fails are hardly existent. And the faults in the current design are already made and most probably fixed.
+Adding new functionality to an existing code base seems to be a secure and cheap strategy. The existing code is known to work and features can often be added in small increments. Risks like wasted effort if a new design fails are hardly existent, and the faults in the current design are already made and most probably fixed.

-And functionality that is hard to add incrementally into the application, like support for new protocols, may be addable by ``translation programs'' to the outside. \masqmail\ can be secured to a huge amount by guarding it with wrappers that block attackers. Spam and malware scanners can be included by running two instances of \masqmail. All those methods base on the current code which they can indirectly improve.
+Functionality that is hard to add incrementally into the application, like support for new protocols, may be addable to the outside. \masqmail\ can be secured to a huge amount by guarding it with wrappers that block attackers. Spam and malware scanners can be included by running two instances of \masqmail. All those methods base on the current code which they can indirectly improve.

 The required effort is probably under one third of a new design and work directly shows results. These are strong arguments against a new design.

@@ -550,15 +556,17 @@

 \subsubsection*{Repairing}

-Besides these advantages of existing code, one must not forget that further work on it is often repair work. Small bug fixes are not the problem, but adding something for which the software originally was not designed for are problems. Such work often destroys the clear concepts of the software, especially in interweaved monolithic code.
+Besides these advantages of existing code, one must not forget that further work on it is often repair work. Small bug fixes are not the problem, but adding something for which the software originally was not designed will cause problems. Such work often destroys the clear concepts of the software, especially in interweaved monolithic code.

-Repair strategies are useful, but only in the short-time view and in times of trouble. If the future is bright, however, one does best by investing into a software. As shown in section~\ref{sec:market-analysis-conclusion}, the future for
@*       - you can call the procedure with Z/pZ as base field instead of Q, but there are some problems you should be aware of:
@*         + the Puiseux series field over the algebraic closure of Z/pZ is NOT algebraically closed, and thus there may not exist a point in V(i) over the Puiseux series field with the desired valuation; so there is no chance that the procedure produces a sensible output - e.g. if i=tx^p-tx-1
@*         + if the dimension of i over Z/pZ(t) is not zero the process of reduction to zero might not work if the characteristic is small and you are unlucky
@*         + the option 'noAbs' has to be used since absolute primary decomposition in @sc{Singular} only works in characteristic zero
@*       - the basefield should either be Q or Z/pZ for some prime p; field extensions will be computed if necessary; if you need parameters or field extensions from the beginning they should rather be simulated as variables possibly adding their relations to the ideal; the weights for the additional variables should be zero
EXAMPLE: example tropicalLifting;   shows an example"
noabs=1;
}
// this option is not documented -- it prevents the execution of gfan and
// just asks for wneu to be inserted -- it can be used to check problems
// with the procedure without calling gfan, if wneu is known from previous
// computations
if (#[j]=="noGfan")
}
}
// if the basering has characteristic not equal to zero,
// then absolute factorisation
// is not available, and thus we need the option noAbs
{
  Error("The first coordinate of your input w must be NON-ZERO, since it is a DENOMINATOR!");
}
}
// if w_0<0, then replace w by -w, so that the "denominator" w_0 is positive
if (w[1]<0)
}
intvec prew=w; // stores w for later reference
// for our computations, w[1] represents the weight of t and this
// should be -w_0 !!!
w[1]=-w[1];
w[1]=-1;
}
// if some entry of w is positive, we have to make a transformation,
// which moves it to something non-positive
for (j=2;j<=nvars(basering);j++)
{
  variablen=variablen+var(j);
}
}
map GRUNDPHI=BASERING,t,variablen;
ideal i=GRUNDPHI(i);
// compute the initial ideal of i and test if w is in the tropical
// variety of i
// - the last entry 1 only means that t is the last variable in the ring
ideal ini=tInitialIdeal(i,w,1);
if (isintrop==0) // test if w is in trop(i) only if isInTrop has not been set
{
{
  poly product=1;
  for (j=1;j<=nvars(basering)-1;j++)
int dd=dim(i);
setring GRUNDRING;
// if the dimension is not zero, we cut the ideal down to dimension zero
// and compute the t-initial ideal of the new ideal at the same time
if(dd!=0)
{
  // the procedure cutdown computes a new ring, in which there lives a zero-dimensional
  // ideal which has been computed by cutting down the input with generic linear forms
  // of the type x_i1-p_1,...,x_id-p_d for some polynomials p_1,...,p_d not depending
  // on the variables x_i1,...,x_id; that way we have reduced
  // the number of variables by dd !!!
  // the new zero-dimensional ideal is called i, its t-initial ideal (with respect to
  // the new w=CUTDOWN[2]) is ini, and finally there is a list repl in the ring
  // which contains the polynomial p_j at position i_j and a zero otherwise;
list liftrings; // will contain the final result
// if the procedure is called without 'findAll' then it may happen, that no
// proper solution is found when dd>0; in that case we have to start all over again;
// this is controlled by the while-loop
// compute the liftrings by resubstitution
kk=1;  // counts the liftrings
int isgood;  // test in the non-zerodimensional case
             // if the result has the correct valuation
for (jj=1;jj<=size(TP);jj++)
{
  // the list TP contains as a first entry the ring over which the tropical parametrisation
  // of the (possibly cutdown ideal) i lives
  def LIFTRING=TP[jj][1];
  // if the dimension of i originally was not zero, then we have to fill in the missing
  // parts of the parametrisation
  if (dd!=0)
  {
    // we need a ring where the parameters X_1,...,X_k from LIFTRING are present,
    // and where also the variables of CUTDOWNRING live
    execute("ring REPLACEMENTRING=("+charstr(LIFTRING)+"),("+varstr(CUTDOWNRING)+"),dp;");
    list repl=imap(CUTDOWNRING,repl); // get the replacement rules from CUTDOWNRING
ideal PARA=imap(LIFTRING,PARA);   // get the zero-dim. parametrisation from LIFTRING
// compute the lift of the solution of the original ideal i
k=1; // the lift has as many components as GRUNDRING has variables != t
for (j=1;j<=nvars(GRUNDRING)-1;j++)
{
  // if repl[j]=0, then the corresponding variable was not eliminated
  if (repl[j]==0)
  {
    LIFT[j]=PARA[k]; // thus the lift has been computed by tropicalparametrise
    k++;             // k checks how many entries of PARA have already been used
  }
  else  // if repl[j]!=0, repl[j] contains the replacement rule for the lift
  {
    LIFT[j]=repl[j]; // we still have to replace the vars in repl[j] by the corresp. entries of PARA
    // replace all variables != t (from CUTDOWNRING)
    for (l=1;l<=nvars(CUTDOWNRING)-1;l++)
    {
      // substitute the kth variable by
#!/usr/bin/env python # -- coding: utf-8 -- #========================== #= Gomoku Game = #========================== from __future__ import print_function, division import os, sys, time, collections, shutil from functools import update_wrapper import pickle, h5py import numpy as np from nifty import createWorkQueue,getWorkQueue,queue_up,wq_wait,LinkFile import tarfile def decorator(d): "Make function d a decorator: d wraps a function fn." def _d(fn): return update_wrapper(d(fn), fn) update_wrapper(_d, d) return _d @decorator def memo(f): """Decorator that caches the return value for each call to f(args). Then when called again with same args, we can just look it up.""" cache = {} def _f(*args): try: return cache[args] except KeyError: cache[args] = result = f(*args) return result except TypeError: # some element of args refuses to be a dict key return f(args) _f.cache = cache return _f @memo def colored(s, color=''): if color.lower() == 'green': return '\033[92m' + s + '\033[0m' elif color.lower() == 'yellow': return '\033[93m' + s + '\033[0m' elif color.lower() == 'red': return '\033[91m' + s + '\033[0m' elif color.lower() == 'blue': return '\033[94m' + s + '\033[0m' elif color.lower() == 'bold': return '\033[1m' + s + '\033[0m' else: return s class Gomoku(object): """ Gomoku Game Rules: Two players alternatively put their stone on the board. First one got five in a row wins. """ def __init__(self, board_size=15, players=None, fastmode=False, first_center=None): print("*********************************") print("* Welcome to Gomoku ! *") print("*********************************") print(self.__doc__) self.reset() self.board_size = board_size self.fastmode = fastmode #self.players = [Player(player_name) for player_name in players] self.first_center = first_center @property def state(self): return (self.board, self.last_move, self.playing, self.board_size) def load_state(self, state): (self.board, self.last_move, self.playing, self.board_size) = state def reset(self): self.board = (set(), set()) self.playing = None self.winning_stones = set() self.last_move = None def print_board(self): print(' '*4 + ' '.join([chr(97+i) for i in range(self.board_size)])) print(' '*3 + '='*(2*self.board_size)) for x in range(1, self.board_size+1): row = ['%2s|'%x] for y in range(1, self.board_size+1): if (x,y) in self.board[0]: c = 'x' elif (x,y) in self.board[1]: c = 'o' else: c = '-' if (x,y) in self.winning_stones or (x,y) == self.last_move: c = colored(c, 'green') row.append(c) print(' '.join(row)) def play(self): if self.fastmode < 2: print("Game Start!") i_turn = len(self.board[0]) + len(self.board[1]) new_step = None while True: if self.fastmode < 2: print("----- Turn %d -------" % i_turn) self.playing = i_turn % 2 if self.fastmode < 2: self.print_board() current_player = self.players[self.playing] other_player = self.players[int(not self.playing)] if self.fastmode < 2: print("--- %s's turn ---" % current_player.name) max_try = 5 for i_try in range(max_try): action = current_player.strategy(self.state) if action == (0, 0): print("Player %s admit defeat!" 
% current_player.name) winner = other_player.name self.print_board() print("Winner is %s"%winner) return winner self.last_move = action if self.place_stone() is True: break if i_try == max_try-1: print("Player %s has made %d illegal moves, he lost."%(current_player.name, max_try)) winner = other_player.name print("Winner is %s"%winner) return winner # check if current player wins winner = self.check_winner() if winner: self.print_board() print("########## %s is the WINNER! #########" % current_player.name) return winner elif i_turn == self.board_size ** 2 - 1: self.print_board() print("This game is a Draw!") return "Draw" i_turn += 1 def place_stone(self): # check if this position is on the board r, c = self.last_move if r < 1 or r > self.board_size or c < 1 or c > self.board_size: print("This position is outside the board!") return False # check if this position is already taken taken_pos = self.board[0] | self.board[1] if self.first_center is True and len(taken_pos) == 0: # if this is the very first move, it must be on the center center = int((self.board_size+1)/2) if r != center or c != center: print("This is the first move, please put it on the center (%s%s)!"% (str(center),chr(center+96))) return False elif self.last_move in taken_pos: print("This position is already taken!") return False self.board[self.playing].add(self.last_move) return True def check_winner(self): r, c = self.last_move my_stones = self.board[self.playing] # find any nearby stone nearby_stones = set() for x in range(max(r-1, 1), min(r+2, self.board_size+1)): for y in range(max(c-1, 1), min(c+2, self.board_size+1)): stone = (x,y) if stone in my_stones and (2*r-x, 2*c-y) not in nearby_stones: nearby_stones.add(stone) for nearby_s in nearby_stones: winning_stones = {self.last_move, nearby_s} nr, nc = nearby_s dx, dy = nr-r, nc-c # try to extend in this direction for i in range(1,4): ext_stone = (nr+dx*i, nc+dy*i) if ext_stone in my_stones: winning_stones.add(ext_stone) else: break # try to extend in the opposite direction for i in range(1,5): ext_stone = (r-dx*i, c-dy*i) if ext_stone in my_stones: winning_stones.add(ext_stone) else: break if len(winning_stones) >= 5: self.winning_stones = winning_stones return self.players[self.playing].name return None def delay(self, n): """ Delay n seconds if not in fastmode""" if not self.fastmode: time.sleep(n) def get_strategy(self, p): return p.strategy(self.state) class Player(object): @property def name(self): return self._name @name.setter def name(self, value): try: self._name = str(value) except: raise TypeError("Player Name must be a string.") def __repr__(self): return "Player %s"%self.name def __init__(self, name): self.name = name # search for the strategy file # p = __import__(name) # p.initialize() # self.strategy = p.strategy # self.finish = p.finish # self.train_model = p.train_model def prepare_train_data(learndata_A, learndata_B): nb, nw = len(learndata_A), len(learndata_B) n_data = nb + nw train_X = np.empty(n_data*15*15*3, dtype=np.int8).reshape(n_data,15,15,3) train_Y = np.empty(n_data, dtype=np.float32).reshape(-1,1) i = 0 bx, wx, by, wy = [],[],[],[] for k in learndata_A: x, y, n = learndata_A[k] bx.append(x) by.append(y) train_X[i, :, :, 0] = (x == 1) # first plane indicates my stones train_X[i, :, :, 1] = (x == -1) # first plane indicates opponent_stones train_X[i, :, :, 2] = 1 # third plane indicates if i'm black train_Y[i, 0] = y i += 1 for k in learndata_B: x, y, n = learndata_B[k] wx.append(x) wy.append(y) train_X[i, :, :, 0] = (x == 1) # first plane 
indicates my stones train_X[i, :, :, 1] = (x == -1) # first plane indicates opponent_stones train_X[i, :, :, 2] = 0 # third plane indicates if i'm black train_Y[i, 0] = y i += 1 # save the current train data to data.h5 file h5f = h5py.File('data.h5','w') h5f.create_dataset('bx',data=np.array(bx, dtype=np.int8)) h5f.create_dataset('by',data=np.array(by, dtype=np.float32)) h5f.create_dataset('wx',data=np.array(wx, dtype=np.int8)) h5f.create_dataset('wy',data=np.array(wy, dtype=np.float32)) h5f.close() print("Successfully prepared %d black and %d white training data." % (nb, nw)) return train_X, train_Y def update_learn_data(learndata1, learndata2): # update learndata1 with data in learndata2, if n2 > n1 then use y2 for key in learndata2: x2, y2, n2 = learndata2[key] if key in learndata1: x1, y1, n1 = learndata1[key] if n2 > n1: learndata1[key] = x2, y2, n2 else: learndata1[key] = (x2, y2, n2) def main(): import argparse parser = argparse.ArgumentParser("Play the Gomoku Game!", formatter_class=argparse.ArgumentDefaultsHelpFormatter) parser.add_argument('-n', '--n_train', type=int, default=10, help='Play a number of games to gather statistics.') parser.add_argument('-b', '--n_batches', type=int, default=10, help='Number of batches of games to play in each iteration.') parser.add_argument('-w', '--n_workers', type=int, default=30, help='Number of workers to use in each batch.') parser.add_argument('-g', '--worker_games', type=int, default=100, help='Number of games each worker play each time before returning data.') parser.add_argument('-p', '--wq_port', type=int, default=50123, help='Port to use in work queue.') args = parser.parse_args() createWorkQueue(args.wq_port, name='gtrain') wq = getWorkQueue() cmdstr = 'python gomoku_worker.py -g %d' % args.worker_games input_files = ['construct_dnn.py', 'player_A.py', 'player_B.py'] print("Training the model for %d iterations, each will run %d batches of games."% (args.n_train, args.n_batches)) model_name = 'initial_model' # we start with tf_model saved in initial_model import construct_dnn model = construct_dnn.construct_dnn() model.load(os.path.join(model_name, 'tf_model', 'tf_model')) for i_train in xrange(args.n_train): prev_model_name = model_name # check if the current model exists model_name = "trained_model_%03d" % i_train if os.path.exists(model_name): backup_name = model_name+'_backup' if os.path.exists(backup_name): shutil.rmtree(backup_name) shutil.move(model_name, backup_name) print("Current model %s already exists, backed up to %s" % (model_name, backup_name)) # create black_learndata and white_learndata dict black_learndata, white_learndata = dict(), dict() # create and enter the model folder os.mkdir(model_name) os.chdir(model_name) for i_batch in xrange(args.n_batches): print("Batch %d, launching %d workers, each play %d games, then update strategy.learndata" % (i_batch, args.n_workers, args.worker_games)) batch_name = "batch_%03d" % i_batch os.mkdir(batch_name) os.chdir(batch_name) pickle.dump(black_learndata, open('black.learndata', 'wb')) pickle.dump(white_learndata, open('white.learndata', 'wb')) # put all input files in a tar.gz file for transfer with tarfile.open("input.tar.gz", "w:gz") as tar: for f in input_files: tar.add("../../" + f, arcname=f) tar.add('black.learndata') tar.add('white.learndata') # add the previous tf model to the input files, rename to tf_model tar.add(os.path.join('../..', prev_model_name, 'tf_model'), arcname='tf_model') for i_worker in xrange(args.n_workers): worker_name
more precisely (with large or even infinite $T_i$) can indeed reduce the overall communication cost, as the optimal sets of these local models are likely to intersect.
\begin{figure*}[!t] \centering \subfigure[Gradient residual $\Vert \nabla f(x_n) \Vert^2$ for the Non-Intersected case.] {\label{nonInterA} \includegraphics[width=.42\linewidth]{NonOverlappingNetworkGradientComparison.pdf} } \subfigure[Loss for the Non-Intersected case.] {\label{nonInterB} \includegraphics[width=.42\linewidth]{NonOverlappingNetworkLossComparison.pdf} } \subfigure[Gradient residual $\Vert \nabla f(x_n) \Vert^2$ for the Intersected case.] {\label{InterA} \includegraphics[width=.42\linewidth]{OverlappingNetworkGradientComparison.pdf} } \subfigure[Loss for the Intersected case.] {\label{InterB} \includegraphics[width=.42\linewidth]{OverlappingNetworkLossComparison.pdf} } \caption{1-Layer Neural Network on the MNIST dataset. $T_i =100$ for all cases.} \label{fig:1LayerNN} \end{figure*}
\subsubsection{Necessity of the Intersection Assumption}
Before further numerical validation, we first highlight the necessity of the intersection assumption~\ref{Ass:intersection} that distinguishes our work from previous studies. We select the first 500 training samples from the MNIST dataset~\cite{lecun1998gradient} and construct two 1-layer neural networks: (1) the first directly transforms the $28 \cdot 28$ image into 10 categories by an affine transformation followed by a softmax cross-entropy loss, which we call the ``Intersected Case'' since the number of parameters exceeds the number of instances; (2) the second applies max-pooling twice with a $(2,2)$ window before the final prediction, which we call the ``Non-Intersected Case'' since the total number of parameters is 490 and the intersection assumption is not satisfied. Figure~\ref{fig:1LayerNN} shows the results of centralized training only on the server~(denoted as ``1 Node'') and distributed training on 10 nodes. Without the intersection condition being satisfied, the gradient residuals $\Vert \nabla f(x_n) \Vert^2$ may not even vanish in the ``Non-Intersected case'' in Fig.~\ref{nonInterA}, and the distributed training loss $f(x_n)$ can also differ from centralized learning in Fig.~\ref{nonInterB}. On the contrary, both the gradient residuals and the loss on 10 learning nodes behave similarly to centralized learning for the ``Intersected case'' in Figs.~\ref{InterA} and \ref{InterB}. These different results validate the importance of the intersection assumption made above.
\subsubsection{LeNet and ResNet}
In practice, most deep learning models are highly over-parameterized, and the intersection assumption is likely to hold. In these scenarios, we numerically test the performance of Algorithm~\ref{alg} on non-convex objectives and explore whether large or even ``infinite'' $T_i$ leads to lower communication requirements. To do so, we select two classical benchmarks for deep learning: LeNet~\cite{lecun1998gradient} on the MNIST dataset and ResNet-18~\cite{he2016deep} on the CIFAR-10 dataset~\cite{krizhevsky2009learning}. To accelerate the experiments, we only select the first 1000 training samples from these two datasets (although we also provide experiments on the complete datasets in the appendix), and evenly distribute these instances to multiple learning nodes.
Similar to our previous convex experiments, each node is required to perform $T_i$ iterations of GD before sending its model to the server, and $T_i = \infty$ is simulated by running gradient descent until the local gradient residual $\Vert \nabla f_i \Vert^2$ is sufficiently small. Figure~\ref{fig:dl} shows the experimental results on these benchmarks. The results are consistent with our previous convex experiments: the choice of $T_i$ is no longer limited as in conventional studies, and larger $T_i$ decreases the total loss more aggressively. In other words, updating the local models more precisely can reduce the communication cost for these two deep learning models. Note that for ResNet-18, we intentionally set the local gradient norm threshold to a relatively small number $10^{-2}$, and hence the ``Threshold'' method requires thousands of epochs to reach this borderline in the beginning but only needs a few epochs after 40 iterations, which explains why it first outperforms $T_i=100$ but then is inferior to it.
\begin{figure*}[!t] \centering \hspace*{\fill} \subfigure[LeNet for the MNIST dataset. The threshold is set as $\Vert \nabla f_i \Vert_2^2 \leq 10^{-4}$.] {\label{LeNet} \includegraphics[width=.44\linewidth]{10Nodes_Loss.pdf} } \hfill \subfigure[ResNet for the CIFAR-10 dataset. The threshold is set as $\Vert \nabla f_i \Vert_2^2 \leq 10^{-2}$.] {\label{ResNet} \includegraphics[width=.44\linewidth]{10Nodes_Loss.pdf} } \hspace*{\fill} \caption{Deep learning experiments. The x-axis denotes the communication round $n$ and the y-axis denotes $\log(f)$. } \label{fig:dl} \end{figure*}
\section{A Quantitative Analysis of the Trade-off between Communication and Optimization} \label{sec:tradeoff}
In previous sections we have focused on convergence properties. Recall that we proved that, in the convex case, essentially any frequency of local updates is sufficient for convergence. This effect then brings into relevance the following important practical question: given a degenerate distributed optimization problem, can we decide how many steps $T_i$ to take locally before a combination, in order to optimize performance or minimize some notion of computational cost? Note that this question is not well-posed unless convergence is guaranteed for any (or at least a large range of) $T_i$, as we have established in Sec.~3 by relying on the degeneracy assumption. Building on the earlier results, we now show that in our setting this question can be answered quantitatively, which gives guidance for designing efficient distributed algorithms. From Lemma~\ref{lm:mother_equation}, it is clear that the decrement of $\dS{x_{n}}$ comes from two sources: one is the frequency of applying the outer iteration in $n$ (communication), and the other is the size of $\sum_{t=0}^{T_i - 1} \alpha_i \| \nabla f_i (x_n^{i,t}) \|^2$, which relies on $T_i$ and also on the rate of convergence of the local gradient descent steps (optimization). It is well known that for general smooth convex functions, the upper bound for the decay of gradient norms is $\mathcal{O}(t^{-1})$. However, depending on the loss function at hand, different convergence rates can occur, ranging from linear convergence to sub-linear convergence in the form of power laws. Therefore, to make headway one has to assume some decay rate of local gradient descent.
To this end, let us assume that the local gradient descent decreases the gradient norm according to \begin{align} \label{eq:h_def} \| \nabla f_i(x_n^{i,t}) \|^2 \geq h_i(t) \| \nabla f_i(x_n^{i,0}) \|^2 \end{align} where $h_i(t)$ is a positive, monotone decreasing function with $h_i(0)=1$. Let $\epsilon>0$ be fixed and define \begin{align} \label{eq:n_star_def} n^* = \inf \{ k\geq 0, \|\nabla f(x_k)\|^2 \leq \epsilon \}. \end{align} From Lemma~\ref{lm:mother_equation} and Eq.~\eqref{eq:h_def} we have \begin{align*} \dS{x_{n+1}}^2 \leq \dS{x_{n}}^2 - \tfrac{1}{m}\sum_{i=1}^{m} \sum_{t=0}^{T_i - 1} \alpha_i h_i(t) \| \nabla f_i( x_n) \|^2. \end{align*} Assume for simplicity that $T_i=T$ for all $i$. Then, for each $n \leq n^* - 1$, we have \begin{align*} \dS{x_{n+1}}^2 \leq \dS{x_{n}}^2 - \sum_{t=0}^{T - 1} \alpha h(t) \epsilon, \end{align*} where $\alpha := \min_i \alpha_i$ and $h(t) := \min_i h_i(t)$. Hence, \begin{align} \label{eq:n_star_expr} n^* \leq \tfrac{\dS{x_0}^2}{ \alpha \epsilon \sum_{t=0}^{T-1} h(t)}. \end{align} This expression concretely links the number of steps required to reach an error tolerance to the local optimization steps. Now, we need to define some notion of cost in order to analyze how to pick $T$. In arbitrary units, suppose each communication step has an associated cost $C_c$ per node and each local gradient descent step has cost $C_g$. Then, the total cost for first achieving $\|\nabla f(x_n) \|^2 \leq \epsilon $ is \begin{align*} \begin{split} C_{total} &= (C_c m + C_g m T) n^* \\ &= C_c m ( 1 + r T ) n^* \\ &\leq C_c m \dS{x_{0}}^2 {(\alpha\epsilon)}^{-1} \tfrac{( 1 + r T )}{ \sum_{t=0}^{T-1} h(t)}, \end{split} \end{align*} where we have defined $r := C_g / C_c$. We are mostly interested in the regime where $r$ is small, i.e.\ communication cost dominates gradient descent cost. The key question we would like to answer is: for a fixed and small cost ratio $r$, how many gradient descent steps should we take for every communication and combination step in order to minimize the total cost? It is clear that the answer to this question depends on the behavior of the sum $\sum_{t=0}^{T-1} h(t)$ as $T$ varies. Below, let us consider two representative forms of $h(t)$, which give very different optimal solutions. \paragraph{Linearly Convergent Case.} We first consider the linearly convergent case where $h(t) = \beta^t$ and $\beta \in (0,1)$. This is the situation if, for example, each $f_i$ is strongly convex (in the restricted sense, see Assumption~\ref{Ass:restricted_sc}). Then, we have \begin{align*} \sum_{t=0}^{T-1} h(t) = \tfrac{1 - \beta^T}{1 - \beta}, \end{align*} and so $ C_{total} \leq C_c m \dS{x_0}^2 (1 - \beta) {(\alpha\epsilon)}^{-1} \tfrac{1 + r T}{1 - \beta^T}. $ The upper bound is minimized at $T=T^*$, with \begin{align*} T^* = \tfrac{1}{\log \beta} \left[ 1 + W^-(-e^{-1} \beta^{\tfrac{1}{r}}) \right] - \tfrac{1}{r}, \end{align*} where $W^-$ is the negative real branch of the Lambert $W$ function, i.e.\ $W^{-}(x e^x) = x$ for $x \in [-1/e, 0)$. For small $x$, $W^-$ has the asymptotic form $W^- = \log (-x) + \log (- \log (-x) ) + o(1)$. Hence, for $r\ll 1$ we have \begin{align*} T^* = \log \left( 1 + \tfrac{\log (\beta^{-1})}{ r } \right) + o(1). \end{align*} \paragraph{Sub-linearly Convergent Case.} Let us suppose instead that $h(t) = 1 / (1+ a t)^\beta$ for some $a>0,\beta >1$.
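Returning to the linearly convergent case, the trade-off above is easy to probe numerically. The following is a minimal sketch (not code from this paper) that simply scans $T$ and reports the minimizer of the upper bound $(1+rT)/(1-\beta^T)$ for illustrative values of $\beta$ and $r$; the constant prefactor $C_c m \dS{x_{0}}^2 (1-\beta)(\alpha\epsilon)^{-1}$ is omitted since it does not affect the minimizer.

```python
# Minimal numerical sketch of the communication/optimization trade-off in the
# linearly convergent case h(t) = beta**t. beta and r are illustrative values.
import numpy as np

def cost_bound(T, beta, r):
    """Upper bound on the total cost, up to a constant factor: (1 + r*T) / (1 - beta**T)."""
    return (1.0 + r * T) / (1.0 - beta**T)

beta, r = 0.9, 0.01                 # assumed local decay rate and cost ratio C_g / C_c
T_grid = np.arange(1, 500)
costs = cost_bound(T_grid, beta, r)
T_star = T_grid[np.argmin(costs)]

print(f"optimal number of local steps T* ~ {T_star}")
print(f"relative cost at T=1 vs T=T*: {cost_bound(1, beta, r) / costs.min():.1f}x")
```

With these illustrative values the scan returns an optimum of a few dozen local steps per communication round, and taking a single local step ($T=1$) is several times more expensive under the same bound, which is the qualitative behavior discussed above.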
\section{\label{sec:introduction} Introduction} The ever-increasing technological capabilities allow us to perform gravity and clock measurements with incredible accuracy. On the one hand, high-precision satellite missions such as GRACE/GRACE-FO allow us to deduce properties of the Earth's gravity field and its changes on various time scales, and to investigate the underlying phenomena \cite{Tapley:2004, Tapley:2019, Abich:2019}. On the other hand, Earth-bound clock comparison networks or portable optical atomic clocks are used in the framework of chronometric geodesy \cite{Takano:2016, Mueller:2017, Kopeikin:2016b, Delva:2018}. One of the central notions that is to be determined by such geodetic measurements is the Earth's geoid -- its mathematical figure, as the German mathematician C.F.\ Gauss termed it. Geoid determination with high accuracy is necessary for, e.g., national and global height systems and is related to various applications such as GNSS. To thoroughly explain the outcome of contemporary and future geodetic missions at the cutting edge of available accuracy, we have to keep up at the theoretical level. Consequently, geodetic notions and concepts must be developed within a relativistic theory of gravity. The best available framework, consistent with all tests, is Einstein's theory \cite{Will:2014}. Therefore, it is our goal to generalize known geodetic concepts and define all notions intrinsically within General Relativity. To consistently interpret high-precision measurements without any doubt, a relativistic derivation of geodetic notions within General Relativity and beyond post-Newtonian gravity is indispensable. This is true in particular for the conceptual formulation of relativistic geodesy and model building in a top-down approach. In this work, we define a relativistic gravity potential, which generalizes the Newtonian one. It exists for any stationary configuration\footnote{By configuration we mean a spacetime model for the Earth's exterior.} with observers on isometric congruences, i.e.\ observers who rigidly co-rotate with the Earth. Its definition is based on the philosophy of Bjerhammar \cite{Bjerhammar:1985,Bjerhammar:1986} and Soffel \emph{et al.}\ \cite{Soffel:1988} as well as results on the time-independent redshift potential in Ref.\ \cite{Philipp:2017}. It allows geodetic notions such as the geoid to be defined and generalized in an intrinsic general relativistic manner and with well-defined weak-field limits. Moreover, it can be used to calculate the outcome of redshift and acceleration measurements, and it is realized by clock comparison as well as the determination of local plumb lines. One significant result is the direct comparison of the conventional Newtonian geoid and its relativistic generalization. Such a comparison is of relevance for geodesy, but involves a number of subtle points. In particular, there is some gauge freedom in the choice of constants, different applicable conventions, and the more crucial geometrical problem of comparing objects that live in different geometries. We show how to perform this comparison using an isometric embedding and different conventions. The structure of this work is as follows. We start with a short recapitulation of the conventional geoid and results in the literature in Sec.\ \ref{sec:prevResults}. After introducing the relativistic gravity potential in Sec.\ \ref{sec:relGravityPotential}, it is employed to give a definition of the general relativistic geoid in direct analogy to the Newtonian case.
We show how the potential can be used to express clock comparison as well as acceleration measurements between observers on the Earth's surface. Our definition of the geoid is such that it is the surface which is locally orthogonal to plumb lines and coincides with the surface of vanishing mutual redshifts of standard clocks on Killing congruences. Therefore, different measurements can contribute to its realization in data fusion. In Sec.\ \ref{Sec:Examples} the definitions are applied to some particular spacetime examples for illustration of the concepts and proper interpretation. A first-order parametrized post-Newtonian metric is used to show that results in the literature are embedded into the present framework. Exact expressions for the Schwarzschild spacetime, the quadrupolar Erez-Rosen spacetime, general asymptotically flat Weyl metrics, and the Kerr spacetime are derived as well. Thus, the effects of the relativistic monopole, the quadrupole, and higher-order multipoles in axisymmetric configurations can be analyzed order by order. Moreover, approximating the Earth's exterior spacetime by a suitably constructed Kerr metric allows us to consider gravitomagnetic contributions. In the last part, we compare the conventional Newtonian geoid to its relativistic generalization. Involved problems and subtleties are overcome by an isometric embedding of the relativistic geoid surface into Euclidean three-dimensional space, and we show that the leading-order difference, for a suitable convention, is about $2\,$mm due to the relativistic monopole. This embedding is not only an academic endeavor but necessary to overcome coordinate-dependent effects. \section{\label{sec:prevResults} Conventional Understanding and Previous Results} In this section, the conventional understanding of the geoid in Newtonian gravity as well as generalizations that exist so far within (approximate) relativistic frameworks are briefly summarized. We start with the Newtonian geoid and consider the post-Newtonian extension thereafter. In addition, we summarize helpful references in geodetic and general relativistic literature. \subsection{Newtonian Geoid} In Newtonian gravity, the geoid is defined as one particular level surface of the gravity potential\footnote{Note that in conventional geodesy, the gravitational potential is usually denoted by $V$, whereas the centrifugal potential is denoted by $Z$.} \cite{TorgeMueller:2012}, \begin{align} W\big(\vec{X}\big) := U\big(\vec{X}\big) + V\big(\vec{X}\big) \, , \end{align} where $U(\vec{X})$ is the Newtonian gravitational potential and $V(\vec{X})$ is the centrifugal potential experienced by rigidly co-rotating observers on the Earth's surface. We deliberately make the distinction between gravitation and gravity here to match geodetic notions and conventions. In Earth-centered global spherical coordinates $(R,\Theta,\Phi)$, we then have for the centrifugal potential, \begin{align} V\big( \vec{X} \big) := - \dfrac{1}{2} \omega^2 d_z^2 = - \dfrac{1}{2} \omega^2 R^2 \sin^2 \Theta \, , \end{align} and the expansion of the gravitational potential into spherical harmonics reads \begin{multline} \label{Eq:NewtonianPotentialExpansion} U(R,\Theta,\Phi) = - \dfrac{GM}{R} \sum_{l=0}^\infty \sum_{m=0}^l \left( \dfrac{R_{\text{ref}}}{R} \right)^l P_{lm} (\cos \Theta) \\ \times \left[ C_{lm} \cos (m \Phi) + S_{lm} \sin (m \Phi) \right] \, . 
\end{multline} In the equations above, $\omega$ is the angular frequency of the Earth's rotation, $R_{\text{ref}}$ is some chosen reference radius, and $d_z$ is the distance to the rotation axis, which points into the $z$-direction. The $P_{lm}$ are the Legendre functions of degree $l$ and order $m$, and $C_{lm}, S_{lm}$ are multipole expansion coefficients. The gravitational potential is a solution of Poisson's equation (Laplace's equation outside the sources) and adapted to the condition that $U \to 0$ for $R \to \infty$. Note that in our sign convention the gravity potential is always negative since it refers to an attractive force. Under the assumption of axial symmetry, the expansion simplifies to \begin{align} \label{Eq:NewtonianPotentialExpansionAxisym} U(R,\Theta) = - \dfrac{GM}{R} \sum_{l=0}^\infty J_l \left( \dfrac{R_{\text{ref}}}{R} \right)^l P_l (\cos \Theta)\, , \end{align} where the $J_l$ are axially symmetric multipole moments. A suitable definition of the Newtonian geoid now is the following. \emph{Definition:} The Earth's geoid is defined by the level surface of the gravity potential ${W \big( \vec{X} \big)}$ such that \begin{align} \label{Eq:geoidDefNewton} - W \big( \vec{X} \big) \big|_{\text{geoid}} = W_0 = \text{constant} \, , \end{align} with a constant $W_0 = \SI{6.26368534e7}{\square\meter \per \square\second}$, which complies with modern conventions \cite{TorgeMueller:2012, Sanchez:2016}. The numerical value of $W_0$ is chosen such that the geoid coincides best with the mean sea level at rest; it reflects a history of measurements and previous conventions, see Ref.\ \cite{Sanchez:2016}. In contrast to usual geodetic formulations, we use a negative potential such that our convention in Eq.\ \eqref{Eq:geoidDefNewton} differs by a sign. However, we keep the numerical value of $W_0$ strictly positive. \subsection{Post-Newtonian and General Relativistic Approaches} The first attempt to define a relativistic geoid was undertaken by Bjerhammar \cite{Bjerhammar:1985, Bjerhammar:1986} in 1985. He defined the geoid to be the surface on which ``precise clocks run with the same speed'', but most of the considerations involve approximations of order $c^{-2}$ and special relativistic results. Inspired by Bjerhammar's approach, which, however, lacks some formal and mathematical clarity, we give a rigorous general relativistic definition of a gravity potential and the geoid, based on clock comparison without approximations. Thus, we go beyond Bjerhammar's considerations and generalize his ideas. The essential steps to do so have already been outlined in Refs.\ \cite{Philipp:2017, Philipp:2017b, Philipp:2019}, in which the relativistic geoid is defined in terms of isochronometric surfaces, the level sets of a so-called stationary redshift potential for Killing congruences.
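For orientation, the Newtonian definition \eqref{Eq:geoidDefNewton} can be evaluated numerically with a severely truncated expansion. The sketch below is only a back-of-the-envelope illustration, not part of the relativistic treatment developed here: it keeps the monopole and the $J_2$ term, uses the standard geodetic sign convention $U = -(GM/R)\,[1 - J_2 (R_{\text{ref}}/R)^2 P_2(\cos\Theta)]$ (which may differ in sign from the $J_l$ convention above), and takes standard reference values for $GM$, $R_{\text{ref}}$, $J_2$, $\omega$, and $W_0$.

```python
# Back-of-the-envelope Newtonian geoid: solve -W(R, theta) = W0 for R at the
# equator and at the pole, keeping only the monopole and J2 terms.
# All constants are standard reference values (assumed, not taken from this paper).
import numpy as np
from scipy.optimize import brentq

GM    = 3.986004418e14   # m^3 s^-2
R_ref = 6378137.0        # m
J2    = 1.0826e-3        # dominant zonal harmonic (standard geodetic sign convention)
omega = 7.292115e-5      # rad s^-1
W0    = 6.26368534e7     # m^2 s^-2

def P2(x):
    return 0.5 * (3.0 * x**2 - 1.0)

def W(R, theta):
    """Gravity potential W = U + V (negative in the sign convention used here)."""
    U = -(GM / R) * (1.0 - J2 * (R_ref / R)**2 * P2(np.cos(theta)))
    V = -0.5 * omega**2 * (R * np.sin(theta))**2
    return U + V

for name, theta in [("equator", np.pi / 2), ("pole", 0.0)]:
    R_geoid = brentq(lambda R: -W(R, theta) - W0, 6.2e6, 6.6e6)
    print(f"{name}: geoid radius R = {R_geoid:.0f} m")
```

Already at this level of truncation the two radii differ by roughly the well-known $\sim 21\,$km of the Earth's flattening, which sets the scale against which the $\sim 2\,$mm relativistic monopole correction quoted above has to be compared.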
on $(T,\pm g)$ associated to the generalized metric $\mathcal G$, as in Example \ref{ex:D0}. We will abuse notation and let $\nabla^{\pm}$ and $\nabla^{\pm1/3}$ also denote the spin connections induced on a given spinor bundle for $\pm g$. Furthermore, by $\slashed \nabla^{\pm1/3}$ we mean the \emph{cubic Dirac operator}, that is, the Dirac operator associated to $\nabla^{\pm1/3}$. We will use the following Lichnerowicz-type formula for the cubic Dirac operator due to Bismut \cite{Bismut} (see also \cite[Theorem 6.2]{AgricolaFriedrich04}). Note the different sign convention for the Clifford algebra relations \eqref{eq:Cliffordrel}. \begin{lemma}\label{lem:scalarcurv} The spinor Laplacian $(\nabla^{\pm})^*\nabla^{\pm}$ and the square of the cubic Dirac operator $\Big{(}\slashed \nabla^{\pm 1/3}\Big{)}^2$ are related by \begin{equation*} \Big{(}\slashed \nabla^{\pm1/3}\Big{)}^2 - (\nabla^{\pm})^*\nabla^{\pm} = \mp \frac{1}{4} R \pm \frac{1}{48} |H|^2. \end{equation*} \end{lemma} The following lemma provides the structural property for the definition of the generalized scalar curvatures. Observe that the canonical mixed-type operators associated to $\mathcal G$ in Lemma \ref{l:mixedfixed} induce canonical spin connections as in \eqref{eq:LCspinpure}: \begin{equation*} D^{S_+}_- \colon \Gamma(S_+) \to \Gamma(V_-^* \otimes S_+), \qquad D^{S_-}_+ \colon \Gamma(S_-) \to \Gamma(V_+^* \otimes S_-). \end{equation*} We follow the notation in Definition \ref{d:GRiemnniandiv}, Lemma \ref{lem:Riccivardiv}, and Lemma \ref{lem:dDpm}. \begin{prop}\label{prop:scalarpmexplicit} Let $(\mathcal G,\divop)$ be a pair given by a generalized metric $\mathcal G$ and a divergence operator $\divop$ on the exact Courant algebroid $E$. Denote $\divop^\mathcal G - \divop = \langle e, \cdot \rangle $, and set $\varphi_\pm = g(\pi e_\pm, \cdot) \in T^*$. Then, for any spinor $\eta \in \Gamma(S_\pm)$ one has \begin{gather}\label{eq:scalarpmexplicit} \begin{split} \Big{(}\slashed D^{S_\pm}_\pm\Big{)}^2 \eta & - \Big{(}D^{S_\pm}_\mp\Big{)}^*D^{S_\pm}_\mp \eta\\ =&\ \mp \frac{1}{4} \Bigg{(} R - \frac{1}{12} |H|^2 \mp 2 d^*\varphi_\pm - |\varphi_\pm|^2 + 2 d\varphi_\pm \pm 4 \nabla^\pm_{g^{-1}\varphi_\pm}\Bigg{)}\eta. \end{split} \end{gather} \begin{proof} We derive a formula for $$ \til{\mathcal{S}}^+ := \Big{(}\slashed D^{S_+}_+\Big{)}^2 - \Big{(}D^{S_+}_-\Big{)}^*D^{S_+}_-, $$ using Lemma \ref{lem:scalarcurv}. A similar formula for $\til{\mathcal{S}}^-$, defined as $\til{\mathcal{S}}^+$ with $+$ replaced by $-$, follows from the analogue of Lemma \ref{lem:scalarcurv} for the opposite Clifford algebra relations \eqref{eq:Cliffordrel}. Via the isometry $\pi_{|V_+} \colon V_+ \to (T,g)$, we have $$ \til{\mathcal{S}}^+ = \Big{(}\til{\slashed{\nabla}} \Big{)}^2 - \Big{(}\nabla^+\Big{)}^*\nabla^+ $$ acting on sections of the fixed spinor bundle for $g$, where $\til{\nabla}$ is as in \eqref{eq:LCvarphipure}: $$ \til{\nabla}_X Y = \nabla^{+1/3}_XY + \frac{1}{n-1}(g(X,Y)g^{-1}\varphi_+ - \varphi_+(Y)X). $$ Let $\{e_1,\ldots,e_n\}$ be a local orthonormal frame for $(T,g)$. An endomorphism $A\in \End(T)$ satisfies $$ A = \sum_{i,j=1}^n g(Ae_i,e_j)e^i\otimes e_j,$$ for $\{e^j\}$ the dual frame of $\{e_j\}$. Since $e^i\otimes e_j - e^j\otimes e_i\in \mathfrak{so}(T)$ embeds as $\frac{1}{2}e_j e_i$ in the Clifford bundle $Cl(T)$, an endomorphism $A\in \mathfrak{so}(T)$ corresponds to \begin{equation*} A=\frac{1}{4}\sum_{i,j} g(Ae_i,e_j) e_j e_i \in Cl(T).
\end{equation*} Then, given a local spinor $\eta$, we have \begin{align*} \til \nabla \eta &{} = \nabla^{+1/3} \eta + \frac{1}{4(n-1)} \sum_{i,j} \varphi_+(e_j) e_j e_i \eta \otimes e^i - \varphi_+(e_i) e_j e_i \eta \otimes e^j \end{align*} and hence, \begin{align*} \til{\slashed{\nabla}} \eta & = \slashed{\nabla}^{+1/3}\eta + \frac{1}{4(n-1)}\sum_{i, j} \varphi_+(e_j) e_i e_j e_i \eta - \varphi_+(e_i) e_j e_j e_i \eta\\ & = \slashed{\nabla}^{+1/3}\eta + \frac{1}{4(n-1)}\sum_{i,j} \varphi_+(e_j) (2 \delta_{ij} - e_je_i) e_i \eta - \varphi_+(e_i) e_i \eta \\ & = \slashed{\nabla}^{+1/3}\eta - \frac{1}{2}\varphi_+ \eta. \end{align*} From this, we calculate \begin{align*} \Big{(}\til{\slashed{\nabla}} \Big{)}^2 \eta & = \Big{(}\slashed{\nabla}^{+1/3} \Big{)}^2 \eta - \frac{1}{2} \sum_j e_j \nabla^{+1/3}_{e_j}(\varphi_+ \eta) - \frac{1}{2} \sum_j \varphi_+ e_j\nabla^{+1/3}_{e_j}\eta + \frac{1}{4}|\varphi_+|^2 \eta \\ & = \Big{(}\slashed{\nabla}^{+1/3} \Big{)}^2 \eta - \frac{1}{2} \sum_j e_j (\nabla^{+1/3}_{e_j}\varphi_+) \eta - \frac{1}{2} \sum_j (e_j \varphi_+ +\varphi_+ e_j)\nabla^{+1/3}_{e_j}\eta + \frac{1}{4}|\varphi_+|^2 \eta\\ & = \Big{(}\slashed{\nabla}^{+1/3} \Big{)}^2 \eta - \frac{1}{2} \sum_{j,k} (i_{e_k}\nabla^{+1/3}_{e_j}\varphi_+)e_je_k \eta - \nabla^{+1/3}_{g^{-1}\varphi_+}\eta + \frac{1}{4}|\varphi_+|^2 \eta,\\ & = \Big{(}\slashed{\nabla}^{+1/3} \Big{)}^2 \eta + \frac{1}{2} (d^*\varphi_+) \eta - \frac{1}{2}(d\varphi_+) \eta + \frac{1}{4}|\varphi_+|^2 \eta \\ &\qquad - \frac{1}{12} \sum_{j,k} H(e_j,g^{-1}\varphi_+,e_k) e_je_k \eta - \nabla_{g^{-1}\varphi_+}\eta\\ &\qquad - \frac{1}{24} \sum_{j,k} H(g^{-1}\varphi_+,e_j,e_k) e_ke_j \eta\\ & = \Big{(}\slashed{\nabla}^{+1/3} \Big{)}^2 \eta + \frac{1}{2} (d^*\varphi_+) \eta - \frac{1}{2} (d\varphi_+) \eta + \frac{1}{4}|\varphi_+|^2 \eta - \nabla^+_{g^{-1}\varphi_+}\eta, \end{align*} where we used that $e^j \alpha + \alpha e^j = 2 \alpha(e_j)$ for any $\alpha \in T^*$, combined with the standard formulae \begin{align*} d^*\alpha & = - \sum_j i_{e_j} \nabla_{e_j}\alpha, \\ \nabla \alpha (e_j,e_k) & = \tfrac{1}{2} d\alpha (e_j,e_k) + \tfrac{1}{2}(i_{e_k}\nabla_{e_j} \alpha + i_{e_j}\nabla_{e_k} \alpha). \end{align*} The proof follows now from Lemma \ref{lem:scalarcurv}. \end{proof} \end{prop} Observe that the right hand side of \eqref{eq:scalarpmexplicit} defines a differential operator of order $1$ acting on spinors. The degree zero part of this operator has a scalar component and a component of degree two, the latter given by $\mp d \varphi_\pm$. If we insist in defining a \emph{scalar} out of \eqref{eq:scalarpmexplicit}, there are two natural conditions forced upon us. The first condition is that $\mp \nabla^\pm_{g^{-1}\varphi_\pm}$ is a natural operator, depending only on $(\mathcal G,\divop)$, so that it can be absorbed in the left hand side of \eqref{eq:scalarpmexplicit}. This is achieved provided that $\pi e= 0$, which implies $\varphi_+ = - \varphi_-$ and (see Lemma \ref{l:mixedfixed} and Proposition \ref{p:HitchinBismut}) $$ \nabla^\pm_{g^{-1}\varphi_\pm} = - \nabla^\pm_{g^{-1}\varphi_\mp} \equiv - D^{S_\pm}_{e_\mp}. $$ Given this, the second condition is that $e = \varphi \in T^*$ satisfies $d\varphi = 0$. As observed in \cite{GF19}, this pair of conditions relate naturally to the theory of \emph{Dirac generating operators} on Courant algebroids introduced in \cite{AXu,Severa}. 
\begin{defn}\label{def:closed} Let $(\mathcal G,\divop)$ be a pair given by a generalized metric $\mathcal G$ and a divergence operator $\divop$ on the exact Courant algebroid $E$. We say that $(\mathcal G,\divop)$ is \emph{closed} if $e \in \Gamma(E)$, defined by $\divop^\mathcal G - \divop = \langle e, \cdot \rangle $, satisfies $\pi e = 0$ and $d e = 0$. \end{defn} By Proposition \ref{p:GGdivergencecomp}, it follows immediately that if $(\mathcal G,\divop)$ is closed, then it is a compatible pair in the sense of Definition \ref{d:GGdivergencecomp}. We are ready to introduce our definition of the scalar curvatures for a closed pair $(\mathcal G,\divop)$. \begin{defn}\label{def:scalar} Let $(\mathcal G,\divop)$ be a closed pair on the exact Courant algebroid $E$. The \emph{generalized scalar curvature} $$ \mathcal{S} = \mathcal{S}(\mathcal G,\divop) \in C^\infty(M) $$ of the pair $(\mathcal G,\divop)$ is defined by $$ \mathcal{S} := \mathcal{S}^+ - \mathcal{S}^- $$ where $\mathcal{S}^\pm \in C^\infty(M)$ are defined by the following Lichnerowicz-type formulae \begin{equation}\label{eq:scalardef} \begin{split} -\frac{1}{2} \mathcal{S}^+ & := \Big{(}\slashed D^{S_+}_+\Big{)}^2 - \Big{(}D^{S_+}_-\Big{)}^*D^{S_+}_- - D^{S_+}_{e_-},\\ -\frac{1}{2} \mathcal{S}^- & := \Big{(}\slashed D^{S_-}_-\Big{)}^2 - \Big{(}D^{S_-}_+\Big{)}^*D^{S_-}_+ - D^{S_-}_{e_+}. \end{split} \end{equation} This is well-defined by Lemma \ref{l:mixedfixed}, Lemma \ref{lem:dDpm}, and Proposition \ref{prop:scalarpmexplicit}. \end{defn} \begin{rmk} More explicitly, if $(\mathcal G,\divop)$ is a closed pair and we denote $\divop^\mathcal G - \divop = \langle \varphi, \cdot \rangle $, for a closed one-form $\varphi \in \Gamma(T^*)$, one has \begin{equation}\label{eq:scalarexplicit} \mathcal{S} = R - \frac{1}{12} |H|^2 - d^*\varphi - \frac{1}{4}|\varphi|^2. \end{equation} An important fact that we will use later in \S \ref{s:EHF} is that $\mathcal{S}(\mathcal G,\divop)$ does \emph{not} coincide with the trace of the Ricci tensors $\mathcal{R}c^\pm$ in \eqref{eq:Riccipmexact}. Note also that we can regard \eqref{eq:scalardef} as a local formula on $M$, so that there is no obstruction to the existence of the spinor bundles. Therefore we can define the generalized scalar curvature for exact Courant algebroids over any smooth manifold. \end{rmk} \begin{rmk}\label{rem:TypeIIscalar} When the divergence $\divop$ in Proposition \ref{prop:Ricciexplicit} is such that $e = 4df$, for some smooth function $f$ (so that $\varphi_+ = - \varphi_- = 2df$), the generalized scalar curvature is given by \begin{equation}\label{eq:scalarexplicitstring} \mathcal{S} = R - \frac{1}{12} |H|^2 + 4\Delta f - 4|\nabla f|^2, \end{equation} where $\Delta f = - d^* df = \tr_g \nabla^2 f$. Formula \eqref{eq:scalarexplicitstring} agrees exactly with the leading-order term in the $\alpha'$ expansion of the so-called \emph{$\beta$-function} for the dilaton field in the sigma model approach to type II string theory \cite{Friedanetal} (cf. Remark \ref{rmk:Riccibeta}). \end{rmk} We finish this section with an alternative point of view on the Ricci tensor, using the canonical Dirac operators as above (formula \eqref{eq:Ricci+-op} below corrects a factor of two in the statement of \cite[Lemma 4.7]{GF19}, taking into account the different normalization for the Dirac operators). \begin{lemma}\label{lem:Ricci} Assume that $M$ is positive-dimensional. Let $V_+ \subset E$ be a generalized metric and let $\divop$ be a divergence operator on $E$.
Then the generalized Ricci tensors in Definition \ref{def:Ricci} can be calculated by \begin{equation}\label{eq:Ricci+-op} \pm \frac{1}{2}\iota_{a_\mp} \mathcal{R}c^\pm \cdot \eta = \Big{(}\slashed D^\pm D^{S_\pm}_{a_\mp} - D^{S_\pm}_{a_\mp}\slashed D^\pm - \sum_{i=1}^{r_\pm} e_i^\pm \cdot D^{S_\pm}_{[e_i^\pm,a_\mp]_\mp}\Big{)}\eta, \end{equation} where $\eta \in \Gamma(S_\pm)$ and $a_\mp \in \Gamma(V_\mp)$. \end{lemma} \begin{proof} We give a complete proof for $\mathcal{R}c^+$, as the other case is symmetric. It is easy to see that the right hand side of \eqref{eq:Ricci+-op} is tensorial in $a_\mp \in \Gamma(V_\mp)$. To evaluate \eqref{eq:Ricci+-op}, we choose an orthogonal frame $\{e_i^+\}$ for $V_+$ around $x \in M$ which satisfies $D_{a_-}e_i^+ = 0$ at the point $x$. This frame can be constructed
thank A.M. Hall-Chen for proofreading this paper.} \section{Introduction: file preparation and submission} The \verb"iopart" \LaTeXe\ article class file is provided to help authors prepare articles for submission to IOP Publishing journals. This document gives advice on preparing your submission, and specific instructions on how to use \verb"iopart.cls" to follow this advice. You do not have to use \verb"iopart.cls"; articles prepared using any other common class and style files can also be submitted. It is not necessary to mimic the appearance of a published article. The advice on \LaTeX\ file preparation in this document applies to the journals listed in table~\ref{jlab1}. If your journal is not listed please go to the journal website via \verb"http://iopscience.iop.org/journals" for specific submission instructions. \begin{table} \caption{\label{jlab1}Journals to which this document applies, and macros for the abbreviated journal names in {\tt iopart.cls}. Macros for other journal titles are listed in appendix\,A.} \footnotesize \begin{tabular}{@{}llll} \br Short form of journal title&Macro name&Short form of journal title&Macro name\\ \mr 2D Mater.&\verb"\TDM"&Mater. Res. Express&\verb"\MRE"\\ Biofabrication&\verb"\BF"&Meas. Sci. Technol.$^c$&\verb"\MST"\\ Bioinspir. Biomim.&\verb"\BB"&Methods Appl. Fluoresc.&\verb"\MAF"\\ Biomed. Mater.&\verb"\BMM"&Modelling Simul. Mater. Sci. Eng.&\verb"\MSMSE"\\ Class. Quantum Grav.&\verb"\CQG"&Nucl. Fusion$^a$&\verb"\NF"\\ Comput. Sci. Disc.&\verb"\CSD"&New J. Phys.&\verb"\NJP"\\ Environ. Res. Lett.&\verb"\ERL"&Nonlinearity$^{a,b}$&\verb"\NL"\\ Eur. J. Phys.$^a$&\verb"\EJP"&Nanotechnology&\verb"\NT"\\ Inverse Problems$^{b,c}$&\verb"\IP"&Phys. Biol.$^c$&\verb"\PB"\\ J. Breath Res.&\verb"\JBR"&Phys. Educ.$^a$&\verb"\PED"\\ J. Geophys. Eng.$^a$&\verb"\JGE"&Physiol. Meas.$^{c,d,e}$&\verb"\PM"\\ J. Micromech. Microeng.&\verb"\JMM"&Phys. Med. Biol.$^{c,d,e}$&\verb"\PMB"\\ J. Neural Eng.$^c$&\verb"\JNE"&Plasma Phys. Control. Fusion&\verb"\PPCF"\\ J. Opt.&\verb"\JOPT"&Phys. Scr.&\verb"\PS"\\ J. Phys. A: Math. Theor.&\verb"\jpa"&Plasma Sources Sci. Technol.&\verb"\PSST"\\ J. Phys. B: At. Mol. Opt. Phys.$^a$&\verb"\jpb"&Rep. Prog. Phys.$^{e}$&\verb"\RPP"\\ J. Phys: Condens. Matter&\verb"\JPCM"&Semicond. Sci. Technol.&\verb"\SST"\\ J. Phys. D: Appl. Phys.$^a$&\verb"\JPD"&Smart Mater. Struct.&\verb"\SMS"\\ J. Phys. G: Nucl. Part. Phys.&\verb"\jpg"&Supercond. Sci. Technol.&\verb"\SUST"\\ J. Radiol. Prot.$^a$&\verb"\JRP"&Surf. Topogr.: Metrol. Prop.&\verb"\STMP"\\ Metrologia$^a$&\verb"\MET"&Transl. Mater. Res.&\verb"\TMR"\\ \br \end{tabular}\\ $^{a}$UK spelling is required; $^{b}$MSC classification may be used as well as PACS; $^{c}$titles of articles are required in journal references; $^{d}$Harvard-style references must be used (see section \ref{except}); $^{e}$final page numbers of articles are required in journal references. \end{table} \normalsize Any special submission requirements for the journals are indicated with footnotes in table~\ref{jlab1}. Journals which require references in a particular format will need special care if you are using BibTeX, and you might need to use a \verb".bst" file that gives slightly non-standard output in order to supply any extra information required. It is not necessary to give references in the exact style of references used in published articles, as long as all of the required information is present. 
Also note that there is an incompatibility between \verb"amsmath.sty" and \verb"iopart.cls" which cannot be completely worked around. If your article relies on commands in \verb"amsmath.sty" that are not available in \verb"iopart.cls", you may wish to consider using a different class file. Whatever journal you are submitting to, please look at recent published articles (preferably articles in your subject area) to familiarize yourself with the features of the journal. We do not demand that your \LaTeX\ file closely resembles a published article---a generic `preprint' appearance of the sort commonly seen on \verb"arXiv.org" is fine---but your submission should be presented in a way that makes it easy for the referees to form an opinion of whether it is suitable for the journal. The generic advice in this document---on what to include in an abstract, how best to present complicated mathematical expressions, and so on---applies whatever class file you are using. \subsection{What you will need to supply} Submissions to our journals are handled via the ScholarOne web-based submission system. When you submit a new article to us you need only submit a PDF of your article. When you submit a revised version, we ask you to submit the source files as well. Upon acceptance for publication we will use the source files to produce a proof of your article in the journal style. \subsubsection{Text.}When you send us the source files for a revised version of your submission, you should send us the \LaTeX\ source code of your paper with all figures read in by the source code (see section \ref{figinc}). Articles can be prepared using almost any version of \TeX\ or \LaTeX{}, not just \LaTeX\ with the class file \verb"iopart.cls". You may split your \LaTeX\ file into several parts, but please show which is the `master' \LaTeX\ file that reads in all of the other ones by naming it appropriately. The `master' \LaTeX\ file must read in all other \LaTeX\ and figure files from the current directory. {\it Do not read in files from a different directory, e.g. \verb"\includegraphics{/figures/figure1.eps}" or \verb"\include{../usr/home/smith/myfiles/macros.tex}"---we store submitted files all together in a single directory with no subdirectories}. \begin{itemize} \item {\bf Using \LaTeX\ packages.} Most \LaTeXe\ packages can be used if they are available in common distributions of \LaTeXe; however, if it is essential to use a non-standard package then any extra files needed to process the article must also be supplied. Try to avoid using any packages that manipulate or change the standard \LaTeX\ fonts: published articles use fonts in the Times family, but we prefer that you use \LaTeX\ default Computer Modern fonts in your submission. The use of \LaTeX\ 2.09, and of plain \TeX\ and variants such as AMSTeX is acceptable, but a complete PDF of your submission should be supplied in these cases. \end{itemize} \subsubsection{Figures.} Figures should ideally be included in an article as encapsulated PostScript files (see section \ref{figinc}) or created using standard \LaTeX\ drawing commands. Please name all figure files using the guidelines in section \ref{fname}. We accept submissions that use pdf\TeX\ to include PDF or bitmap figures, but please ensure that you send us a PDF that uses PDF version 1.4 or lower (to avoid problems in the ScholarOne system). You can do this by putting \verb"\pdfminorversion=4" at the very start of your TeX file. 
\label{fig1}All figures should be included within the body of the text at an appropriate point or grouped together with their captions at the end of the article. A standard graphics inclusion package such as \verb"graphicx" should be used for figure inclusion, and the package should be declared in the usual way, for example with \verb"\usepackage{graphicx}", after the \verb"\documentclass" command. Authors should avoid using special effects generated by including verbatim PostScript code in the submitted \LaTeX\ file. Wherever possible, please try to use standard \LaTeX\ tools and packages. \subsubsection{References.\label{bibby}} You can produce your bibliography in the standard \LaTeX\ way using the \verb"\bibitem" command. Alternatively you can use BibTeX: our preferred \verb".bst" styles are: \begin{itemize} \item For the numerical (Vancouver) reference style we recommend that authors use \verb"unsrt.bst"; this does not quite follow the style of published articles in our journals but this is not a problem. Alternatively \verb"iopart-num.bst" created by Mark A Caprio produces a reference style that closely matches that in published articles. The file is available from \verb"http://ctan.org/tex-archive/biblio/bibtex/contrib/iopart-num/" . \item For alphabetical (Harvard) style references we recommend that authors use the \verb"harvard.sty" in conjunction with the \verb"jphysicsB.bst" BibTeX style file. These, and accompanying documentation, can be downloaded from \penalty-10000 \verb"http://www.ctan.org/tex-archive/macros/latex/contrib/harvard/". Note that the \verb"jphysicsB.bst" bibliography style does not include article titles in references to journal articles. To include the titles of journal articles you can use the style \verb"dcu.bst" which is included in the \verb"harvard.sty" package. The output differs a little from the final journal reference style, but all of the necessary information is present and the reference list will be formatted into journal house style as part of the production process if your article is accepted for publication. \end{itemize} \noindent Please make sure that you include your \verb".bib" bibliographic database file(s) and any \verb".bst" style file(s) you have used. \subsection{\label{copyright}Copyrighted material and ethical policy} If you wish to make use of previously published material for which you do not own the copyright then you must seek permission from the copyright holder, usually both the author and the publisher. It is your responsibility to obtain copyright permissions and this should be done prior to submitting your article. If you have obtained permission, please provide full details of the permission granted---for example, copies of the text of any e-mails or a copy of any letters you may have received. Figure captions must include an acknowledgment of the original source of the material even when permission to reuse has been obtained. Please read our ethical policy (available at \verb"http://authors.iop.org/ethicalpolicy") before writing your article. \subsection{Naming your files} \subsubsection{General.} Please name all your files, both figures and text, as follows: \begin{itemize} \item Use only characters from the set a to z, A to
# Mass spectrometry imaging: Finding differential analytes

### Overview

Questions
• Can N-linked glycans from FFPE tissues be detected by MALDI imaging?
• Can potential N-linked glycans be identified by an additional LC-MS/MS experiment?
• Do specific kidney compartments have different N-linked glycan compositions?

Objectives
• Combining MSI datasets while using the information about each subfile in further steps.
• Preprocessing raw MSI data.
• Performing supervised and unsupervised statistical analysis.

Requirements
Time estimation: 2 hours
Level: Intermediate
Last modification: Mar 12, 2021

# Introduction

Mass spectrometry imaging (MSI) is used to measure the spatial distribution of hundreds of biomolecules in a sample. A mass spectrometer scans over the entire sample and collects a mass spectrum every 5-200 µm. This results in thousands of spots (or pixels), for each of which a mass spectrum is acquired. Each mass spectrum consists of hundreds of analytes that are measured by their mass-to-charge (m/z) ratio. For each analyte, the peak intensity in the mass spectra of every pixel is known and can be combined to map the spatial distribution of the analyte in the sample. The technique has a broad range of applications as it is able to measure many different kinds of analytes such as peptides, proteins, metabolites or chemical compounds in a large variety of samples such as cells, tissues and liquid biopsies. Application areas include pharmacokinetic studies, biomarker discovery, molecular pathology, forensic studies, plant research and material sciences. The strength of MSI is the simultaneous analysis of hundreds of analytes in an unbiased, untargeted, label-free, fast and affordable measurement while maintaining morphological information.

Depending on the analyte of interest and the application, different mass spectrometers are used. A mass spectrometer measures the analytes by ionizing, evaporating and sorting them by their mass-to-charge (m/z) ratio. Put simply, a mass spectrometer consists of three parts: an ionization source, a mass analyzer and a detector. The most common ionization sources for MSI are MALDI (Matrix Assisted Laser Desorption/Ionization), DESI (Desorption Electrospray Ionization) and SIMS (Secondary Ion Mass Spectrometry). One common type of mass spectrometer for MSI is a MALDI Time-Of-Flight (MALDI-TOF) device. During MALDI ionization, a laser is fired onto the sample, which has been covered with a special matrix that absorbs the laser energy and transfers it to the analytes. This process vaporizes and ionizes the analytes. As they are now charged, they can be accelerated in an electric field towards the TOF tube. The time of flight through the tube to the detector is measured, which allows calculation of the mass-to-charge ratio (m/z) of the analyte, since the time of flight depends on m/z.

During measurement, complete mass spectra with hundreds of m/z - intensity pairs are acquired in thousands of sample spots, leading to large and complex datasets. Each mass spectrum is annotated with coordinates (x,y) that define its location in the sample. This allows visualization of the intensity distribution of each m/z feature in the sample as a heatmap. Depending on the analyte of interest, the sample type and the mass spectrometer, the sample preparation steps as well as the properties of the acquired data vary. Human tissues are often stained with hematoxylin and eosin (H&E) after mass spectrometry imaging.
This allows better visualization of tissue histology and can be used to compare the MSI heatmaps with the real morphology. The stained image of the tissue can also be used to define regions of interest that can be extracted from the MSI data and can be subjected to supervised statistical analysis. As MSI measures only unfractionated molecules (MS1), identification of the resulting m/z features requires matching them to a list of known m/z features. The easiest approach is to match the MSI m/z features to existing databases such as UniProt for proteins or lipidmaps for lipids. More accurate identification is obtained by performing additional mass spectrometry experiments (either on the same or on a similar sample, e.g. adjacent slice from a tissue) that also measure the fragmented molecules (MS2) and thereby also allow accurate identification of isobaric molecules. This tandem mass spectrometry approach can either be done in situ directly on the sample or by transferring the sample to a tube and bringing it into solution for liquid-chromatography tandem mass spectrometry (LC-MS/MS). In this tutorial we will determine and identify N-linked glycans by differential analysis of a control (untreated) murine kidney section and a section treated with PNGase F. The data analysis steps can be transferred to any other application that requires finding differential analytes, such as for biomarker discovery. ### Agenda In this tutorial, we will cover: # N-linked glycans in murine kidney dataset In this tutorial we will use the murine kidney N-glycan dataset generated in the lab of Peter Hoffmann (Adelaide, Australia). N-linked glycans are carbohydrates (consisting of several sugar molecules) that are attached as a post-translational modification to the carboxamide side chain of an asparagine residue of a protein. Changes in N-linked glycans have been observed in several diseases. The Hoffmann lab acquired the presented dataset to show that their automated sample preparation method for N-glycan MALDI imaging successfully analyses N-glycans from formalin-fixed paraffin-embedded murine kidney tissue (Gustafsson et al.). The datasets generated in this study were made publicly available via the PRIDE Archive. Three 6 µm sections of formalin-fixed paraffin-embedded murine kidney tissue were prepared for MALDI imaging. To release N-linked glycans from the proteins, PNGase F was printed onto two kidney sections. In the third kidney section, one area was printed with buffer to serve as a control, and another was covered with N-glycan calibrants Figure 4a-c. 2,5-DHB matrix was sprayed onto the tissue sections and MALDI imaging was performed with 100 µm intervals using a MALDI-TOF/TOF instrument. To make computation times feasible for this training, we reduced all datasets to a m/z range of 1250 – 2310 and resampled the m/z values with a step size of 0.1. The main part of the training will be performed with the control and one PNGase F treated kidney file, which were both restricted to representative pixels (Figure 2). To test the results on the complete dataset we also provide a file in which all four (m/z reduced and resampled) files (control, calibrants, treated kidney 1 and treated kidney 2) are combined and TIC-normalized. All size-reduced training datasets are available via Zenodo. We will combine the cropped control and the cropped treated1 file and perform similar steps to those described by Gustafsson et al.. 
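Before moving on to the hands-on steps, here is a small conceptual sketch (plain NumPy, not part of the Galaxy workflow used in this tutorial) of two ideas that appear throughout: TIC normalization and building an analyte distribution heatmap for one m/z feature. The array shapes, pixel coordinates and intensities below are synthetic; only the m/z range (1250–2310), the 0.1 resampling step and the calibrant mass 1257.42 come from this tutorial.

```python
import numpy as np

# hypothetical MSI cube: one spectrum per pixel, common m/z axis resampled at 0.1
mz = np.arange(1250.0, 2310.0, 0.1)                  # m/z bins, as in the training data
intensities = np.random.rand(500, mz.size)           # 500 pixels x len(mz) features (toy data)

# TIC (total ion current) normalization: scale each spectrum by its summed intensity
tic = intensities.sum(axis=1, keepdims=True)
intensities_tic = intensities / np.where(tic == 0, 1.0, tic)

# intensity image of the feature closest to one calibrant, e.g. m/z 1257.42
idx = np.argmin(np.abs(mz - 1257.42))
pixel_xy = np.random.randint(0, 50, size=(500, 2))   # toy (x, y) coordinates per pixel
image = np.zeros((50, 50))
image[pixel_xy[:, 0], pixel_xy[:, 1]] = intensities_tic[:, idx]
```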
These steps include combining files, preprocessing, unsupervised and supervised analysis, identification of N-glycans and generation of analyte distribution heatmaps. ## Get data 1. Create a new history and give it a name. ### Tip: Creating a new history Click the new-history icon at the top of the history panel. If the new-history is missing: 1. Click on the galaxy-gear icon (History options) on the top of the history panel 2. Select the option Create New from the menu 2. Import the files from Zenodo. Upload all three datasets (one by one) to Galaxy via the composite option and rename them to ‘control’, ‘treated1’ and ‘all_files’. https://zenodo.org/record/2628280/files/conrol.imzml https://zenodo.org/record/2628280/files/control.ibd https://zenodo.org/record/2628280/files/treated1.imzml https://zenodo.org/record/2628280/files/treated1.ibd https://zenodo.org/record/2628280/files/all_files.imzml https://zenodo.org/record/2628280/files/all_files.ibd 3. Paste the link of the tabular file with the N-glycan identities into the regular upload and rename it to ‘Glycan IDs’: https://zenodo.org/record/2628280/files/Glycan_IDs.tabular 4. Create a tabular file with the three glycan calibrants that were used in the study and rename it to ‘calibrants’: M+Na+ composition 1257.4225 Man5GlcNAc2 1542.5551 Man3GlcNAc5 2393.8457 Man3Gal4GlcNAc6 5. Add a tag to each imzML file (‘control’, ‘treated1’ and ‘all_files’) and to the tabular file (‘Glycans’): • Click on the dataset • Click on galaxy-tags Edit dataset tags • Add a tag starting with # Tags starting with # will be automatically propagated to the outputs of tools using this dataset. • Check that the tag is appearing below the dataset name # Preparing the files for statistical analysis ## Combining control and treated1 files ### hands_on Hands-on: Combining files 1. MSI combine tool with the following parameters: • param-files “MSI data”: control (uploaded dataset) and treated1 (uploaded
& $-1.1$ & 0.33 & 0.08 & 1,2 \\ GK Per & 1901 2 21 & $ 151.0$ & $-10.1$ & 0.2 & $0.46\pm0.03$& 0.29 & 1,2 \\ \enddata \tablenotetext{a}{Peak magnitude when available, otherwise discovery magnitude} \tablenotetext{b} {(1) {\tt http://asd.gsfc.nasa.gov/Koji.Mukai/novae/novae.html}; (2) \citet{dar12}.} \end{deluxetable*} Nevertheless, evidence for possible incompleteness at bright magnitudes can be appreciated by considering how the rate of discovery of novae reaching apparent magnitude $m$, $N(m)$, has varied over time. Figure~\ref{fig2} shows the cumulative distribution of nova discovery magnitudes during the period between 1900 and 2015. Of the seven novae that reached $m=2$ or brighter, six were discovered in the first half (58 yrs) of this period (see Table~\ref{tab3}). Only one nova, Nova Cyg 1975 (V1500 Cyg), which reached $m=1.9$, has been discovered in the second 58 year interval (actually, in the last 73 years!). We can assess the significance of this result by computing the probability that $N$ or fewer novae with $m\leq2$ would be found within any consecutive 58 year span in the 116 years since 1900. That probability is given by equation~(1), where $M-1$ novae must erupt within the same 58 year window. In this case, where $N=1$, we find $P = 6\times0.5^6~(\rm{one~nova})~+ 0.5^6~(\rm{no~novae}) = 0.11$. Assuming that the true nova rate has been constant over time, a KS test reveals a similar result, namely only a 7\% chance that novae with $m\leq2$ would be distributed as shown in Figure~\ref{fig2}. Despite the fact that these probabilities do not rule out 100\% completeness for $m\leq2$ at the $2\sigma$ level, the probabilities are small, and suggest that at least one nova was likely missed in recent years. With only seven of a possible eight $m\leq2$ novae being detected since 1900, the completeness becomes $\sim88$\%. Although it appears counterintuitive, \citet{she14} points out that incompleteness at brighter magnitudes may be due in part to the evolution of how amateur astronomers survey the sky, which in recent years has turned to the use of telescopes equipped with CCD detectors rather than memorizing the sky and conducting wide-area visual observations. In addition, seasonal effects (i.e., sun in Sagittarius) will also have diminished the observed rate of fainter novae concentrated towards the Galactic center (see Figure~\ref{fig3}). As mentioned earlier, after considering a variety of selection effects \citet{she14} arrived at a completeness of just 43\% for novae brighter than $m=2$. Because only a summary of this work has been published, it is not possible to critically evaluate the assumptions made in arriving at this value. It does, however, seem quite surprising that more than half of the novae reaching second magnitude since 1900 could have been missed. Whether the completeness is close to 90\% as estimated above, or whether Schaefer is correct that we have missed more than half of the second magnitude and brighter novae, one thing seems clear: the assumption of 100\% completeness for $m\leq2$ made earlier by \citet{sha02} is likely to be overly optimistic. In the analysis to follow, we constrain the Galactic nova rate by adopting plausible limits on the completeness of bright novae. Given that it seems difficult to understand how the completeness could be lower than the value determined by \citet{she14}, we have adopted $c=0.43$ as a lower limit on the completeness of novae with $m\leq2$.
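As a quick numerical check of the arithmetic quoted above (the derivation behind the expression is the paper's equation~(1), which is not reproduced here), the two terms can be evaluated directly:
\begin{verbatim}
# Python check of the probability quoted in the text
p_one_nova = 6 * 0.5**6    # "one nova" term
p_no_novae = 0.5**6        # "no novae" term
print(p_one_nova + p_no_novae)   # 0.109..., i.e. ~0.11
\end{verbatim}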
The possibility that all novae with $m\leq2$ have been detected since 1900 provides a hard upper limit of 100\% on the completeness. We argue that the best estimate of the completeness lies between these limits, and follows from two considerations. As described earlier, the sharp drop in the number of $m\leq2$ novae observed over the past $\sim$60~yr suggests that at least one out of eight bright novae has likely been missed in recent years. If so, a value of $c=0.88$ would seem to offer a reasonable estimate for the completeness of novae with $m\leq2$. This estimate is supported by considering that even the brightest novae will likely be missed if they erupt within $18^{\circ}$ (1.2 hr) of the sun. Based on this correction alone, the completeness drops to $\sim90$\%. Thus, in computing the models described in the following section, we simply take $c=0.9$ as our best estimate of the completeness for novae with $m\leq2$. For comparison, we also consider models for $c=0.43$, which we take as a lower limit to the completeness of novae reaching second magnitude or brighter. \begin{figure} \includegraphics[scale=0.345]{f4.pdf} \caption{Cumulative distribution of novae as a function of peak magnitude. Note the higher average nova rate for novae with $m\lesssim2$ during the period $1900-1950$ compared with the period from 1900 to the present. The red square and blue diamond represent the value of log $N(2)$ for the full $1900-2015$ interval corrected for the $c=0.43$ completeness of \citet{she14}, and for our $c=0.9$ completeness estimate, respectively. The error bars represent the Poisson error for the seven novae discovered with $m\leq2$. The dashed lines represent the expected increase in the nova rate for an infinite uniform distribution of nova progenitors (log $N \propto 0.6m$), and for an infinite disk distribution (log $N \propto 0.4m$). \label{fig4}} \end{figure} \section{Model} The annual discovery rate of novae brighter than $m=10$ since 1900 as a function of magnitude is shown in Figure~\ref{fig4}. The corrected $m\leq2$ nova discovery rates for our estimated completeness of $c=0.9$ and the lower completeness advocated by \cite{she14} are shown as the blue diamond and the red square, respectively. For comparison, we have also plotted the values of $N(m)$ computed from a sub-sample of novae discovered during the period between 1900 and 1950. We find that the annual discovery rate for novae reaching second magnitude or brighter during this earlier time span is approximately twice that for the full $1900 - 2015$ period, and is consistent with that expected after applying Schaefer's incompleteness estimate. In the analysis to follow, we estimate the Galactic nova rate by extrapolating the local nova rate ($m\leq2$) to the entire Galaxy based on a model consisting of separate bulge and disk components. The bulge component, $\rho_b$, is modeled using a standard \cite{dev59} luminosity profile, while the disk nova density, $\rho_d$, is assumed to have a double exponential dependence on distance from the Galactic center and on the distance from the Galactic plane.
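For illustration only, a minimal sketch of the double-exponential disk component just described is given below; the normalization and scale lengths are hypothetical placeholders, not the values adopted in this paper.
\begin{verbatim}
import numpy as np

def rho_disk(R, z, rho0=1.0, h_R=3.0, h_z=0.3):
    # double-exponential disk: rho0 * exp(-R/h_R) * exp(-|z|/h_z)
    # rho0, h_R, h_z (kpc) are placeholder values, not those adopted here
    return rho0 * np.exp(-R / h_R) * np.exp(-np.abs(z) / h_z)

print(rho_disk(R=8.0, z=0.1))   # relative density near the solar position
\end{verbatim}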
\begin{deluxetable}{lrrrc} \tablenum{4} \label{tab4} \tablewidth{0pt} \tablecolumns{5} \tablecaption{Distances of Novae from the Galactic Plane} \tablehead{\colhead{Nova} & \colhead{$d$ (kpc)} & \colhead{$b$ ($^{\circ}$)} & \colhead{$z$ (kpc)} & \colhead{Ref\tablenotemark{a}} } \startdata CI Aql & 5.00 &$ -0.8$& 0.070 & 1 \\ V356 Aql & 1.70 &$ -4.9$& 0.145 & 1 \\ V528 Aql & 2.40 &$ -5.9$& 0.247 & 1 \\ V603 Aql & 0.25 &$ 0.8$& 0.003 & 2 \\ V1229 Aql & 1.73 &$ -5.4$& 0.163 & 1 \\ T Aur & 0.96 &$ -1.7$& 0.028 & 1 \\ IV Cep & 2.05 &$ -1.6$& 0.057 & 1 \\ V394 CrA & 10.00 &$ -7.7$& 1.340 & 1 \\ T CrB & 0.90 &$ 48.2$& 0.671 & 1 \\ V476 Cyg & 1.62 &$ 12.4$& 0.348 & 1 \\ V1500 Cyg & 1.50 &$ -0.1$& 0.003 & 1 \\ V1974 Cyg & 1.77 &$ -9.6$& 0.295 & 1 \\ V2491 Cyg & 13.30 &$ 4.4$& 1.020 & 1 \\ HR Del & 0.76 &$ -14.0$& 0.184 & 1 \\ KT Eri & 6.50 &$ -31.9$& 3.435 & 1 \\ DN Gem & 0.45 &$ 14.7$& 0.114 & 1 \\ DQ Her & 0.39 &$ 26.4$& 0.173 & 2 \\ V533 Her & 0.56 &$ 24.3$& 0.230 & 1 \\ CP Lac & 1.00 &$ -0.8$& 0.014 & 1 \\ DK Lac & 3.90 &$ -5.4$& 0.367 & 1 \\ DI Lac & 2.25 &$ -4.9$& 0.192 & 1 \\ IM Nor & 3.40 &$ 3.0$& 0.178 & 1 \\ RS Oph & 1.40 &$ -10.4$& 0.253 & 1 \\ V849 Oph & 3.10 &$ 13.5$& 0.724 & 1 \\ V2487 Oph & 12.00 &$ 8.0$& 1.670 & 1 \\ GK Per & 0.48 &$ -10.1$& 0.084 & 2 \\ RR Pic & 0.52 &$ -25.4$& 0.223 & 2 \\ CP Pup & 1.14 &$ -0.4$& 0.008 & 2 \\ T Pyx & 4.50 &$ 10.2$& 0.797 & 1 \\ U Sco & 12.00 &$ -60.2$& 10.413 & 1 \\ V745 Sco & 7.80 &$ -60.2$& 6.769 & 1 \\ EU Sct & 5.10 &$ -2.8$& 0.249 & 1 \\ FH Ser & 0.92 &$ 5.8$& 0.093 & 1 \\ V3890 Sgr & 7.00 &$ -6.4$& 0.780 & 1 \\ LV Vul & 0.92 &$ 0.8$& 0.013 & 1 \\ NQ Vul & 1.28 &$ 1.3$& 0.029 & 1 \\ CT Ser & 1.43 &$ 47.6$& 1.056 & 1 \\ RW UMi & 4.90 &$ 33.2$& 2.683 & 1 \\ V3888 Sgr & 2.50 &$ 5.4$& 0.235 & 1 \\ PW Vul & 1.75 &$ 5.2$& 0.159 & 1 \\ QU Vul & 1.76 &$ -6.0$& 0.184 &
n) /n)$ convergence when the minimal risk $L^*=0$. For regression problems, similar fast rate when $L^*=0$ can be shown (it can be deduced after some argument from Assertion 2 on page 204 in \cite{Vap82}; an exact formulation is available, for example, in \cite{srebro2010smoothness}). Lee, Bartlett and Williamson \cite{lbw-iclsl-98} showed that the excess risk of the least squares estimator on $\F$ can attain the rate $O((\log n)/n)$ without the assumption $L^*=0$. Instead, they assumed that the class $\F$ is convex and has finite pseudo-dimension. Additionally, it was shown that the $n^{-1/2}$ rate cannot be improved if the class is non-convex and the estimator is a selector (that is, forced to take values in $\F$). In particular, the excess risk of ERM and of any selector on a finite class $\F$ cannot decrease faster than $\sqrt{(\log |\F|)/n}$ \cite{jrt-lma-08}. Optimality of ERM for certain problems is still an open question. Independently of this work on the excess risk in the distribution-free setting of statistical learning, Nemirovskii \cite{nemirovski2000topics} proposed to study the problem of aggregation, or mimicking the best function in the given class, for regression models. Nemirovskii~\cite{nemirovski2000topics} outlined three problems: model selection, convex aggregation, and linear aggregation. The notion of optimal rates of aggregation based on the minimax regret is introduced in \cite{tsybakov03optimal}, along with the derivation of the optimal rates for the three problems. In the following decade, much work has been done on understanding these and related aggregation problems \cite{Yang04,jntv-raemdaa-05,jrt-lma-08,Lounici07,rigollet2011exponential}. For recent developments and a survey we refer to \cite{LecueHabil,RigTsySTS12}. In parallel with this research, the study of the excess risk blossomed with the introduction of Rademacher and local Rademacher complexities \cite{Kol01, KolPan2000,bm-rgcrbsr-02,bousquet2002some,bbm-lrc-05,koltchinskii2006local}. These techniques provided a good understanding of the behavior of the ERM method. In particular, if $\F$ is a \emph{convex} subset of $d$-dimensional space, Koltchinskii \cite{koltchinskii2006local,koltchinskii2011oracle} obtained a sharp oracle inequality with the correct rate $d/n$ for the excess risk of least squares estimator on $\F$. Also, for convex $\F$ and $p\in(0,2)$, the least squares estimator on $\F$ attains the correct excess risk rate $n^{-\frac{2}{p+2}}$ under the assumptions of Theorem~\ref{thm:main_c}. This can be deduced from Theorem~5.1 in \cite{koltchinskii2011oracle}, remarks after it and in Example~4 on page 87 of \cite{koltchinskii2011oracle}. However, the convexity assumption appears to be crucial; without this assumption Koltchinskii \cite[Theorem 5.2]{koltchinskii2011oracle} obtains for the least squares estimator only a non-sharp inequality with leading constant $C>1$, cf. \eqref{eq:nonexact_oracle}. As follows from the results in Section~\ref{sec:main} our procedure overcomes this problem. Among a few of the estimators considered in the literature for general classes $\F$, empirical risk minimization on $\F$ has been one of the most studied. As mentioned above, ERM and other selector methods are suboptimal when the class $\F$ is finite. For the regression setting with finite $\F$, the approach that was found to achieve the optimal rate for the excess risk in expectation is through exponential weights with averaging of the trajectory~\cite{yang1999information,catoni,jrt-lma-08,DalTsy12}. 
However, Audibert \cite{audibert2007progressive} showed that, for the regression with random design, exponential weighting is suboptimal when the error is measured by the probability of deviation rather than by the expected risk. He proposed an alternative method, optimal both in probability and in deviation, which involves finding an ERM on a star connecting a global ERM and the other $|\F|-1$ functions. In \cite{lecue2009aggregation}, the authors exhibited another deviation optimal method which involves sample splitting. The first part of the sample is used to localize a convex subset around ERM and the second -- to find an ERM within this subset. Recently yet another procedure achieving the deviation optimality has been proposed in \cite{LecueRigollet}. It is based on a penalized version of exponential weighting and extends the method of \cite{dai2012deviation} originally proposed for regression with fixed design. The methods of \cite{audibert2007progressive,lecue2009aggregation,LecueRigollet} provide examples of sharp MS-aggregates that can be used at the third step of our procedure. We close this short summary with a connection to a different literature. In the context of prediction of deterministic individual sequences with logarithmic loss, Cesa-Bianchi and Lugosi \cite{cbl-wcbllp-01} considered regret with respect to rich classes of ``experts''. They showed that mixture of densities is suboptimal and proposed a two-level method where the rich set of distributions is divided into small balls, the optimal algorithm is run on each of these balls, and then the overall output is an aggregate of outputs on the balls. They derived a bound where the upper limit of the Dudley integral is the radius of the balls. This method served as an inspiration for the present work. \section{Proofs of Theorems~\ref{thm:main_c}-\ref{thm:main_aggr} and \ref{th:approx:error}} \label{sec:proofs:thms} We first state some auxiliary lemmas. \begin{lemma} \label{lem:bound_on_r-star} The following values can be taken as localization radii $r^*=r^*(\G)$ for $\G=\{(f-g)^2: f,g\in\F\}$. \begin{itemize} \item[(i)] For any class $\F\subseteq \{f:\ 0\le f\le 1\}$, and $n\ge 2$, \begin{align} \label{eq:rnstar-large} r^* = C\log^3(n) \Rad_n^2(\F). \end{align} \item[(ii)] If $\F\subseteq \{f:\ 0\le f\le 1\}$ and the empirical covering numbers exhibit polynomial growth $\sup_{S\in \Z^n}\cN_2(\F,\rho,S) \leq \left(\frac{A}{\rho}\right)^v$ for some constants $A<\infty$, $v>0$, then $$r^*= C\frac{v}{n}\log\left(\frac{en}{v}\right)\,$$ whenever $n\ge C_Av$ with $C_A>1$ large enough depending only on $A$. \item[(iii)] If $\F$ is a finite class with $|\F|\ge 2$, $$r^*= C\frac{\log|\F|}{n}\,.$$ \end{itemize} \end{lemma} The proof of this lemma is given in the Appendix. The following lemma is a direct consequence of Theorem~\ref{thm:localization} proved in the Appendix. \begin{lemma} \label{lem:empbound} For any class $\F\subseteq \{f:\ 0\le f\le 1\}$ and $\delta>0$, with probability at least $1-4\delta$, \begin{align} \label{eq:lem:empbound} \|f-f'\|^2 \leq 2d_S^2(f,f') + C(r^* + \beta), \qquad \forall \ f,f'\in \F, \end{align} where $\beta=(\log (1/\delta)+\log\log n)/n$, and $r^*=r^*(\G)$ for $\G=\{(f-g)^2: f,g\in\F\}$. \end{lemma} We will also use the following bound on the Rademacher average in terms of the empirical entropy \cite{bartlett2006notes,srebro2010smoothness}. 
\begin{lemma} \label{lem:rad:ent} For any class $\F\subseteq \{f:\ 0\le f\le 1\}$, \begin{align}\label{rad:ent} \hat{\Rad}_n(\F,S) &\leq \inf_{\alpha\geq 0}\left\{ 4\alpha + \frac{12}{\sqrt{n}}\int_{\alpha}^1 \sqrt{\log \cN_2(\F,\rho,S)}\,d\rho \right\} \,.\end{align} \end{lemma} \begin{proof}[\textbf{Proof of Theorem~\ref{thm:main_c}}] We consider only the case $p\in(0,2]$ since the results of the theorem for $p> 2$ follow immediately from Theorem~\ref{th:approx:error}. Indeed, it suffices to apply Theorem~\ref{th:approx:error} with $\Delta=1$ (an upper bound on the diameter of $\F$) and $\Delta=0$ to obtain (\ref{4}) and (\ref{5}), respectively. Assume without loss of generality that $A=1$, i.e., $\sup_{S\in \Z^n}\log \cN_2(\F, \rho,S) \le \rho^{-p}$. For $p\in(0,2)$, the bound (\ref{rad:ent}) with $\alpha=0$ combined with \eqref{eq:rnstar-large} yields \begin{align}\label{eq:rad:p:in:02} {\Rad}_n(\F) &\leq \frac{12}{\sqrt{n}(1-p/2)}\,, \qquad r^* \leq C\,\frac{(\log n)^3}{n} \end{align} for some absolute constant $C$. Thus, \begin{align}\label{thm2:proof1} \gamma &\le C\left(\epsilon + \frac{(\log n)^{3/2}}{\sqrt{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)\, , \\ \gamma\sqrt{r^*} &\leq C(\log n)^{3/2} \left( \frac{\epsilon}{\sqrt{n}}+ \frac{(\log n)^{3/2}}{n} + \frac{\sqrt{\log(1/\delta)}}{n}\right)\,. \end{align} These inequalities together with (\ref{1}) and (\ref{2}) yield that for $0<\delta<1/2$, with probability at least $1-2\delta$, \begin{equation}\label{thm2:proof2} L\left(\tilde{f}\right)-L^* \leq C\left(\frac{\epsilon^{-p}}{n} + \frac{\log(1/\delta) }{n} + \gamma\sqrt{r^*} + \frac{\gamma^{1-p/2}}{\sqrt{n}}\right)\,. \end{equation} The value of $\epsilon$ minimizing the right hand side in (\ref{thm2:proof2}) is $\epsilon = n^{-1/(2+p)}$, which justifies the choice made in the theorem. Notably, the logarithmic factor arising from $r^*$ only appears together with the lower order terms and the summand $\gamma\sqrt{r^*}$ does not affect the rate. For $\epsilon = n^{-1/(2+p)}$ the right hand side of (\ref{thm2:proof2}) is bounded by $Cn^{-\frac{2}{2+p}}$ ignoring the terms with $\log(1/\delta)$ that disappear when passing from the bound in probability to that in expectation. Thus, the expected excess risk is bounded by $Cn^{-\frac{2}{2+p}}$, which proves (\ref{4}) for $p\in(0,2)$. Analogous argument yields (\ref{4}) for $p=2$. Then, the only difference is that the bounds on the Rademacher complexity and on $r^*$ involve an extra logarithmic factor, which does not affect the final rate as it goes with lower order terms. \end{proof} \begin{proof}[\textbf{Proof of Theorem~\ref{thm:main_VC}}] Throughout this proof, $C$ is a generic notation for positive constants that may depend only on $A$. Since $\epsilon=n^{-1/2}$ the expression for $r^*$ in Lemma~\ref{lem:bound_on_r-star}~(ii) leads to the bounds $\gamma \leq C\left(\sqrt{\frac{v\log(en/v)}{n}}+ \sqrt{ \frac{\log(1/\delta)}{n}}\right),$ and $\gamma\sqrt{r^*} \leq C\left(\frac{v\log(en/v)}{n}+ \frac{\log(1/\delta)}{n}\right).$ Next, since $ \cN_2(\F,\rho,S')\le \max\{1, (A/\rho)^v\}$ we get \begin{align*} \frac1{\sqrt{n}}\int_{0}^{C\gamma} \sqrt{\log\cN_2(\F, \rho, S')}\,d\rho \leq \sqrt{\frac{v}{n}} \int_{0}^{C\gamma/A\wedge 1} \sqrt{\log (1/t)}~dt \le C \sqrt{\frac{v}{n}}\, \gamma \sqrt{\log\left({C}/{\gamma}\right)\vee 1}\, \end{align*} where the last inequality is due to~(\ref{eq:vc:integral}). We assume w.l.o.g. 
that in the last expression $C$ is large enough to guarantee that the function $\gamma\mapsto \gamma \sqrt{\log\left({C}/{\gamma}\right)\vee 1}$ is increasing, so that we can replace
Fig. LABEL:LEFD_two_statesA). Note that the crossing points are not exact points representing changes in the diffusive states because different diffusive states coexist in a time window of . Therefore, we define the transition time as . The term is not exact when the threshold is not at the middle of two successive diffusive states. If only one transition occurs in the time interval , which is a physically reasonable assumption, the transition times represent the points of changes in the diffusive states. Note that some transition times of the diffusive states will still be missing. To correct the transition times obtained above, we test whether successive diffusive states are significantly different. Since we know the transition times of the diffusive states, we can estimate the diffusion coefficient in the time interval $[t_i, t_{i+1}]$: the diffusion coefficient of the $i$th diffusive state is given by
$$
\bar{D}_i \equiv \frac{\int_{t_i}^{t_{i+1}-\Delta} \{ r(t'+\Delta) - r(t') \}^2 \, dt'}{2 d \Delta\, (t_{i+1} - t_i - \Delta)} . \qquad (7)
$$
Since we consider a situation that is sufficiently large (), fluctuations of $\bar{D}_i$ can be approximated as a Gaussian distribution by the central limit theorem. According to a statistical test, the $i$th and $(i+1)$th states can be considered as the same state if there exists a value $D$ such that both states ($k=i$ and $k=i+1$) satisfy
$$
D - \sigma_k Z \leq \bar{D}_k \leq D + \sigma_k Z , \qquad (8)
$$
where $\sigma_k$ is the variance of the TDC with the time window and the diffusion coefficient , which is given by , and $Z$ is determined by the level of statistical significance, e.g., when the $p$-value is 0.05. Therefore, the transition times can be corrected if the two successive diffusive states are the same. We repeat this procedure: Eq. (7) is calculated again after correcting the transition times, and the above test is repeated to correct the transition times. Furthermore, one can improve the transition times by changing the thresholds around the transition times. The detailed procedure and flowchart of our method are given in the Supplemental Material SM ().

Here, we test our method with the trajectories of three different LEFD models, where the number of diffusive states is two, three, and uncountable. The crossover times in the RSD are finite for all models. In the Langevin equation with the two-state diffusivity, Fig. 1B shows the diffusion coefficient obtained by our method. Almost all diffusive states can be classified into two states according to the condition (8) with . Moreover, the deviations of the transition times from the actual transition times are within 0.25. Thus, we successfully extract the underlying diffusion process from a single trajectory after obtaining the characteristic time scale of the diffusive states. We introduce different relaxation times in the two sojourn-time distributions (we use the exponential distribution for both sojourn-time distributions) and examine the effects of the tuning parameter. Figure 2 shows the TDCs for different tuning parameters , , and 0.01, corresponding to , 1.6, and 0.16, respectively. As clearly seen, when the tuning parameter is small, the fluctuations in the TDC become large. Therefore, inaccurate transition times may be detected when is too small. On the other hand, the actual transition times may not be detected when is too large. In fact, the transition times around cannot be detected in the case of the green dotted line (). As a result, the tuning parameter can be set to or between 0.1 and 0.01. Next, we analyze the LEFD with the three-state diffusivity.
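As a side note, the segment estimator in Eq. (7) is easy to prototype. The following is a minimal sketch (not the implementation used in this work) of a discretized analogue of Eq. (7): it estimates a per-segment diffusion coefficient from a single trajectory given candidate transition times; all parameter values are purely illustrative.

```python
import numpy as np

def segment_diffusion_coefficient(r, t_start, t_end, delta, dt, dim=2):
    """Discrete analogue of Eq. (7): time-averaged squared displacement over
    [t_start, t_end) with lag `delta` samples, divided by 2*dim*delta*dt.

    r : (N, dim) array of positions sampled every `dt`;
    t_start, t_end : segment boundaries given as sample indices."""
    disp = r[t_start + delta:t_end] - r[t_start:t_end - delta]   # r(t'+D) - r(t')
    msd = np.mean(np.sum(disp**2, axis=1))                       # time average over t'
    return msd / (2.0 * dim * delta * dt)

# toy usage with a synthetic two-state trajectory (illustrative parameters)
rng = np.random.default_rng(0)
dt, D1, D2, n = 0.01, 0.1, 1.0, 20000
steps = np.concatenate([
    rng.normal(scale=np.sqrt(2 * D1 * dt), size=(n // 2, 2)),
    rng.normal(scale=np.sqrt(2 * D2 * dt), size=(n // 2, 2)),
])
traj = np.cumsum(steps, axis=0)
print(segment_diffusion_coefficient(traj, 0, n // 2, delta=10, dt=dt))   # ~D1
print(segment_diffusion_coefficient(traj, n // 2, n, delta=10, dt=dt))   # ~D2
```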
In this three-state model, the sojourn-time distributions are exponential distributions, and their relaxation times are the same in each state (). For the three-state LEFD, one can obtain several diffusive states from a single trajectory by our method after calculating the crossover time in the RSD using many trajectories. Figure 3 shows the diffusion coefficient obtained by our method, where we revised the threshold using the procedures described in the Supplemental Material SM (). As shown in Fig. 3A, the transition times are correctly detected. Moreover, almost all diffusive states belong to the three diffusive states using , and the distribution of the estimated diffusion coefficients has three peaks corresponding to the exact diffusion coefficients (see Fig. 3B). Finally, we apply our method to a diffusion process with an uncountable number of diffusive states. In particular, we use the annealed transit time model (ATTM) Massignan et al. (2014); Akimoto and Yamamoto (2016). The ATTM was proposed to describe heterogeneous diffusion in living cells Massignan et al. (2014); Manzo et al. (2015). The diffusion process is described by the LEFD where is coupled to the sojourn time. When the sojourn time is , the diffusion coefficient is given by . Here, we assume that the sojourn-time distribution follows an exponential distribution . One can obtain by the RSD analysis Akimoto and Yamamoto (2016). Figure 4A shows the diffusion coefficient obtained by our method. Because the variance of is not large, the transition times are not detected as accurately as in the other two models. However, the transition times for the highly diffusive states can be detected correctly. Moreover, Fig. 4B shows the relation between the obtained diffusion coefficient and the sojourn times, which exhibits a power-law relation . Therefore, our method can also be applied to systems with an uncountable number of diffusive states. Diffusivity changes with time in temporally/spatially heterogeneous environments such as cells and supercooled liquids. It is difficult to estimate such a fluctuating diffusivity from single-particle trajectories because one does not have information about the transition times when the diffusivity changes. In this paper, we have proposed a new method for detecting the transition times from single trajectories. Our method is based on a fluctuation analysis of the time-averaged MSD to extract information on the characteristic time scale of the system. We have applied this method to three different diffusion processes, i.e., the LEFD with two states, the LEFD with three states, and the ATTM, which has an uncountable number of diffusive states. Our method successfully extracts the transition times of the diffusivities and estimates the fluctuating diffusion coefficients in the three models. Since our method requires only single-particle trajectories, it will be useful and important in experimental applications. Furthermore, a slightly modified version of this method can also be applied to time series of state-transition processes. E.Y. was supported by an MEXT (Ministry of Education, Culture, Sports, Science and Technology) Grant-in-Aid for the “Building of Consortia for the Development of Human Resources in Science and Technology.” ## References • Scher and Montroll (1975) H. Scher and E. W. Montroll, Phys. Rev. B 12, 2455 (1975). • Golding and Cox (2006) I. Golding and E. C. Cox, Phys. Rev. Lett. 96, 098102 (2006). • Manzo et al. (2015) C. Manzo, J. A. Torreno-Pina, P. Massignan, G. J. Lapeyre, M.
Lewenstein,  and M. F. Garcia Parajo, Phys. Rev. X 5, 011021 (2015). • Granéli et al. (2006) A. Granéli, C. C. Yeykal, R. B. Robertson,  and E. C. Greene, Proc. Natl. Acad. Sci. USA 103, 1221 (2006). • Wang et al. (2006) Y. M. Wang, R. H. Austin,  and E. C. Cox, Phys. Rev. Lett. 97, 048302 (2006). • Bouchaud and Georges (1990) J. Bouchaud and A. Georges, Phys. Rep. 195, 127 (1990). • Yamamoto and Onuki (1998a) R. Yamamoto and A. Onuki, Phys. Rev. Lett. 81, 4915 (1998a). • Yamamoto and Onuki (1998b) R. Yamamoto and A. Onuki, Phys. Rev. E 58, 3515 (1998b). • Richert (2002) R. Richert, J. Phys. Cond. Matt. 14, R703 (2002). • Sergé et al. (2008) A. Sergé, N. Bertaux, H. Rigneault,  and D. Marguet, Nat. Methods 5, 687 (2008). • Weron et al. (2017) A. Weron, K. Burnecki, E. J. Akin, L. Solé, M. Balcerek, M. M. Tamkun,  and D. Krap, Sci. Rep. 7, 5404 (2017). • Yamamoto et al. (2017) E. Yamamoto, T. Akimoto, A. C. Kalli, K. Yasuoka,  and M. S. P. Sansom, Sci. Adv. 3, e1601871 (2017). • Montiel et al. (2006) D. Montiel, H. Cang,  and H. Yang, J. Phys. Chem. B 110, 19763 (2006). • Koo and Mochrie (2016) P. K. Koo and S. G. J. Mochrie, Phys. Rev. E 94, 052412 (2016). • Wang et al. (2016) S. Wang, R. Vafabakhsh, W. F. Borschel, T. Ha,  and C. G. Nichols, Nat. Struct. Mol. Biol. 23, 31 (2016). • Chung et al. (2012) H. S. Chung, K. McHale, J. M. Louis,  and W. A. Eaton, Science 335, 981 (2012). • Noji et al. (1997) H. Noji, R. Yasuda, M. Yoshida,  and K. Kinosita Jr, Nature 386, 299 (1997). • Stefani et al. (2009) F. D. Stefani, J. P. Hoogenboom,  and E. Barkai, Phys. Today 62, 34 (2009). • Howorka and Siwy (2009) S. Howorka and Z. Siwy, Chem. Soc. Rev. 38, 2360 (2009). • He et al. (2008) Y. He, S. Burov, R. Metzler,  and E. Barkai, Phys. Rev. Lett. 101, 058101 (2008). • Deng and Barkai (2009) W. Deng and E. Barkai, Phys. Rev. E 79, 011112 (2009). • Akimoto et al. (2011) T. Akimoto, E. Yamamoto, K. Yasuoka, Y. Hirano,  and M. Yasui, Phys. Rev. Lett. 107, 178103 (2011). • Miyaguchi and Akimoto (2011) T. Miyaguchi and T. Akimoto, Phys. Rev. E 83, 062101 (2011). • Uneyama et al. (2012) T. Uneyama, T. Akimoto,  and T. Miyaguchi, J. Chem. Phys. 137, 114903 (2012). • Uneyama et al. (2015) T. Uneyama, T. Miyaguchi,  and T. Akimoto, Phys. Rev. E 92, 032140 (2015). • Miyaguchi et al. (2016) T. Miyaguchi, T. Akimoto,  and E. Yamamoto, Phys. Rev. E 94, 012109 (2016). • (27) See Supplementary Material for the detailed procedures and flowchart of our method. • Massignan et al. (2014) P. Massignan, C. Manzo, J. A. Torreno-Pina, M. F. García-Parajo, M. Lewenstein,  and G. J. Lapeyre, Phys. Rev. Lett. 112, 150603 (2014). • Akimoto and Yamamoto (2016) T. Akimoto and E. Yamamoto, J. Stat. Mech. 2016, 123201 (2016).
\brs{d^* H - \nabla f_+ \lrcorner H}^2 \right.\\ &\ \left. + \frac{1}{6} \brs{H}^2 + 2 \left< \nabla W, \nabla f_+ \right> \right) u + W \left( R u - \frac{1}{4} \brs{H}^2 u \right) - 2 \left< \nabla W, \nabla f_+ \right> u\\ =&\ 2 t \brs{\Rc - \frac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2 t}}^2 u + \frac{t}{2} \brs{d^* H - \nabla f_+ \lrcorner H}^2 u + \frac{1}{6} \brs{H}^2 u\\ &\ + R v_+ - \frac{1}{4} \brs{H}^2 v_+ \end{align*} and the result follows. \end{proof} \end{thm} \begin{defn} \label{d:expandingentropy} Let $(M^n, g, H)$ be a smooth manifold with $H \in \Lambda^3 T^*, dH = 0$. Fix $u \in C^{\infty}$, $u > 0$, and define $f_+$ via $u = \frac{e^{-f_+}}{(4\pi(t - T))^{\frac{n}{2}}}$. The \emph{expanding entropy} associated to this data is \begin{align*} \mathcal W_+(g,H,f_+,\sigma) := \int_M \left[ \sigma \left( \brs{\nabla f_+}^2 + R - \frac{1}{12} \brs{H}^2 \right) - f_+ + n \right] u dV. \end{align*} Furthermore, let \begin{gather*} \mu_+ \left(g, H, \sigma \right) := \inf_{ \left\{ f_+ | \int_M e^{-f_+}{(4 \pi \sigma)^{-\frac{n}{2}}} dV = 1 \right\}} \mathcal W_+ \left(g, H, f_+, \sigma \right), \end{gather*} and \begin{gather*} \nu_+(g, H) := \sup_{\sigma > 0} \mu_+ \left(g,H,\sigma \right). \end{gather*} \end{defn} \begin{cor} \label{c:expandingentropymonotonicity} Let $(M^n, g_t, H_t)$ be a solution to the $(X_t, k_t)$-gauge-fixed generalized Ricci flow equations. Let $u_t$ denote a solution to the $(X_t, k_t)$-gauge-fixed conjugate heat equation, and define $f_+$ via $u = \frac{e^{-f_+}}{(4\pi(t - T))^{\frac{n}{2}}}$. Then \begin{gather*} \begin{split} \frac{d}{dt}& \mathcal W_+(g, H, f_+, t - T)\\ =&\ \int_M \left[ (t - T) \left( 2 \brs{\Rc - \tfrac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2(t-T)}}^2 + \frac{1}{2} \brs{d^* H + \nabla f_+ \lrcorner H}^2 \right) + \frac{1}{6} \brs{H}^2 \right] u dV. \end{split} \end{gather*} \begin{proof} As in the proof of Proposition \ref{p:genFFmonotonicity}, it suffices to consider the flow in any specific gauge, and in this case we pick the standard gauge. Since $\dt dV = R - \tfrac{1}{4} \brs{H}^2$ along a solution to generalized Ricci flow, it follows from Theorem \ref{t:expandingharnack} that \begin{align*} \dt \left( v_+ dV \right) =&\ \left[ (t - T) \left( 2 \brs{\Rc - \tfrac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2(t-T)}}^2 + \frac{1}{2} \brs{d^* H + \nabla f_+ \lrcorner H}^2 \right) + \frac{1}{6} \brs{H}^2 \right] u dV. \end{align*} Integrating this result over $M$ yields the result. \end{proof} \end{cor} \begin{cor} \label{c:expandingharnack} Let $(M^n, g_t, H_t)$ be a solution to generalized Ricci flow on a compact manifold, $t \in [T, T']$. Let $u_t$ be a solution to the conjugate heat equation and define $v_+$ as in Theorem \ref{t:expandingharnack}. Then $\sup \frac{v_+}{u}$ is nondecreasing in $t$. \begin{proof} Without loss of generality assume $T = 0$. 
We compute with respect to the backwards time parameter $\tau = -t$, \begin{align*} \frac{\partial}{\partial \tau} \frac{v_+}{u} =&\ - \frac{\partial}{\partial t} \frac{v_+}{u}\\ =&\ - 2 t\brs{\Rc - \frac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2 t}}^2 - \frac{1}{2} t \brs{d^* H - \nabla f_+ \lrcorner H}^2 -\frac{1}{6} \brs{H}^2\\ &\ - R \frac{v_+}{u} + \frac{1}{4} \brs{H}^2 \frac{v_+}{u} + \frac{\Delta v_+}{u} + \frac{v_+}{u^2} \left( \Delta u + R u - \frac{1}{4} \brs{H}^2 u \right)\\ =&\ \Delta \left( \frac{v_+}{u} \right) + 2 \frac{\left< \nabla v_+, \nabla u \right>}{u^2} - 2 \frac{v_+\brs{\nabla u}^2}{u^3}\\ &\ - 2 t \brs{\Rc - \frac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2 t}}^2 - \frac{1}{2} t \brs{d^* H - \nabla f_+ \lrcorner H}^2 - \frac{1}{6} \brs{H}^2\\ =&\ \Delta \left( \frac{v_+}{u} \right) + 2 \frac{\left< \nabla u, \nabla \frac{v_+}{u} \right>}{u} - 2 t\brs{\Rc - \frac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2t} }^2\\ &\ - \frac{1}{2} t \brs{d^* H - \nabla f_+ \lrcorner H}^2 - \frac{1}{6} \brs{H}^2. \end{align*} The result follows by the maximum principle as in Corollary \ref{c:steadyharnack}. \end{proof} \end{cor} \begin{prop} \label{p:rigidityofgenexp} Let $(M^n, g_t, H_t)$ be a solution to generalized Ricci flow on a compact manifold. The infimum in the definition of $\mu_+$ is attained by a unique function $f$. Furthermore $\mu_+(g_t, H_t,t-T)$ is monotonically nondecreasing along generalized Ricci flow, and is constant only on a generalized Ricci expander, i.e. $H \equiv 0$ and $g$ is a Ricci expander. \begin{proof} Using the relationship $u = \frac{e^{- f_+}}{(4 \pi (t - T)^{\frac{n}{2}}}$ and then setting $w = u^{\tfrac{1}{2}}$, we can express, up to a constant shift, \begin{align*} \mathcal W_+(g,H,w,\sigma) =&\ \int_M \left[ \sigma \left( 4 \brs{\nabla w}^2 + R w^2 - \frac{1}{12} \brs{H}^2 w^2\right) + w^2 \log w^2 \right] dV. \end{align*} This functional is coercive for the Sobolev space $W^{1,2}$ and convex. By \cite{Rothaus} this functional has a unique smooth minimizer. Thus at any time $t_0$ we can choose the smooth minimizer $u_{t_0}$ defining $\mu_+$, and construct a solution to the conjugate heat equation on $[0,t_0]$ with this final value. It follows using Corollary \ref{c:expandingentropymonotonicity} that, at time $t_0$, one has \begin{align*} \frac{d}{dt} & \mu_+(g_t, H_t, t - T)\\ =&\ \frac{d}{dt} \mathcal W_+( g_t, H_t, u_t, t - T)\\ =&\ \int_M 2 \left[ (t-T) \left( \brs{\Rc - \frac{1}{4} H^2 + \nabla^2 f_+ + \frac{g}{2 (t-T)}}^2 \right. \right.\\ &\ \qquad \qquad \qquad \qquad \left. \left. + \frac{1}{4} \brs{d^* H - \nabla f_+ \lrcorner H}^2 \right) + \frac{1}{12} \brs{H}^2 \right] u dV. \end{align*} The monotonicity and rigidity statements follow easily from this formula, which holds for the corresponding choice of $u$ at all times $t$. \end{proof} \end{prop} \section{Shrinking Entropy and local collapsing} \label{s:shrinker} Having dealt with the steady and expanding soliton cases, it is natural to seek a monotone entropy quantity which is fixed along a shrinking soliton. Following the ideas of the previous sections, a natural entropy functional is suggested which is a generalization of Perelman's original entropy which includes $H$. However, as we will see below, this entropy is not monotone along generalized Ricci flow. Nonetheless, by assuming the existence of a certain kind of subsolution to the heat equation along the solution, which exists in many applications, it is possible to further modify to obtain a monotone quantity. 
In this setting we adapt Perelman's ideas to obtain a $\kappa$-noncollapsing result for solutions to generalized Ricci flow. \begin{thm} \label{t:shrinkingharnack} Let $(M^n, g_t, H_t)$ be a solution to generalized Ricci flow on $[0,T]$ and suppose $u_t$ is a solution to the conjugate heat equation. Let $f_-$ be defined by $u = \frac{e^{-f_-}}{(4\pi(T - t))^{\frac{n}{2}}}$, and let \begin{align*} v_- = \left[ (T - t) \left( 2 \Delta {f_-} - \brs{\nabla f_-}^2 + R - \frac{1}{12} \brs{H}^2 \right) + f_- - n \right] u. \end{align*} Then \begin{align*} \square^* v_- =&\ 2(T - t) \left( \brs{\Rc - \frac{1}{4} H^2 + \nabla^2 f_- - \frac{g}{2 (T - t)}}^2 + \frac{1}{4} \brs{d^* H - \nabla f_- \lrcorner H}^2 \right) u\\ &\ - \frac{1}{6} \brs{H}^2 u. \end{align*} \begin{proof} This is an elementary modification of Theorem \ref{t:expandingharnack}, and we leave the details as an \textbf{exercise}. \end{proof} \end{thm} \begin{rmk}The presence of the final term $-\frac{1}{6} \brs{H}^2 u$ above makes it more difficult to use Theorem \ref{t:shrinkingharnack} without further hypotheses. Nonetheless we will be able to modify this quantity further to obtain a monotone entropy in some settings. The discovery of a monotone entropy quantity fixed on shrinking solitons for generalized Ricci flow in full generality remains an important open problem. \end{rmk} \begin{defn} \label{d:shrinkingentropy} Let $(M^n, g, H)$ be a smooth manifold with $H \in \Lambda^3 T^*, dH = 0$. Fix $u \in C^{\infty}$, $u > 0$, and define $f_-$ via $u = \frac{e^{-f_-}}{(4\pi(T - t))^{\frac{n}{2}}}$. The \emph{shrinking entropy} associated to this data is \begin{align*} \mathcal W_-(g,H,f_-,\tau) := \int_M \left[ \tau \left( \brs{\nabla f_-}^2 + R - \frac{1}{12} \brs{H}^2 \right) + f_- - n \right] u dV. \end{align*} Furthermore, let \begin{gather*} \mu_- \left(g, H, \tau \right) := \inf_{\left\{ f_- | \int_M e^{-f_-}{(4 \pi \tau)^{-\frac{n}{2}}} dV = 1 \right\} } \mathcal W_- \left(g, H, u, \tau \right), \end{gather*} and \begin{gather*} \nu_-(g, H) := \sup_{\tau > 0} \mu_- \left(g,H,\tau \right). \end{gather*} \end{defn} The evolution equation for $\mathcal W_-$ follows as an immediate application of Theorem \ref{t:shrinkingharnack}, and will not be monotone in general. To modify this to obtain a monotone energy we make a further definition. \begin{defn} \label{tbsdef} Let $M^n$ be a compact manifold, and suppose $(g_t, H_t)$ is a solution to generalized Ricci flow. We say that a one-parameter family of functions $\phi$ is a \emph{torsion-bounding subsolution} if $\phi \geq 0$ and \begin{align*} \dt \phi \leq&\ \Delta \phi - \brs{H}^2. \end{align*} \end{defn} The following basic lemma indicates the usefulness of a torsion-bounding subsolution. \begin{lemma} \label{tbsmono} Let $M^n$ be a compact manifold. Given $(g_t,H_t)$ a solution to generalized Ricci flow, $u_t$ a solution of the conjugate heat equation, and $\phi_t$ a solution to \begin{align*} \dt \phi =&\ \Delta \phi + \psi, \end{align*} one has \begin{align*} \dt \int_M \phi u dV =&\ \int_M \psi u dV. \end{align*} \begin{proof} We compute \begin{align*} \dt \int_M
Sergey Gromov <snake.scaly gmail.com> writes:

Fri, 10 Oct 2008 15:54:18 -0500, Andrei Alexandrescu wrote:

Sean Kelly wrote: It would be slower than the seeking option, but something like a randomized mergesort would work as well. If the program can buffer k lines in a file containing n lines, then read the first k lines into memory, shuffle them, and write them out to a temporary file. Repeat until the input is exhausted. Now randomly pick two of the temporary files and randomly merge them. Repeat until two temporary files remain, then output the result of the final random merge to the screen. For small files (ie. where n<k) the file would be read, shuffled in memory, and printed to the screen, assuming the proper checks were in place.

I think I found a problem with your solution. Consider you break the file in three chunks: a, b, c. Then you pick at random b and c and randomly merge them. The problem is, you make early the decision that nothing in a will appear in the first two thirds of the result. So the quality of randomness suffers. How would you address that?

After merging b and c you end up with a and bc. Then you randomly merge these two files and lines from a have all the chances to appear anywhere within result. When randomly merging, the probability of picking a line from a file should be proportional to a number of lines left in that file.

Oct 10 2008

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:

How do you "randomly merge"? Describe the process.

Andrei

Oct 10 2008

Sergey Gromov <snake.scaly gmail.com> writes:

Fri, 10 Oct 2008 16:10:29 -0500, Andrei Alexandrescu wrote: How do you "randomly merge"? Describe the process.

a[] and b[] are files to merge. ab[] is the result. a.length and b.length are number of lines left in either file.

while (a.length || b.length)
{
    if (uniform(gen, 0, a.length + b.length) < a.length)
    {
        ab.put(a.head);
        a.next();
    }
    else
    {
        ab.put(b.head);
        b.next();
    }
}

Oct 10 2008

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:

This should work. Use of ranges noted :o).

Andrei

Oct 10 2008
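To make the scheme concrete, here is a minimal Python sketch of the proportional random merge and the chunk-then-merge shuffle discussed above. It keeps the chunks as in-memory lists standing in for the temporary files, and the names (random_merge, external_shuffle, chunk_size) are illustrative rather than taken from the thread.

import random

def random_merge(a, b, rng=random):
    """Merge two shuffled chunks; at each step the chance of drawing
    from a chunk is proportional to the lines it has left."""
    out = []
    i = j = 0
    while i < len(a) or j < len(b):
        remaining_a = len(a) - i
        remaining_b = len(b) - j
        if rng.randrange(remaining_a + remaining_b) < remaining_a:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out

def external_shuffle(lines, chunk_size, rng=random):
    """Shuffle a sequence using only chunk-sized buffers: shuffle each
    chunk, then repeatedly random-merge pairs of chunks until one remains."""
    chunks = []
    for start in range(0, len(lines), chunk_size):
        chunk = list(lines[start:start + chunk_size])
        rng.shuffle(chunk)
        chunks.append(chunk)
    while len(chunks) > 1:
        a = chunks.pop(rng.randrange(len(chunks)))
        b = chunks.pop(rng.randrange(len(chunks)))
        chunks.append(random_merge(a, b, rng))
    return chunks[0] if chunks else []

if __name__ == "__main__":
    data = ["line %d" % n for n in range(10)]
    print(external_shuffle(data, chunk_size=4))

Because each draw is weighted by the number of lines remaining in each chunk, merging two uniformly shuffled chunks yields a uniformly shuffled union, which is the standard argument for why the pairwise merging does not degrade the quality of the shuffle.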
# Back propagation neural network This is the first time I tried to write a back propagation ANN and I would like to know what more experienced people think of it. The code is meant to distinguish if text is written in English, French or Dutch. I know my training set isn't very diverse but I just got the data from about 30 different texts, each containing 250 words in one of the 3 languages, so that's not my fault. I also know there are easier ways to do that but I wanted to learn something about ANNs. I'd be glad if any of you would be kind enough to give me his thoughts on how I did this and how I could improve it. import math, time, random, winsound global Usefull LearningRate = 0.001 InWeight = [[],[],[],[],[],[]] #Generate random InWeights for i in range(6): for j in range(21): InWeight[i].append(random.uniform(0,1)) #21 Input Values InNeuron = [0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0] #6 Hidden Neurons HiddenLayer = [0, 0, 0, 0, 0, 0] #Used to calculate Delta HiddenLayerNoSigmoid = [0, 0, 0, 0, 0, 0] HiddenWeight = [[],[],[]] #Generate random HiddenWeights for i in range(3): for j in range(6): HiddenWeight[i].append(random.uniform(0,1)) #3 Output Neurons OutNeuron = [0, 0, 0] #Used to calculate Delta OutNeuronNoSigmoid = [0, 0, 0] #Learning Table #Engels - Nederlands - Frans - Desired output test = [[11, 4, 8, 1, 14, 8, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[4, 0, 6, 0, 4, 6, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[6, 0, 6, 0, 11, 8, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[23, 0, 0, 0, 13, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[18, 4, 4, 2, 14, 8, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[14, 1, 6, 0, 10, 7, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[19, 0, 2, 0, 18, 4, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[13, 1, 1, 1, 15, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[19, 3, 1, 0, 14, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 2, 0, 5, 6, 1, 1, 8, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 3, 0, 7, 1, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 1, 0, 12, 7, 8, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 4, 0, 5, 4, 4, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 5, 1, 13, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 7, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 7, 0, 13, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 14, 1, 8, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 4, 0, 2, 4, 9, 4, 3, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 3, 0, 6, 0, 8, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 2, 7, 0, 1, 0, 0, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 1, 0, 2, 0, 1, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 5, 2, 2, 0, 0, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 7, 0, 2, 0, 2, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 0, 7, 1, 1, 2, 3, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 8, 0, 2, 0, 2, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 4, 0, 3, 1, 3, 0, 0, 1]] test += [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 
1, 5, 1, 2, 0, 0, 0, 0, 1]] def Sigmoid(Value): return math.tanh(Value) def DSigmoid(Value): return 1.0 - Value**2 def UpdateHiddenNode(): global InNeuron, InWeight for i in range(6): e = 0 for j in range(21): e += InWeight[i][j]*InNeuron[j] HiddenLayerNoSigmoid = e HiddenLayer[i] = Sigmoid(e) def UpdateOutNeuron(): global HiddenLayer, HiddenWeight for i in range(3): e = 0 for j in range(3): e += HiddenWeight[i][j]*HiddenLayer[j] OutNeuron[i] = Sigmoid(e) def UpdateDelta(): global Delta3, Delta4, Delta5, Delta6, Delta7, Delta8 Delta3 = Delta0*HiddenWeight[0][0]+Delta1*HiddenWeight[1][0]+Delta2*HiddenWeight[2][0] Delta4 = Delta0*HiddenWeight[0][1]+Delta1*HiddenWeight[1][1]+Delta2*HiddenWeight[2][1] Delta5 = Delta0*HiddenWeight[0][2]+Delta1*HiddenWeight[1][2]+Delta2*HiddenWeight[2][2] Delta6 = Delta0*HiddenWeight[0][3]+Delta1*HiddenWeight[1][3]+Delta2*HiddenWeight[2][3] Delta7 = Delta0*HiddenWeight[0][4]+Delta1*HiddenWeight[1][4]+Delta2*HiddenWeight[2][4] Delta8 = Delta0*HiddenWeight[0][5]+Delta1*HiddenWeight[1][5]+Delta2*HiddenWeight[2][5] def UpdateInWeights(): global Delta3, Delta4, Delta5, Delta6, Delta7, Delta8 for i in range(21): InWeight[0][i] += LearningRate*Delta3*DSigmoid(HiddenLayerNoSigmoid[0])*InNeuron[i] InWeight[1][i] += LearningRate*Delta4*DSigmoid(HiddenLayerNoSigmoid[1])*InNeuron[i] InWeight[2][i] += LearningRate*Delta5*DSigmoid(HiddenLayerNoSigmoid[2])*InNeuron[i] InWeight[3][i] += LearningRate*Delta6*DSigmoid(HiddenLayerNoSigmoid[3])*InNeuron[i] InWeight[4][i] += LearningRate*Delta7*DSigmoid(HiddenLayerNoSigmoid[4])*InNeuron[i] InWeight[5][i] += LearningRate*Delta8*DSigmoid(HiddenLayerNoSigmoid[5])*InNeuron[i] def UpdateHiddenWeights(): global Delta0, Delta1, Delta2 for i in range(3): HiddenWeight[0][i] += LearningRate*Delta0*DSigmoid(OutNeuronNoSigmoid[0])*HiddenLayer[i] HiddenWeight[1][i] += LearningRate*Delta1*DSigmoid(OutNeuronNoSigmoid[1])*HiddenLayer[i] HiddenWeight[2][i] += LearningRate*Delta2*DSigmoid(OutNeuronNoSigmoid[2])*HiddenLayer[i] print("Learning...") #Start playing Learning.wav if available, else play windows default sound #ASYNC ensures the program keeps running while playing the sound winsound.PlaySound("Learning.wav", winsound.SND_ASYNC) #Start timer StartTime = time.clock() Iterations = 0 #Main loop while Iterations <= 100000: for i in range(len(test)): for j in range(21): InNeuron[j] = test[i][j] UpdateHiddenNode() UpdateOutNeuron() Delta0 = test[i][21] - OutNeuron[0] Delta1 = test[i][22] - OutNeuron[1] Delta2 = test[i][23] - OutNeuron[2] UpdateDelta() UpdateInWeights() UpdateHiddenWeights() if Iterations % 1000 == 0: PercentComplete = Iterations / 1000 print("Learning " + str(PercentComplete) + "% Complete") Iterations += 1 #Stop playing any sound winsound.PlaySound(None, winsound.SND_ASYNC) print(Delta0, Delta1, Delta2) #Save brain to SaveFile SaveFileName = input("Save brain as: ") SaveFile = open(SaveFileName+".txt", "w") SaveFile.write(str(InWeight)) SaveFile.write(str(HiddenWeight)) SaveFile.close() ElapsedTime = (time.clock() - StartTime) print(str(ElapsedTime) + "seconds") #Start playing Ready.wav if available, else play default windows sound #ASYNC ensures the program keeps running while playing the sound def Input_Frequency(Document): WantedWords = ["i", "you", "he", "are", "the", "and", "for", "ik", "jij", "hij", "zijn", "het", "niet", "een", "le", "tu", "il", "avez", "une", "alors", "dans"] file = open(Document, "r") file.close() #Create dictionary word_freq ={} #Split text in words text = str.lower(text) word_list = str.split(text) for 
word in word_list: word_freq[word] = word_freq.get(word, 0) + 1 #Get keys keys = word_freq.keys() #Get frequency of usefull words Usefull = [] for word in WantedWords: if word in keys: word = word_freq[word] Usefull.append(word) else: Usefull.append(0) return Usefull def UseIt(Input): for i in range(len(Input)): InNeuron[i] = Input[i] UpdateHiddenNode() UpdateOutNeuron() if OutNeuron[0] > 0.99: return ("Engelse tekst") if OutNeuron[1] > 0.99: return ("Nederlandse tekst") if OutNeuron[2] > 0.99: return ("Franse tekst") #Documents to investigate #Error handling checks if you input a number while True: try: NumberOfDocuments = int(input("Aantal te onderzoeken documenten: ")) break except ValueError: print("That was not a valid number.") x = 0 while NumberOfDocuments > x: #Error handling checks if document exists while True: try: Document = str(input("Document: ")) file = open(Document, "r") break except IOError: print(UseIt(Input_Frequency(Document))) #Stop playing any sound if x == (NumberOfDocuments - 1): winsound.PlaySound(None, winsound.SND_ASYNC) x += 1 • You might want to cut the soundcode from the code, I just used it to warn me when the calculation was running and when it was done so I could test. – Daquicker Nov 17 '11 at 22:06 import math, time, random, winsound global Usefull A global statement outside of a function has no effect LearningRate = 0.001 The python style guide recommends that global constants be in ALL_CAPS InWeight = [[],[],[],[],[],[]] The python style guide for local variable names is lower_case_with_underscores #Generate random InWeights for i in range(6): for j in range(21): InWeight[i].append(random.uniform(0,1)) Logic like
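For comparison with the code under review, here is a minimal sketch of the same kind of network, 21 inputs, 6 hidden tanh units and 3 tanh outputs, trained by plain backpropagation. The function names, the initialization range and the learning-rate and epoch defaults are illustrative choices, not the original poster's code.

import math
import random

def train(samples, n_in=21, n_hidden=6, n_out=3, lr=0.01, epochs=1000):
    """Train a two-layer tanh network on a list of (inputs, targets) pairs."""
    rng = random.Random(0)
    w_hidden = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w_out = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, target in samples:
            # forward pass
            hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
            out = [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in w_out]
            # error terms; tanh'(z) = 1 - tanh(z)**2, applied to the activations
            delta_out = [(t - o) * (1.0 - o * o) for t, o in zip(target, out)]
            delta_hidden = [(1.0 - h * h) * sum(delta_out[k] * w_out[k][j] for k in range(n_out))
                            for j, h in enumerate(hidden)]
            # weight updates (delta rule)
            for k in range(n_out):
                for j in range(n_hidden):
                    w_out[k][j] += lr * delta_out[k] * hidden[j]
            for j in range(n_hidden):
                for i in range(n_in):
                    w_hidden[j][i] += lr * delta_hidden[j] * x[i]
    return w_hidden, w_out

def predict(x, w_hidden, w_out):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in w_out]

One point worth noting: with f(z) = tanh(z), the derivative factor 1 - y**2 should be evaluated at the activation y = tanh(z) rather than at the pre-activation value, which is what this sketch does.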
# LytePOD Podium 1m

Product Code: 26/3071-h

The LytePod ensures strength, safety, stability and rigidity, providing an excellent level of protection for the user. The new BS 8620 LytePOD from Lyte Ladders & Towers is fully tested and certified to the new Low Level Work Platform standard. Lyte's UK-manufactured BS 8620 low level work platform means you can ensure your fleet is fully compliant with the latest standards when working at height.

Specifications

Overall Height: 2.01m
Platform Height: 0.72m or 0.97m
Platform Size: 510x600mm
Product Model: LytePOD 1m
Safe Working Load: 150kg
Weight (Dry): 37kg
Width: 0.67m
Working Footprint: 510x600mm

Product Comparison: Large Rota Block, Rota Block mini, Scissorgate, ArcGen Cobra 500i Inverter Welder, Active Brushcutter, Digi Lock Gate (comparison values largely N/A).
cooperative behaviour would emerge. Also with respect to the fish example, the fish is a system that can act on its environment and thus its behavior is a tradeoff between 'exploration vs exploitation' under this idea and would not just be a form of predictive learning that we would see in a system that cannot act on its environment. While there may be many languages, some providing a more detailed and useful definition of purpose, the language in this paper would explain the emergence of what these other languages describe better. Natesh

Lee Bloomquist replied on Feb. 24, 2017 @ 12:50 GMT

"Also with respect to the fish example, the fish is a system that can act on its environment and thus its behavior is a tradeoff between 'exploration vs exploitation' under this idea and would not just be a form of predictive learning that we would see in a system that cannot act on its environment." Is what you wrote, above, about probability-learning foraging fish implied by the definitions of terms in your hypothesis? That is, do the definitions of the terms in your hypothesis (like "open," "constraints," etc.) imply what you have written, above? Your hypothesis: "Open physical systems with constraints on their finite complexity, that dissipate minimally when driven by external fields, will necessarily exhibit learning and inference dynamics." Or, is more than this required to understand your hypothesis, more than just the above statement of your hypothesis together with definitions of the terms used in your hypothesis?

Author Natesh Ganesh replied on Feb. 28, 2017 @ 15:31 GMT

Hi Lee, Apologies for the late reply. Have been away at a conference to talk about some of the ideas here and how to relate them to computing. "Or, is more than this required to understand your hypothesis, more than just the above statement of your hypothesis together with definitions of the terms used in your hypothesis?" --> All that is needed to understand my hypothesis is that statement. I have provided as many definitions as I can there in the essay but due to space limitations I have had to reference some of the other definitions in former papers. "Is what you wrote, above, about probability-learning foraging fish implied by the definitions of terms in your hypothesis? That is, do the definitions of the terms in your hypothesis (like "open," "constraints," etc.) imply what you have written, above?" --> Yes. It does.

Joseph Murphy Brisendine wrote on Feb. 22, 2017 @ 22:20 GMT

Hi Natesh, I think this essay is fantastic and basically completely correct. I love that you took the time to make explicit connections between the Landauer limit, which many biochemical processes have been shown to asymptotically approach, and the importance of a predictor circuit and feedback between sensing and acting, and you even bring in the fluctuation theorems at the end in discussing the problem of assigning probabilities to brain states. I think it's wonderful and very informed with regard to current research in stat mech, neuroscience, and machine learning. You have the diversity of background required to address this question, which is at the intersection of so many fields. I hope you might take the time to peruse my submission, entitled "A sign without meaning."
I took a very different approach and went with an equation-free text in the hopes of being as accessible as possible, but I think you'll find that we agree on a great number of issues, and I'm glad that the question is being addressed from multiple perspectives but with the right foundation in statistical mechanics. Best of luck to you in the competition; I think you wrote a hell of an essay! --Joe Brisendine

Author Natesh Ganesh replied on Feb. 23, 2017 @ 08:06 GMT

Dear Joe, Thank you for your very kind and encouraging comments. Inspires me to work harder. I am glad that I managed to communicate the ideas in the essay coherently to you. Yes, the topic of this essay is at a very unique intersection of so many different fields. I wish I wasn't right at the word limit and had more room to discuss a bunch of other things. There is a much needed discussion of semantics, consciousness and the implications of the ideas presented here on the philosophy of the mind that I would have loved to delve into. The title of your essay is very intriguing. I am caught up at a conference for the next two days but I will definitely read your essay in detail over the weekend and get back to you with questions/comments. I look forward to reading your thoughts on this problem. Thanks a lot again for your encouragement. Natesh

James Lee Hoover wrote on Feb. 22, 2017 @ 23:18 GMT

Quite interesting, Natesh. The emergence of intention, purpose, goals, and learning are automatically achieved with England's restructuring and replication thesis as dissipation occurs -- but humanly done with purpose and goals? Your emphasis on computer modeling seems to blur the distinction between the human and machine but that is probably my failure to view it after one quick read. Impressive study. Jim

Author Natesh Ganesh replied on Feb. 23, 2017 @ 07:54 GMT

Dear Jim, I am glad you find it interesting. Yes, while England's ideas have been a big step forward in the right direction, there are some caveats in his hypothesis and I illustrate those points and present a way to unify individual learning and evolutionary processes under the single fluctuation theorems. "but humanly done with purpose and goals?" I am sorry but I fail to understand your question. Can you help me out here? I might have used finite state automata/machines, which are popular in computer engineering and, being one, I am very familiar with them. Their popularity in computer engineering does not reduce their general applicability. Thanks for your comments. Let me know if there are other things I can clarify if you get a chance to view it in detail. Natesh

Jeff Yee wrote on Feb. 23, 2017 @ 00:50 GMT

Natesh - You did a good job on your essay and I like how you've been able to incorporate math, which was suggested in the essay rules. Well done! I gave you a good community rating which I hope helps to give your essay the visibility/rating it deserves.

Author Natesh Ganesh replied on Feb. 23, 2017 @ 07:58 GMT

Dear Jeff, Thank you for your encouraging comments and kind rating. Gives me greater confidence to carry on and to work harder. Yes, it was tough but after several edits I think I managed to find a good balance of math vs no-math. And the language of math is always beautiful and adds so much to the discussion, wouldn't you agree. I am hoping more people will read this essay. Natesh

Lee Bloomquist wrote on Feb.
23, 2017 @ 02:16 GMT

Natesh, I can't find an equation that defines "dissipation" in the essay. Is it in the references? I can find "the lower bound on dissipation in this system as it undergoes a state transition..." But that seems specific to finite state machines, which are not equivalent in power to Turing machines. Is it just "delta E", where E is energy lost from something like a thermodynamic engine?

Author Natesh Ganesh replied on Feb. 23, 2017 @ 07:46 GMT

Dear Bloomquist, The dissipation by the system S into the bath B is captured by the \Delta expression of the bath. This would be the change in the average energy of the bath and since only S can exchange energy with B, the increase in energy of the environment would be due to the dissipation by S due to a state transition. A much more detailed treatment of the methodology I use is available in the reference as cited in the submission. Addressing your comment
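For a rough sense of the scale of the dissipation bound being discussed, the Landauer limit mentioned earlier works out to a few zeptojoules per erased bit near room temperature. The short computation below is only an illustration; the choice of T = 300 K is an example value, not something fixed by the essay.

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # example temperature in kelvin
bound = K_B * T * math.log(2)   # Landauer bound per erased bit
print("%.3e J per bit" % bound)                       # about 2.87e-21 J
print("%.4f eV per bit" % (bound / 1.602176634e-19))  # about 0.018 eV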
L5*sin(q2)*sin(q5)*cos(q3)*cos(q1 + pi/4) + L5*sin(q3)*sin(q5)*sin(q1 + pi/4) - L5*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q5), -L4*sin(q2)*cos(q3)*cos(q4)*cos(q1 + pi/4) - L4*sin(q3)*sin(q1 + pi/4)*cos(q4) - L4*sin(q4)*cos(q2)*cos(q1 + pi/4) + L5*sin(q2)*sin(q4)*cos(q3)*cos(q5)*cos(q1 + pi/4) + L5*sin(q3)*sin(q4)*sin(q1 + pi/4)*cos(q5) - L5*cos(q2)*cos(q4)*cos(q5)*cos(q1 + pi/4), L5*(sin(q2)*sin(q3)*cos(q5)*cos(q1 + pi/4) + sin(q2)*sin(q5)*cos(q3)*cos(q4)*cos(q1 + pi/4) + sin(q3)*sin(q5)*sin(q1 + pi/4)*cos(q4) + sin(q4)*sin(q5)*cos(q2)*cos(q1 + pi/4) - sin(q1 + pi/4)*cos(q3)*cos(q5)), 0, 0], [L1*cos(q1 + pi/4) + L2*cos(q2)*cos(q1 + pi/4) - L3*sin(q2)*cos(q3)*cos(q1 + pi/4) - L3*sin(q3)*sin(q1 + pi/4) - L4*sin(q2)*sin(q4)*cos(q3)*cos(q1 + pi/4) - L4*sin(q3)*sin(q4)*sin(q1 + pi/4) + L4*cos(q2)*cos(q4)*cos(q1 + pi/4) + L5*sin(q2)*sin(q3)*sin(q5)*cos(q1 + pi/4) - L5*sin(q2)*cos(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4) - L5*sin(q3)*sin(q1 + pi/4)*cos(q4)*cos(q5) - L5*sin(q4)*cos(q2)*cos(q5)*cos(q1 + pi/4) - L5*sin(q5)*sin(q1 + pi/4)*cos(q3), (-L2*sin(q2) - L3*cos(q2)*cos(q3) - L4*sin(q2)*cos(q4) - L4*sin(q4)*cos(q2)*cos(q3) + L5*sin(q2)*sin(q4)*cos(q5) + L5*sin(q3)*sin(q5)*cos(q2) - L5*cos(q2)*cos(q3)*cos(q4)*cos(q5))*sin(q1 + pi/4), L3*sin(q2)*sin(q3)*sin(q1 + pi/4) + L3*cos(q3)*cos(q1 + pi/4) + L4*sin(q2)*sin(q3)*sin(q4)*sin(q1 + pi/4) + L4*sin(q4)*cos(q3)*cos(q1 + pi/4) + L5*sin(q2)*sin(q3)*sin(q1 + pi/4)*cos(q4)*cos(q5) + L5*sin(q2)*sin(q5)*sin(q1 + pi/4)*cos(q3) - L5*sin(q3)*sin(q5)*cos(q1 + pi/4) + L5*cos(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4), -L4*sin(q2)*sin(q1 + pi/4)*cos(q3)*cos(q4) + L4*sin(q3)*cos(q4)*cos(q1 + pi/4) - L4*sin(q4)*sin(q1 + pi/4)*cos(q2) + L5*sin(q2)*sin(q4)*sin(q1 + pi/4)*cos(q3)*cos(q5) - L5*sin(q3)*sin(q4)*cos(q5)*cos(q1 + pi/4) - L5*sin(q1 + pi/4)*cos(q2)*cos(q4)*cos(q5), L5*(sin(q2)*sin(q3)*sin(q1 + pi/4)*cos(q5) + sin(q2)*sin(q5)*sin(q1 + pi/4)*cos(q3)*cos(q4) - sin(q3)*sin(q5)*cos(q4)*cos(q1 + pi/4) + sin(q4)*sin(q5)*sin(q1 + pi/4)*cos(q2) + cos(q3)*cos(q5)*cos(q1 + pi/4)), 0, 0], [0, -L2*cos(q2) + L3*sin(q2)*cos(q3) + L4*(sin(q2)*sin(q4)*cos(q3) - cos(q2)*cos(q4)) + L5*((sin(q2)*cos(q3)*cos(q4) + sin(q4)*cos(q2))*cos(q5) - sin(q2)*sin(q3)*sin(q5)), (L3*sin(q3) + L4*sin(q3)*sin(q4) + L5*(sin(q3)*cos(q4)*cos(q5) + sin(q5)*cos(q3)))*cos(q2), L4*(sin(q2)*sin(q4) - cos(q2)*cos(q3)*cos(q4)) + L5*(sin(q2)*cos(q4) + sin(q4)*cos(q2)*cos(q3))*cos(q5), -L5*((sin(q2)*sin(q4) - cos(q2)*cos(q3)*cos(q4))*sin(q5) - sin(q3)*cos(q2)*cos(q5)), 0, 0]]) return z def jacobi_GL(self, q): L, h, H, L0, L1, L2, L3, L4, L5, L6 = self.L, self.h, self.H, self.L0, self.L1, self.L2, self.L3, self.L4, self.L5, self.L6 q1, q2, q3, q4, q5, q6, q7 = q[0, 0], q[1, 0], q[2, 0], q[3, 0], q[4, 0], q[5, 0], q[6, 0] z = np.array([[-L1*sin(q1 + pi/4) - L2*sin(q1 + pi/4)*cos(q2) + L3*sin(q2)*sin(q1 + pi/4)*cos(q3) - L3*sin(q3)*cos(q1 + pi/4) + L4*sin(q2)*sin(q4)*sin(q1 + pi/4)*cos(q3) - L4*sin(q3)*sin(q4)*cos(q1 + pi/4) - L4*sin(q1 + pi/4)*cos(q2)*cos(q4) - L5*sin(q2)*sin(q3)*sin(q5)*sin(q1 + pi/4) + L5*sin(q2)*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q5) - L5*sin(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4) + L5*sin(q4)*sin(q1 + pi/4)*cos(q2)*cos(q5) - L5*sin(q5)*cos(q3)*cos(q1 + pi/4) - L6*sin(q2)*sin(q3)*sin(q5)*sin(q6)*sin(q1 + pi/4) + L6*sin(q2)*sin(q4)*sin(q1 + pi/4)*cos(q3)*cos(q6) + L6*sin(q2)*sin(q6)*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q5) - L6*sin(q3)*sin(q4)*cos(q6)*cos(q1 + pi/4) - L6*sin(q3)*sin(q6)*cos(q4)*cos(q5)*cos(q1 + pi/4) + L6*sin(q4)*sin(q6)*sin(q1 + pi/4)*cos(q2)*cos(q5) - L6*sin(q5)*sin(q6)*cos(q3)*cos(q1 + 
pi/4) - L6*sin(q1 + pi/4)*cos(q2)*cos(q4)*cos(q6), (-L2*sin(q2) - L3*cos(q2)*cos(q3) - L4*sin(q2)*cos(q4) - L4*sin(q4)*cos(q2)*cos(q3) + L5*sin(q2)*sin(q4)*cos(q5) + L5*sin(q3)*sin(q5)*cos(q2) - L5*cos(q2)*cos(q3)*cos(q4)*cos(q5) + L6*sin(q2)*sin(q4)*sin(q6)*cos(q5) - L6*sin(q2)*cos(q4)*cos(q6) + L6*sin(q3)*sin(q5)*sin(q6)*cos(q2) - L6*sin(q4)*cos(q2)*cos(q3)*cos(q6) - L6*sin(q6)*cos(q2)*cos(q3)*cos(q4)*cos(q5))*cos(q1 + pi/4), L3*sin(q2)*sin(q3)*cos(q1 + pi/4) - L3*sin(q1 + pi/4)*cos(q3) + L4*sin(q2)*sin(q3)*sin(q4)*cos(q1 + pi/4) - L4*sin(q4)*sin(q1 + pi/4)*cos(q3) + L5*sin(q2)*sin(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4) + L5*sin(q2)*sin(q5)*cos(q3)*cos(q1 + pi/4) + L5*sin(q3)*sin(q5)*sin(q1 + pi/4) - L5*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q5) + L6*sin(q2)*sin(q3)*sin(q4)*cos(q6)*cos(q1 + pi/4) + L6*sin(q2)*sin(q3)*sin(q6)*cos(q4)*cos(q5)*cos(q1 + pi/4) + L6*sin(q2)*sin(q5)*sin(q6)*cos(q3)*cos(q1 + pi/4) + L6*sin(q3)*sin(q5)*sin(q6)*sin(q1 + pi/4) - L6*sin(q4)*sin(q1 + pi/4)*cos(q3)*cos(q6) - L6*sin(q6)*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q5), -L4*sin(q2)*cos(q3)*cos(q4)*cos(q1 + pi/4) - L4*sin(q3)*sin(q1 + pi/4)*cos(q4) - L4*sin(q4)*cos(q2)*cos(q1 + pi/4) + L5*sin(q2)*sin(q4)*cos(q3)*cos(q5)*cos(q1 + pi/4) + L5*sin(q3)*sin(q4)*sin(q1 + pi/4)*cos(q5) - L5*cos(q2)*cos(q4)*cos(q5)*cos(q1 + pi/4) + L6*sin(q2)*sin(q4)*sin(q6)*cos(q3)*cos(q5)*cos(q1 + pi/4) - L6*sin(q2)*cos(q3)*cos(q4)*cos(q6)*cos(q1 + pi/4) + L6*sin(q3)*sin(q4)*sin(q6)*sin(q1 + pi/4)*cos(q5) - L6*sin(q3)*sin(q1 + pi/4)*cos(q4)*cos(q6) - L6*sin(q4)*cos(q2)*cos(q6)*cos(q1 + pi/4) - L6*sin(q6)*cos(q2)*cos(q4)*cos(q5)*cos(q1 + pi/4), L5*sin(q2)*sin(q3)*cos(q5)*cos(q1 + pi/4) + L5*sin(q2)*sin(q5)*cos(q3)*cos(q4)*cos(q1 + pi/4) + L5*sin(q3)*sin(q5)*sin(q1 + pi/4)*cos(q4) + L5*sin(q4)*sin(q5)*cos(q2)*cos(q1 + pi/4) - L5*sin(q1 + pi/4)*cos(q3)*cos(q5) + L6*sin(q2)*sin(q3)*sin(q6)*cos(q5)*cos(q1 + pi/4) + L6*sin(q2)*sin(q5)*sin(q6)*cos(q3)*cos(q4)*cos(q1 + pi/4) + L6*sin(q3)*sin(q5)*sin(q6)*sin(q1 + pi/4)*cos(q4) + L6*sin(q4)*sin(q5)*sin(q6)*cos(q2)*cos(q1 + pi/4) - L6*sin(q6)*sin(q1 + pi/4)*cos(q3)*cos(q5), L6*(sin(q2)*sin(q3)*sin(q5)*cos(q6)*cos(q1 + pi/4) + sin(q2)*sin(q4)*sin(q6)*cos(q3)*cos(q1 + pi/4) - sin(q2)*cos(q3)*cos(q4)*cos(q5)*cos(q6)*cos(q1 + pi/4) + sin(q3)*sin(q4)*sin(q6)*sin(q1 + pi/4) - sin(q3)*sin(q1 + pi/4)*cos(q4)*cos(q5)*cos(q6) - sin(q4)*cos(q2)*cos(q5)*cos(q6)*cos(q1 + pi/4) - sin(q5)*sin(q1 + pi/4)*cos(q3)*cos(q6) - sin(q6)*cos(q2)*cos(q4)*cos(q1 + pi/4)), 0], [L1*cos(q1 + pi/4) + L2*cos(q2)*cos(q1 + pi/4) - L3*sin(q2)*cos(q3)*cos(q1 + pi/4) - L3*sin(q3)*sin(q1 + pi/4) - L4*sin(q2)*sin(q4)*cos(q3)*cos(q1 + pi/4) - L4*sin(q3)*sin(q4)*sin(q1 + pi/4) + L4*cos(q2)*cos(q4)*cos(q1 + pi/4) + L5*sin(q2)*sin(q3)*sin(q5)*cos(q1 + pi/4) - L5*sin(q2)*cos(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4) - L5*sin(q3)*sin(q1 + pi/4)*cos(q4)*cos(q5) - L5*sin(q4)*cos(q2)*cos(q5)*cos(q1 + pi/4) - L5*sin(q5)*sin(q1 + pi/4)*cos(q3) + L6*sin(q2)*sin(q3)*sin(q5)*sin(q6)*cos(q1 + pi/4) - L6*sin(q2)*sin(q4)*cos(q3)*cos(q6)*cos(q1 + pi/4) - L6*sin(q2)*sin(q6)*cos(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4) - L6*sin(q3)*sin(q4)*sin(q1 + pi/4)*cos(q6) - L6*sin(q3)*sin(q6)*sin(q1 + pi/4)*cos(q4)*cos(q5) - L6*sin(q4)*sin(q6)*cos(q2)*cos(q5)*cos(q1 + pi/4) - L6*sin(q5)*sin(q6)*sin(q1 + pi/4)*cos(q3) + L6*cos(q2)*cos(q4)*cos(q6)*cos(q1 + pi/4), (-L2*sin(q2) - L3*cos(q2)*cos(q3) - L4*sin(q2)*cos(q4) - L4*sin(q4)*cos(q2)*cos(q3) + L5*sin(q2)*sin(q4)*cos(q5) + L5*sin(q3)*sin(q5)*cos(q2) - L5*cos(q2)*cos(q3)*cos(q4)*cos(q5) + L6*sin(q2)*sin(q4)*sin(q6)*cos(q5) - 
L6*sin(q2)*cos(q4)*cos(q6) + L6*sin(q3)*sin(q5)*sin(q6)*cos(q2) - L6*sin(q4)*cos(q2)*cos(q3)*cos(q6) - L6*sin(q6)*cos(q2)*cos(q3)*cos(q4)*cos(q5))*sin(q1 + pi/4), L3*sin(q2)*sin(q3)*sin(q1 + pi/4) + L3*cos(q3)*cos(q1 + pi/4) + L4*sin(q2)*sin(q3)*sin(q4)*sin(q1 + pi/4) + L4*sin(q4)*cos(q3)*cos(q1 + pi/4) + L5*sin(q2)*sin(q3)*sin(q1 + pi/4)*cos(q4)*cos(q5) + L5*sin(q2)*sin(q5)*sin(q1 + pi/4)*cos(q3) - L5*sin(q3)*sin(q5)*cos(q1 + pi/4) + L5*cos(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4) + L6*sin(q2)*sin(q3)*sin(q4)*sin(q1 + pi/4)*cos(q6) + L6*sin(q2)*sin(q3)*sin(q6)*sin(q1 + pi/4)*cos(q4)*cos(q5) + L6*sin(q2)*sin(q5)*sin(q6)*sin(q1 + pi/4)*cos(q3) - L6*sin(q3)*sin(q5)*sin(q6)*cos(q1 + pi/4) + L6*sin(q4)*cos(q3)*cos(q6)*cos(q1 + pi/4) + L6*sin(q6)*cos(q3)*cos(q4)*cos(q5)*cos(q1 + pi/4), -L4*sin(q2)*sin(q1 + pi/4)*cos(q3)*cos(q4) + L4*sin(q3)*cos(q4)*cos(q1 + pi/4) - L4*sin(q4)*sin(q1 + pi/4)*cos(q2) + L5*sin(q2)*sin(q4)*sin(q1 + pi/4)*cos(q3)*cos(q5) - L5*sin(q3)*sin(q4)*cos(q5)*cos(q1 + pi/4) - L5*sin(q1 + pi/4)*cos(q2)*cos(q4)*cos(q5) + L6*sin(q2)*sin(q4)*sin(q6)*sin(q1 + pi/4)*cos(q3)*cos(q5) - L6*sin(q2)*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q6) - L6*sin(q3)*sin(q4)*sin(q6)*cos(q5)*cos(q1 + pi/4) + L6*sin(q3)*cos(q4)*cos(q6)*cos(q1 + pi/4) - L6*sin(q4)*sin(q1 + pi/4)*cos(q2)*cos(q6) - L6*sin(q6)*sin(q1 + pi/4)*cos(q2)*cos(q4)*cos(q5), L5*sin(q2)*sin(q3)*sin(q1 + pi/4)*cos(q5) + L5*sin(q2)*sin(q5)*sin(q1 + pi/4)*cos(q3)*cos(q4) - L5*sin(q3)*sin(q5)*cos(q4)*cos(q1 + pi/4) + L5*sin(q4)*sin(q5)*sin(q1 + pi/4)*cos(q2) + L5*cos(q3)*cos(q5)*cos(q1 + pi/4) + L6*sin(q2)*sin(q3)*sin(q6)*sin(q1 + pi/4)*cos(q5) + L6*sin(q2)*sin(q5)*sin(q6)*sin(q1 + pi/4)*cos(q3)*cos(q4) - L6*sin(q3)*sin(q5)*sin(q6)*cos(q4)*cos(q1 + pi/4) + L6*sin(q4)*sin(q5)*sin(q6)*sin(q1 + pi/4)*cos(q2) + L6*sin(q6)*cos(q3)*cos(q5)*cos(q1 + pi/4), L6*(sin(q2)*sin(q3)*sin(q5)*sin(q1 + pi/4)*cos(q6) + sin(q2)*sin(q4)*sin(q6)*sin(q1 + pi/4)*cos(q3) - sin(q2)*sin(q1 + pi/4)*cos(q3)*cos(q4)*cos(q5)*cos(q6) - sin(q3)*sin(q4)*sin(q6)*cos(q1 + pi/4) + sin(q3)*cos(q4)*cos(q5)*cos(q6)*cos(q1 + pi/4) - sin(q4)*sin(q1 + pi/4)*cos(q2)*cos(q5)*cos(q6) + sin(q5)*cos(q3)*cos(q6)*cos(q1 + pi/4) - sin(q6)*sin(q1 + pi/4)*cos(q2)*cos(q4)), 0], [0, -L2*cos(q2) + L3*sin(q2)*cos(q3) + L4*(sin(q2)*sin(q4)*cos(q3) - cos(q2)*cos(q4)) + L5*((sin(q2)*cos(q3)*cos(q4) + sin(q4)*cos(q2))*cos(q5) - sin(q2)*sin(q3)*sin(q5)) + L6*(((sin(q2)*cos(q3)*cos(q4) + sin(q4)*cos(q2))*cos(q5) - sin(q2)*sin(q3)*sin(q5))*sin(q6) + (sin(q2)*sin(q4)*cos(q3) - cos(q2)*cos(q4))*cos(q6)), (L3*sin(q3) + L4*sin(q3)*sin(q4) + L5*(sin(q3)*cos(q4)*cos(q5) + sin(q5)*cos(q3)) + L6*((sin(q3)*cos(q4)*cos(q5) + sin(q5)*cos(q3))*sin(q6) + sin(q3)*sin(q4)*cos(q6)))*cos(q2), L4*(sin(q2)*sin(q4) - cos(q2)*cos(q3)*cos(q4)) + L5*(sin(q2)*cos(q4) + sin(q4)*cos(q2)*cos(q3))*cos(q5) + L6*((sin(q2)*sin(q4) - cos(q2)*cos(q3)*cos(q4))*cos(q6) + (sin(q2)*cos(q4) + sin(q4)*cos(q2)*cos(q3))*sin(q6)*cos(q5)), -(L5 + L6*sin(q6))*((sin(q2)*sin(q4) - cos(q2)*cos(q3)*cos(q4))*sin(q5) - sin(q3)*cos(q2)*cos(q5)), L6*(((sin(q2)*sin(q4) - cos(q2)*cos(q3)*cos(q4))*cos(q5) + sin(q3)*sin(q5)*cos(q2))*cos(q6) + (sin(q2)*cos(q4) + sin(q4)*cos(q2)*cos(q3))*sin(q6)), 0]]) return z def jacobi_all(self, q): """ヤコビ行列を全部計算""" jWo = self.jacobi_Wo(q) jBL = self.jacobi_BL(q) j0 = self.jacobi_0(q) j1 = self.jacobi_1(q) j2 = self.jacobi_2(q) j3 = self.jacobi_3(q) j4 = self.jacobi_4(q) j5 = self.jacobi_5(q) j6 = self.jacobi_6(q) j7 = self.jacobi_7(q) jGL = self.jacobi_GL(q) return [jWo, jBL, j0, j1, j2, j3, j4, j5, j6, j7, jGL] def 
djacobi_Wo(self, q, dq): L, h, H, L0, L1, L2, L3, L4, L5, L6 = self.L, self.h, self.H, self.L0, self.L1, self.L2, self.L3, self.L4, self.L5, self.L6 q1, q2, q3, q4, q5, q6, q7 = q[0, 0], q[1, 0], q[2, 0], q[3, 0], q[4, 0], q[5, 0], q[6, 0] dq1, dq2, dq3, dq4, dq5, dq6, dq7 = dq[0, 0], dq[1, 0], dq[2, 0], dq[3, 0], dq[4, 0], dq[5, 0], dq[6, 0] z = np.array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) return z def djacobi_BL(self, q, dq): L, h, H, L0, L1, L2, L3, L4, L5, L6 = self.L, self.h, self.H, self.L0, self.L1, self.L2, self.L3, self.L4, self.L5, self.L6 q1, q2, q3, q4, q5, q6, q7 = q[0, 0], q[1, 0], q[2, 0], q[3, 0], q[4, 0], q[5, 0], q[6, 0] dq1, dq2, dq3, dq4, dq5, dq6, dq7 = dq[0, 0], dq[1, 0], dq[2, 0], dq[3, 0], dq[4, 0], dq[5, 0], dq[6, 0] z = np.array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) return z def djacobi_0(self, q, dq): L, h, H, L0, L1, L2, L3, L4, L5, L6 = self.L, self.h, self.H, self.L0, self.L1, self.L2, self.L3, self.L4, self.L5, self.L6 q1, q2, q3, q4, q5, q6, q7 = q[0, 0], q[1, 0], q[2, 0], q[3, 0], q[4, 0], q[5, 0], q[6, 0] dq1, dq2, dq3, dq4, dq5, dq6, dq7 = dq[0,
\mathbf Z H$ obviously satisfies $\kappa(x)=x_e\in \mathbf Z$. \begin{prop} Let $G$ be a group and assume that $\mathbf Z(G\times G\times G)$ satisfies the weak Bass Trace Conjecture and let $X(G)$ be the Sidki double of $G$. Then $\mathbf Z X(G)$ satisfies the weak Bass Trace Conjecture as well. \end{prop} \begin{proof} We follow the line of proof of the Proposition \ref{P}. First, we observe that we can lift an idempotent matrix $A\in M_n(\mathbf Z X(G))$ to an idempotent $B\in M_n(\mathbf Z X(K))$ for some finitely generated subgroup $K<G$ (cf. \ref{lift}). As we can see $B$ as a matrix in $M_n(\mathbf Z X(K))$ we just need to observe that elements $x\neq e$ in $X(K)$ of finite order have $r_B(x)=0$ by Linnell's result {\cite{Linnell}*{Lemma 4.1}}. Also, by assumption $G\times G\times G$ satisfy the weak Bass Conjecture for $\mathbf Z (G\times G\times G)$. It follows that the image of $\rho: X(G) \to G\times G\times G$ satisfies the weak Bass Trace Conjecture as well. The rest of the proof is then as in the case of the Proposition \ref{P}. \end{proof} \section{Perfect groups and Stem-Extensions} A concise reference for this section is Kervaire's note \cite{K}, and a more comprehensive treatment can be found in Gruenberg's book \cite{G}*{Chapter 9, \S\! 9.9}. The article by Eckmann-Hilton-Stammbach \cite{EHS} contains everything on stem-extensions we need here. \begin{defi} A central extension $C\to H\to Q$ is called a {\it stem-extension}, if $C<[H,H]$. \end{defi} Central extensions ${\mathcal E}: C\to H\to Q$ are classified by elements in $H^2(Q,C)$. The {\sl{5-term sequence}} of $\mathcal{E}$ has the form $$ H_2(H,\mathbf Z)\to H_2(Q,\mathbf Z)\stackrel{\partial{(\mathcal E)}}\longrightarrow H_1(C,\mathbf Z)\to H_1(H,\mathbf Z)\to H_1(Q,\mathbf Z)\to \{e\}\,.$$ \begin{defi}A group $G$ is called {\it perfect} if $G=[G,G]$, or equivalently, $H_1(G,\mathbf Z)=\{e\}$. A group $G$ is called {\it super-perfect} if it is perfect and moreover $H_2(G,\mathbf Z)=\{e\}$.\end{defi} For $Q$ perfect, the {\sl{Universal Coefficient Theorem}} implies that $$H^2(Q, C)\cong \operatorname{Hom}(H_2(Q,\mathbf Z), C)$$ so that the central extension $\mathcal E$ corresponds to a homomorphism $\mathcal E _*: H_2(Q,\mathbf Z)\to C\,.$ The homomorphism $\mathcal E_*$ can be identified with $\partial(\mathcal E)$ in the 5-term sequence, if one identifies $H_1(C,\mathbf Z)$ with $C$\,. \begin{lemm}[Eckmann-Hilton-Stammbach \cite{EHS}*{Prop. 4.1}]\label{equivalent} Let $\mathcal E: C\to G\to Q$ be a central extension with $Q$ a perfect group. Then the following are equivalent: \begin{enumerate} \item[(1)] the classifying homomorphism $\mathcal E_*: H_2(Q)\to C$ is surjective ; \item[(2)] $\mathcal E$ is a stem-extension; \item[(3)] $G$ is perfect. \end{enumerate} \end{lemm} \begin{proof} Because $Q$ is perfect, the 5-term sequence for $\mathcal E$ ends with $$H_2(Q,\mathbf Z)\stackrel{\partial{(\mathcal E)}}\longrightarrow C \to G/[G,G]\to \{e\}\,.$$ Identifying $\mathcal E_*$ with $\partial(\mathcal E)$ we see that $(1)\Longleftrightarrow (3)$. If $G$ is perfect, $C<[G,G]=G$, so $(3) \Rightarrow (2)$. Since $Q$ is assumed to be perfect, $G=C[G,G]$ and assuming (2), i.e.,\, $C<[G,G]$, we have $G=[G,G]$. Thus $(2)\Rightarrow (3)$. 
\end{proof} The {\sl{universal}} stem-extension $\mathcal U$ of a perfect group $Q$ is obtained by choosing $C=H_2(Q,\mathbf Z)$ and for $\mathcal U_*: H_2(Q,\mathbf Z)\to C$ the identity homomorphism, yielding % $$\mathcal U: H_2(Q,\mathbf Z)\longrightarrow \tilde{H}\longrightarrow Q\,.$$ % It follows from Lemma \ref{equivalent}, $(1)\Rightarrow (2)$, that $\mathcal U$ is indeed a stem-extension, i.e. that $H_2(Q,\mathbf Z)$ is contained in ${[\tilde{H},\tilde{H}]}$. Moreover, the universal stem-extension has the following property. \begin{lemm}[Kervaire \cite{K}*{Proposition 1}]\label{K} Let $Q$ be a perfect group and $$\mathcal E: C\longrightarrow H \longrightarrow Q$$ % a stem-extension. If $\mathcal U: H_2(Q,\mathbf Z)\to \tilde{H}\to Q$ denotes the universal stem-extension, then $\tilde{H}$ is super-perfect and $\mathcal U$ maps onto $\mathcal E$ to yield a commutative diagram % $$\xymatrix { H_2(Q,\mathbf Z) \ar@{->>}[d]^{\partial(\mathcal E)}\ar[r]& \tilde{H} \ar@{->>}[d] \ar[r] &Q\ar@{=}[d]\\ C\ar[r]&H\ar[r] & Q.\\ } $$ \end{lemm} For $G$ a perfect group, the Sidki Double $X(G)$ is particularly simple to describe. \begin{prop}\label{Sstem} Let $G$ be a perfect group. Then % $$W(G)\to X(G)\stackrel{\rho}\longrightarrow G\times G\times G$$ % is a stem-extension, i.e., $\rho$ is surjective and $W(G)$ is a central subgroup of $X(G)$ contained in $[X(G),X(G)]$. \end{prop} % \begin{proof} First we check that $\rho: X(G)\to G\times G\times G$ is surjective. It is known that $\rho$ maps the subgroup $D(G)L(G)<X(G)$ to a subgroup of $G\times G\times G$ which contains the commutator subgroup of $G\times G\times G$ (see Proof of Lemma \ref{L1} above). Because $G$ and therefore also $G\times G\times G$ is perfect, this implies that $\rho$ is surjective. Next, we check that $W(G)<X(G)$ is central. We know already that $W(G)$ is central as a subgroup of $L(G)D(G)$, so it suffices to see that $X(G)=L(G)D(G)$. Because the kernel of $\rho$ lies in $L(G)D(G)$ it follows that $$G/L(G)D(G)<G\times G\times G /\rho(X(G)) =\{e\}$$ and thus $L(G)D(G)=X(G)$. To see that $W(G)<[X(G),X(G)]$, we observe that as $W(G)<D(G)$ and $D(G)=[G,G^\psi]<[X(G),X(G)]$. \end{proof} \begin{coro} Let $G$ be a super-perfect group. Then $$X(G)\cong G\times G\times G\,.$$ \end{coro} \begin{proof} The assumption on $G$ being perfect implies that $G\times G\times G$ is a perfect group. According to Proposition \ref{Sstem}, the extension $$W(G)\to X(G)\to G\times G\times G$$ is a stem-extension. The associated universal stem-extension has the form $$H_2(G\times G\times G,\mathbf Z)\longrightarrow \widetilde{X(G)}\longrightarrow G\times G\times G\,.$$ From Lemma \ref{K} we see that there is a surjective homomorphism $$H_2(G\times G\times G,\mathbf Z)\longrightarrow W(G)\,.$$ But as $G$ is super-perfect, the $K\ddot{u}nneth$-Formula shows that $G\times G$ as well as $G\times G\times G$ are super-perfect too. Thus $H_2(G\times G\times G,\mathbf Z)=\{e\}$ and therefore $W(G)=\{e\}$. \end{proof} \section{Examples}\label{examples} General theorems on the structure of Sidki Doubles are difficult to obtain. However, it is easy to see that the Sidki Double of an amenable group is amenable using the map $\rho: X(G)\to G\times G\times G$ and observing that both the kernel $W(G)$ of $\rho$ and the image $\rho(X(G))$ are amenable. But, for instance, we don't know whether for an a-T-menable group $G$ its Sidki Double is a-T-menable as well. 
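For a concrete instance of the corollary on super-perfect groups above: the universal central extension of any perfect group is super-perfect, so for example $\mathrm{SL}(2,5)$, the universal central extension of the alternating group $A_5$, satisfies $H_1(\mathrm{SL}(2,5),\mathbf Z)=H_2(\mathrm{SL}(2,5),\mathbf Z)=\{e\}$ and hence $X(\mathrm{SL}(2,5))\cong \mathrm{SL}(2,5)\times \mathrm{SL}(2,5)\times \mathrm{SL}(2,5)$.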
We give an example of groups $H<G$ for which the induced map of the Sidki doubles $X(H)\to X(G)$ fails to be injective (Example \ref{subgroup}) and we show that for $G$ of finite cohomological dimension, $X(G)$ can have infinite rational cohomological dimension (Example \ref{cd}). We also give an example of a torsion-free finitely presented group $G$ with Sidki Double $X(G)$ admitting a finitely generated non-free, projective $\mathbf Q X(G)$-module for $\mathbf Q$ the rational numbers (Example \ref{nonfreeproj}). \medskip In Section 2 we proved results on $X(G)$ by passing to $X(H)$ for a finitely generated subgroup $H<G$; the induced map $X(H)\to X(G)$ will not be injective in general as the following example shows. \begin{exam}\label{subgroup} There is a finitely presented group $G$ and a finitely generated subgroup $H<G$ such that the natural maps $X(H)\to X(G)$ and $W(H)\to W(G)$ are not injective. \end{exam} \begin{proof} It was proved in \cite{KS}*{Lemma 9.1} that for $\Gamma=\mathbf Z\times \mathbf Z$, there is a natural isomorphism $W(\Gamma)\cong H_2(\Gamma)=\mathbf Z$. Therefore, by writing $\mathbf Q\times\mathbf Q$ as a direct limit of groups $\mathbf Z\times\mathbf Z$ and using Lemma \ref{lim}, we see that $W(\mathbf Q\times\mathbf Q)\cong\mathbf Q$. One can embed $\mathbf Q$ into a finitely presented group $T$ which is simple and has type $FP_\infty$ (cf.\,Belk, Hyde and Matucci \cite{BHM}*{Theorem 3 and 4}). Thus $\mathbf Q\times\mathbf Q<T\times T$ with $T\times T$ perfect and of type $FP_\infty$. By Theorem B of \cite{KS} it follows that $W(T\times T)$ is a finitely generated abelian group. We conclude that $W(\mathbf Q\times \mathbf Q)\cong \mathbf Q$ cannot map injectively to $W(T\times T)$. Now $\mathbf Q\times\mathbf Q$ is the union of subgroups $\{S_\alpha, \alpha\in I\}$ with every $S_\alpha$ isomorphic to $\mathbf Z\times\mathbf Z$. Therefore we can find an index $\beta\in I$ and $S_\beta<\mathbf Q\times\mathbf Q$ such that the map $W(S_\beta)\to W(T\times T)$ induced by $S_\beta\to \mathbf Q\times\mathbf Q\to T\times T$ is not injective either. Taking $$H:=S_\beta<\mathbf Q\times\mathbf Q<T\times T=:G$$ yields the desired example. \end{proof} % Bridson and Kochloukova asked in \cite{BK}*{Question 5.3}, whether for a finitely generated group $F$ of cohomological dimension one, $X(F)$ has finite cohomological dimension. We give an example of a group $G$ with cohomological dimension 2 but with $X(G)$ having infinite rational cohomological dimension. % \begin{exam}\label{cd} There is a group $G$ of cohomological dimension 2 whose Sidki Double has infinite rational cohomological dimension. \end{exam} % \begin{proof} Take a group $\Sigma^2$ with classifying space $K(\Sigma^2,1)$ a finite 2-dimensional $CW$-complex with the same homology as the 2-sphere $S^2$ (for the construction of such a group see for instance Maunder \cite{M}). Put $G=\ast_\mathbf N\Sigma^2$ a countably infinite free product of groups $\Sigma^2$. $G$ has cohomological dimension 2 and $H_2(G,\mathbf Z)=\oplus_\mathbf N \mathbf Z $. According to \cite{KS} (see also \cite{Rocco}), the abelian group $W(G)$ maps onto $H_2(G,\mathbf Z)$ and
} \partial_{i_2} x^{ [\nu_1 }\partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \delta_{\mu}^{\nu_4]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z_{9}(\sigma), Z^{[4]\nu_1 \cdots \nu_4 }_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 4! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } x^{ [\nu_1 } \cdots \partial_{i_4} x^{\nu_4]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[2]\nu9}_{\mbox{\tiny RR}}(\sigma') \right\}&=4{\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} x^{\mu} \partial_{i_4} x^{\nu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[0]}_{\mbox{\tiny RR}}(\sigma') \right\}&=4{\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} x^{\mu} \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z^{[1]9}_{\mbox{\tiny NS}}(\sigma), Z^{[2]\nu9}_{\mbox{\tiny RR}}(\sigma') \right\}&=4{\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} \tilde{x}^9 \partial_{i_4} x^{\nu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[4]\nu_1 \nu_2 \nu_39}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} x^{\mu} \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z^{[1]9}_{\mbox{\tiny NS}}(\sigma), Z^{[4]\nu_1 \nu_2 \nu_39}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} \tilde{x}^9 \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \partial_i \delta(\sigma-\sigma'), \nonumber \\ % \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[2]\nu_1 \nu_2}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} x^{\mu} \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \label{eq:IIAKK5_algebra}\end{align} where each component is defined by \eqref{eq:D5_algebra} but all the background fields in the momenta $p_{\mu}, p_9 \ (\mu \not= 9)$ are replaced by the T-dualized ones. \subsection{IIB $5^2_2$-brane} A further T-duality transformation to another transverse isometric direction to the type IIB KK5-brane results in the exotic $5^2_2$-brane in type IIB theory. The worldvolume effective action of the type IIB $5^2_2$-brane has been obtained in \cite{Chatzistavrakidis:2013jqa,Kimura:2014upa}. The non-zero components of the type IIB $5^2_2$-brane algebra are \begin{align} \left\{ Z_{\mu}(\sigma), Z^{[1]\nu}_{\mbox{\tiny NS}}(\sigma') \right\}&={\mathbbm i} E^i \delta_{\mu}^{\nu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{9}(\sigma), Z^{[1]9}_{\mbox{\tiny NS}}(\sigma') \right\}&= \left\{ Z_{8}(\sigma), Z^{[1]8}_{\mbox{\tiny NS}}(\sigma') \right\}= {\mathbbm i} E^i \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[1]\nu}_{\mbox{\tiny RR}}(\sigma') \right\}&={\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} F_{i_3i_4} \delta^{\nu}_{\mu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{9}(\sigma), Z^{[1]9}_{\mbox{\tiny RR}}(\sigma') \right\}&= \left\{ Z_{8}(\sigma), Z^{[1]8}_{\mbox{\tiny RR}}(\sigma') \right\}= {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} F_{i_3i_4} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[3]\nu_1\nu_2\nu_3}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 2! 
} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } x^{ [\nu_1 } \partial_{i_4} x^{\nu_2} \delta_{\mu}^{\nu_3]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[3]\nu_1\nu_2 9}_{\mbox{\tiny RR}}(\sigma') \right\}&={\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } \tilde{x}^{ 9 } \partial_{i_4} x^{[\nu_1} \delta_{\mu}^{\nu_2]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[3]\nu_1\nu_2 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } \tilde{x}^{ 8 } \partial_{i_4} x^{[\nu_1} \delta_{\mu}^{\nu_2]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{9}(\sigma), Z^{[3]\nu_1\nu_2 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= \left\{ Z_{8}(\sigma), Z^{[3]\nu_1\nu_2 8}_{\mbox{\tiny RR}}(\sigma') \right\} \nonumber\\ &= { {\mathbbm i} \over 2! } \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } x^{ [\nu_1 } \partial_{i_4} x^{\nu_2]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[3]\nu_1 8 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } \tilde{x}^{ 8 } \partial_{i_4} \tilde{x}^9 \delta_{\mu}^{\nu_1} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{9}(\sigma), Z^{[3]\nu_1 8 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } x^{ \nu_1 } \partial_{i_4} \tilde{x}^8 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{8}(\sigma), Z^{[3]\nu_1 9 8}_{\mbox{\tiny RR}}(\sigma') \right\}&={\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{ i_3 } x^{ \nu_1 } \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[5]\nu_1 \cdots \nu_5}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 4! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } x^{ [\nu_1 } \cdots \partial_{i_4} x^{\nu_4} \delta_{\mu}^{\nu_5]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[5]\nu_1 \cdots \nu_4 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 3! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } \tilde{x}^{ 9 } \partial_{i_2} x^{ [\nu_1 }\partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \delta_{\mu}^{\nu_4]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[5]\nu_1 \cdots \nu_4 8}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 3! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } \tilde{x}^{ 8 } \partial_{i_2} x^{ [\nu_1 }\partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \delta_{\mu}^{\nu_4]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{9}(\sigma), Z^{[5]\nu_1 \cdots \nu_4 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= \left\{ Z_{8}(\sigma), Z^{[5]\nu_1 \cdots \nu_4 8}_{\mbox{\tiny RR}}(\sigma') \right\}= { {\mathbbm i} \over 4! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } x^{ [\nu_1 } \cdots \partial_{i_4} x^{\nu_4]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{\mu}(\sigma), Z^{[5]\nu_1 \nu_2 \nu_3 8 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 2! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } \tilde{x}^{ 8 } \partial_{i_2} \tilde{x}^{ 9 }\partial_{i_3} x^{[\nu_1} \partial_{i_4} x^{\nu_2} \delta_{\mu}^{\nu_3]} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{9}(\sigma), Z^{[5]\nu_1 \nu_2 \nu_3 8 9}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 3! 
} \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } x^{ [\nu_1 } \partial_{i_2} x^{\nu_2} \partial_{i_3} x^{\nu_3]} \partial_{i_4} \tilde{x}^8 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z_{8}(\sigma), Z^{[5]\nu_1 \nu_2 \nu_3 9 8}_{\mbox{\tiny RR}}(\sigma') \right\}&= { {\mathbbm i} \over 3! } \epsilon^{i i_1 \cdots i_4} \partial_{ i_1 } x^{ [\nu_1 } \partial_{i_2} x^{\nu_2} \partial_{i_3} x^{\nu_3]} \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber\\ \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[1]\nu}_{\mbox{\tiny RR}}(\sigma') \right\}&=4 {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} x^{\mu} \partial_{i_4} x^{\nu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[1]9}_{\mbox{\tiny RR}}(\sigma') \right\}&=4 {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} x^{\mu} \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]9}_{\mbox{\tiny NS}}(\sigma), Z^{[1]\nu}_{\mbox{\tiny RR}}(\sigma') \right\}&=4 {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} \tilde{x}^9 \partial_{i_4} x^{\nu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[1]8}_{\mbox{\tiny RR}}(\sigma') \right\}&=4 {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} x^{\mu} \partial_{i_4} \tilde{x}^8 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]8}_{\mbox{\tiny NS}}(\sigma), Z^{[1]\nu}_{\mbox{\tiny RR}}(\sigma') \right\}&=4 {\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} \tilde{x}^8 \partial_{i_4} x^{\nu} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]9}_{\mbox{\tiny NS}}(\sigma), Z^{[1]8}_{\mbox{\tiny RR}}(\sigma') \right\}&=4{\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} \tilde{x}^9 \partial_{i_4} \tilde{x}^8 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]8}_{\mbox{\tiny NS}}(\sigma), Z^{[1]9}_{\mbox{\tiny RR}}(\sigma') \right\}&=4{\mathbbm i} \epsilon^{i i_1 \cdots i_4} F_{i_1i_2} \partial_{i_3} \tilde{x}^8 \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 \nu_3}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} x^{\mu} \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]9}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 \nu_3}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} \tilde{x}^9 \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 9}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} x^{\mu} \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]8}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 \nu_3}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} \tilde{x}^8 \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} x^{\nu_3} \partial_i \delta(\sigma-\sigma'), \nonumber \\ \left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 8}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} x^{\mu} 
\partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} \tilde{x}^8 \partial_i \delta(\sigma-\sigma'), \nonumber \\
\left\{ Z^{[1]\mu}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 8 9}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} x^{\mu} \partial_{i_2} x^{\nu_1} \partial_{i_3} \tilde{x}^8 \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'), \nonumber \\
\left\{ Z^{[1]9}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 8}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} \tilde{x}^9 \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} \tilde{x}^8 \partial_i \delta(\sigma-\sigma'), \nonumber \\
\left\{ Z^{[1]8}_{\mbox{\tiny NS}}(\sigma), Z^{[3]\nu_1 \nu_2 9}_{\mbox{\tiny RR}}(\sigma') \right\}&=2{\mathbbm i} \epsilon^{i i_1 \cdots i_4} \partial_{i_1} \tilde{x}^8 \partial_{i_2} x^{\nu_1} \partial_{i_3} x^{\nu_2} \partial_{i_4} \tilde{x}^9 \partial_i \delta(\sigma-\sigma'),
\end{align}
where each component is given by \eqref{eq:D5_algebra} but all the background fields in the momenta $p_{\mu}, p_8, p_9 \ (\mu \not= 8,9)$ are replaced by the T-dualized ones.

We have now written down the current algebras of the five-branes in type II string theories. A comment on further T-duality transformations is in order. Since there are two as-yet-untouched transverse directions to the five-branes, there remain two further T-dualized five-branes. They are denoted as $5^3_2$- and $5^4_2$-branes and have a distinct nature compared with the $5^b_2$-branes $(b=0,1,2)$ discussed above. We will address these issues in section \ref{sect:conclusion}. In the following, we show that the worldvolume currents of five-branes give rise to the notion of extended (doubled or exceptional) spaces in DFT and EFT, and that they are directly tied to the supersymmetry algebras.

\section{Extended spaces from current algebras}
\label{sect:SUSY_algebras}

In this section, after a review of the current algebra approach \cite{Siegel:1993xq, Siegel:1993th, Siegel:1993bj, Hatsuda:2014qqa} to double field theory, we extend this approach to exceptional field theory by using brane current algebras. We focus on brane currents which correspond to brane charges in superalgebras, since a superalgebra includes not only perturbative states but also non-perturbative states representing S-duality. Supersymmetric branes in ten-dimensional ${\cal N}=2$ theories are classified in \cite{Townsend:1995gp, Hull:1997kt} by the superalgebras of 32 supercharges $Q_\alpha$,
\begin{eqnarray}
\{Q_\alpha,Q_\beta\}=Z_{\alpha\beta}=Z_{M}\Gamma^M{}_{\alpha\beta}~~~.
\end{eqnarray}
The index $M$ is decomposed according to the form degree, and the $(\Gamma^M)_{\alpha\beta}$ are antisymmetric gamma matrices depending on the theory. The bosonic subalgebra $Z_{M}$ includes not only the momentum and the winding modes but also brane charges, such as those of the KK5-brane, NS5-brane, and D5-brane. Exotic branes are variations of these obtained by T-dualities. We
community for the realm, click New, and to edit an existing community, click its link: the Configure Community page appears. Follow the steps described in Creating and Configuring Communities on page 62.

Creating and Configuring Communities

Creating a community involves these basic steps:
• Assign members to the community
• Select access methods for the community
• Optionally, specify End Point Control restrictions for the community
• Specify a style and layout for the WorkPlace portal

Assigning Members to a Community

The first step in creating a community involves specifying which users will be members. By default, a community is configured to include all users from the authentication realm to which it is assigned. However, you can configure a community to permit access to only a subset of users or user groups in a realm. This is useful, for example, if you want to segment a realm into one community for employees and another community for business partners. You can then provide each community with the appropriate access agents or impose End Point Control restrictions if users are logging in from non-secure computers. Communities can also be referenced in access control rules to permit

To assign members to an existing community
1.
2. Within the realm, click the link for the community you want to configure. The Configure Community page appears with the Members tab displayed.
3. The Members box specifies which users or groups belong to this community. Click Edit to select from a list of users and groups. If no users or groups are specified, the default value of this field is Any, meaning that any users from the authentication realm that references this community belong to this community.
4. In the Maximum active sessions box you can limit the number of sessions each member of this community is allowed to have active at one time. For mobile users, for example, you may want to restrict the number of sessions to 1: each session consumes one user license, and it’s impractical for a mobile user to have more than one active session. With other communities, such as employees who alternate between working from home and in the office, the number of allowed sessions should probably be higher. See How
5. To select which access methods will be available to members of the community, click the Access Methods tab. See Selecting Access Methods for a Community on page 63 for more information.
6. To restrict user access based on the security of client devices, click the End Point Control restrictions tab and specify which zones are available to users in this community. See Using End Point Control Restrictions in a Community on page 65.
7. Click Save.

Selecting Access Methods for a Community

The second step in creating a community is to determine which access methods will be available for community members to connect to the appliance and access your network resources. For information on which access methods are compatible with your users’ environments, see User Access Components on page 15.

To specify the access methods available to community members
1.
2. Click the link for the community you want to configure, and then click the Access Methods tab.
3. Select the access methods community members can use with a browser to connect to resources on your network. Based on the capabilities of the user's system, the appliance activates the access agents you have selected.
For information on the capabilities and system requirements of the various access agents, see User Access Components and Services on page 409.
4. If you want to provide network tunnel client access to members of a community, select a combination of the following:
– In the tunnel access area, select Network tunnel client. You can use a built-in resource and shortcut if you want users to download the Connect Tunnel client and activate it from a link in WorkPlace.
– For Web-based proxy access, select Client/server proxy agent (OnDemand), and then click Auto-activate from Aventail WorkPlace. This will automatically provision or activate the Web-based OnDemand Tunnel agent for users when they connect to WorkPlace.
– In the Web access (HTTP) area, select Web proxy agent for clientless access to most types of Web-based resources for Windows clients. Select Translated Web access for clientless access to Web resources that are mapped to custom ports or custom FQDNs for improved application compatibility, or that use aliases to obscure internal host names. Translated Web access can be used as a fallback if the default Web proxy agent cannot run. See Web Access on page 416 for information about the different types of Web access, and see Adding Resources on page 215 for information about
5. To deploy the network tunnel clients to users, you must first make one or more IP address pools available to the community. By default, AMC makes all configured IP address pools available to a community; however, you can select specific IP address pools if necessary. See Network Tunnel Client Configuration on page 68.
6. You can require users to install an E-Class SRA agent or client before granting them access to network resources when they log in to WorkPlace. Selecting Require agent in order to access network provides better application compatibility for applications that need an agent: it means broader access for users, and fewer Help Desk calls for you. When this setting is disabled, a user logging in to WorkPlace can choose not to install an agent and proceed with translated, custom-port-mapped, or custom-FQDN-mapped Web access. In this case, the user is placed in either the Default zone or a Quarantine zone, depending on how the community is configured.
7. When you have finished selecting access methods for the community, click Next to proceed to the End Point Control restrictions area, where you can restrict access to community members based on the security of their client devices. See Using End Point Control Restrictions in a Community on page 65. If you don’t want to employ End Point Control for the community, click Finish.

Note: If the network tunnel client option is not enabled for a particular community, users who previously had access to the Connect Tunnel client are still able to use it to access the appliance. If the community is configured to provide only Translated Web access, terminal resources are unavailable because the client PC will not have the network transport required to access a proprietary application protocol. For information on configuring graphical terminal agents, see Managing the E-Class SRA Access Services on page 444.

Using End Point Control Restrictions in a Community

When you’re creating a community, you have the option of restricting access to users based on the security of their client devices. To do this, specify which End Point Control zones are available to users in this community.
There are four types of zones: Deny, Standard, Quarantine, and Default. For more information on how to create and configure End Point Control zones, and the device profiles they use to classify connection requests, see Managing EPC with Zones and Device Profiles on page 336. You can also set an inactivity timer, even if you don’t use End Point Control zones for a community, if your users access the appliance using the Connect Tunnel client.

To apply End Point Control restrictions for a community
1.
2. Click the link for the community you want to configure, and then click the End Point Control restrictions tab.
3. Use a Deny zone if you have a device profile that is unacceptable in your deployment. Users are denied access based on the PC with which they are trying to connect. Select (or create) an entry in the Deny zones list and click the >> button to move it to the In use list. Deny zones are evaluated first (if there’s a match, the user is logged off). To create a new EPC zone and then add it to the list, click the New button. For information on how to create a zone, see Defining Zones on page 331.
4. You can assign one or more End Point Control Standard zones to the community, which are used to determine which devices are authorized to access a community. If you don’t select a zone, community members are assigned to the default zone, which could limit or even prevent their access. Select (or create) a zone in the Standard zones list and then click the >> button to move it
x)|$ elements, i.e., $\mathcal{S}(\hat{\mathbf x}) \cup \mathcal{T} = \mathcal{S}(\mathbf x)$, and $\mathcal{S}(\hat{\mathbf x}) \cap \mathcal{T} = \emptyset$. For simplicity I denote $\widehat{\mathcal{S}} = \mathcal{S}(\hat{\mathbf x})$ and $\mathcal{S} = \mathcal{S}(\mathbf x)$. If we assume that the atoms associated with the elements in $\mathcal{T}$ are orthogonal to the rest of the atoms in the support $\widehat{\mathcal{S}}$, then we can write the solution as
\begin{equation}
\mathbf x = \mathbf{{\Phi}}_{\widehat{\mathcal{S}}}^\dagger \mathbf u + \mathbf{{\Phi}}_{\mathcal{T}}^\dagger \mathbf u = \hat{\mathbf x} +\mathbf{{\Phi}}_{\mathcal{T}}^\dagger \mathbf u.
\end{equation}
Using the orthogonality assumption, we can write the left-hand side of (\ref{eq:successcriterion1}) as
\begin{equation}
\frac{||\mathbf x - \hat{\mathbf x}||_2^2}{||\mathbf x||_2^2} = \frac{|| \mathbf{{\Phi}}_{\mathcal{T}}^\dagger \mathbf u||_2^2}{||\mathbf x||_2^2} = \frac{|| \mathbf{{\Phi}}_{\mathcal{T}}^\dagger \mathbf{{\Phi}} \mathbf x ||_2^2}{||\mathbf x||_2^2} = \frac{|| \mathbf{{\Phi}}_{\mathcal{T}}^\dagger [\mathbf{{\Phi}}_{\mathcal{T}} | \square] \mathbf x ||_2^2}{||\mathbf x||_2^2} = \frac{|| [ \mathbf{{\Phi}}_{\mathcal{T}}^\dagger \mathbf{{\Phi}}_{\mathcal{T}} | \mathbf 0] \mathbf x ||_2^2}{||\mathbf x||_2^2} = \frac{||\mathbf x_\mathcal{T} ||_2^2}{||\mathbf x||_2^2}
\end{equation}
as expected. Thus, one way to guarantee condition (\ref{eq:successcriterion1}) is to have $||\mathbf x_\mathcal{T} ||_2 \le \epsilon_\mathbf x ||\mathbf x||_2$. If we remove the orthogonality assumption, this becomes
\begin{equation}
||\mathbf x_{\widehat{\mathcal{S}}}^\perp||_2 := ||\mathbf x - \mathbf{{\Phi}}_{\widehat{\mathcal{S}}} \mathbf{{\Phi}}_{\widehat{\mathcal{S}}}^\dagger \mathbf x||_2 \le \epsilon_\mathbf x ||\mathbf x||_2.
\end{equation}
Now, consider that $|\mathcal{T}| = |\mathcal{S}| - 1$, i.e., that we have missed all but one of the elements, but the recovery algorithm has found and precisely estimated the largest element with a magnitude $\alpha$. Assume that all other non-zero elements have magnitudes less than or equal to $\beta$. Thus, $||\mathbf x - \hat{\mathbf x}||_2^2 \le (|\mathcal{S}|-1)\beta^2$ and $||\mathbf x||_2^2 = \alpha^2 + ||\mathbf x - \hat{\mathbf x}||_2^2$. Since $t \mapsto t/(t + \alpha^2)$ is non-decreasing for $t \ge 0$, we see
\begin{equation}
\frac{||\mathbf x - \hat{\mathbf x}||_2^2}{||\mathbf x||_2^2} \le \frac{(|\mathcal{S}|-1)\beta^2}{(|\mathcal{S}|-1)\beta^2 + \alpha^2} \le \epsilon_\mathbf x^2
\end{equation}
where the final inequality, and hence the criterion (\ref{eq:successcriterion1}), is guaranteed as long as
\begin{equation}
\frac{\alpha^2}{\beta^2} \ge \frac{(1-\epsilon_\mathbf x^2)(|\mathcal{S}|-1)}{\epsilon_\mathbf x^2}.
\end{equation}
This analysis shows that we can substantially violate the criterion (\ref{eq:successcriterion2}) and still meet (\ref{eq:successcriterion1}), as long as the distribution of the sparse signal permits it. For instance, if all the elements of $\mathbf x$ have unit magnitudes, then missing $|\mathcal{T}|$ elements of the support produces $||\mathbf x_{\widehat{\mathcal{S}}}^\perp||_2^2/||\mathbf x||_2^2 = |\mathcal{T}|/|\mathcal{S}|$; and to satisfy (\ref{eq:successcriterion1}) this requires $\sqrt{|\mathcal{T}|/|\mathcal{S}|} \le \epsilon_\mathbf x $. This is not likely unless $|\mathcal{S}|$ is very large and we miss only a few elements.
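To make the two criteria concrete, the following minimal NumPy sketch evaluates both for a true vector and an estimate; the tolerance values are placeholders, and criterion (\ref{eq:successcriterion2}) is read here simply as exact recovery of the support. The unit-magnitude case just described fails both criteria after missing only two of one hundred entries.
\begin{verbatim}
import numpy as np

def rel_l2_success(x, xhat, eps_x=0.05):
    # Relative l2 criterion: ||x - xhat||_2 <= eps_x * ||x||_2
    return np.linalg.norm(x - xhat) <= eps_x * np.linalg.norm(x)

def support_success(x, xhat, tol=1e-8):
    # Support criterion, read as exact recovery of the support
    return (np.flatnonzero(np.abs(x) > tol).tolist()
            == np.flatnonzero(np.abs(xhat) > tol).tolist())

# Unit-magnitude example: |S| = 100 non-zero entries, |T| = 2 missed.
x = np.zeros(500); x[:100] = 1.0
xhat = x.copy(); xhat[98:100] = 0.0   # two entries of the support missed
print(rel_l2_success(x, xhat))        # False: sqrt(2/100) > 0.05
print(support_success(x, xhat))       # False
\end{verbatim}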
If instead our sparse signal is distributed such that it has only a few extremely large magnitudes and the rest small, then we can miss much more of the support and still satisfy (\ref{eq:successcriterion1}).

\begin{figure}[tb]
\centering
\subfigure[Normal]{ \includegraphics[width=0.49\textwidth]{normal_phaseSuppvsl2.pdf}}\hspace{-0.1in}
\subfigure[Laplacian]{ \includegraphics[width=0.49\textwidth]{laplacian_phaseSuppvsl2.pdf}}\\
\subfigure[Uniform]{ \includegraphics[width=0.49\textwidth]{uniform_phaseSuppvsl2.pdf}}\hspace{-0.1in}
\subfigure[Bimodal Rayleigh]{ \includegraphics[width=0.49\textwidth]{bimodalrayleigh_phaseSuppvsl2.pdf}}
\caption{Differences between the empirical phase transitions for five recovery algorithms using criterion ($R_{\mathcal{S}}$) and criterion ($R_{\ell_2}$). Note different axis scaling in (d).}
\label{fig:Phasedifferencesrecoverycriterion}
\end{figure}

The following experiments test the variability of the phase transitions of several algorithms depending on these success criteria and on the distribution underlying the sparse signals. Figure \ref{fig:Phasedifferencesrecoverycriterion} shows the differences in empirical phase transitions of five recovery algorithms using ($R_{\mathcal{S}}$) or (\ref{eq:successcriterion1}). Since ($R_{\mathcal{S}}$) implies (\ref{eq:successcriterion1}), the empirical phase transition of the latter will always be equal to or greater than that of the former. The difference in empirical phase transitions is zero across all problem indeterminacies when the two criteria produce the same outcomes. Figure \ref{fig:Phasedifferencesrecoverycriterion} shows a significant dependence of the empirical phase transition on the success criterion for four sparse vector distributions. For all other algorithms, and the three other distributions (Bernoulli, bimodal uniform, and bimodal Gaussian), the differences are nearly always zero. I do not completely know the reason why only these five algorithms out of the 15 I test show significant differences in their empirical phase transitions, or why GPSR and StOMP appear the most volatile of these algorithms. It could be that they are better than the others at estimating the large non-zero components at the expense of modeling the small ones. However, this experiment clearly reveals that for sparse vectors distributed with probability density concentrated near zero, criterion (\ref{eq:successcriterion1}) does allow many small components to go undetected without consequence. The more probability density is distributed around zero, the more likely criterion (\ref{eq:successcriterion1}) is to be satisfied while criterion (\ref{eq:successcriterion2}) is violated. It is clear from this experiment that we must exercise caution when judging the performance of recovery algorithms by a success criterion that can be very lax. In the following experiments, I use criterion (\ref{eq:successcriterion2}) to measure and compare the success of the algorithms since it also implies (\ref{eq:successcriterion1}).

\subsection{Sparse Vector Distributions and their Effects}

Figures \ref{fig:phasevsdistributions1} and \ref{fig:phasevsdistributions2} show the variability of the empirical phase transitions for each algorithm depending on the sparse vector distributions. The empirical phase transitions of BP and AMP are exactly the same, and so I only show one in Figure \ref{fig:phasevsdistributions1}(a).
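Concretely, a test vector ``distributed Laplacian,'' say, has its non-zero entries drawn from that distribution on a randomly chosen support; one plausible NumPy construction (the scale parameters below are illustrative, not necessarily those used in my experiments) is the following.
\begin{verbatim}
import numpy as np

def sample_sparse(N, k, dist, rng=np.random.default_rng(0)):
    """N-dimensional vector with k non-zero entries from the named distribution."""
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    sign = rng.choice([-1.0, 1.0], size=k)
    if dist == "Normal":
        vals = rng.standard_normal(k)
    elif dist == "Laplacian":
        vals = rng.laplace(0.0, 1.0, k)
    elif dist == "Uniform":
        vals = rng.uniform(-1.0, 1.0, k)
    elif dist == "Bernoulli":
        vals = sign                                  # +/- 1
    elif dist == "Bimodal Gaussian":
        vals = sign * (1.0 + 0.1 * rng.standard_normal(k))
    elif dist == "Bimodal Uniform":
        vals = sign * rng.uniform(0.5, 1.5, k)
    elif dist == "Bimodal Rayleigh":
        vals = sign * rng.rayleigh(1.0, k)
    else:
        raise ValueError(dist)
    x[support] = vals
    return x
\end{verbatim}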
Clearly, the performance of BP, AMP, and recommended TST and IST appears extremely robust to the distribution underlying the sparse vectors. In \cite{Donoho2009}, Donoho et al.\ prove that AMP has this robustness. ROMP also shows robustness, but across all indeterminacies its performance is relatively poor (note the difference in y-axis scaling). IRl1 and GPSR show the same robustness to Bernoulli and the bimodal distributions, but both show surprisingly poor performance for Laplacian-distributed vectors, even as the number of measurements increases. GPSR appears to have poorer empirical phase transitions as the probability density around zero increases. I do not yet know why IRl1 fails so poorly for Laplacian vectors. The specific results for recommended IST, IHT, and TST, however, appear very different from those reported in \cite{Maleki2010}, though the main findings of that work comport with what I show here. The recovery performance of OMP, PrOMP, and SL0 varies to the largest degree of all the algorithms that I test. It is clear that these algorithms perform in proportion to the probability density around zero. In fact, for eight of the algorithms I test (IHT, ALPS, StOMP, CoSaMP, SP, OMP, PrOMP, and SL0) we can predict the order of performance for each distribution by the amount of probability density concentrated near zero. From Fig. \ref{fig:empiricalPDFs} we can see that these are, in order from most to least concentrated: Laplacian, Normal, Uniform, Bimodal Rayleigh, Bimodal Gaussian, Bimodal Uniform, and Bernoulli. This behavior is reversed only for recommended IST, GPSR, and ROMP, whose performance improves as the probability density becomes less concentrated around zero.

\begin{figure}[tb]
\centering
\subfigure[BP, AMP]{ \includegraphics[width=0.49\textwidth]{BPphasevsdist.pdf}}\hspace{-0.1in}
\subfigure[Recommended TST]{ \includegraphics[width=0.49\textwidth]{TSTphasevsdist.pdf}}\\
\vspace{-0.1in}
\subfigure[Recommended IST]{ \includegraphics[width=0.49\textwidth]{ISTphasevsdist.pdf}}\hspace{-0.1in}
\subfigure[IRl1]{ \includegraphics[width= 0.49\textwidth]{IRl1phasevsdist.pdf}}\\
\vspace{-0.1in}
\subfigure[GPSR]{ \includegraphics[width=0.49\textwidth]{GPSRphasevsdist.pdf}}\hspace{-0.1in}
\subfigure[ROMP]{ \includegraphics[width=0.49\textwidth]{ROMPphasevsdist.pdf}}\\
\vspace{-0.1in}
\subfigure[Recommended IHT]{ \includegraphics[width= 0.49\textwidth]{IHTphasevsdist.pdf}}\hspace{-0.1in}
\subfigure[ALPS]{ \includegraphics[width= 0.49\textwidth]{ALPSphasevsdist.pdf}}
\caption{Empirical phase transitions using criterion (\ref{eq:successcriterion2}) of nine recovery algorithms for a variety of sparse vector distributions: Normal (N), Laplacian (L), Uniform (U), Bernoulli (B), Bimodal Gaussian (BG), Bimodal Uniform (BU), Bimodal Rayleigh (BR).
Note different y-scaling for ROMP in (f).}
\label{fig:phasevsdistributions1}
\end{figure}

\begin{figure}[tb]
\centering
\subfigure[StOMP]{ \includegraphics[width=0.49\textwidth]{StOMPphasevsdist.pdf}}\hspace{-0.1in}
\subfigure[CoSaMP]{ \includegraphics[width= 0.49\textwidth]{CoSaMPphasevsdist.pdf}}\\
\vspace{-0.1in}
\subfigure[SP]{ \includegraphics[width=0.49\textwidth]{SPphasevsdist.pdf}} \hspace{-0.1in}
\subfigure[OMP]{ \includegraphics[width= 0.49\textwidth]{OMPphasevsdist.pdf}} \\
\vspace{-0.1in}
\subfigure[PrOMP]{ \includegraphics[width=0.49\textwidth]{PrOMPphasevsdist.pdf}}\hspace{-0.1in}
\subfigure[SL0]{ \includegraphics[width= 0.49\textwidth]{SL0phasevsdist.pdf}}
\caption{Empirical phase transitions using criterion (\ref{eq:successcriterion2}) of six recovery algorithms for a variety of sparse vector distributions: Normal (N), Laplacian (L), Uniform (U), Bernoulli (B), Bimodal Gaussian (BG), Bimodal Uniform (BU), Bimodal Rayleigh (BR). Note different y-scale in (f).}
\label{fig:phasevsdistributions2}
\end{figure}

\clearpage

\subsection{Comparison of Recovery Algorithms for Each Distribution}

Figure \ref{fig:phasevsalgorithms} shows the same information as Figs. \ref{fig:phasevsdistributions1} and \ref{fig:phasevsdistributions2}, but compares all fifteen algorithms together, one distribution at a time. Here we can see that for sparse vectors distributed Bernoulli, bimodal uniform, and bimodal Gaussian, $\ell_1$-minimization methods (BP, IRl1, GPSR) and the thresholding approach AMP (which uses soft thresholding to approximate $\ell_1$ minimization) perform better than all the greedy methods, the other thresholding methods, and the majorization approach SL0. For the other four distributions, SL0, OMP and/or PrOMP outperform all the other algorithms I test, with a significantly higher phase transition in the case of vectors distributed Laplacian. In every case, AMP and BP perform the same. We can also see that in all cases the phase transition for SP is higher than that of recommended TST, and sometimes much higher, even though Maleki and Donoho find that the $(\alpha, \beta)$ pair that works best for their recommended algorithm is the one that makes it closest in appearance to SP. As I discuss above regarding recommended TST, however, it is only similar to SP and is not guaranteed to behave the same. Furthermore, and most importantly, in my simulations I provide SP (as well as CoSaMP, ROMP and ALPS) with the exact sparsity of the sensed signal, while recommended TST instead estimates it. Finally, in the case of sparse vectors distributed Laplacian, Normal, and Uniform, we can see that recommended IHT performs better than recommended TST, and sometimes even BP (Normal and Laplacian), which is different from how Maleki and Donoho order their performance based on recovering sparse vectors distributed Bernoulli \cite{Maleki2010}.
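For reference, the sketch below outlines how an empirical phase transition can be traced for a single algorithm at a fixed indeterminacy $\delta = m/N$. The recovery routine, problem size, number of trials, and the rule for locating the transition are placeholders, not the settings used for the figures in this section.
\begin{verbatim}
import numpy as np

def success_rates(recover, N=400, delta=0.5, rhos=None, trials=50, seed=1):
    """Empirical success rate of `recover` at each rho = k/m for a fixed
    delta = m/N, using exact support recovery as the success criterion."""
    rng = np.random.default_rng(seed)
    rhos = np.linspace(0.05, 0.95, 19) if rhos is None else rhos
    m = int(delta * N)
    rates = []
    for rho in rhos:
        k = max(1, int(rho * m))
        ok = 0
        for _ in range(trials):
            Phi = rng.standard_normal((m, N)) / np.sqrt(m)  # Gaussian sensing matrix
            support = np.sort(rng.choice(N, size=k, replace=False))
            x = np.zeros(N)
            x[support] = rng.standard_normal(k)             # e.g. Normal entries
            xhat = recover(Phi, Phi @ x, k)
            ok += np.array_equal(np.flatnonzero(np.abs(xhat) > 1e-8), support)
        rates.append(ok / trials)
    # The empirical phase transition is where the success rate crosses 1/2.
    return np.asarray(rhos), np.asarray(rates)
\end{verbatim}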
\begin{figure}[htb] \centering \subfigure[Bernoulli]{ \includegraphics[width=0.49\textwidth]{bernoulli_phasevsdistSupp.pdf}}\hspace{-0.1in} \subfigure[Bimodal Uniform]{ \includegraphics[width=0.49\textwidth]{bimodaluniform_phasevsdistSupp.pdf}}\\ \vspace{-0.1in} \subfigure[Bimodal Gaussian]{ \includegraphics[width=0.505\textwidth]{bimodalgaussian_phasevsdistSupp.pdf}}\hspace{-0.1in} \subfigure[Bimodal Rayleigh]{ \includegraphics[width=0.485\textwidth]{bimodalrayleigh_phasevsdistSupp.pdf}}\\ \vspace{-0.1in} \subfigure[Uniform]{ \includegraphics[width=0.485\textwidth]{uniform_phasevsdistSupp.pdf}}\hspace{-0.1in} \subfigure[Normal]{ \includegraphics[width=0.50\textwidth]{normal_phasevsdistSupp.pdf}}\\ \vspace{-0.1in} \subfigure[Laplacian]{ \includegraphics[width=0.49\textwidth]{laplacian_phasevsdistSupp.pdf}} \caption{Comparison of phase transitions of all algorithms I test for each distribution. Note different y-scale in (g).} \label{fig:phasevsalgorithms} \end{figure} Figure \ref{fig:bestphase} shows the empirical phase transitions of the best performing
# MINLP written by GAMS Convert at 04/21/18 13:52:42 # # Equation counts # Total E G L N X C B # 318 234 0 84 0 0 0 0 # # Variable counts # x b i s1s s2s sc si # Total cont binary integer sos1 sos2 scont sint # 352 184 168 0 0 0 0 0 # FX 0 0 0 0 0 0 0 0 # # Nonzero counts # Total const NL DLL # 2509 521 1988 0 # # Reformulation has removed 1 variable and 1 equation from pyomo.environ import * model = m = ConcreteModel() m.x1 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x2 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x3 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x4 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x5 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x6 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x7 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x8 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x9 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x10 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x11 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x12 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x13 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x14 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x15 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x16 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x17 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x18 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x19 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x20 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x21 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x22 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x23 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x24 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x25 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x26 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x27 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x28 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x29 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x30 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x31 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x32 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x33 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x34 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x35 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x36 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x37 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x38 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x39 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x40 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x41 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x42 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x43 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x44 = 
Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x45 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x46 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x47 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x48 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x49 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x50 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x51 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x52 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x53 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x54 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x55 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x56 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x57 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x58 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x59 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x60 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x61 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x62 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x63 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x64 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x65 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x66 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x67 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x68 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x69 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x70 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x71 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x72 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x73 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x74 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x75 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x76 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x77 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x78 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x79 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x80 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x81 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x82 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x83 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x84 = Var(within=Reals,bounds=(0,None),initialize=0.0892857142857143) m.x85 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x86 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x87 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x88 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x89 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x90 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x91 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x92 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x93 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x94 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x95 = 
Var(within=Reals,bounds=(0,None),initialize=1.25) m.x96 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x97 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x98 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x99 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x100 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x101 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x102 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x103 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x104 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x105 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x106 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x107 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x108 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x109 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x110 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x111 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x112 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x113 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x114 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x115 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x116 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x117 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x118 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x119 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x120 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x121 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x122 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x123 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x124 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x125 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x126 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x127 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x128 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x129 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x130 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x131 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x132 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x133 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x134 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x135 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x136 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x137 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x138 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x139 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x140 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x141 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x142 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x143 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x144 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x145 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x146 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x147 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x148 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x149 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x150 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x151 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x152 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x153 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x154 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x155 = 
Var(within=Reals,bounds=(0,None),initialize=1.25) m.x156 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x157 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x158 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x159 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x160 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x161 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x162 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x163 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x164 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x165 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x166 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x167 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x168 = Var(within=Reals,bounds=(0,None),initialize=1.25) m.x169 = Var(within=Reals,bounds=(0,None),initialize=0.956145) m.x170 = Var(within=Reals,bounds=(0,None),initialize=0.956145) m.x171 = Var(within=Reals,bounds=(0,None),initialize=0.956145) m.x172 = Var(within=Reals,bounds=(0,None),initialize=0.956145) m.x173 = Var(within=Reals,bounds=(0,None),initialize=0.956145) m.x174 = Var(within=Reals,bounds=(0,None),initialize=0.956145) m.b175 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b176 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b177 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b178 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b179 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b180 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b181 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b182 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b183 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b184 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b185 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b186 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b187 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b188 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b189 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b190 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b191 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b192 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b193 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b194 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b195 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b196 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b197 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b198 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b199 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b200 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b201 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b202 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b203 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b204 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b205 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b206 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b207 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b208 = 
Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b209 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b210 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b211 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b212 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b213 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b214 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b215 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b216 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b217 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b218 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b219 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b220 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b221 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b222 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b223 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b224 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b225 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b226 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b227 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b228 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b229 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b230 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b231 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b232 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b233 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b234 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b235 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b236 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b237 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b238 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b239 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b240 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b241 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b242 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b243 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b244 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b245 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b246 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b247 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b248 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b249 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b250 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b251 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b252 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b253 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b254 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b255 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b256 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b257 = Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714) m.b258 = 
Var(within=Binary,bounds=(0,1),initialize=0.0714285714285714)

# Remaining binary variables m.b259-m.b342 share the same domain, bounds and
# initial value (1/14); declare them in a loop instead of one statement per name.
for i in range(259, 343):
    setattr(m, 'b%d' % i, Var(within=Binary, bounds=(0, 1), initialize=0.0714285714285714))

# Unbounded continuous variables m.x344-m.x352, initialised at zero.
for i in range(344, 353):
    setattr(m, 'x%d' % i, Var(within=Reals, bounds=(None, None), initialize=0))

m.obj = Objective(expr= - m.x174, sense=minimize)

# Constraints c2-c13: for each offset j, the fourteen binaries b(175 + j + 12k),
# k = 0,...,13, sum to one, with weight 0.5 on the blocks k = 0, 6, 10 and 13.
for j in range(12):
    expr = sum((0.5 if k in (0, 6, 10, 13) else 1.0) * getattr(m, 'b%d' % (175 + j + 12*k))
               for k in range(14))
    setattr(m, 'c%d' % (2 + j), Constraint(expr=expr == 1))

# Constraints c14-c17: each block of twelve consecutive binaries sums to one.
for i in range(4):
    expr = sum(getattr(m, 'b%d' % (175 + 12*i + j)) for j in range(12))
    setattr(m, 'c%d' % (14 + i), Constraint(expr=expr == 1))

m.c18 = Constraint(expr= m.b223
basis of such a targeting strategy. In Appendix \ref{Apsec:dhscomp}, we demonstrate in detail how such data can be used for CLTS targeting in the case of Nigeria. We develop a targeting strategy based on our impact estimates from the Nigerian RCT and the 2013 Nigerian Demographic and Health Survey. Even though the DHS contains a less detailed list of assets than our data, the simpler DHS index of community wealth strongly predicts the more sophisticated measure of community wealth used in our study, supporting the notion that readily available surveys collecting asset wealth information, such as the Demographic and Health Survey, are well suited for CLTS targeting. The resulting targeting map in Figure \ref{f:mapCLTSNigeria} highlights priority areas for targeting, i.e. poor areas with low toilet ownership rates, shown in darker shading.

\begin{figure}[htbp]
\centering
\caption{CLTS targeting in Nigeria}\label{f:mapCLTSNigeria}
\includegraphics[width=0.5\textwidth]{content/CLTStargetingMapNigeria1.png}
\caption*{\emph{Source}: Own calculations based on DHS Nigeria 2013.}
\end{figure}

\section{Discussion and Conclusion}\label{sec:conclusion}

The design of effective policies to address the urgent sanitation concerns in the developing world requires a nuanced understanding of households' investment choices and drivers of behavioral change. In this paper we provide evidence on the effectiveness of Community-Led Total Sanitation (CLTS), a participatory information intervention widely implemented around the world. Our study uses a large cluster randomized experiment in Nigeria for which we collected data up to three years after treatment. Implementation of CLTS was conducted at scale, i.e. by WASH civil servants trained by local NGOs. We show that CLTS, a participatory community intervention without financial components, had positive but moderate effects on open defecation and toilet construction overall. However, average impacts hide important heterogeneity by communities' socio-economic status: the intervention has strong and lasting effects on open defecation habits in poorer communities and increases sanitation investments there. In poor communities, OD rates decreased by 9pp from a baseline level of 75\%, while we find no effect in richer communities. The reduction in OD is achieved mainly through increased toilet ownership (+8pp from a baseline level of 24\%). While this result is robust across several measures of community socio-economic status, and is not driven by baseline differences in toilet coverage, our data do not allow us to pin down why households in poorer communities are more susceptible to the programme. However, in addition to the more effective targeting strategies that governments can adopt, highlighted in the previous section, our results have three further important implications.

First, our results provide an additional reason why scale-up of interventions is not trivial \citep{Ravallion2012,BoldEtAl2013,BanerjeeEtAl2017,DeatonCartwright2018}. Discussions of why interventions may not scale up successfully in a national roll-out have focused on general equilibrium and spillover effects, and recently on aspects of implementation and delivery. The literature has suggested that spillovers and moderating general equilibrium effects may lead to lower returns when interventions conducted in areas with specific characteristics are rolled out universally, e.g. to richer areas.
We show that community-specific, heterogeneous treatment impacts are an additional impediment to successful scale-up in terms of intervention effectiveness. Second, community SES also provides plausible external validity beyond our Nigeria-based RCT. Using data from our study and five other RCTs of similar interventions, we find an inverse relationship between area-level wealth, measured by night light intensity, and program effectiveness across these studies. Thus, we have identified a characteristic that rationalizes the wide range of impact estimates in the literature. Last but not least, we show that interventions relying on information and collective action mechanisms can have substantial impacts on households' health investments and behaviour, specifically relating to sanitation. Yet, there is an important caveat for policy-makers working towards meeting the sanitation-related sustainable development goals. CLTS achieves convergence between poor and rich communities in terms of OD and toilet coverage in our study, and thus levels the playing field. However, it is not a silver bullet to achieve open defecation \emph{free} status in poor communities. Hence, more research on alternative or supplementary interventions to close the sanitation gap in low-income countries is needed. These may either seek to magnify CLTS impacts (e.g. through complementary financial incentives, loans or subsidies, or more intensive follow-up), or improve sanitation in rich communities where CLTS is ineffective, e.g. via infrastructure investment and supply-side interventions.

\vspace{10mm}
\begin{singlespace}
\noindent THE INSTITUTE FOR FISCAL STUDIES

\noindent THE INSTITUTE FOR FISCAL STUDIES

\noindent ROYAL HOLLOWAY AND THE INSTITUTE FOR FISCAL STUDIES

\noindent INSTITUTE OF EDUCATION, UNIVERSITY COLLEGE LONDON

\noindent ROYAL HOLLOWAY AND THE INSTITUTE FOR FISCAL STUDIES
\end{singlespace}

\section*{Appendix}
\section{Variable definitions}\label{Apsec:vardesc}
In this section, we provide details on a series of measurements used to construct household and community-level characteristics. These are based on our household surveys and other auxiliary datasets.

\subsection{Household characteristics}\label{Apsubsec:hhchars}
\noindent\emph{Asset wealth}\\ Household survey measures of annual household income had relatively low response rates: 27\% of the households interviewed reported no income at all or refused to answer. A higher response rate was achieved for a list of questions regarding the ownership of consumer durables. We applied a principal component analysis to this list, and constructed an index of asset wealth based on the first principal component, following \cite{Filmer2001EstimatingIndia}. Such wealth indices are widely used as a proxy for household long-term wealth, for example in the USAID Demographic and Health Surveys (DHS) run in over 90 countries, or as a targeting tool for the PROGRESA conditional cash-transfer programme \citep{mckenzie2005measuring}. Table \ref{t:rwi} lists the asset items elicited in our household survey, and shows their factor loadings.
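For concreteness, the index construction can be sketched in a few lines of Python. This is an illustrative sketch only, not the code used in the study; the array \texttt{assets} of 0/1 ownership indicators and the placeholder data are our assumptions.
\begin{verbatim}
# Illustrative sketch: asset wealth index as the first principal component
# of 0/1 asset-ownership indicators (Filmer-Pritchett style).
import numpy as np
from sklearn.decomposition import PCA

# `assets` is assumed to have shape (n_households, n_items) with 0/1 entries;
# here we generate placeholder data of the same shape as in Table t:rwi.
rng = np.random.default_rng(0)
assets = rng.integers(0, 2, size=(4622, 22)).astype(float)

pca = PCA(n_components=1).fit(assets)
loadings = pca.components_[0]                 # per-item factor loadings
wealth_index = pca.transform(assets).ravel()  # household-level wealth scores
community_wealth = np.median(wealth_index)    # e.g. median score within a community
\end{verbatim}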
\begin{table}[ht]
\centering
\begin{threeparttable}
\caption{Asset items used in the asset wealth index}
\label{t:rwi}
\footnotesize
\begin{tabular}{ l c }
\hline
\textbf{Survey question} & \textbf{Factor loading} \\
\hline
\textit{Ownership of the following durable assets: (1=Yes, 0=No)}& \\
Motorcycle/scooter/tricycle & 0.1302 \\
Furniture: chairs & 0.1561 \\
Furniture: tables & 0.1823 \\
Furniture: beds & 0.1075 \\
Refrigerator & 0.2998 \\
Washing machine & 0.1826 \\
Microwave oven & 0.1914 \\
Gas cooker & 0.2507 \\
Plasma/flat screen TV & 0.2173 \\
Other TV & 0.2867 \\
Satellite dish (monthly subscription) & 0.2272 \\
Other satellite dish (DSTV, etc) & 0.2391 \\
Radio/CD/DVD Player & 0.2241 \\
Smart phones & 0.1265 \\
Other Telephone / phones & 0.0886 \\
Computer & 0.2195\\
Air conditioner & 0.1061\\
Power generator & 0.2777\\
Sewing machine & 0.1323\\
Electric iron & 0.3172\\
Pressure cooker & 0.1557\\
Electric fans & 0.3162\\
\hline
Number of households included (N=4,722) & 4,622 \\
\hline
\end{tabular}
\begin{tablenotes}[flushleft]
\item \emph{Notes:} Questions were coded equal to one if the household reported owning at least one of the items listed in each category. The wealth index was then constructed using the first component of the principal component analysis. Households with missing data for at least one of the categories were excluded.
\end{tablenotes}
\end{threeparttable}
\end{table}

\subsection{Community characteristics}\label{Apsubsec:commchars}
A community is on average composed of one to two villages or neighborhoods, consisting of 220 households (see details in Section \ref{sec:design}).\bigskip

\noindent\emph{Community wealth}\\ Community asset wealth is estimated as the median household's asset wealth score. Our household survey randomly interviewed 20 households per community, so we chose the median to limit possible distortions due to outliers (i.e. households with extremely high or low asset wealth). Our main results discretise community wealth at the median. Poor (rich) communities were those with asset wealth below (at or above) the median community. In addition to community wealth, we propose three alternative measures of communities' socio-economic status: a night light index, population density, and isolation.\bigskip

\noindent \emph{Night light index}\\ The first alternative measure is the average night light index recorded in 2013, before the intervention began. Using household GPS coordinates, we calculate the geographical centroid of each community, and define a 5km radius around the centroid. We used nighttime lights data made available by the U.S. National Oceanic and Atmospheric Administration (NOAA). The underlying observations are made by the Operational Linescan System (OLS) flown on the Defense Meteorological Satellite Program (DMSP) satellites. The intensity of nighttime lighting has
fitting factor for this signal against the bank using Eq.~(\ref{eq:FF}). The calculation has to be repeated over different values of $\bm{\theta}$, which collects the masses, the spins, and the parameters describing the relative location and orientation of the ``target binary'' with respect to the detector. The intrinsic luminosity of the target binary as well as the fitting factor of the templates depend not only on the masses and spins, but also on the parameters describing the location and orientation of the target binary. For example, the modulational effects of precession are largest for binaries highly inclined with respect to the detector, while the intrinsic luminosity of such binaries is lower (compared to binaries that are nearly ``face on''). Thus, highly inclined binaries (which show the largest modulational effects of precession) are intrinsically less likely to be observed than binaries that are face-on. In order to take such selection effects into account when evaluating the effectualness of the template bank, we perform a Monte-Carlo simulation of generic spinning binaries and average the fitting factor over the population. The waveforms are generated by solving the ordinary differential equations given by Eq.~(\ref{eq:PNEvlEqns}) in the ``\textsc{TaylorT5}'' approximation (see Sec.~III of Ref.~\cite{Ajith:2011ec} for the full description)~\footnote{This particular approximant is chosen so as to disentangle the effect of precession from the effect of the difference between different PN approximants; see Appendix~\ref{app:EffectualnessSTT4} for a discussion.}. The target binaries (for a set of fixed values of component masses) are uniformly distributed in volume throughout the local universe. Spin magnitudes are distributed uniformly between zero and a maximum value (see Table~\ref{tab:MonteCarloParams}) and the spin angles are isotropically distributed. The cosine of the angle $\iota$ describing the relative orientation of the initial total angular momentum of the binary with respect to the line of sight is uniformly distributed in the interval $(0, 1)$, while the polarization angle $\psi$ is uniformly distributed in $(0,\pi)$. A summary of the parameters of the Monte-Carlo simulations is given in Table~\ref{tab:MonteCarloParams}. In order to evaluate the effectualness of the bank, we compute the \emph{effective fitting factor} $\mathrm{FF_{eff}}$~\cite{BCV2}, in the following way:
\begin{equation}
\mathrm{FF_{eff}} = \left( \frac{\overline{\rho^3_\mathrm{bank}} }{\overline{\rho^3} }\right)^{1/3},
\label{eq:FF_eff}
\end{equation}
where $\rho \equiv \left<h^\mathrm{targ}, h^\mathrm{targ}\right>^{1/2}$ is the \emph{optimal} SNR in detecting the target binary, and $\rho_\mathrm{bank} \equiv \rho \, \mathrm{FF} $ is the \emph{suboptimal} SNR extracted by the template bank. The bars indicate ensemble averages over the full parameter space (while keeping the component masses fixed). The effective fitting factor $\mathrm{FF_{eff}}$ describes the average detection range of a suboptimal template bank as a fraction of the detection range of an optimal template bank. The corresponding fractional detection volume (and hence the fractional event rates assuming that the binaries are uniformly distributed throughout the universe) is given by the cube of $\mathrm{FF_{eff}}$. The estimated effective fitting factor $\mathrm{FF_{eff}}$ of the reduced-spin template bank is shown in the left panel of Figure~\ref{fig:FFeSpinTaylorT5BankSim}.
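As an aside, Eq.~(\ref{eq:FF_eff}) is straightforward to evaluate numerically. The following short sketch is ours and not part of the original analysis; it assumes arrays \texttt{rho\_opt} of optimal SNRs and \texttt{ff} of fitting factors obtained from such a Monte-Carlo population.
\begin{verbatim}
import numpy as np

def effective_fitting_factor(rho_opt, ff):
    """FF_eff = (<(rho*FF)^3> / <rho^3>)^(1/3), as in Eq. (FF_eff)."""
    rho_opt = np.asarray(rho_opt, dtype=float)
    rho_bank = rho_opt * np.asarray(ff, dtype=float)
    return (np.mean(rho_bank**3) / np.mean(rho_opt**3)) ** (1.0 / 3.0)

# ff_eff = effective_fitting_factor(rho_opt, ff)
# ff_eff**3 gives the corresponding fraction of detection volume (event rate).
\end{verbatim}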
The figure suggests that the bank is effectual towards detecting generic spinning binaries over almost all the relevant regions in the ``low-mass'' parameter space ($m_1 + m_2 < 12\,M_\odot$)~\footnote{We conveniently define the ``low-mass'' range of the parameter space based on the previous studies using non-spinning inspiral waveforms, where it was shown that it is essential to include the effects of post-inspiral stages in the waveform for binaries with total mass $\gtrsim 12 M_\odot$~\cite{Buonanno:2009zt,Ajith:2007xh}.}. The effective fitting factor is always greater than \red{$\sim 0.92$}, and over a significant fraction of the ``low-mass'' parameter space the fitting factor is greater than $0.95$ (note that the minimum match requirement $\mathcal{M}_\mathrm{min}$ on the template bank was chosen to be 0.95). Note that the region above the gray line in the figure is the region where the contribution from the post-inspiral stages is expected to be significant, and the inspiral template bank needs to be replaced by an inspiral-merger-ringdown bank.

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.5in]{Figure6.pdf}
\caption{Fitting factor (indicated by the color of the dots) of the reduced-spin template bank in detecting generic spinning binaries with component masses $(6 M_\odot, 6 M_\odot)$ [left plot] and $(10 M_\odot, 1.4 M_\odot)$ [right plot]. The x-axis corresponds to the spin magnitude of the more massive compact object, while the y-axis corresponds to the cosine of the angle between the spin and initial Newtonian orbital angular momentum. In the left plot (equal-mass binary) fitting factors are $\sim 1$ irrespective of the magnitude and orientation of the spin vector, while in the right plot (highly unequal-mass binary) fitting factors can be as low as $\sim 0.7$ for binaries with large, misaligned spins.}
\label{fig:FFScatterPlot_Chi1Mag_Chi1Angle}
\end{center}
\end{figure}

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.5in]{Figure7.pdf}
\caption{Normalized SNR (such that the maximum SNR is 1) of generic spinning binaries plotted against the fitting factor (FF) of the reduced-spin template bank in detecting them. It can be seen that fitting factors are high for binaries with large SNR. The color of the dots corresponds to the sine of the inclination of the total angular momentum vector with respect to the line of sight (darker shades correspond to binaries whose total angular momentum is along the line of sight). The left plot corresponds to binaries with component masses $(6 M_\odot, 6 M_\odot)$ and the right plot to binaries with component masses $(10 M_\odot, 1.4 M_\odot)$.}
\label{fig:ScatterPlot_SNR_FF_SinIncl}
\end{center}
\end{figure}

The high effectualness of the reduced-spin template bank towards generic spinning binaries can be attributed to two factors. Firstly, for binaries with comparable masses ($m_1 \sim m_2$) the total angular momentum is dominated by the orbital angular momentum, and hence the modulational effects of spin precession on the orbit, and hence on the observed signal, are small. In this regime, non-precessing waveforms provide a good approximation to the observed signal. However, as the mass ratio increases, spin angular momentum becomes comparable to the orbital angular momentum and the modulational effects of precession become appreciable. Effectualness of non-precessing templates thus decreases with increasing mass ratio (see Fig.~\ref{fig:FFScatterPlot_Chi1Mag_Chi1Angle}).
Secondly, there is an intrinsic selection bias towards binaries that are nearly ``face-on'' with respect to the detector (where the modulational effects of precession are weak while the signal is strong) as opposed to binaries that are nearly ``edge-on'' (where the modulational effects are strong while the signal is weak). Thus the fitting factors are high for binaries with large SNR. This effect is illustrated in Fig.~\ref{fig:ScatterPlot_SNR_FF_SinIncl} for the case of an equal-mass binary (left) and for the case of a highly unequal-mass binary (right). This helps the reduced-spin template bank to have a reasonably high effective fitting factor for a population of generic spinning binaries. The reduction in the fitting factor of the reduced-spin template bank in the high-mass and high-mass-ratio regimes has multiple causes. The modulational effects of precession, which are not modeled by our templates, increase with increasing mass ratio. There are additional factors causing the loss: the difference between different PN approximants becomes considerable in the high-mass, high-mass-ratio regime (reflecting the lack of knowledge of the higher order spin-dependent PN terms), causing appreciable mismatch between the target waveforms and the template waveforms even in regions where they should agree (e.g., in the limit of non-precessing spins). Hence, it is likely that the fitting factor can be further improved by including the higher order PN terms, assuming that these higher order terms will reduce the difference between different PN approximants (see, e.g.,~\cite{Nitz:2013mxa}). The effective fitting factor of a \emph{non-spinning} template bank (covering the same mass range) is shown in the right panel of Figure~\ref{fig:FFeSpinTaylorT5BankSim}. The fitting factor of the non-spinning bank is \red{$0.83$--$0.88$} over the same parameter space. The average increase in the detection volume provided by a search employing the reduced-spin template bank (compared with the corresponding non-spinning template bank) is shown in Figure~\ref{fig:VolIncr}. The figure suggests that we can expect an increase of \red{$\sim 20$--$52\%$} in the average detection volume at a \emph{fixed SNR threshold}. Note that the real figure of merit of the improvement would be the increase in the detection volume
        Lambda = dot(Phi, R)
        u, s, vh = svd(dot(Phi.T, asarray(Lambda)**3 - (gamma/p) * dot(Lambda, diag(diag(dot(Lambda.T, Lambda))))))
        R = dot(u, vh)
        d = sum(s)
        if d/d_old < tol:
            break
    return dot(Phi, R)

from scipy import stats

y = iris.target
X = iris.data
stats.describe(X)  # uneven variance
X = StandardScaler().fit_transform(X)

pca = PCA(n_components=3)
X_ = pca.fit_transform(X)
pca.components_
pca.explained_variance_
pca.explained_variance_ratio_
pca_biplot(X_, pca.components_, (1, 2), labels=iris.feature_names)
pca_biplot(X_, pca.components_, (1, 3), labels=iris.feature_names)
pca_biplot(X_, pca.components_, (2, 3), labels=iris.feature_names)

# varimax rotation of components
varimax(pca.components_).round(2)
iris.feature_names

"""
IncrementalPCA?
Init signature: IncrementalPCA(n_components=None, whiten=False, copy=True, batch_size=None)
a replacement for principal component analysis (PCA) when the dataset to be decomposed
is too large to fit in memory. IPCA builds a low-rank approximation for the input data
using an amount of memory which is independent of the number of input data samples.
It is still dependent on the input data features, but changing the batch size allows
for control of memory usage. The computational overhead of each SVD is
``O(batch_size * n_features ** 2)``, but only 2 * batch_size samples remain in memory
at a time.

SparsePCA?
Init signature: SparsePCA(n_components=None, alpha=1, ridge_alpha=0.01, max_iter=1000,
                          tol=1e-08, method='lars', n_jobs=1, U_init=None, V_init=None,
                          verbose=False, random_state=None)
Finds the set of sparse components that can optimally reconstruct the data.
The amount of sparseness is controllable by the coefficient of the L1 penalty,
given by the parameter alpha.

TruncatedSVD?
TruncatedSVD(n_components=2, algorithm='randomized', n_iter=5, random_state=None, tol=0.0)
SVD suffers from a problem called "sign indeterminancy", which means the sign of the
``components_`` and the output from transform depend on the algorithm and random state.
To work around this, fit instances of this class to data once, then keep the instance
around to do transformations.
"""

y = iris.target
X = iris.data
X.shape
stats.describe(X)  # uneven variance
X = StandardScaler().fit_transform(X)

pca = IncrementalPCA(n_components=3, batch_size=50)
X_ = pca.fit_transform(X)
pca.components_
pca.explained_variance_
pca.explained_variance_ratio_
pca_biplot(X_, pca.components_, (1, 2), labels=iris.feature_names)
pca_biplot(X_, pca.components_, (1, 3), labels=iris.feature_names)
pca_biplot(X_, pca.components_, (2, 3), labels=iris.feature_names)

"""
http://scikit-learn.org/stable/modules/manifold.html#multi-dimensional-scaling-mds
Multidimensional Scaling (MDS) - a low-dimensional representation of the data in which
the distances respect well the relative distances in the original high-dimensional space.
- model similarity or dissimilarity data as distances in a geometric space
- OPEN TO CUSTOM MEASUREMENT OF DISSIMILARITY BETWEEN ROWS OF N-FEATURES
- two types of MDS algorithm: metric and non-metric

metric: the similarity matrix arises from a metric (and thus respects the triangular
    inequality); the distances between two output points are then set to be as close
    as possible to the similarity or dissimilarity data. Known as 'absolute' MDS.
non-metric: algorithms will try to preserve the order of the distances, and hence seek
    a monotonic relationship between the distances in the embedded space and the
    similarities/dissimilarities.

objective to minimise is the stress:
    stress = sum over i<j of [dist(i,j)(X) - transformed_dist(i,j)(X)]^2

Init signature: MDS(n_components=2, metric=True, n_init=4, max_iter=300, verbose=0,
                    eps=0.001, n_jobs=1, random_state=None, dissimilarity='euclidean')

Parameters
----------
n_components : int, optional, default: 2
    Number of dimensions in which to immerse the dissimilarities.
metric : boolean, optional, default: True
    If ``True``, perform metric MDS; otherwise, perform nonmetric MDS.
n_init : int, optional, default: 4
    Number of times the SMACOF algorithm will be run with different initializations.
    The final results will be the best output of the runs, determined by the run with
    the smallest final stress.
max_iter : int, optional, default: 300
    Maximum number of iterations of the SMACOF algorithm for a single run.
verbose : int, optional, default: 0
    Level of verbosity.
eps : float, optional, default: 1e-3
    Relative tolerance with respect to stress at which to declare convergence.
n_jobs : int, optional, default: 1
    The number of jobs to use for the computation. If multiple initializations are
    used (``n_init``), each run of the algorithm is computed in parallel. If -1 all
    CPUs are used. If 1 is given, no parallel computing code is used at all, which is
    useful for debugging. For ``n_jobs`` below -1, (``n_cpus + 1 + n_jobs``) are used.
    Thus for ``n_jobs = -2``, all CPUs but one are used.
random_state : int, RandomState instance or None, optional, default: None
    The generator used to initialize the centers. If int, random_state is the seed
    used by the random number generator; If RandomState instance, random_state is the
    random number generator; If None, the random number generator is the RandomState
    instance used by `np.random`.
dissimilarity : 'euclidean' | 'precomputed', optional, default: 'euclidean'
    Dissimilarity measure to use:
    - 'euclidean': Pairwise Euclidean distances between points in the dataset.
    - 'precomputed': Pre-computed dissimilarities are passed directly to ``fit`` and
      ``fit_transform``.

Attributes
----------
embedding_ : array-like, shape (n_components, n_samples)
    Stores the position of the dataset in the embedding space.
stress_ : float
    The final value of the stress (sum of squared distance of the disparities and the
    distances for all constrained points).
""" y=iris.target X=iris.data X = StandardScaler().fit_transform(X) mds = MDS(n_components=2, metric=True, dissimilarity='euclidean', n_init=20, n_jobs=1, verbose=True) X_ = mds.fit_transform(X) mds.embedding_ mds.stress_ mds.dissimilarity_matrix_ mds.dissimilarity_matrix_.shape plt.scatter(X_[:,0], X_[:,1], c=iris.target) plt.axvline(0);plt.axhline(0) #non-metric MDS nmds = MDS(n_components=2, metric=False, dissimilarity='euclidean', n_init=1, n_jobs=1, verbose=True) nX_ = nmds.fit_transform(X, init=mds.embedding_) plt.scatter(nX_[:,0], nX_[:,1], c=iris.target) plt.axvline(0);plt.axhline(0) y=wine.target X=wine.data X.shape X = StandardScaler().fit_transform(X) mds = MDS(n_components=2, metric=True, dissimilarity='euclidean', n_init=20, n_jobs=1, verbose=True) X_ = mds.fit_transform(X) mds.embedding_ mds.stress_ mds.dissimilarity_matrix_ mds.dissimilarity_matrix_.shape plt.scatter(X_[:,0], X_[:,1], c=wine.target) plt.axvline(0);plt.axhline(0) mds = MDS(n_components=3, metric=True, dissimilarity='euclidean', n_init=20, n_jobs=1, verbose=True) X_ = mds.fit_transform(X) mds.embedding_ mds.stress_ mds.dissimilarity_matrix_ mds.dissimilarity_matrix_.shape plt.scatter(X_[:,0], X_[:,1], c=wine.target) plt.scatter(X_[:,1], X_[:,2], c=wine.target) plt.scatter(X_[:,0], X_[:,2], c=wine.target) """ Kernal PCA - choice of kernel to best find projection in data that's not linearly seperable http://scikit-learn.org/stable/modules/metrics.html#metrics sklearn.metrics.pairwise.PAIRWISE_KERNEL_FUNCTIONS sklearn.metrics.pairwise.PAIRWISE_BOOLEAN_FUNCTIONS sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS from sklearn.metrics.pairwise import (linear_kernel, cosine_similarity, #L2 normalised equivalent to linear, #euclidean measure distance, cos measure angle polynomial_kernel, # degree d sigmoid_kernel, #tanh, NN activation function rbf_kernel, # exp(|X-Y|^2) laplacian_kernel, # exp(|X-Y|) chi2_kernel, # data non-negative, normalised to (0,1) like discrete ) """ X, y = make_circles(n_samples=400, factor=.3, noise=.05) kpca = KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=10) X_kpca = kpca.fit_transform(X) #X_back = kpca.inverse_transform(X_kpca) #X_back == X plt.figure() plt.subplot(1, 2, 1, aspect='equal') plt.scatter(X[:, 0], X[:,1], c=y, s=20, edgecolor='k') plt.title("Original space") plt.xlabel("$x^1$") plt.ylabel("$x_2$") plt.subplot(1, 2, 2, aspect='equal') plt.scatter(X_kpca[:, 0], X_kpca[:, 1], c=y, s=20, edgecolor='k') plt.title("Projection by KPCA") plt.xlabel("1st principal component in space induced by $\phi$") plt.ylabel("2nd component") """ Factor Analysis linear generative model with Gaussian latent variables. - observations are assumed to be caused by a linear transformation of lower dimensional latent factors and unequal added Gaussian noise. - Factors are distributed according to a Gaussian with zero mean and unit covariance. - The noise is also zero mean and has an arbitrary diagonal covariance matrix. If we would restrict the model further, by assuming that the Gaussian noise is even isotropic (all diagonal entries are the same) we would obtain PCA http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_fa_model_selection.html#sphx-glr-auto-examples-decomposition-plot-pca-vs-fa-model-selection-py compare PCA and FA with cross-validation on low rank data corrupted with homoscedastic noise (noise variance is the same for each feature) or heteroscedastic noise (noise variance is the different for each feature). 
PCA fails and overestimates the rank when heteroscedastic noise is present.

Parameters
----------
n_components : int | None
    Dimensionality of latent space, the number of components of ``X`` that are obtained
    after ``transform``. If None, n_components is set to the number of features.
tol : float
    Stopping tolerance for EM algorithm.
copy : bool
    Whether to make a copy of X. If ``False``, the input X gets overwritten during fitting.
max_iter : int
    Maximum number of iterations.
noise_variance_init : None | array, shape=(n_features,)
    The initial guess of the noise variance for each feature. If None, it defaults to
    np.ones(n_features).
svd_method : {'lapack', 'randomized'}
    Which SVD method to use. If 'lapack' use standard SVD from scipy.linalg, if
    'randomized' use fast ``randomized_svd`` function. Defaults to 'randomized'. For
    most applications 'randomized' will be sufficiently precise while providing
    significant speed gains. Accuracy can also be improved by setting higher values for
    `iterated_power`. If this is not sufficient, for maximum precision you should
    choose 'lapack'.
iterated_power : int, optional
    Number of iterations for the power method. 3 by default. Only used if
    ``svd_method`` equals 'randomized'.
random_state : int, RandomState instance or None, optional (default=0)
    If int, random_state is the seed used by the random number generator; If
    RandomState instance, random_state is the random number generator; If None, the
    random number generator is the RandomState instance used by `np.random`. Only used
    when ``svd_method`` equals 'randomized'.

Attributes
----------
components_ : array, [n_components, n_features]
    Components with maximum variance.
loglike_ : list, [n_iterations]
    The log likelihood at each iteration.
noise_variance_ : array, shape=(n_features,)
    The estimated noise variance for each feature.
n_iter_ : int
    Number of iterations run.
"""

y = iris.target
X = iris.data
X = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=2, noise_variance_init=np.ones(X.shape[1]), svd_method='lapack')
X_ = fa.fit_transform(X)
fa.components_
fa.loglike_
fa.noise_variance_
plt.scatter(X_[:, 0], X_[:, 1], c=iris.target); plt.show()
pca_biplot(X_, fa.components_, (1, 2), labels=iris.feature_names)

y = wine.target
X = wine.data
X = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=2, noise_variance_init=np.ones(X.shape[1]), svd_method='lapack')
X_ = fa.fit_transform(X)
fa.components_
fa.loglike_
fa.noise_variance_
plt.scatter(X_[:, 0], X_[:, 1], c=wine.target); plt.show()
pca_biplot(X_, fa.components_, (1, 2), labels=wine.feature_names)

y = breast_cancer.target
X = breast_cancer.data
X = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=2, noise_variance_init=np.ones(X.shape[1]), svd_method='lapack')
X_ = fa.fit_transform(X)
fa.components_
fa.loglike_
fa.noise_variance_
plt.scatter(X_[:, 0], X_[:, 1], c=breast_cancer.target); plt.show()
pca_biplot(X_, fa.components_, (1, 2), labels=breast_cancer.feature_names)

"""
Independent Component Analysis (ICA)
- separates a multivariate signal into additive subcomponents that are maximally independent
- ICA model does not include a
Therefore, we want to construct an initial guess in $\mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}$ so that $\boldsymbol{z}^{(0)}$ is sufficiently close to the ground truth and then analyze the behavior of $\boldsymbol{z}^{(t)}$.

\vskip0.25cm

In the rest of this section, we establish the following chain of inclusions:
\begin{equation*}
\underbrace{ \frac{1}{\sqrt{3}}\mathcal{N}_d\cap \frac{1}{\sqrt{3}} \mathcal{N}_{\mu} \cap \mathcal{N}_{\frac{2\varepsilon}{5\sqrt{s}\kappa}}}_{\text{Initial guess}} \subset \underbrace{\mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}}_{ \{\boldsymbol{z}^{(t)}\}_{t\geq 0} \text{ in } \mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}} } \subset \underbrace{\mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon}}_{\text{Key conditions hold over }\mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon} }.
\end{equation*}
Now we give a more detailed explanation of the relation above, which constitutes the main structure of the proof:
\begin{enumerate}
\item We will show $\frac{1}{\sqrt{3}}\mathcal{N}_d\cap \frac{1}{\sqrt{3}} \mathcal{N}_{\mu} \cap \mathcal{N}_{\frac{2\varepsilon}{5\sqrt{s}\kappa}} \subset \mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}$ in the proof of Theorem~\ref{thm:main} in Section~\ref{s:mainthm}, which is quite straightforward.
\item Lemma~\ref{lem:betamu} explains why it holds that $\mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}\subset \mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon}$ and where the $s^2$-bottleneck comes from.
\item Lemma~\ref{lem:line_section} implicitly shows that the iterates $\boldsymbol{z}^{(t)}$ will remain in $\mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$ if the initial guess $\boldsymbol{z}^{(0)}$ is inside $\mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}$ and $\widetilde{F}(\boldsymbol{z}^{(t)})$ is monotonically decreasing (simply by induction). Lemma~\ref{lem:induction} makes this observation explicit by showing that $\boldsymbol{z}^{(t)}\in \mathcal{N}_{\epsilon} \cap \mathcal{N}_{\widetilde{F}}$ implies $\boldsymbol{z}^{(t+1)} : = \boldsymbol{z}^{(t)} - \eta\nabla \widetilde{F}(\boldsymbol{z}^{(t)})\in \mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$ if the stepsize $\eta$ obeys $\eta \leq \frac{1}{C_L}$. Moreover, Lemma~\ref{lem:induction} guarantees sufficient decrease of $\widetilde{F}(\boldsymbol{z}^{(t)})$ in each iteration, which paves the way towards the proof of linear convergence of $\widetilde{F}(\boldsymbol{z}^{(t)})$ and thus $\boldsymbol{z}^{(t)}.$
\end{enumerate}

\vskip0.25cm

Remember that $\mathcal{N}_d$ and $\mathcal{N}_{\mu}$ are both convex sets, and the purpose of introducing regularizers $G_i(\boldsymbol{h}_i, \boldsymbol{x}_i)$ is to approximately project the iterates onto $\mathcal{N}_d\cap\mathcal{N}_{\mu}.$ Moreover, we hope that once the iterates are inside $\mathcal{N}_{\epsilon}$ and inside a sublevel set $\mathcal{N}_{\widetilde{F}}$, they will never escape from $\mathcal{N}_{\widetilde{F}}\cap\mathcal{N}_{\epsilon}$. These ideas are reflected in the following lemma.

\begin{lemma}
\label{lem:betamu}
Assume $0.9d_{i0}\leq d_i\leq 1.1d_{i0}$ and $0.9d_{0}\leq d\leq 1.1d_0$. There holds $\mathcal{N}_{\widetilde{F}} \subset \mathcal{N}_d \cap \mathcal{N}_{\mu}$; moreover, under Conditions~\ref{cond:rip} and~\ref{cond:robust}, we have $\mathcal{N}_{\widetilde{F}} \cap \mathcal{N}_{\epsilon}\subset \mathcal{N}_d \cap \mathcal{N}_{\mu}\cap\mathcal{N}_{\frac{9}{10}\epsilon}$.
\end{lemma} \begin{proof} If $(\vct{h}, \vct{x}) \notin \mathcal{N}_d \cap \mathcal{N}_{\mu}$, by the definition of $G$ in~\eqref{def:G}, at least one component in $G$ exceeds $\rho G_0\left(\frac{2d_{i0}}{d_i}\right)$. We have \begin{eqnarray*} \widetilde{F}(\boldsymbol{h}, \boldsymbol{x}) & \geq & \rho G_0\left(\frac{2d_{i0}}{d_i}\right) \geq (d^2 + 2\|\boldsymbol{e}\|^2) \left( \frac{2d_{i0}}{d_i} - 1\right)^2 \\ & \geq & (2/1.1 - 1)^2 (d^2 + 2\|\boldsymbol{e}\|^2) \\ & \geq & \frac{1}{2} d_{0}^2 + \|\boldsymbol{e}\|^2 > \frac{\varepsilon^2 d_0^2}{3s \kappa^2} + \|\boldsymbol{e}\|^2, \end{eqnarray*} where $\rho \geq d^2 + 2\|\boldsymbol{e}\|^2$, $0.9d_0 \leq d \leq 1.1d_0$ and $0.9d_{i0} \leq d_i\leq 1.1d_{i0}.$ This implies $(\boldsymbol{h}, \boldsymbol{x}) \notin \mathcal{N}_{\widetilde{F}}$ and hence $\mathcal{N}_{\widetilde{F}} \subset \mathcal{N}_d \cap \mathcal{N}_{\mu}$. \\ Note that $(\boldsymbol{h}, \boldsymbol{x})\in \mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon}$ if $(\boldsymbol{h}, \boldsymbol{x}) \in \mathcal{N}_{\widetilde{F}} \cap \mathcal{N}_{\epsilon}$. Applying~\eqref{eq:LUBD} gives \begin{equation*} \frac{2}{3}\delta^2d_0^2 -\frac{\varepsilon\delta d_0^2}{5\sqrt{s}\kappa} + \|\boldsymbol{e}\|^2 \leq F(\boldsymbol{h}, \boldsymbol{x})\leq \widetilde{F}(\boldsymbol{h}, \boldsymbol{x})\leq\frac{\varepsilon^2 d_0^2}{3s\kappa^2} + \|\boldsymbol{e}\|^2 \end{equation*} which implies that $\delta \leq \frac{9}{10}\frac{\varepsilon}{\sqrt{s}\kappa}.$ By definition of $\delta$ in~\eqref{def:delta}, there holds \begin{equation}\label{eq:bottleneck} \frac{81\varepsilon^2}{100s\kappa^2} \geq \delta^2 = \frac{\sum_{i=1}^s \delta_i^2d_{i0}^2}{\sum_{i=1}^s d_{i0}^2} \geq \frac{\sum_{i=1}^s \delta_i^2}{s\kappa^2} \geq \frac{1}{s\kappa^2} \max_{1\leq i\leq s}\delta_i^2, \end{equation} which gives $\delta_i \leq \frac{9}{10}\varepsilon$ and $(\boldsymbol{h}, \boldsymbol{x})\in \mathcal{N}_{\frac{9}{10}\varepsilon}.$ \end{proof} \begin{remark} The $s^2$-bottleneck comes from~\eqref{eq:bottleneck}. If $\delta \leq \varepsilon$ is small, we cannot guarantee that each $\delta_i$ is also smaller than $\varepsilon$. Just consider the simplest case when all $d_{i0}$ are the same: then $d_0^2 = \sum_{i=1}^s d_{i0}^2 = s d_{i0}^2$ and there holds \begin{equation*} \varepsilon^2\geq \delta^2 = \frac{1}{s}\sum_{i=1}^s \delta_i^2. \end{equation*} Obviously, we cannot conclude that $\max \delta_i \leq \varepsilon$ but only say that $\delta_i \leq \sqrt{s}\varepsilon.$ This is why we require $\delta ={\cal O}(\frac{\varepsilon}{\sqrt{s}})$ to ensure $\delta_i \leq \varepsilon$, which gives $s^2$-dependence in $L.$ \end{remark} \begin{lemma} \label{lem:line_section} Denote $\vct{z}_1 = (\boldsymbol{h}_1, \boldsymbol{x}_1)$ and $\vct{z}_2 = (\boldsymbol{h}_2, \boldsymbol{x}_2)$. Let $\vct{z}(\lambda):=(1-\lambda)\vct{z}_1 + \lambda \vct{z}_2$. If $\vct{z}_1 \in \mathcal{N}_{\epsilon}$ and $\vct{z}(\lambda) \in \mathcal{N}_{\widetilde{F}}$ for all $\lambda \in [0, 1]$, we have $\vct{z}_2 \in \mathcal{N}_{\epsilon}$. \end{lemma} \begin{proof} Note that for $\boldsymbol{z}_1\in\mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}$, we have $\boldsymbol{z}_1\in \mathcal{N}_d\cap\mathcal{N}_{\mu}\cap\mathcal{N}_{\frac{9}{10}\varepsilon}$ which follows from the second part of Lemma~\ref{lem:betamu}. Now we prove $\boldsymbol{z}_2\in\mathcal{N}_{\epsilon}$ by contradiction. 
Suppose, to the contrary, that $\boldsymbol{z}_2 \notin \mathcal{N}_{\epsilon}$ while $\boldsymbol{z}_1 \in \mathcal{N}_{\epsilon}$. By continuity, there exists $\vct{z}(\lambda_0):=(\vct{h}(\lambda_0), \vct{x}(\lambda_0)) \in \mathcal{N}_{\epsilon}$ for some $\lambda_0 \in [0, 1]$ such that $\max_{1\leq i\leq s}\frac{\|\boldsymbol{h}_i(\lambda_0)\boldsymbol{x}_i^*(\lambda_0) - \boldsymbol{h}_{i0}\boldsymbol{x}_{i0}^*\|_F}{d_{i0}} = \epsilon$. Therefore, $\vct{z}(\lambda_0) \in \mathcal{N}_{\widetilde{F}}\cap\mathcal{N}_{\epsilon}$ and Lemma~\ref{lem:betamu} implies $\max_{1\leq i\leq s}\frac{\|\boldsymbol{h}_i(\lambda_0)\boldsymbol{x}_i^*(\lambda_0) - \boldsymbol{h}_{i0}\boldsymbol{x}_{i0}^*\|_F}{d_{i0}} \leq \frac{9}{10}\epsilon$, which contradicts $\max_{1\leq i\leq s}\frac{\|\boldsymbol{h}_i(\lambda_0)\boldsymbol{x}_i^*(\lambda_0) - \boldsymbol{h}_{i0}\boldsymbol{x}_{i0}^*\|_F}{d_{i0}} = \epsilon$.
\end{proof}

\begin{lemma}
\label{lem:induction}
Let the stepsize $\eta \leq \frac{1}{C_L}$, $\boldsymbol{z}^{(t)} : = (\boldsymbol{u}^{(t)}, \boldsymbol{v}^{(t)})\in\mathbb{C}^{s(K + N)}$ and $C_L$ be the Lipschitz constant of $\nabla\widetilde{F}(\boldsymbol{z})$ over $\mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon}$ in~\eqref{def:CL}. If $\boldsymbol{z}^{(t)}\in \mathcal{N}_{\epsilon} \cap \mathcal{N}_{\widetilde{F}}$, we have $\boldsymbol{z}^{(t+1)} \in \mathcal{N}_{\epsilon} \cap \mathcal{N}_{\widetilde{F}}$ and
\begin{equation}
\label{eq:decreasing2}
\widetilde{F}(\boldsymbol{z}^{(t+1)}) \leq \widetilde{F}(\boldsymbol{z}^{(t)}) - \eta \|\nabla \widetilde{F}(\boldsymbol{z}^{(t)})\|^2
\end{equation}
where $\boldsymbol{z}^{(t+1)} = \boldsymbol{z}^{(t)} - \eta\nabla\widetilde{F}(\boldsymbol{z}^{(t)}).$
\end{lemma}

\begin{remark}
This lemma tells us that once $\boldsymbol{z}^{(t)}\in\mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$, the next iterate $\boldsymbol{z}^{(t+1)} = \boldsymbol{z}^{(t)} - \eta \nabla \widetilde{F}(\boldsymbol{z}^{(t)})$ is also inside $\mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$ as long as the stepsize $\eta \leq \frac{1}{C_L}$. In other words, $\mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$ is in fact a stronger version of the basin of attraction. Moreover, the objective function will decay sufficiently in each step as long as we can bound $\|\nabla \widetilde{F}\|$ from below, which is guaranteed by the Local Regularity Condition~\ref{cond:reg}.
\end{remark}

\begin{proof}
Let $\phi(\tau) := \widetilde{F}(\boldsymbol{z}^{(t)} - \tau \nabla \widetilde{F}(\boldsymbol{z}^{(t)}))$, $\phi(0) = \widetilde{F}(\boldsymbol{z}^{(t)})$ and consider the following quantity:
\begin{equation*}
\tau_{\max}: = \max \{\mu: \phi(\tau) \leq \widetilde{F}(\boldsymbol{z}^{(t)}), 0\leq\tau \leq \mu \},
\end{equation*}
where $\tau_{\max}$ is the largest stepsize such that the objective function $\widetilde{F}(\boldsymbol{z})$ evaluated at any point over the whole line segment $\{\boldsymbol{z}^{(t)} -\tau \nabla\widetilde{F}(\boldsymbol{z}^{(t)}), 0\leq \tau\leq \tau_{\max}\}$ is not greater than $\widetilde{F}(\boldsymbol{z}^{(t)})$. Now we will show $\tau_{\max} \geq \frac{1}{C_L}$. Obviously, if $\|\nabla\widetilde{F}(\boldsymbol{z}^{(t)})\| = 0$, it holds automatically. Consider $\|\nabla\widetilde{F}(\boldsymbol{z}^{(t)})\|\neq 0$ and assume $\tau_{\max} < \frac{1}{C_L}$. First note that
\begin{equation*}
\frac{\diff}{\diff \tau} \phi(\tau)\Big|_{\tau = 0} < 0,
\end{equation*}
which implies $\tau_{\max} > 0$. By the definition of $\tau_{\max}$, there holds $\phi(\tau_{\max}) = \phi(0)$ since $\phi(\tau)$ is a continuous function w.r.t. $\tau$.
Lemma~\ref{lem:line_section} implies
\begin{equation*}
\{ \boldsymbol{z}^{(t)} - \tau \nabla\widetilde{F}(\boldsymbol{z}^{(t)}), 0\leq \tau \leq \tau_{\max} \} \subseteq \mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}.
\end{equation*}
Now we apply Lemma~\ref{lem:DSL}, the modified descent lemma, and obtain
\begin{equation*}
\widetilde{F}(\boldsymbol{z}^{(t)} - \tau_{\max}\nabla\widetilde{F}(\boldsymbol{z}^{(t)})) \leq \widetilde{F}(\boldsymbol{z}^{(t)}) - (2\tau_{\max} - C_L\tau_{\max}^2)\|\nabla\widetilde{F}(\boldsymbol{z}^{(t)})\|^2 \leq \widetilde{F}(\boldsymbol{z}^{(t)}) - \tau_{\max}\|\nabla\widetilde{F}(\boldsymbol{z}^{(t)})\|^2
\end{equation*}
since $C_L\tau_{\max} \leq 1.$ In other words, $\phi(\tau_{\max}) = \widetilde{F}(\boldsymbol{z}^{(t)} - \tau_{\max}\nabla\widetilde{F}(\boldsymbol{z}^{(t)})) < \widetilde{F}(\boldsymbol{z}^{(t)}) = \phi(0)$, which contradicts $\phi(\tau_{\max}) = \phi(0)$. Therefore, we conclude that $\tau_{\max} \geq \frac{1}{C_L}$. For any $\eta \leq \frac{1}{C_L}$, Lemma~\ref{lem:line_section} implies
\begin{equation*}
\{ \boldsymbol{z}^{(t)} - \tau \nabla\widetilde{F}(\boldsymbol{z}^{(t)}), 0\leq \tau \leq \eta \} \subseteq \mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}
\end{equation*}
and applying Lemma~\ref{lem:DSL} gives
\begin{equation*}
\widetilde{F}(\boldsymbol{z}^{(t)} - \eta \nabla\widetilde{F}(\boldsymbol{z}^{(t)})) \leq \widetilde{F}(\boldsymbol{z}^{(t)}) - (2\eta - C_L\eta^2)\|\nabla\widetilde{F}(\boldsymbol{z}^{(t)})\|^2 \leq \widetilde{F}(\boldsymbol{z}^{(t)}) - \eta\|\nabla\widetilde{F}(\boldsymbol{z}^{(t)})\|^2.
\end{equation*}
\end{proof}

\subsection{Proof of Theorem~\ref{thm:main}}
\label{s:mainthm}
Combining all the considerations above, we now prove Theorem~\ref{thm:main} to conclude this section.
\begin{proof}
The proof consists of three parts:
\paragraph{Part I: Proof of $\boldsymbol{z}^{(0)} : = (\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)}) \in \mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$.}
From the assumption of Theorem~\ref{thm:main},
\begin{equation*}
\boldsymbol{z}^{(0)} \in \frac{1}{\sqrt{3}}\mathcal{N}_d \cap \frac{1}{\sqrt{3}}\mathcal{N}_{\mu}\cap \mathcal{N}_{\frac{2\varepsilon}{5\sqrt{s}\kappa}}.
\end{equation*}
First we show $G(\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)}) = 0$: for $1\leq i\leq s$, by the definition of $\mathcal{N}_d$ and $\mathcal{N}_{\mu}$,
\begin{equation*}
\frac{\|\boldsymbol{u}^{(0)}_i\|^2}{2d_i} \leq \frac{2d_{i0}}{3d_i} < 1, \quad \frac{L|\boldsymbol{b}_l^* \boldsymbol{u}^{(0)}_i|^2}{8d_i\mu^2} \leq \frac{L}{8d_i\mu^2} \cdot\frac{16d_{i0}\mu^2}{3L} \leq \frac{2d_{i0}}{3d_i} < 1,
\end{equation*}
where $\|\boldsymbol{u}^{(0)}_i\| \leq \frac{2\sqrt{d_{i0}}}{\sqrt{3}}$, $\sqrt{L}\|\boldsymbol{B}\boldsymbol{u}^{(0)}_i\|_{\infty} \leq \frac{4 \sqrt{d_{i0}}\mu}{\sqrt{3}}$ and $\frac{9}{10}d_{i0} \leq d_i\leq \frac{11}{10}d_{i0}.$ Therefore
$$G_0\left( \frac{\|\boldsymbol{u}^{(0)}_i\|^2}{2d_i}\right) = G_0\left( \frac{\|\boldsymbol{v}^{(0)}_i\|^2}{2d_i}\right) = G_0\left(\frac{L|\boldsymbol{b}_l^*\boldsymbol{u}_i^{(0)}|^2}{8d_i\mu^2}\right) = 0$$
for all $1\leq l\leq L$ and $1\leq i\leq s$, and hence $G(\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)}) = 0.$ For $\boldsymbol{z}^{(0)} = (\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)})\in \mathcal{N}_{\frac{2\varepsilon}{5\sqrt{s}\kappa}}$, we have $\delta(\boldsymbol{z}^{(0)}) := \frac{\sqrt{\sum_{i=1}^s \delta_i^2d_{i0}^2 }}{d_0} \leq \frac{2\varepsilon}{5\sqrt{s}\kappa}.$ Since $\delta(\boldsymbol{z}^{(0)}) \leq \frac{2\varepsilon}{5\sqrt{s}\kappa}$ and $G(\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)}) = 0$, applying~\eqref{eq:LUBD} gives
\begin{equation*}
\widetilde{F}(\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)}) = F(\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)}) \leq \|\boldsymbol{e}\|^2 + \frac{3}{2}\delta^2(\boldsymbol{z}^{(0)})d_0^2 + \frac{\varepsilon \delta(\boldsymbol{z}^{(0)}) d_0^2}{5\sqrt{s}\kappa} \leq \|\boldsymbol{e}\|^2 + \frac{\varepsilon^2 d_0^2}{3s\kappa^2}
\end{equation*}
and hence $\boldsymbol{z}^{(0)} = (\boldsymbol{u}^{(0)}, \boldsymbol{v}^{(0)})\in \mathcal{N}_{\epsilon}\cap \mathcal{N}_{\widetilde{F}}.$

\paragraph{Part II: The linear convergence of the objective function $\widetilde{F}(\boldsymbol{z}^{(t)})$.}
Denote $\boldsymbol{z}^{(t)} : = (\boldsymbol{u}^{(t)}, \boldsymbol{v}^{(t)}).$ Since $\boldsymbol{z}^{(0)}\in\mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$, Lemma~\ref{lem:induction} implies $\boldsymbol{z}^{(t)}\in \mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}$ for all $t\geq 0$ by induction if $\eta \leq \frac{1}{C_L}$. Moreover, combining Condition~\ref{cond:reg} with Lemma~\ref{lem:induction} leads to
\begin{equation*}
\widetilde{F}(\boldsymbol{z}^{(t)}) \leq \widetilde{F}(\boldsymbol{z}^{(t-1)}) - \eta\omega \left[ \widetilde{F}(\boldsymbol{z}^{(t-1)}) - c \right]_+, \quad t\geq 1
\end{equation*}
with $c = \|\boldsymbol{e}\|^2 + a\|\mathcal{A}^*(\boldsymbol{e})\|^2$ and $a = 2000s$. Therefore, by induction, we have
\begin{equation*}
\left[ \widetilde{F}(\boldsymbol{z}^{(t)}) - c\right]_+ \leq (1 - \eta\omega) \left[ \widetilde{F}(\boldsymbol{z}^{(t-1)}) - c \right]_+ \leq \left(1 - \eta\omega\right)^t \left[ \widetilde{F}(\boldsymbol{z}^{(0)}) - c\right]_+ \leq \frac{\varepsilon^2 d_0^2}{3s\kappa^2} (1 - \eta\omega)^{t}
\end{equation*}
where $\widetilde{F}(\boldsymbol{z}^{(0)}) \leq \frac{\varepsilon^2d_0^2}{3s\kappa^2} + \|\boldsymbol{e}\|^2$ and $\left[ \widetilde{F}(\boldsymbol{z}^{(0)}) - c \right]_+ \leq \left[ \frac{1}{3s\kappa^2}\varepsilon^2 d_0^2 - a\|\mathcal{A}^*(\boldsymbol{e})\|^2 \right]_+ \leq \frac{\varepsilon^2 d_0^2}{3s\kappa^2}.$ We conclude that $\left[ \widetilde{F}(\boldsymbol{z}^{(t)}) - c\right]_+$ converges to $0$ linearly.
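As a quick sanity check on the rate (an aside, not needed for the argument): since $\log\frac{1}{1-\eta\omega} \geq \eta\omega$, the bound $\left[ \widetilde{F}(\boldsymbol{z}^{(t)}) - c\right]_+ \leq \frac{\varepsilon^2 d_0^2}{3s\kappa^2} (1 - \eta\omega)^{t}$ guarantees $\left[ \widetilde{F}(\boldsymbol{z}^{(t)}) - c\right]_+ \leq \tau$ for any target accuracy $\tau > 0$ as soon as
\begin{equation*}
t \geq \frac{1}{\eta\omega}\log\left(\frac{\varepsilon^2 d_0^2}{3s\kappa^2\,\tau}\right).
\end{equation*}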
\paragraph{Part III: The linear convergence of the iterates $(\boldsymbol{u}^{(t)}, \boldsymbol{v}^{(t)})$.} Denote \begin{equation*} \delta(\boldsymbol{z}^{(t)}) : = \frac{\|\mathcal{H}(\boldsymbol{u}^{(t)}, \boldsymbol{v}^{(t)}) - \mathcal{H}(\boldsymbol{h}_0,\boldsymbol{x}_0)\|_F}{d_0}. \end{equation*} Note that $\boldsymbol{z}^{(t)}\in \mathcal{N}_{\epsilon}\cap\mathcal{N}_{\widetilde{F}}\subseteq \mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon}$ and over $\mathcal{N}_d \cap \mathcal{N}_{\mu} \cap \mathcal{N}_{\epsilon}$, there holds $F_0(\boldsymbol{z}^{(t)}) \geq \frac{2}{3}\delta^2(\boldsymbol{z}^{(t)})d_0^2$ which follows from
What is an adjustable rate mortgage? An adjustable-rate mortgage (ARM) is a mortgage in which the interest rate is typically fixed for a few initial years but then varies based on a certain index, such as LIBOR or the federal funds rate. An ARM is also known as an adjustable-rate loan, variable rate mortgage, or variable rate loan. The bank (usually) rewards you with a lower initial rate because you are taking the risk that interest rates could rise in the future. Contrast the situation with a fixed-rate mortgage, where the bank takes that risk: a fixed-rate mortgage has the same interest rate … It is sweet when interest rates fall and lethal when they rise, which can really cost you an arm and a leg, pun intended. The higher the increase in market interest rates, the more pronounced will be the payment shock. Several Ninth District banks introduced, or reintroduced, adjustable rate mortgage (ARM) loans recently. Regulations around ARMs have important distinctions from other mortgage loans, many of which have changed over the past few years.

Typically an ARM is expressed as two numbers. With adjustable rate mortgages, the interest rate is set to be reviewed and adjusted at specific times. In many cases, the lender may offer a fixed rate for a period before the adjustment period begins. After this initial period of time, the interest rate resets periodically, at yearly or even monthly intervals. On a typical ARM, the interest rate adjusts every 6 or 12 months, but it may change as frequently as monthly. Examples: on a 10/1 ARM, your interest rate is fixed for 120 months (10 years) and then adjusts annually for the remaining 20 years of the loan. The same goes for the 3/27, except only the first three years are fixed and the remaining 27 years are adjustable. Mr. Bean, for instance, has taken out a short-term, 5-year mortgage on 3/1 ARM terms, which means that the interest rate remains fixed for 3 years and then adjusts annually for the remainder of the term. The loan also features a "teaser" rate of 3%. An adjustable-rate mortgage, or ARM, thus has an introductory interest rate that lasts a set period of time and adjusts annually thereafter for the remaining time period.

In most cases, mortgages are tied to one of three indexes: the maturity yield on one-year Treasury bills, the 11th District cost of funds index, or the London Interbank Offered Rate. The lifetime cap is the maximum interest rate that is allowed to be charged on an adjustable-rate mortgage. Payment caps detail increases in dollars rather than in percentage points. ARMs may start with lower monthly payments than fixed-rate mortgages, but keep in mind the following: your monthly payments could change. (For example, the monthly payment for a new loan amount of $60,000 would be based on $60,000 divided by $10,000 = 6.)

At the new rate, your quarterly payment works out to $13,650:

$$\text{PMT}=\frac{\$390{,}301}{\dfrac{1-\left(1+\frac{7\%}{4}\right)^{-10\times 4}}{\dfrac{7\%}{4}}}=\$13{,}650$$

Your quarterly payment has increased by 15%, which has stretched your finances. However, if the index is at only 2% the next time the interest rate adjusts, the rate falls to 4%, based on the loan's 2% margin.
An ARM can be a smart financial choice for home buyers who are planning to pay off the loan in full within a specific amount of time, or for those who will not be financially hurt when the rate adjusts. In the United States, the adjustable rate mortgage (ARM) charges a fixed interest rate for an initial period and a floating interest rate thereafter. An adjustable rate mortgage is a double-edged sword. A typical ADJUSTABLE-RATE MORTGAGE LOAN PROGRAM DISCLOSURE warns: "Your monthly payment can increase or decrease substantially based on changes in the interest rate." However, the initial lower fixed interest rates might cause you to overestimate your periodic payment appetite.

A 3/1 ARM usually refers to an adjustable rate mortgage with an interest rate that is fixed for 3 years and adjusts annually after that. In contrast, a 5/1 ARM boasts a fixed rate for five years, followed by a variable rate that adjusts every year (as indicated by the number one). Most adjustable-rate mortgages have an introductory period where the rate of interest and monthly payments are fixed. It's typically several percentage points. Such a mortgage is a loan which requires the borrower to make equal periodic payments, comprising both an interest payment and a principal payment. An annual cap is a clause in the contract of an adjustable-rate mortgage (ARM) limiting the possible increase in the loan's interest rate during each year. If the mortgage has interest rates that adjust monthly subject only to a lifetime cap, the following modifications to the model adjustable rate note form are mandatory: (a) change paragraph 5(A) to read: "(A) Change Date. The interest rate may change on the first day of ______, 20__, and on the first day of each succeeding month."

At the close of the fixed-rate period, ARM interest rates increase or decrease based on an index plus a set margin. For example, if the index is 5% and the margin is 2%, the interest rate on the mortgage adjusts to 7%. That extra 2% is called the margin. Although the index rate can change, the margin stays the same. For example, your adjustable rate may be the rate of the one-year T-bill plus 2%.

Examples of ARM loan calculation: let's say you obtain rate quotes from two different companies for a 5/1 adjustable-rate mortgage. Both companies use the same index for ARM calculation, but they have different margins (or "markups"). Mortgage Company 'A' uses the 1- … You bought a house for $600,000 on 1 January 20X5, paying 10% from your own savings and financing the rest with a 15-year 5/1-ARM mortgage that required interest at 3.5% per annum, compounded and paid quarterly. Afterwards, the rate is adjusted quarterly to a benchmark rate plus 2.5%. Your quarterly payment will be just $10,710:

$$\text{PMT}=\frac{\$390{,}301}{\dfrac{1-\left(1+\frac{2\%}{4}\right)^{-10\times 4}}{\dfrac{2\%}{4}}}=\$10{,}710$$
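To make the payment arithmetic in the examples above concrete, here is a small illustrative sketch of a standard annuity-payment calculation (our own sketch, not the original source's spreadsheet). The remaining balance of $390,301 and the 10-year quarterly schedule are taken from the example above; small differences from the quoted figures can arise from rounding conventions.

import math  # not strictly needed; the formula uses only ** and /

def annuity_payment(balance, annual_rate, periods_per_year, years):
    # PMT = PV * r / (1 - (1 + r)**-n), with r the periodic rate and n the number of payments
    r = annual_rate / periods_per_year
    n = periods_per_year * years
    return balance * r / (1 - (1 + r) ** -n)

balance = 390_301  # remaining principal from the example
print(round(annuity_payment(balance, 0.07, 4, 10)))  # ~13,650 per quarter at a 7% annual rate
print(round(annuity_payment(balance, 0.02, 4, 10)))  # roughly 10,700-10,800 per quarter at 2%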
After the initial introductory period the loan shifts from acting like a fixed-rate mortgage to behaving like an adjustable-rate mortgage, where rates … The result is that an ARM will have a lower initial rate, allowing a home buyer to purchase a more expensive home or have a lower payment. Update 5/25/2017: A new "Tabulated" worksheet has been added that allows you to use a table to list interest rate changes by date. 7/1 ARM. The benchmark rate hovers around 6%. A 3/1 ARM, for example, is a mortgage that carries a fixed rate for the first three years and then adjusts every year thereafter. An "adjustable-rate mortgage" is a loan program with a variable interest rate that can change throughout the life of the loan. It's common for this cap to be either two or five percent – meaning that at the first rate change, the new rate can't be more than two (or five) percentage points higher than the initial rate during the fixed-rate period. ... For example, if you took out a 5/1 ARM with a rate of 2.5% and a loan amount of $200,000, the monthly payment would be $790.24 for the first 60 months. So, for example, if there is a 6% lifetime cap and you have
import numpy as np
import time as timc


def findToE(signal, noise, mult):
    '''
    define Time of Emergence (ToE) from last time index at which signal is larger than mult*noise
    signal is [time,space]
    noise is [space]
    mult is float
    '''
    #tcpu0 = timc.clock()
    timN = signal.shape[0]
    toe_wrk = np.ma.ones(signal.shape)*1.  # init toe_wrk array to 1
    signaltile = np.ma.reshape(np.tile(noise, timN), signal.shape)  # repeat noise timN
    toe_idx = np.argwhere(abs(signal) >= mult*signaltile)  # find indices of points where signal > noise
    if signal.size > timN:  # if there are at least 2 dimensions
        toe_wrk[toe_idx[:, 0], toe_idx[:, 1]] = 0.  # set corresponding points in toe_wrk to zero
    else:  # if there is only the time dimension
        toe_wrk[toe_idx[:, 0]] = 0
    toe = timN - np.flipud(toe_wrk).argmax(axis=0)  # compute ToE as last index when signal > noise
    #tcpu1 = timc.clock()  # perf
    #print ' ToE CPU = ',tcpu1-tcpu0
    return toe


def findToE_2thresholds(signal, noise1, noise2, tidx, mult):
    '''
    define Time of Emergence (ToE) from last time index at which signal is larger than mult*noise
    signal is [time]
    noise1 and noise2 are single-values
    mult is float
    tidx is an integer
    use noise1 over first part of the signal (until tidx), noise2 over second part
    1D case for now
    '''
    timN = signal.shape[0]
    toe_wrk = np.ma.ones(signal.shape)*1.  # init toe_wrk array to 1
    signaltile1 = np.tile(noise1, tidx)  # repeat noise1
    signaltile2 = np.tile(noise2, timN - tidx)
    signaltile = np.concatenate((signaltile1, signaltile2))
    toe_idx = np.argwhere(abs(signal) >= mult*signaltile)  # find indices of points where signal > noise
    # if signal.size > timN:  # if there are at least 2 dimensions
    #     toe_wrk[toe_idx[:,0],toe_idx[:,1]] = 0.  # set corresponding points in toe_wrk to zero
    # else:  # if there is only the time dimension
    toe_wrk[toe_idx[:, 0]] = 0
    toe = timN - np.flipud(toe_wrk).argmax(axis=0)  # compute ToE as last index when signal > noise
    return toe


def ToEdomainhistvshistNat(model_name, domain_name):
    '''
    Save domain boxes for hist vs. histNat ensemble means (salinity/temperature)
    :param model_name: name of model
    :param domain_name: Southern ST, Southern Ocean, etc...
:return: box(es) of the specified model and domain ''' if domain_name == 'Southern ST': # Southern Subtropics (cooling/freshening in all three basins) domains = [ {'name':'bcc-csm1-1' , 'Atlantic': [-30,-25,25.4,26.2], 'Pacific': [-20,-12,24.4,25.4], 'Indian': [-30,-20,25.4,26.1]}, # 0 {'name':'CanESM2' , 'Atlantic': [-18,-10,25.4,26.1], 'Pacific': [-30,-10,24.8,26], 'Indian': [-38,-20,26,26.4]}, # 1 {'name':'CCSM4' , 'Atlantic': [-35,-15,25,26], 'Pacific': [-40,-15,24.8,26.4], 'Indian': [-37,-20,25.6,26.2]}, # 2 {'name':'CESM1-CAM5' , 'Atlantic': [-20,-7,25,26.4], 'Pacific': [-40,-20,25.2,26.25],'Indian': [-35,-20,25.6,26.2]}, # 3 {'name':'CNRM-CM5' , 'Atlantic': [-40,-15,25.2,26.3], 'Pacific': [-25,-10,24.6,25.6], 'Indian': [-35,-15,25.4,26.2]}, # 4 {'name':'CSIRO-Mk3-6-0' , 'Atlantic': [-18,-10,26,26.3], 'Pacific': [-30,-15,24.6,25.6], 'Indian': [-40,-15,26.3,26.6]}, #5 {'name':'FGOALS-g2' , 'Atlantic': [-15,-10,26.1,26.6], 'Pacific': [-25,-15,24.4,25.4], 'Indian': [-35,-20,25.4,26]}, # 6 {'name':'GFDL-CM3' , 'Atlantic': [-33,-25,25.8,26.4], 'Pacific': [-35,-10,24.6,26.2], 'Indian': [-37,-30,26.2,26.5]}, # 7 {'name':'GFDL-ESM2M' , 'Atlantic': [-30,-20,25.4,26], 'Pacific': [-28,-15,26.1,26.4], 'Indian': [-35,-20,25.6,26.2]}, # 8 {'name':'GISS-E2-R' , 'Atlantic': [-35,-25,25.6,26.5], 'Pacific': [-25,-10,24,25.8], 'Indian': [-30,-15,25.4,26.1]}, # 9 {'name':'HadGEM2-ES' , 'Atlantic': [-23,-5,26.1,27], 'Pacific': [-25,-15,24,25.2], 'Indian': [-30,-20,25.6,26]}, # 10 {'name':'IPSL-CM5A-LR' , 'Atlantic': [-43,-35,27.2,27.5], 'Pacific': [-25,-15,25.8,26.6], 'Indian': [-35,-20,26.5,26.9]}, # 11 {'name':'IPSL-CM5A-MR' , 'Atlantic': [-40,-30,26.9,27.2], 'Pacific': [-30,-15,25.4,26.6], 'Indian': [-40,-25,26.6,26.9]}, # 12 {'name':'MIROC-ESM-CHEM', 'Atlantic': [-25,-10,25.2,26.1], 'Pacific': [-20,-8,24.6,26], 'Indian': [-37,-20,25.6,26.3]}, # 13 {'name':'MIROC-ESM' , 'Atlantic': [-35,-20,25.8,26.4], 'Pacific': [-35,-25,25.8,26.2], 'Indian': [-35,-20,25.6,26.4]}, # 14 {'name':'MME' , 'Atlantic': [-30,-10,25.8,26.5], 'Pacific': [-25,-10,24.6,26.2], 'Indian': [-40,-15,25.8,26.5]}, # 15 ] domain_char = {'nb_basins': 3, 'Atlantic': True, 'Pacific': True, 'Indian': True} if domain_name == 'SO': # Southern Ocean (warmer/saltier in all three basins) domains = [ {'name':'bcc-csm1-1' , 'Atlantic': [-60,-47,27,27.5], 'Pacific': [-63,-57,27.1,27.4], 'Indian': [-60,-50,27.1,27.5]}, # 0 {'name':'CanESM2' , 'Atlantic': [-48,-40,26.4,27.4], 'Pacific': [-60,-50,27.1,27.5], 'Indian': [-55,-45,27.1,27.6]}, # 1 {'name':'CCSM4' , 'Atlantic': [-57,-50,27.5,27.9], 'Pacific': [-57,-50,27,27.6], 'Indian': [-55,-45,27,27.6]}, # 2 {'name':'CESM1-CAM5' , 'Atlantic': [-55,-47,26.9,27.7], 'Pacific': [-60,-50,26.8,27.5], 'Indian': [-55,-50,26.9,27.5]}, # 3 {'name':'CNRM-CM5' , 'Atlantic': [-55,-45,26.5,27.1], 'Pacific': [-55,-50,26.4,27], 'Indian': [-55,-47,26.5,27.1]}, # 4 {'name':'CSIRO-Mk3-6-0' , 'Atlantic': [-58,-40,27.2,27.9], 'Pacific': None, 'Indian': [-60,-50,27.6,28]}, #5 {'name':'FGOALS-g2' , 'Atlantic': [-65,-55,26.2,27.3], 'Pacific': [-65,-60,27.6,28], 'Indian': [-60,-45,27.1,27.4]}, # 6 {'name':'GFDL-CM3' , 'Atlantic': [-65,-50,27.6,28], 'Pacific': [-67,-60,27.9,28], 'Indian': [-70,-60,27.7,28]}, # 7 {'name':'GFDL-ESM2M' , 'Atlantic': [-60,-50,27,27.4], 'Pacific': [-70,-65,27.6,27.9], 'Indian': [-60,-50,27.7,27.9]}, # 8 {'name':'GISS-E2-R' , 'Atlantic': [-55,-45,27,27.5], 'Pacific': [-57,-50,26.9,27.4], 'Indian': [-53,-45,27,27.3]}, # 9 {'name':'HadGEM2-ES' , 'Atlantic': [-55,-47,27,27.3], 'Pacific': 
[-60,-58,27.2,27.6], 'Indian': [-55,-50,27,27.3]}, # 10 {'name':'IPSL-CM5A-LR' , 'Atlantic': [-60,-50,27.7,27.9], 'Pacific': [-65,-60,27.7,27.9], 'Indian': [-55,-50,27.7,27.8]}, # 11 {'name':'IPSL-CM5A-MR' , 'Atlantic': [-55,-50,27.6,27.9], 'Pacific': [-65,-60,27.5,27.8], 'Indian': [-55,-50,27.6,27.8]}, # 12 {'name':'MIROC-ESM-CHEM', 'Atlantic': [-55,-50,27.2,27.8], 'Pacific': [-60,-55,27.4,27.6], 'Indian': [-55,-50,27.3,27.7]}, # 13 {'name':'MIROC-ESM' , 'Atlantic': [-55,-50,27.4,27.9], 'Pacific': None, 'Indian': [-55,-50,27.3,27.8]}, # 14 {'name':'MME' , 'Atlantic': [-55,-45,27,27.5], 'Pacific': [-70,-60,27.7,27.9], 'Indian': [-70,-55,27.6,27.9]} # 15 ] domain_char = {'nb_basins': 3, 'Atlantic': True, 'Pacific': True, 'Indian': True} if domain_name == 'North Atlantic': # North Atlantic (warmer/saltier) domains = [ {'name':'bcc-csm1-1' , 'Atlantic': [45,50,26.3,27.2], 'Pacific': None, 'Indian': None}, # 0 {'name':'CanESM2' , 'Atlantic': [30,45,26,26.8], 'Pacific': None, 'Indian': None}, # 1 {'name':'CCSM4' , 'Atlantic': [25,40,26,27], 'Pacific': None, 'Indian': None}, # 2 {'name':'CESM1-CAM5' , 'Atlantic': [20,40,26.,26.9], 'Pacific': None, 'Indian': None}, # 3 {'name':'CNRM-CM5' , 'Atlantic': [40,50,26.2,26.9], 'Pacific': None, 'Indian': None}, # 4 {'name':'CSIRO-Mk3-6-0' , 'Atlantic': [30,50,26.1,27.2], 'Pacific': None, 'Indian': None}, #5 {'name':'FGOALS-g2' , 'Atlantic': [25,35,26.1,27.2], 'Pacific': None, 'Indian': None}, # 6 {'name':'GFDL-CM3' , 'Atlantic': [45,60,26.4,27.1], 'Pacific': None, 'Indian': None}, # 7 {'name':'GFDL-ESM2M' , 'Atlantic': [20,35,26,27], 'Pacific': None, 'Indian': None}, # 8 {'name':'GISS-E2-R' , 'Atlantic': [45,65,26.5,27.8], 'Pacific': None, 'Indian': None}, # 9 {'name':'HadGEM2-ES' , 'Atlantic': [20,40,26,26.8], 'Pacific': None, 'Indian': None}, # 10 {'name':'IPSL-CM5A-LR' , 'Atlantic': [30,45,26.4,27], 'Pacific': None, 'Indian': None}, # 11 {'name':'IPSL-CM5A-MR' , 'Atlantic': [30,45,26.5,27.1], 'Pacific': None, 'Indian': None}, # 12 {'name':'MIROC-ESM-CHEM', 'Atlantic': [35,45,26.4,27.2], 'Pacific': None, 'Indian': None}, # 13 {'name':'MIROC-ESM' , 'Atlantic': [45,60,26.6,27.6], 'Pacific': None, 'Indian': None}, # 14 {'name':'MME' , 'Atlantic': [35,70,26.3,27], 'Pacific': None, 'Indian': None}, # 15 ] domain_char = {'nb_basins': 1, 'Atlantic': True, 'Pacific': False, 'Indian': False} if domain_name == 'Northern ST': # Northern Subtropics (cooling/freshening in the Pacific and Indian oceans) domains = [ {'name':'bcc-csm1-1' , 'Atlantic': None, 'Pacific': [30,35,25,25.2], 'Indian': [22,24,25.5,26.1]}, # 0 {'name':'CanESM2' , 'Atlantic': None, 'Pacific': [20,35,24.8,25.2], 'Indian': [20,25,25.4,26.5]}, # 1 {'name':'CCSM4' , 'Atlantic': None, 'Pacific': [15,30,24.2,25.2], 'Indian': [20,25,25.8,26.3]}, # 2 {'name':'CESM1-CAM5' , 'Atlantic': None, 'Pacific': None, 'Indian': [20,25,25.8,26.5]}, # 3 {'name':'CNRM-CM5' , 'Atlantic': None, 'Pacific': [25,30,24.8,25.2], 'Indian': [20,25,25.8,26.5]}, # 4 {'name':'CSIRO-Mk3-6-0' , 'Atlantic': None, 'Pacific': [10,20,23.4,25], 'Indian': None}, #5 {'name':'FGOALS-g2' , 'Atlantic': None, 'Pacific': [15,17,24,25], 'Indian': [23,25,26.6,27]}, # 6 {'name':'GFDL-CM3' , 'Atlantic': None, 'Pacific': None, 'Indian': None}, # 7 {'name':'GFDL-ESM2M' , 'Atlantic': None, 'Pacific': [15,28,25.4,26], 'Indian': [20,25,25.8,27]}, # 8 {'name':'GISS-E2-R' , 'Atlantic': None, 'Pacific': [10,30,24,25.6], 'Indian': [15,25,25.8,26.7]}, # 9 {'name':'HadGEM2-ES' , 'Atlantic': None, 'Pacific': [22,32,25.8,26.5], 'Indian': [5,10,23.4,24.4]}, # 
10 {'name':'IPSL-CM5A-LR' , 'Atlantic': None, 'Pacific': [20,30,25,25.8], 'Indian': [18,22,26.1,26.7]}, # 11 {'name':'IPSL-CM5A-MR' , 'Atlantic': None, 'Pacific': [20,35,25,26], 'Indian': [15,20,25.8,26.7]}, # 12 {'name':'MIROC-ESM-CHEM', 'Atlantic': None, 'Pacific': [25,35,25.2,25.8], 'Indian': [15,25,26,26.7]}, # 13 {'name':'MIROC-ESM' , 'Atlantic': None, 'Pacific': [15,25,26,26.3], 'Indian': [15,20,22.8,24]}, # 14 {'name':'MME' , 'Atlantic': None, 'Pacific': [20,30,24.8,25.8], 'Indian': [20,25,25.8,26.9]}, # 15 ] domain_char = {'nb_basins': 2, 'Atlantic': False, 'Pacific': True, 'Indian': True} if domain_name == 'North Pacific': # North Pacific (warmer/saltier) domains = [ {'name':'bcc-csm1-1' , 'Atlantic': None, 'Pacific': [40,55,26,26.4], 'Indian': None}, # 0 {'name':'CanESM2' , 'Atlantic': None, 'Pacific': [45,60,26.5,26.8], 'Indian': None}, # 1 {'name':'CCSM4' , 'Atlantic': None, 'Pacific': [40,60,26,26.7], 'Indian': None}, # 2 {'name':'CESM1-CAM5' , 'Atlantic': None, 'Pacific': [60,65,26,26.7], 'Indian': None}, # 3 {'name':'CNRM-CM5' , 'Atlantic': None, 'Pacific': [50,65,25.8,26.4], 'Indian': None}, # 4 {'name':'CSIRO-Mk3-6-0' , 'Atlantic': None, 'Pacific': [60,65,25.4,26.3], 'Indian': None}, #5 {'name':'FGOALS-g2' , 'Atlantic': None, 'Pacific': [40,60,26.6,26.9], 'Indian': None}, # 6 {'name':'GFDL-CM3' , 'Atlantic': None, 'Pacific': [45,60,26.5,26.9], 'Indian': None}, # 7 {'name':'GFDL-ESM2M' , 'Atlantic': None, 'Pacific': [38,50,26,26.5], 'Indian': None}, # 8 {'name':'GISS-E2-R' , 'Atlantic': None, 'Pacific': [40,60,26.7,27.1], 'Indian': None}, # 9 {'name':'HadGEM2-ES' , 'Atlantic': None, 'Pacific': [45,65,26.4,26.9], 'Indian': None}, # 10 {'name':'IPSL-CM5A-LR' , 'Atlantic': None, 'Pacific': [45,60,26.8,27.1], 'Indian': None}, # 11 {'name':'IPSL-CM5A-MR' , 'Atlantic': None, 'Pacific': [45,60,26.8,27], 'Indian': None}, # 12 {'name':'MIROC-ESM-CHEM', 'Atlantic': None, 'Pacific': [45,60,27,27.4], 'Indian': None}, # 13 {'name':'MIROC-ESM' , 'Atlantic': None, 'Pacific': [50,62,27,27.3], 'Indian': None}, # 14 {'name':'MME' , 'Atlantic': None, 'Pacific': [55,65,25.8,26.7], 'Indian': None}, # 15 ] domain_char = {'nb_basins': 1, 'Atlantic': False, 'Pacific': True, 'Indian': False} for imodel in range(len(domains)): if domains[imodel]['name'] == model_name : varout = domains[imodel] return varout, domain_char def ToEdomainrcp85vshistNat(model_name, domain_name): ''' Save domain boxes for rcp8.5 vs. histNat ensemble means (salinity/temperature) November-December 2018 :param model_name: name of model :param domain_name: Southern ST, Southern Ocean, etc... :return: box(es) of the specified
\section*{Introduction} Recent work [1] suggests that recurrent ``neural network" models of several types perform better than sequential models in acquiring and processing hierarchical structure. Indeed, recurrent networks have achieved state-of-the-art results in a number of natural language processing tasks, including named-entity recognition [2], language modeling [3], sentiment analysis [4], natural language generation [5], and beyond. \bigskip \noindent The hierarchical structure associated with natural languages is often modeled as some variant of context-free languages, whose languages may be defined over an alphabet $\Sigma$. These context-free languages are exactly those that can be recognized by pushdown automata (PDAs). Thus it is natural to ask whether these modern natural language processing tools, including simple recurrent neural networks (RNNs) and other, more advanced recurrent architectures, can learn to recognize these languages. \bigskip \noindent The computational power of RNNs has been studied extensively using empirical testing. Much of this research [8], [9] focused on the ability of RNNs to recognize simple context-free languages such as $a^nb^n$ and $a^nb^mB^mA^n$, or context-sensitive languages such as $a^nb^nc^n$. Related works [10], [11], [12] focus instead on Dyck languages of balanced parenthesis, which motivates some of our methods. Gated architectures such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) obtain high accuracies on each of these tasks. While simpler RNNs have also been tested, one difficulty is that the standard hyperbolic tangent activation function makes counting difficult. On the other hand, RNNs with ReLU activations were found to perform better, but suffer from what is known as the ``exploding gradient problem" and thus are more difficult to train [8]. \bigskip \noindent Instead of focusing on a single task, many researchers have studied the broader {\it{theoretical}} computational power of recurrent models, where weights are not trained but rather initialized to recognize a desired language. A celebrated result [6] shows that a simple recurrent architecture with 1058 hidden nodes and a saturated-linear activation $\sigma$ is a universal Turing Machine, with: $$ \sigma(x) = \begin{cases}0, & x < 0\\x, & 0 \le x \le 1\\1, & x > 1\end{cases} $$ However, their architecture encodes the whole input in its internal state and the relevant computation is only performed after reading a terminal token. This differs from more common RNN variants that consume tokenized inputs at each time step. Furthermore, the authors admit that were the saturated-linear activation to be replaced with the similar and more common sigmoid or hyperbolic tangent activation functions, their methodology would fail. \bigskip \noindent More recent work [7] suggests that single-layer RNNs with rectified linear unit (ReLU) activations and softmax outputs can also be simulated as universal Turing Machines, but this approach again suffers from the assumption that the entire input is read before computation occurs. \bigskip \noindent Motivated by these earlier theoretical results, in this report we seek to show results about the computational power of recurrent architectures actually used in practice - namely, those that read tokens one at a time and that use standard rather than specially chosen activation functions. 
In particular we will prove that, allowing infinite precision, RNNs with just one hidden layer and ReLU activation are at least as powerful as PDAs, and that GRUs are at least as powerful as deterministic finite automata (DFAs). Furthermore, we show that using infinite edge weights and a non-standard output function, GRUs are also at least as powerful as PDAs. \section{Simple RNNs} Let a {\it{simple RNN}} be an RNN with the following architecture: \begin{align*} h_t &= f(W_xx_t + W_hh_{t - 1} + b_h)\\ o_t &= W_oh_t + b_o \end{align*} where $o_i \in \mathbb{R}$ for all $i$, for some chosen activation function $f$, usually the ReLU or the hyperbolic tangent functions. We assume that the inputs are {\it{one-hots}} of a given set of symbols $\Sigma$, vectors of length $|\Sigma|$ where each element but one is $0$ and the remaining element is $1$. \bigskip \noindent Say that an RNN {\it{accepts}} an input $w$ of length $n$ if after passing $w$ through the RNN, its final output $o_n$ belongs to a predetermined set $S \subseteq \mathbb{R}$, for which membership can be tested in $O(1)$ time. Let the $S$-{\it{language}} of an RNN consist exactly of all inputs that it accepts given set $S$. \bigskip \noindent In practice, the inputs and hidden nodes of an RNN are stored as numbers with finite precision. Including this restriction, we show the following result: \bigskip \noindent {\bf{Theorem 1.1}}. For every language $L \subseteq \Sigma^*$, $L$ is regular if and only if $L$ is the $S$-language of some finite precision simple RNN. \bigskip \noindent {\it{Proof.}} We begin with the ``if'' direction. Suppose we are given some simple RNN and set $S \subseteq \mathbb{R}$. It suffices to show that there exists a DFA that accepts the $S$-language of this RNN. Assume that the RNN has $m$ hidden nodes, and that these hidden nodes are precise up to $k$ bits. Then there are exactly $2^{mk}$ possible hidden states for the RNN. Construct the following DFA with: \begin{itemize} \item set of $2^{mk}$ states $Q = \{q_h:h\ \text{is a possible hidden state of the RNN}\}$ \item alphabet $\Sigma$ \item transition function $\delta$ where $\delta(q_h, x) = q_{f(W_xx + W_hh + b_h)}$ \item initial state $q_{h_0}$ \item set of accepting states $F = \{q_h \mid W_oh + b_o \in S\}$ \end{itemize} It is clear that after reading the first $n$ inputs of a word $w$, the current state of this DFA is $q_{h_n}$, which immediately completes the proof of this direction. \bigskip \noindent For the ``only if'' direction, suppose we have a DFA $D = (Q, \Sigma, \delta, q_0, F)$ with corresponding language $L$. We will construct a simple RNN whose inputs are one-hotted symbols from $\Sigma$, with ReLU activation function $f(x) = \text{max}(0, x)$, and with $|Q||\Sigma|$ hidden nodes, whose $\{0\}$-language is $L$. \bigskip \noindent The RNN has three layers: the first layer (input layer) has $|\Sigma| + |Q||\Sigma|$ nodes; the second layer (hidden layer) has $|Q||\Sigma|$ nodes; and the third layer (output layer) has one node. For the $|\Sigma|$ nodes in the input layer associated with the one-hot of the current symbol, label each node with its corresponding symbol from $\Sigma$. Label the $|Q||\Sigma|$ hidden nodes (in both the first and second layers) with all $|Q||\Sigma|$ symbol-state combinations $(x, q)$ for $x \in \Sigma$ and $q \in Q$. \bigskip \noindent For every $x \in \Sigma$, connect the node in the input layer with label $x$ to all nodes in the hidden layer with labels $(x, q)$ for any $q \in Q$ with edges of weight $1$.
For all $(x, q) \in \Sigma \times Q$, connect the node in the input layer with label $(x, q)$ to all nodes in the hidden layer with labels $(x', q')$ where $\delta(q, x') = q'$, with edges also of weight $1$. Finally, for all $(x, q) \in \Sigma \times (Q \setminus F)$, connect the node in the hidden layer with label $(x, q)$ to the single node in the output layer with an edge of weight $1$. \bigskip \noindent Each of the hidden nodes is initialized to $0$ except a single hidden node with label $(x, q_0)$ for a randomly chosen $x \in \Sigma$, which is initialized to $1$. To complete the description of the RNN, we set $b_h = -1$ and $b_o = 0$. We claim that the following invariant is maintained: after reading some word, suppose the current state of $D$ is $q$. Then after reading the same word, the hidden nodes of the RNN would all be equal to $0$ except for one node with label $(x, q)$ for some $x \in \Sigma$, which would equal $1$. \bigskip \noindent We prove the claim by induction on the length $n$ of the input word. The base case of $n = 0$ is trivial. Now assume that after reading a word of length $n$ the current state of $D$ is $q$, and after reading that same word all hidden nodes of the RNN are equal to $0$ except one node with label $(x, q)$ for some $x \in \Sigma$, which is equal to $1$. If the
restrict a system of variables in Schur functors to the symmetric powers $S^d$ with $d=0,1,\ldots$, then the proof of \cite[Theorem 1.1]{Erman18} shows that these are algebraically independent (over $\Omega$) generators of the ring $\bigoplus_d S^d_\infty(\Omega)$, and that this ring is therefore abstractly isomorphic to a polynomial ring. This fact can also be deduced from Theorem~\ref{thm:Variables} above. \end{re} \subsection{Narrowing down the search for $y(t)$} In this section, we retain the notation from \S\ref{ssec:InfDim}. The following diagram represents the situation: \[ \xymatrix{ \ & (a',p') \in A' \times P'_\infty \ar[d]^{\alpha'_\infty} \\ y(t)=(a(t),p(t)) \in (A \times P_\infty)(\widetilde{\Omega}((t))) \ar[r]^{\quad \quad \alpha_\infty} & x=(b,q) \in (B \times Q_\infty)(\Omega) } \] where $\Omega=K(A')$, $a'$ is the generic point of $A'$, $p'$ is a $K$-point of $P_\infty$ with dense $\GL$-orbit, and we want to certify the existence of $y(t)$, defined over a finite extension $\widetilde{\Omega}$ of $\Omega$, such that $\lim_{t \to 0} \alpha_\infty(y(t))=x$. \begin{prop} \label{prop:Easier} If a $y(t)$ as in Theorem~\ref{thm:Limit} exists, then it can be chosen of the form $(a(t),\gamma(t)_\infty(p'))$ with $a(t) \in A(\widetilde{\Omega}((t)))$ and $\gamma(t)$ a $\widetilde{\Omega}[t,t^{-1}]$-valued point of the finite-dimensional affine space $\mathbf{Map}(P',P)$. \end{prop} \begin{proof} Write $y(t)=(a(t),p(t))$. First, terms in $p(t)$ of sufficiently high degree in $t$ do not contribute to $\lim_{t \to 0} \alpha_\infty(a(t),p(t))$, so we may truncate $p(t)$ and assume that it is a finite sum $\sum_{d=m_1}^{m_2} t^d p_d$ where each $p_d \in P_\infty(\widetilde{\Omega})$. Now write $P'=S_{\lambda_1} \oplus \cdots \oplus S_{\lambda_k}$, where the $\lambda_i$ are partitions. Accordingly, decompose $p'=(p'_1,\ldots,p'_k)$ with $p'_i \in S_{\lambda_i,\infty}$. Over the field extension $\widetilde{\Omega}$ of $K$, choose a system of variables $\xi$ in such a manner that $p'_1,\ldots,p'_k$ are among these variables; this can be done by Proposition~\ref{prop:DenseOrbit} because $\GL \cdot p'$ is dense in $P'_\infty$. Then, by Theorem~\ref{thm:Variables}, we have $p_d=f_{d,\infty}(\xi)$ for all $d$, where $f_d$ is an (essentially unique) morphism into $P$ with coefficients in $\widetilde{\Omega}$. Recall from Remark~\ref{re:Split} that $\alpha$ splits as $\alpha^{(0)}:A \to B$ and $\alpha^{(1)}:A \times P \to Q$, and similarly for $\alpha'$. The limit $\lim_{t \to 0} \alpha^{(1)}_\infty(a(t),p(t)) $ equals \[ g_\infty(p_{m_1},\ldots,p_{m_2}) = g_\infty(f_{m_1,\infty}(\xi),\ldots, f_{m_2,\infty}(\xi)) \] for some $\widetilde{\Omega}$-point $g$ of $\mathbf{Map}(P^{m_2-m_1+1},Q)$. On the other hand, by the choice of $y(t)$, it equals $(\alpha')^{(1)}_\infty(p')$. In the latter expression, only the variables $p'_1,\ldots,p'_k$ appear. By the uniquess statement in Theorem~\ref{thm:Variables}, the same must apply to $g_\infty(f_{m_1,\infty}(\xi),\ldots,f_{m_2,\infty}(\xi))$. Therefore, replacing each $p_d$ by $\tilde{p}_d:=f_{d,\infty}(p',0)$, where all variables not among the variables $p_1',\ldots,p_k'$ are set to zero, yields a $\tilde{y}(t)$ with the same property as $y(t)$: $\lim_{t \to 0} \alpha_\infty(\tilde{y}(t))=x$. Now $\gamma(t):=\sum_d t^d f_d(\cdot,0)$ is the desired $\widetilde{\Omega}[t,t^{-1}]$-valued point of $\mathbf{Map}(P',P)$. \end{proof} Recall from Remark~\ref{re:Split} that $(\alpha')^{(1)}$ can be regarded an $\Omega$-point of $\mathbf{Map}(P',Q)$. 
Similarly, $\alpha^{(1)}(a(t),\cdot)$ can be regarded as an $\widetilde{\Omega}((t))$-point of $\mathbf{Map}(P,Q)$. \begin{lm} \label{lm:Explicit} A point $(a(t),\gamma(t)_\infty(p'))$ as in Proposition~\ref{prop:Easier} satisfies the property $\lim_{t \to 0} \alpha_\infty(a(t),\gamma(t)_\infty(p'))=\alpha'(a',p')=:(b,q) \in (B \times Q_\infty)(\Omega)$ if and only if, first, $ \lim_{t \to 0} \alpha^{(0)}(a(t))=b$ and, second, the $\widetilde{\Omega}((t))$-point $\alpha^{(1)}(a(t),\cdot) \circ \gamma(t)$ of $\mathbf{Map}(P',Q)$ satisfies \[ \lim_{t \to 0} \alpha^{(1)}(a(t),\cdot) \circ \gamma(t) = (\alpha')^{(1)}; \] an equality of $\widetilde{\Omega}$-points in $\mathbf{Map}(P',Q)$. \end{lm} \begin{proof} The statement ``if'' is immediate, by substituting $p'$; and the statement ``only if'' follows from the fact that the $\GL$-orbit of $p'$ is dense in $P'_\infty$. \end{proof} \subsection{Greenberg's approximation theorem} We have almost arrived at a countable chain of finite-dimensional varieties in which we can look for $y(t)$. The only problem is that the point $a(t) \in A(\widetilde{\Omega}((t)))$ does not yet have a finite representation. For concreteness, assume that $A$ is given by a prime ideal $I=(f_1,\ldots,f_r)$ in $K[x_1,\ldots,x_m]$, $B$ is embedded in $K^n$, and $\alpha^{(0)}:A \to B$ is the restriction of some polynomial map $\alpha^{(0)}: K^m \to K^n$. Then $a(t)$ is an $m$-tuple in $\widetilde{\Omega}((t))^m$, and together with $\gamma(t)$ it is required to satisfy the following properties from Lemma~\ref{lm:Explicit}: \begin{enumerate} \item[(i)] $f_i(a(t))=0$ for $i=1,\ldots,r$; \item[(ii)] $\lim_{t \to 0} \alpha^{(0)}(a(t))=b$; \item[(iii)] and $\lim_{t \to 0} \alpha^{(1)}(a(t),\cdot) \circ \gamma(t) = (\alpha')^{(1)}$. \end{enumerate} Suppose that we fix a lower bound $-d_1$, with $d_1 \in \ZZ_{\geq 0}$, on the exponents of $t$ appearing in $a(t)$ or in $\gamma(t)$. From the data of $\alpha$ and $d_1$, one can compute a bound $d_2 \in \ZZ_{\geq 0}$ such that the validity of (ii) and (iii) does not depend on the terms in $a(t)$ or $\gamma(t)$ with exponents $>d_2$. However, (i) does depend on all (typically infinitely many) terms of $a(t)$. Here Greenberg's approximation theorem comes to the rescue. As this theorem requires formal power series rather than Laurent series, we put $\tilde{a}(t):=t^{d_1} a(t)$. Accordingly, replace each $f_i$ by $\tilde{f}_i:=t^e f_i(t^{-d_1}x_1,\ldots,t^{-d_1}x_n)$ where $e$ is large enough such that all coefficients of $\tilde{f}_i$ for all $i$ are in $\widetilde{\Omega}[[t]]$. Note that $a(t)$ is a root of all $f_i$ if and only if $\tilde{a}(t)$ is a root of all $\tilde{f}_i$. \begin{thm}[Greenberg, \cite{Greenberg66}]\label{thm:Greenberg} There exist numbers $N_0 \geq 1,c \geq 1,s \geq 0$ such that for all $N \geq N_0$ and $\overline{a}(t) \in \widetilde{\Omega}[[t]]^n$ with $\tilde{f}_i(\overline{a}(t)) \equiv 0 \mod t^N$ for all $i=1,\ldots,r$ there exists an $\tilde{a}(t) \in \widetilde{\Omega}[[t]]^n$ such that $\tilde{a}(t) \equiv \overline{a}(t) \mod t^{\lceil \frac{N}{c} \rceil-s}$ and moreover $\tilde{f}_i(\tilde{a}(t))=0$ for all $i$. Moreover, $N_0,c,s$ can be computed from $\tilde{f}_1,\ldots,\tilde{f}_r$. \end{thm} As a matter of fact, the computability, which is crucial to our work, is only implicit in \cite{Greenberg66}; it is made explicit in the overview paper \cite{Rond18}.
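As a toy illustration of the kind of finite computation this makes possible (the equation, the truncation order, and the use of SymPy are ours, purely for illustration, and are not part of the procedure described below), one can certify a power series solution of a polynomial equation only modulo $t^N$ by solving for finitely many coefficients:
\begin{verbatim}
import sympy as sp

# Toy example (not from the paper): find a truncated series a(t) with
# f(a(t)) = 0 modulo t**N by solving for its first N coefficients.
t = sp.symbols('t')
N = 6
coeffs = sp.symbols(f'a0:{N}')                      # unknowns a_0, ..., a_{N-1}
a = sum(c * t**i for i, c in enumerate(coeffs))     # Ansatz: truncated power series

f = a**2 - (1 + t)                                  # invented equation f(a(t)) = 0
eqs = [sp.expand(f).coeff(t, k) for k in range(N)]  # coefficients of t^0, ..., t^(N-1)
print(sp.solve(eqs, coeffs, dict=True)[0])          # one branch of sqrt(1+t) mod t**N
\end{verbatim}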
\begin{cor} \label{cor:Bound} There exist natural numbers $d_2,N_1$, which can be computed from $d_1$ and $f_1,\ldots,f_r$, $\alpha$, such that the following statements are equivalent: \begin{enumerate} \item a pair $a(t),\gamma(t)$ with properties (i)--(iii) exists that has no exponents of $t$ smaller than $-d_1$; \item a pair $a(t),\gamma(t)$ exists with all exponents of $t$ in the interval $\{-d_1,\ldots,d_2\}$ that satisfies (ii) and (iii), and that satisfies (i) modulo $t^{N_1}$. \end{enumerate} \end{cor} \begin{proof} The implication (1) $\Rightarrow$ (2) holds for any choice of $N_1$ if $d_2$ is chosen large enough so that the terms of $a(t),\gamma(t)$ with degree $>d_2$ in $t$ do not affect (ii),(iii), and do not contribute to the terms of degree $<N_1$ in $f_i(a(t))$ for any $i$. For the converse, first we compute $\tilde{f}_i$ and $e$ as above; they depend on the choice of $d_1$. Then we compute $N_0,c,s$ as in Greenberg's theorem. Compute $N_1 \geq N_0-e$ such that terms in $a(t),\gamma(t)$ in which $t$ has exponent at least $\lceil \frac{N_1+e}{c} \rceil-s-d_1$ do not affect properties (ii)--(iii), and then compute $d_2$ as in the first paragraph. Given a pair $a(t),\gamma(t)$ as in (2), set $\bar{a}(t):=t^{d_1} a(t)$. Then, for each $i$, \[ \tilde{f}_i(\bar{a}(t))=t^e f_i(a(t)) \equiv 0 \mod t^{N_1+e}. \] Then, since $N_1+e \geq N_0$, Greenberg's theorem yields $\tilde{a}(t) \in \widetilde{\Omega}[[t]]^n$ such that $\tilde{f}_i(\tilde{a}(t))=0$ for all $i$ and such that \[ \tilde{a}(t) = \bar{a}(t) \mod t^{\lceil \frac{N_1+e}{c} \rceil-s}. \] Now set $a_1(t):=t^{-d_1}\tilde{a}(t)$, so that $f_i(a_1(t))=0$ for all $i$---this is property (i)---and \[ a_1(t) \equiv a(t) \mod t^{\lceil \frac{N_1+e}{c} \rceil-s-d_1}. \] Since the terms of $a(t)$ with exponent of degree at least $\lceil\frac{N_1+e}{c} \rceil-s-d_1$ do not affect (ii) and (iii), the pair $a_1(t),\gamma(t)$ also satisfy these conditions. \end{proof} \subsection{The procedure $\mathbf{certify}$} To compute $\mathbf{certify}(B;Q;A;P;\alpha;A';P';\alpha')$, we proceed as follows. For convenience, we again assume that we have sufficiently many processors working in parallel. \begin{enumerate} \item If $A$ and $A'$ are not both irreducible, decompose $A$ into irreducible components $A_i$ and $A'$ into irreducible components $A'_j$, and assign the computation of $\mathbf{certify}(B;Q;A_i;P;\alpha|_{A_i \times P};A_j',P',\alpha'_{A'_j \times P'})$ for all $i,j$ to distinct processors. As soon as for each $j$ there exists at least one $i$ such that the computation returns ``true'', return ``true''. ({\em So in what follows we may assume that $A,A'$ are irreducible. They are given by prime ideals $I \subseteq K[x_1,\ldots,x_n]$ and $J \subseteq K[y_1,\ldots,y_m]$, respectively.}) \item Let $f_1,\ldots,f_r$ be generators of $I$. \item Compute $b:=(\alpha')^{(0)}(a')$ where $a'$ is the generic point of $A'$. {\em So $a'$ is just the $m$-tuple $(y_1+J,\ldots,y_m+J) \in \Omega^m$, where $\Omega$ is the fraction field of $K[y_1,\ldots,y_m]/J$. } \item Compute the $\Omega$-valued point $(\alpha')^{(1)}$ of $\mathbf{Map}(P',Q)$. \item Construct a $K$-basis $\gamma_1,\ldots,\gamma_m$ of the vector space $\mathbf{Map}(P',P)$. \item Set $d_1:=0$, $r:=$``false''. 
\item While not $r$, perform the steps (8)--(10): \item From $\alpha$ and $d_1$, compute the natural numbers $N_1,d_2$ from Corollary~\ref{cor:Bound} and make the {\em Ansatz} $\gamma(t)=\sum_{i=1}^m c_i(t) \gamma_i$ where $c_i(t)$ is a linear combination of $t^{-d_1},\ldots,t^{d_2}$ with coefficients to be determined in an extension of $\Omega$; and the {\em Ansatz} $a(t)=(a_1(t),\ldots,a_n(t))$, where $a_i$ is also a linear combination of $t^{-d_1},\ldots,t^{d_2}$ with coefficients to be determined. \item The desired properties of $(a(t),\gamma(t))$ from the second item of Corollary~\ref{cor:Bound} translate into a system of polynomial equations for the $(m+n)\cdot(d_2+d_1+1)$ coefficients of the $c_i(t)$ and the $a_i(t)$. By a Gr\"obner basis computation, test whether a solution exists over an algebraic closure of $\Omega$. If so, set $r:=$``true''. \item Set $d_1:=d_1+1$. \item Return ``true''. \end{enumerate} \begin{proof}[Proof of Proposition~\ref{prop:Cert}] The first step is justified by the observation that the image closure of $\alpha$ contains the image of $\alpha'$ if and only if for each $j$, the image of $\alpha'|_{A'_j \times P'}$ is contained in the image closure of some $\alpha|_{A_i \times
possible jammer locations $(0.5,0.5)$, $(0,1)$, $(1,0)$, $(1,1)$ are considered. Fig. \ref{fig:rate_imp_jammer_location} presents the rate maximization algorithm performance versus source power. With varying locations of jammer, the algorithm performs differently, because of the varying degree of impact the jammer has on the users. Performance at location $(0,1)$ and $(1,0)$ are similar, because these are symmetric locations for the square geometry considered. Location $(1,1)$ performs the poorest as the jammer is too far to have significant effect on users' rates. The central location $(0.5, 0.5)$ performs the best, as the jammer is able to affect all the users and contribute significantly in improving their rates. Motivated by this, we consider the jammer to be located at the center in our subsequent simulations. \begin{figure}[!htb] \centering \epsfig{file=figure2.pdf, height=1.6in} \caption{Secure rate versus source power at various jammer locations with jammer power $P_J/\sigma^2=0$ dB.} \label{fig:rate_imp_jammer_location} \end{figure} Fig. \ref{fig:rate_imp_ps_variation} shows the secure rate and fairness performance of proposed JPA scheme with respect to source power at two different values of jammer power, $P_{J1}/\sigma^2=0$ dB and $P_{J2}/\sigma^2=6$ dB. The performance of JPA is compared with the suboptimal version JPASO and EPA. The rate achieved with OSPWJ is also plotted to emphasize secure rate improvement with friendly jammer. The performance of asymptotically optimal solution ($P_J \to \infty$) is plotted as `Asymp opt' to indicate closeness of proposed algorithms to the optimal solution. Sum secure rate and the corresponding fairness upper bounds as $\{P_S, P_J\} \to \infty$ are also indicated in text boxes in the respective figures. Fig. \ref{fig:rate_imp_ps_variation}(a) indicates that JPASO performs better compared to EPA because JPASO performs power optimization while EPA does not. JPASO has a marginal performance penalty with respect to JPA. Also, the gap between JPA and OSPWJ initially keeps increasing and then saturates because of diminishing returns at higher source power. It may be further noted that, `Asymp opt' has poor performance at lower source power budget, because the decision of subcarrier allocation and jammer mode at $\{P_S, P_J\} \to \infty$ may not be optimal at lower value of $P_S$. At finite $P_S$, the possibility of utilizing jammer power under jammer power bounds over larger number of subcarriers is more compared to that at $P_S \to \infty$. But, as $P_S$ increases the decision of subcarrier allocation and jammer mode becomes optimal and the `Asymp opt' tries to catch the upper bound. Since secure rate performance of the proposed schemes improve with $P_J$, hence at higher $P_S$ and $P_J$ the performance of JPA tends to that of `Asymp opt'. Fig. \ref{fig:rate_imp_ps_variation}(b) shows the associated fairness performance. Because of equal distribution of resources in EPA, its fairness performance is better at lower source power compared to both JPA and JPASO. With increasing source power, the jammer is able to affect more number of users because the percentage of subcarriers over which jammer can help keep increasing (cf. Proposition \ref{prop_sri_jammer_rate_imp_constraints}). This results in improved secure rate as well as fairness performance of JPA compared to EPA. 
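For intuition about why a friendly jammer that mainly degrades the eavesdropper's channel improves the achievable secure rate, the following sketch evaluates a per-subcarrier secure rate of the form $[\log_2(1+\mathrm{SINR}_d)-\log_2(1+\mathrm{SINR}_e)]^+$; the channel gains and powers are made up for illustration and are not the system parameters used in our simulations.
\begin{verbatim}
import numpy as np

def secure_rate(p_s, g_d, g_e, p_j=0.0, h_jd=0.0, h_je=0.0, sigma2=1.0):
    # [log2(1 + SINR_dest) - log2(1 + SINR_eve)]^+ on one subcarrier;
    # jammer power p_j raises the interference floor at the destination
    # (gain h_jd) and at the eavesdropper (gain h_je).
    sinr_d = p_s * g_d / (sigma2 + p_j * h_jd)
    sinr_e = p_s * g_e / (sigma2 + p_j * h_je)
    return max(np.log2(1.0 + sinr_d) - np.log2(1.0 + sinr_e), 0.0)

# Made-up numbers: the jammer barely affects the destination but hits the
# eavesdropper hard, so the secure rate increases.
print(secure_rate(p_s=10, g_d=1.0, g_e=0.8))                             # no jammer
print(secure_rate(p_s=10, g_d=1.0, g_e=0.8, p_j=4, h_jd=0.05, h_je=1.0)) # with jammer
\end{verbatim}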
\begin{figure}[!htb] \centering \epsfig{file=figure3.pdf, width=3.2in} \caption{Secure rate and fairness performance versus source power at $P_{J1}/\sigma^2 = 0$ dB and $P_{J2}/\sigma^2=6$ dB. `Rate-ub': rate upper bound; `Fairness-ub': fairness upper bound.} \label{fig:rate_imp_ps_variation} \end{figure} Performance of the secure rate improvement algorithms with respect to jammer power is presented in Fig. \ref{fig:rate_imp_pj_variation}. As observed in Fig. \ref{fig:rate_imp_pj_variation}(a) the secure rate of JPA initially increases with jammer power and then saturates, as the algorithm start allocating optimal jammer power $(P_{j_n}^\star)$ achieving maximum secure rate over the selected set of subcarriers (cf. Proposition \ref{prop_sri_jammer_rate_imp_constraints}). Due to sequential power allocation, JPASO performance is inferior compared to JPA. The secure rate saturates at a value lower than the peak value, because JPASO is oblivious to existence of optimal jammer power and utilizes more jammer power than required. Since EPA uses equal jammer power, the secure rate initially increases, achieves a maximum and then reduces with increased jammer power. Compared to EPA which utilizes equal jammer power blindly, the performance of JPASO is relatively better at higher jammer power, which can be attributed to improved jammer power allocation policy (\ref{jammer_power_allocation}). Because of the existence of finite upper bounds on jammer power (cf. Section \ref{subsubsec_ri_bounds}), the rate achieved by EPA also saturates at higher jammer power but at a relatively lower value compared to JPASO. The corresponding fairness performance plots for the various schemes are presented in Fig. \ref{fig:rate_imp_pj_variation}(b). At lower source power, EPA performance is better because of the utilization of equitable distribution of resources, while at higher source power JPA performs comparable to EPA at lower jammer power but outperforms at higher jammer power. With increasing jammer power JPA fairness saturates while EPA fairness achieves a peak, reduces a bit and then saturates, showing similar trend as the corresponding secure rate plots. JPASO first completes source power optimization and then finishes jammer power optimization over the selected set of subcarriers, thereby increasing the secure rate imbalance which results in comparatively poor fairness performance. \begin{figure}[!htb] \centering \epsfig{file=figure4.pdf,width=3.2in} \caption{Secure rate and fairness performance versus jammer power at $P_{S1}/\sigma^2 = 12$ dB and $P_{S2}/\sigma^2=15$ dB.} \label{fig:rate_imp_pj_variation} \end{figure} Fig. \ref{fig:rate_imp_user_var} presents the performance of the algorithms in secure rate improvement scenario with number of users $M$. It may appear that, with increasing number of users the secure rate should reduce as the number of eavesdropper increases, but eventually the secure rate of all the algorithms improves with increasing number of users because of multi-user diversity. \begin{figure}[!htb] \centering \epsfig{file=figure5.pdf,width=2.3in} \caption{Secure rate versus number of users $M$ at $P_J/\sigma^2=6$ dB and $P_{S1}/\sigma^2 = 12$ dB and $P_{S2}/\sigma^2=15$ dB.} \label{fig:rate_imp_user_var} \end{figure} \subsection{Max-min fair schemes} Next we present the performance of the proposed max-min fairness algorithms in Figs. \ref{fig:max_min_ps_variation} and \ref{fig:max_min_pj_variation}. 
PFA and ODA and their corresponding suboptimal versions PFASO and ODASO are considered with EPA. The performance of asymptotically optimal scheme which act as an upper bound to our proposed max-min scheme has been plotted as `Asymp opt'. Fig. \ref{fig:max_min_ps_variation} presents the fairness and secure rate performance of the algorithms versus source power. ODA utilizing jammer power dynamically, allocates optimal jammer power on first come first serve basis. This results in more subcarrier snatching possibilities and hence a higher fairness at low source power compared to PFA. Being a conservative algorithm, PFA is not able to help many users because of limited per subcarrier jammer power budget ($P_J/N$). Since the optimal jammer power $(P_{j_n}^\star)$ for maximum secure rate over a snatched subcarrier is an increasing function of source power (cf. (\ref{rate_derivative_pjn_quad_coef1})-(\ref{rate_derivative_pjn_quad_coef3})), jammer resource gets exhausted very soon in ODA. Because of this, ODA's capability to improve fairness reduces drastically with increasing source power, and correspondingly its fairness performance starts degrading. ODASO also faces the same issue of depleting all the jammer resource for a few initial users and later finding itself unable to help users, which results in early saturation of the fairness with source power. In contrast PFA and EPA even if helping a limited number of users because of their conservative nature, keep performing well with increasing source power and outperform ODA at higher source power. While ODASO is able to help only a few initial users, EPA provides equal opportunity to all the users. As a result, the overall performance of EPA is better than ODASO. Though ODA at $P_J=18$ dB has similar performance as with $P_J=12$ dB, the plot gets closer to that of `Asymp opt' as $P_J$ is increased. `Asymp opt' does not face the problem of depleting jammer resources and keep on helping users as far as possible, which results in better system fairness compared to all other bounded $P_J$ schemes. The cost being paid for higher fairness is the reduced secure rate, as observed in Fig. \ref{fig:max_min_ps_variation}(b). The secure rate of OSPWJ is better than all other schemes. However, its fairness performance is the poorest because it either allocates the best subcarrier to a user or does not allocate at all. Because of the dynamic allocation, ODA allows more subcarrier snatching which results in poorer secure rate performance compared to PFA as observed in Fig. \ref{fig:max_min_ps_variation}(b). Note that, with the help
How to check whether a file exists? spence91 1# Published in 2008-09-17 12:55:00Z How to see if a file exists or not, without using the try statement? abc 2# import os.path if os.path.isfile(filepath): abc 3# import os os.path.exists(path) # returns whether the path (dir or file) exists or not os.path.isfile(path) # returns whether the file exists or not EOL 4# You have the os.path.exists function: import os.path os.path.exists(file_path) This returns True for both files and directories but you can instead use os.path.isfile to test if it's a file specifically. It follows symlinks. user2357112 5# If the reason you're checking is so you can do something like if file_exists: open_it(), it's safer to use a try around the attempt to open it. Checking and then opening risks the file being deleted or moved or something between when you check and when you try to open it. If you're not planning to open the file immediately, you can use os.path.isfile: Return True if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path. import os.path os.path.isfile(fname) if you need to be sure it's a file. Starting with Python 3.4, the pathlib module offers an object-oriented approach (backported to pathlib2 in Python 2.7): from pathlib import Path my_file = Path("/path/to/file") if my_file.is_file(): # file exists To check a directory, do: if my_file.is_dir(): # directory exists To check whether a Path object exists independently of whether it is a file or directory, use exists(): if my_file.exists(): # path exists You can also use resolve() in a try block: try: my_abs_path = my_file.resolve() except FileNotFoundError: # doesn't exist else: # exists armatita 6# Additionally, os.access(): if os.access("myfile", os.R_OK): with open("myfile") as fp: return fp.read() where R_OK, W_OK, and X_OK are the flags to test for permissions (doc). Aaron Hall 7# Reply to 2016-12-02 19:33:06Z Unlike isfile(), exists() will return True for directories. So depending on if you want only plain files or also directories, you'll use isfile() or exists(). Here is a simple REPL output. >>> print os.path.isfile("/etc/password.txt") True >>> print os.path.isfile("/etc") False >>> print os.path.isfile("/does/not/exist") False >>> print os.path.exists("/etc/password.txt") True >>> print os.path.exists("/etc") True >>> print os.path.exists("/does/not/exist") False Honest Abe 8# Reply to 2014-04-28 01:01:59Z Prefer the try statement. It's considered better style and avoids race conditions. Don't take my word for it. There's plenty of support for this theory. Here's a couple: Style: Section "Handling unusual conditions" of http://allendowney.com/sd/notes/notes11.txt Avoiding Race Conditions Peter Mortensen 9# Reply to 2017-05-27 00:43:51Z You could try this (safer): try: # http://effbot.org/zone/python-with-statement.htm # 'with' is safer to open a file with open('whatever.txt') as fh: # Do something with 'fh' except IOError as e: print("({})".format(e)) The output would be: ([Errno 2] No such file or directory: 'whatever.txt') Then, depending on the result, your program can just keep running from there or you can code to stop it if you want. Honest Abe 10# Reply to 2014-09-18 04:39:52Z import os path = "/path/to/dir" root, dirs, files = os.walk(path).next() if myfile in files: print "yes it exists" This is helpful when checking for several files.
Or you want to do a set intersection/ subtraction with an existing list. octopusgrabbus 11# This sample function will test for a file's presence in a very Pythonic way using try .. except: def file_exists(filename): try: with open(filename) as f: return True except IOError: return False Peter Mortensen 12# Peter Mortensen Reply to 2017-05-27 00:45:02Z You can simply use the tempfile module to know whether a file exists or not: import tempfile tempfile._exists('filename') # Returns True or False user2154354 13# You should definitely use this one. from os.path import exists if exists("file") == True: print "File exists." elif exists("file") == False: print "File doesn't exist." MrWonderful 14# This is the simplest way to check if a file exists. Just because the file existed when you checked doesn't guarantee that it will be there when you need to open it. import os fname = "foo.txt" if os.path.isfile(fname): print("file does exist at this time") else: print("no such file exists at this time") 15# It doesn't seem like there's a meaningful functional difference between try/except and isfile(), so you should use which one makes sense. If you want to read a file, if it exists, do try: f = open(filepath) except IOError: print 'Oh dear.' But if you just wanted to rename a file if it exists, and therefore don't need to open it, do if os.path.isfile(filepath): os.rename(filepath, filepath + '.old') If you want to write to a file, if it doesn't exist, do # python 2 if not os.path.isfile(filepath): f = open(filepath, 'w') # python 3, x opens for exclusive creation, failing if the file already exists try: f = open(filepath, 'wx') except IOError: print 'file already exists' If you need file locking, that's a different matter. codeforester 16# If you want to do that in Bash it would be: if [ -e "$FILE" ]; then prog "$FILE" fi Which I sometimes do when using Python to do more complicated manipulation of a list of names (as I sometimes need to use Python for), the try open(file): except: method isn't really what's wanted, as it is not the Python process that is intended to open the file. In one case, the purpose is to filter a list of names according to whether they exist at present (and there are no processes likely to delete the file, nor security issues since this is on my Raspberry Pi which has no sensitive files on its SD card). I'm wondering whether a 'Simple Patterns' site would be a good idea? So that, for example, you could illustrate both methods with links between them and links to discussions as to when to use which pattern. Cody Piersall 17# Cody Piersall Reply to 2017-12-04 14:45:12Z Python 3.4+ has an object-oriented path module: pathlib. Using this new module, you can check whether a file exists like this: import pathlib p = pathlib.Path('path/to/file') if p.is_file(): # or p.is_dir() to see if it is a directory # do stuff You can (and usually should) still use a try/except block when opening files: try: with p.open() as f: # do awesome stuff except OSError: print('Well darn.') The pathlib module has lots of cool stuff in it: convenient globbing, checking file's owner, easier path joining, etc. It's worth checking out. If you're on an older Python (version 2.6 or later), you can still install pathlib with pip: # installs pathlib2 on older Python versions # the original third-party module, pathlib, is no longer maintained. 
pip install pathlib2 Then import it as follows: # Older Python versions import pathlib2 as pathlib Peter Mortensen 18# Peter Mortensen Reply to 2017-05-27 00:48:31Z You can write Brian's suggestion without the try:. from contextlib import suppress with suppress(IOError), open('filename'): process() suppress is part of Python 3.4. In older releases you can quickly write your own suppress: from contextlib import contextmanager @contextmanager def suppress(*exceptions): try: yield except exceptions: pass Peter Mortensen 19# Peter Mortensen Reply to 2017-05-27 00:48:49Z You can use the following open method to check if a file exists + readable: open(inputFile, 'r') 20# If the file is for opening you could use one of the following techniques: >>> with open('somefile', 'xt') as f: #Using the x-flag, Python3.3 and above ... f.write('Hello\n') >>> if not os.path.exists('somefile'): ... with open('somefile', 'wt') as f: ... f.write("Hello\n") ... else: ... print('File already exists!') Hanson 21# To check if a file exists, from sys import argv from os.path import exists script, filename = argv target = open(filename) print "file exists: %r" % exists(filename) Peter Mortensen 22# Peter Mortensen Reply to 2017-05-27 00:49:25Z You can use the "OS" library of Python: >>> import os >>> os.path.exists("C:\\Users\\####\\Desktop\\test.txt") True >>> os.path.exists("C:\\Users\\####\\Desktop\\test.tx") False Peter Mortensen 23# Peter Mortensen Reply to 2017-05-27 00:52:18Z Although I always recommend using try and except statements, here are a few possibilities for you (my personal favourite is using os.access): Try opening the file: Opening
and RGO was added followed by 3 h sonication to make hybrid composites as explained in Table 1. 4a. Appl Therm Eng 106:1067–1074, Wang Z, Qi R, Wang J, Qi S (2015) Thermal conductivity improvement of epoxy composite filled with expanded graphite. These above results indicate that the elimination of partial functional groups in GO is done effectively by reduction [13, 23]. 4 can be assumed as terms in Eq. However, high loading, i.e., more than 60% filler fraction, pays penalty to mechanical and other properties like viscosity and bonding strength, which create hindrance in the application of high-performance electrical chip [16]. Φ represents the volume fraction of the dispersed phase. The long-MWCNT composites exhibited higher thermal conductivity than the short-MWCNT composites when the weight percentages were the same. As per Sreekanth Perumbilavil et al., XRD diffraction peak for pristine graphite is at 2θ = 26.36° with interlayer spacing (d) of 0.335 nm, whereas GO diffraction peak is at 2θ = 10.01°. But at 4.5 wt% of both GO and RGO loadings, the TC slightly decreases to 0.293 and 0.311 W/mK, respectively. The agglomeration of unmodified h-BN with RGO is evidently visible in Fig. Then, h-BN/epoxy and mh-BN/epoxy composites were fabricated with 10, 20, 30 and 40 wt% of filler loading separately. information16,17, the longitudinal thermal conductivity of the fibers at 300 K is reported in Tab. In order to enhance the thermal conductivity, thermally conductive but electrically insulative materials should be introduced to … DLS technique shows that the particle size of h-BN is 2.12 µm which is defined by the manufacturer. The thermal conductivities of RGO with unmodified h-BN (0.789 W/mK), GO with mh-BN (0.769 W/mK) and GO with unmodified h-BN (0.713) at 44.5 wt% loading are also reported. [13] demonstrated that at 26.04 vol% of h-BN loading the composite showed TC value approximately 0.8 W/mK. system used, cured epoxy can have a resulting bulk thermal conductivity measured anywhere from 0.5 W/mK to upwards of 35 W/mK. Graphene and boron nitride possess thermal conductivity of 5300 and 300 W/m‧K, respectively. THERMOLOX 701A – This thermally conductive epoxy has a simple 1:1 epoxy ratio and is filled with a blend of Aluminum Nitride giving it a centipoise of 40,000 coating. XRD patterns of h-BN and mh-BN are illustrated in Fig. 13a, b. The peak at 1392 cm−1 represents =C–H vibration in GO. The type of filler, concentration of particles, their size and shape will determine the thermal conductivity of the product. Comparing the experimental TC values with calculated analytical values, it is observed that the series model estimates the lower value of TC in relation to the experimental value. This creates an impact on the surface area and wettability of reinforcement [30, 31]. The hybrid composites of different compositions were applied on mechanically polished steel substrates by carefully rubbing with 400-grade emery paper and cleaning with acetone. 3a indicates the few-layer structure with less interplanar spacing between monolayers [35]. Laser diffraction particle size analyzer (LA-960, Horiba Scientific) was used to measure the size distribution profile or particle size of h-BN and GO water suspension in dynamic light scattering (DLS) technique.
Formulated to draw heat away from sensitive electronic components, these potting compounds have higher thermal conductivity than standard potting compounds. Epoxy is well known for its excellent mechanical properties and thermal stability, best adhesion and good electrical insulation properties and compatibility with many materials [2, 3]. Mechanical properties and thermal conductivity of epoxy composites enhanced by h-BN/RGO and mh-BN/GO hybrid filler for microelectronics packaging application, $$Q = -KA\left[\frac{\Delta T}{\Delta x}\right]$$, $$\text{Maxwell equation:}\;\frac{K}{K_{\text{c}}} = 1 + 3\left(\frac{K_{\text{d}} - K_{\text{c}}}{K_{\text{d}} + 2K_{\text{c}}}\right)\varPhi$$, $$\text{Hashin--Shtrikman model:}\;K_{\text{C}} = K_{\text{m}}\left[\frac{2(1-f)K_{\text{m}} + (1+2f)K_{\text{f}}}{(2+f)K_{\text{m}} + (1-f)K_{\text{f}}}\right]$$, $$\text{Inverse rule of mixture/series model:}\;\frac{1}{K_{\text{C}}} = \frac{1-f}{K_{\text{m}}} + \frac{f}{K_{\text{f}}}$$, https://doi.org/10.1007/s42452-019-0346-2. The red arrows represent voids developed during fabrication of the composite as a result of improper dispersion of the hybrid filler in the matrix, as a consequence of which gas bubbles, defects and stress concentration points are introduced. The h-BN powders (255475-50G) with dimension ~ 2 µm were obtained from Sigma-Aldrich Co., Germany. At 40 wt% loading of h-BN and mh-BN with epoxy, the TC values were observed at 0.53 and 0.59 W/mK, respectively. The fabricated hybrid composites exhibit outstanding performance in lap shear strength and thermal stability, with slightly reduced impact and flexural strength, which makes them relevant for electronics packaging applications such as semiconductors, integrated-circuit packaging and optoelectronics, and also shows potential in structural energy storage applications. We declare that this work is completely original, and the research was conducted soundly, considering and following previously published articles. The GO exhibits a sharp diffraction peak (001) at a diffraction angle 2θ = 11.5°, while in reduced GO the (001) peak disappears and a new peak (002) was observed at 2θ = 26.3°. Sci Rep 6:36143, Park W, Guo Y, Li X, Hu J, Liu L, Ruan X, Chen YP (2015) High-performance thermal interface material based on few-layer graphene composite. Butane - Thermal Conductivity - Online calculators, figures and tables showing thermal conductivity of liquid and gaseous butane, C4H10, at varying temperature and pressure, SI and Imperial units … A pre-mixed, degassed, tested and frozen epoxy… Compared to the h-BN/GO4/epoxy hybrid composite, which contains the same amount of filler as the h-BN/RGO4/epoxy hybrid composite, the only difference is GO versus RGO, and it has a lower thermal decomposition temperature, i.e., 376.71 °C. The thermal conductivity (K, W/mK) at 60 °C is measured on the circular sample in accordance with ASTME 1530-06 standard by guarded heat flow meter technique (Unitherm™2022, Antercorpo, USA), which follows Eq. A high silver content allows these surface fillers to conduct electricity and heat.
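As a quick way to compare the analytical estimates quoted above, the following sketch evaluates the Maxwell, Hashin-Shtrikman and series (inverse rule of mixtures) models; the matrix and filler conductivities and the filler fractions are illustrative placeholders (roughly neat epoxy and h-BN), not the measured data of this study.

def maxwell(k_m, k_f, phi):
    # K = K_c * [1 + 3*((K_d - K_c)/(K_d + 2*K_c))*phi], matrix = continuous phase
    return k_m * (1 + 3 * (k_f - k_m) / (k_f + 2 * k_m) * phi)

def hashin_shtrikman(k_m, k_f, f):
    # K_C = K_m * [2(1-f)K_m + (1+2f)K_f] / [(2+f)K_m + (1-f)K_f]
    return k_m * ((2 * (1 - f) * k_m + (1 + 2 * f) * k_f)
                  / ((2 + f) * k_m + (1 - f) * k_f))

def series_model(k_m, k_f, f):
    # Inverse rule of mixtures: 1/K_C = (1-f)/K_m + f/K_f
    return 1.0 / ((1 - f) / k_m + f / k_f)

k_m, k_f = 0.2, 300.0   # W/mK, assumed values for the epoxy matrix and h-BN filler
for f in (0.1, 0.2, 0.3):
    print(f, round(maxwell(k_m, k_f, f), 3),
          round(hashin_shtrikman(k_m, k_f, f), 3),
          round(series_model(k_m, k_f, f), 3))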
Thus, the recovered sp2-type structure establishes more π–π interactions, due to which restacking of the RGO layers occurs. The epoxy is diglycidyl ether of bisphenol-A (DGEBA), sold as Araldite GY 250, and the corresponding crosslinker is triethylene tetramine (TETA), sold as Aradur® HY 951 IN, supplied by M/S Huntsman International (India) Pvt. Ltd. The TGA curves of neat epoxy and the four optimized hybrid composites are illustrated in the corresponding figure. From Fig. 7, it is proved that the TC increases with increasing filler addition and that the hybrid filler also contributes to the TC enhancement [40, 41]. The device developed here can measure the thermal conductivity of epoxy resin with an accuracy of up to 3%, where K is the TC of the composite. The fabrication of a thermal interface material (TIM) based on an epoxy adhesive incorporating a hybrid filler of GO and RGO with h-BN is purposefully carried out. Equation 4 is derived from the rule of mixtures and does not consider voids. The h-BN/GO and h-BN/RGO–epoxy hybrid composites were fabricated using the hand lay-up technique, which is summarized in Table 1. The composites titled mh-BN/GO4/epoxy and mh-BN/RGO4/epoxy represent the optimized hybrid filler loading of GO and RGO with
\section{Introduction}

Ambiguity can affect natural language processing at very different levels: it can be very local (e.g. limited to a feature value) or, conversely, affect entire syntactic structures. But the general disambiguating process remains the same and relies in particular on contextual information. Unfortunately, there exist very few solutions providing a general account of such a process with a direct and efficient implementation. In this paper, we propose an approach allowing a general and homogeneous representation of ambiguity and disambiguation relations. This approach constitutes an extension of {\em named disjunctions} (cf. \cite{Dorre90}) and allows a direct implementation of the relations controlling the disambiguation.

This paper is organized in three parts. In the first part, we address the question of representation and situate our method among the most representative ones. We describe in particular the advantages and drawbacks of named disjunctions and show how different phenomena can be described within such a paradigm. In the second part, we propose an analysis of the disambiguating process itself and describe some of the control relations existing between the different parts of the linguistic structure. We integrate the representation of such relations into the named disjunction approach: we call this new technique {\em controlled disjunction}. The last section presents the implementation of controlled disjunction, which uses different delayed evaluation techniques such as coroutining, constraint propagation or residuation.

\section{Representing Ambiguity}

Ambiguity is generally a problem for NLP, but it can also be conceived as a useful device for the representation of particular linguistic information such as homonymy, valency variations, lexical rules, etc. In this perspective, a general representation is very useful.

\subsection{Different needs for representing ambiguity}

Ambiguity can be more or less complex according to its scope, and there is a general distinction between global and local ambiguities. We can also observe such a difference from a strictly structural point of view: ambiguity can affect the subcomponents of linguistic objects as well as entire structures (or, in a feature structure perspective, atomic as well as complex feature values). This is the reason why we distinguish two fundamental properties: ({\em i}) the interconnection between different subcomponents of an ambiguous structure and ({\em ii}) the redundancy of such structures. Moreover, we can introduce a certain kind of dynamicity: ambiguity affects the objects differently depending on whether it is related to external values or not. More precisely, certain ambiguities have only a minimal effect on the structure to which they belong, whereas others can deeply affect it. A value can be weakly or strongly related to the context, and the ambiguities are more or less dynamic according to the density of the entailed relations. Figure (\ref{tableau-ambig}) shows several ambiguities involving different kinds of relations between feature values. In these examples, some ambiguities (e.g. (1.a)) have no effect on the other features, whereas some features are strongly interconnected, as in (1.c). Let us remark that the feature type has no consequence on such relations (see for example (1.d) and (1.e)).
\begin{figure} \begin{center} \footnotesize \begin{tabular}{|l|l|l|l|l|} \hline & Unit & Language & Ambiguity & Relations between values \\ \hline (1.a) & {\em les} & French & \begin{avm} \[\rm \it det $\vee$ \rm \it pro \\ \rm \it plur\\ \rm \it masc $\vee$ \rm \it fem \] \end{avm} & None \\ (1.b) & {\em walks} & English & \avmoptions{center}\begin{avm} \[ \rm \it noun \\ \rm \it plur \] \end{avm} $\vee$ \avmoptions{center}\begin{avm} \[ \rm \it verb \\ \rm \it sing \\ \rm \it 3rd \] \end{avm} & Category and number \\ (1.c) & {\em mobile} & French & \avmoptions{center}\begin{avm} \[\rm \it noun \\ \rm \it masc \\ \rm \it sing \] \end{avm} $\vee$ \avmoptions{center}\begin{avm} \[\rm \it adj \\ \rm \it masc $\vee$ fem \\ \rm \it sing \] \end{avm} & Category and number \\ (1.d) & {\em die} & German & \begin{avm} \[\rm \it det \\ \rm \it nom $\vee$ \rm \it acc \\ \rm \it plu $\vee$ \avmoptions{center}\begin{avm} \[\rm \it fem \\ \rm \it sing \]\end{avm}\] \end{avm} & Gender and number \\ (1.e) & {\em den} & German & \begin{avm} \[ \rm \it det \\ \avmoptions{center} \[\rm \it acc \\ \rm \it masc \\ \rm \it sing \] $\vee$ \avmoptions{center} \[\rm \it dat \\ \rm \it plu \] \] \end{avm} & Case, gender, number \\ (1.f) & suffix {\em \_st} & German & \avmoptions{center}\begin{avm} \[\rm \it 3rd \\ \rm \it sing \] \end{avm} $\vee$ \avmoptions{center}\begin{avm} \[\rm \it 2nd \\ \rm \it plu \] \end{avm} & Person and number \\ \hline \end{tabular}\caption{Examples of interconnection between feature values} \end{center} \label{tableau-ambig} \end{figure} \setcounter{equation}{1} The second problem concerns redundancy: when an ambiguity affects major feature values such as category, the result is a set of structures as for (1.b): the ambiguity between the verb and the noun involves in this example two completely different linguistic structures. But there are also ambiguities less interconnected with other parts of the structure (e.g. (1.d)). In these cases, a common subpart of the structure can be factorised. This is particularly useful for the representation and the implementation of some descriptive tools such as lexical rules (see \cite{Bredekamp96}). An efficient ambiguity representation must take into account the interconnection between the different subparts of the structure and allow a factorisation avoiding redundancy. \subsection{Different Representations} The disjunctive representation of ambiguity remains the most natural (see \cite{Karttunen84}, \cite{Kay85} or \cite{Kasper90}). However, this approach has several drawbacks. First of all, if an ambiguity affects more than one atomic value, then a classical disjunction can only represent variation between complete structures. In other words, such a representation doesn't allow the description of the relations existing between the features. In the same way, several approaches (see \cite{Maxwell91} \cite{Nakazawa88}, \cite{Ramsay90}, \cite{Dawar90} or \cite{Johnson90}) propose to rewrite disjunctions as conjunctions (and negations). This method, in spite of the fact that it can allow efficient implementations of some ambiguities, presents the same drawback. A partial solution, concerning in particular the redundancy problem, can be proposed with the use of {\em named disjunctions} (noted hereafter ND; also called {\em distributed disjunctions}). This approach has been described by \cite{Dorre90} and used in several works (see \cite{Krieger93}, \cite{Gerdemann95} or \cite{Blache96}). 
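Purely as an illustration (this toy encoding is not part of the formalism of \cite{Dorre90}), the covariancy mechanism described next can be mimicked by a small data structure in which every disjunction bearing the same name must be resolved at the same rank; the entry sketched here anticipates the determiner {\em den} of example (\ref{ex1}) below.
\begin{verbatim}
# Illustrative sketch only: a toy encoding of named (distributed) disjunctions.
# A feature value is either an atom or a tuple ('ND', name, alternatives);
# covariancy means every disjunction with the same name is resolved at the
# same rank.

def resolve(entry, choices):
    """Instantiate a feature structure given one rank per disjunction name."""
    out = {}
    for feat, val in entry.items():
        if isinstance(val, dict):
            out[feat] = resolve(val, choices)
        elif isinstance(val, tuple) and val[0] == 'ND':
            _, name, alts = val
            out[feat] = alts[choices[name]]      # same rank everywhere
        else:
            out[feat] = val
    return out

den = {  # cf. example (ex1): a single named disjunction, written here as 1
    'case':   ('ND', 1, ['acc', 'dat']),
    'gender': ('ND', 1, ['masc', None]),         # None = anonymous variable
    'number': ('ND', 1, ['sing', 'plu']),
}

print(resolve(den, {1: 0}))  # {'case': 'acc', 'gender': 'masc', 'number': 'sing'}
print(resolve(den, {1: 1}))  # {'case': 'dat', 'gender': None, 'number': 'plu'}
\end{verbatim}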
The disjunction here only concerns the variable part of a structure (this allows the information factorisation). A ND binds several disjunctive formulae with an index (the name of the disjunction). These formulae are ordered and have the same arity. The variation is controlled by a {\em covariancy} mechanism enforcing the simultaneous variation of the ND values: when one disjunct in a ND is chosen (i.e. interpreted to true), all the disjuncts occurring at the same rank into the other ND formulae also have to be true. Several NDs can occur in the same structure. Figure (\ref{ex1}) presents the lexical entry corresponding to example (1.d). In the following, the shaded indices represent the names of the disjunctions. \begin{equation} \begin{minipage}{7cm} \begin{avm} \rm\it den = \[spec \[ case \{ \rm\it acc $\vee_{\colorbox{light}{1}}$ dat \} \\ index \[ gen \{ \rm\it masc $\vee_{\colorbox{light}{1}}$ \_ \} \\ num \{\rm\it sing $\vee_{\colorbox{light}{1}}$ plu \} \] \]\] \end{avm} \end{minipage} \label{ex1} \end{equation} This example shows a particular case of subvariation. Indeed, the {\em plural dative} determiner stipulates no constraints on the gender. We note this subvariation with an {\em anonymous} variable. However, the ND technique also presents some drawbacks. In particular, covariancy, which allows a local representation of ambiguity, is the only way to represent relations between features. Such structures need redundant information as shown in example (\ref{suffix}) (cf. \cite{Krieger93}). Moreover, the semantic of the disjunction is lost: this is no longer a representation of a variation between values, but between different set of values. \begin{equation} \begin{minipage}{10cm} \begin{avm} \[ morph \[ stemm \\ ending \{ \rm \it e $\vee_{\colorbox{light}{1}}$ \rm \it st $\vee_{\colorbox{light}{1}}$ \rm \it t $\vee_{\colorbox{light}{1}}$ \rm \it en \} \] \\ synsem ... index \[ per \{ 1 $\vee_{\colorbox{light}{1}}$ 2 $\vee_{\colorbox{light}{1}}$ \{3 $\vee_{\colorbox{light}{2}}$ 2\} $\vee_{\colorbox{light}{1}}$ \{1 $\vee_{\colorbox{light}{1}}$ 3 \} \} \\ num \{ \rm \it sing $\vee_{\colorbox{light}{1}}$ \rm \it sing $\vee_{\colorbox{light}{1}}$ \{\rm \it sing $\vee_{\colorbox{light}{2}}$ \rm \it plu\} $\vee_{\colorbox{light}{1}}$ \rm \it plu \} \] \] \end{avm} \end{minipage} \label{suffix} \end{equation} In the same perspective, covariancy forces a symmetrical relation between two feature values. In fact, there are relations expressing finer selection relations. This is the case in (1.c) in which no covariancy between
# This source code estimates, from the camera image, the drone's own position,
# the marker positions, and any barcode that can be read.
# (Positions are expressed as a distance in cm, a view-direction angle in
# degrees and an offset in cm. Of course, these estimates are not accurate,
# but the aircraft repeatedly estimates, moves and corrects itself, so it
# usually works well.)
# The action mode is switched between several modes and submodes depending on
# the detected marker information.
# If the aircraft is close to a marker, it tries to read a barcode; on success
# it rings a beep, turns 90 degrees twice and moves on to the next marker.
# The above is repeated (if all goes well).
# These algorithms leave room for improvement.

import os
import time
import datetime
import math

import numpy as np
import cv2
from PIL import Image, ImageFont, ImageDraw
from pyzbar.pyzbar import decode

from drone_ar_assignment import Drone_AR_assignment
from beep import Beep

MODE_SEARCH_UD = 'SEARCH UP DOWN'
MODE_TO_DIR = 'TO DIRECTION'
MODE_TO_FRONT = 'TO FRONT'
MODE_TO_ALTERNATE = 'TO ALTERNATE'
MODE_MANUAL = 'MANUAL'

SUB_MODE_UP = 'UP'
SUB_MODE_DOWN = 'DOWN'
SUB_MODE_BACK = 'BACK'
SUB_MODE_ALT = 'ALT'

ALTITUDE_MAX = 160
ALTITUDE_MIN = 50
MOVE_MIN = 20
MOVE_MIN_F = 30
MOVE_MIN_B = 20
MOVE_MAX = 80
BARREAD_DISTANCE = 60
MIN_DEGREE = 3

FRAME_W = 960
FRAME_H = 720
DETECT_CYCLE_TIME_MS = 200

# Camera/marker calibration parameters (set A).
PARAM_A_1 = 185.0
PARAM_A_2 = 5.1
PARAM_A_3 = 135
PARAM_1 = PARAM_A_1
PARAM_2 = PARAM_A_2
PARAM_3 = PARAM_A_3

TTFFONT = '/usr/share/fonts/truetype/freefont/FreeMono.ttf'
TTFFONTBOLD = '/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf'

COLOR_BLACK = (0, 0, 0)
COLOR_RED = (255, 0, 0)
COLOR_GREEN = (0, 255, 0)
COLOR_YELLOW = (255, 255, 0)
COLOR_WHITE = (255, 255, 255)


class Drone_AR_Flight:
    # Note: _marker_reset(), alt_ar_to_img(), draw_bold_text(),
    # _get_2point_degree() and log_ar_code() are defined elsewhere in the
    # original source (this excerpt is truncated).

    def __init__(self):
        self.ar_as = Drone_AR_assignment()
        self.mode = MODE_SEARCH_UD
        self.sub_mode = SUB_MODE_UP
        self.now_height_cm = 0
        self.next_cmd = None
        self.next_cmd_val = 0
        self.frame_no = 0
        self.fps = 0
        self.fps_period = 10
        self.frame_p_m = int(round(time.time() * 1000))
        self.frame = None
        self.gray_frame = None
        self.detects = 0
        self.detect_t = 0
        self.ar_dict_name = cv2.aruco.DICT_6X6_250
        self.ar_dict = cv2.aruco.getPredefinedDictionary(self.ar_dict_name)
        self._marker_reset()
        self.code_latest = ''
        self.code_latest_rect = (0, 0, 0, 0)
        self.code_latest_view = 0
        self.beep = Beep()
        self.font = ImageFont.truetype(TTFFONT, 32)
        self.fontbold = ImageFont.truetype(TTFFONTBOLD, 32)

    def renew_frame(self, frame, frame_no, now_height, ar_cmd, ar_val):
        self.now_height_cm = now_height
        if hasattr(frame, 'shape') and len(frame.shape) >= 2 and frame.shape[1] == FRAME_W:
            # Update the FPS estimate every fps_period frames.
            if (frame_no % self.fps_period) == 0:
                frame_p_m_t = int(round(time.time() * 1000))
                diff_t = frame_p_m_t - self.frame_p_m
                self.fps = round(1000 * self.fps_period / diff_t)
                self.frame_p_m = frame_p_m_t
            self.frame = frame
            # Run marker and barcode detection at most every DETECT_CYCLE_TIME_MS.
            now_t = int(round(time.time() * 1000))
            dt = now_t - self.detect_t
            if dt >= DETECT_CYCLE_TIME_MS:
                self.gray_frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2GRAY)
                self._detect()
                self._detect2()
                self.detect_t = int(round(time.time() * 1000))
            self._draw(ar_cmd, ar_val)
            self.frame_no = frame_no
        return

    def _draw(self, ar_cmd, ar_val):
        # Overlay the last decoded barcode for a few frames.
        if self.code_latest_view > 0:
            self.code_latest_view -= 1
            cv2.rectangle(self.frame, self.code_latest_rect, (0, 196, 0), 1)
            puttxt4 = self.code_latest
            cv2.putText(self.frame, puttxt4,
                        (self.code_latest_rect[0],
                         self.code_latest_rect[1] + self.code_latest_rect[3] // 2),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 3)
        # Outline every marker seen so far; currently active markers are thicker.
        for i in range(4):
            if self.marker_id[i] == True:
                points = self.marker_pointss[i]
                if self.marker_enable[i] == True:
                    self.alt_ar_to_img(self.frame, points, i)
                    cv2.polylines(self.frame, [points], True, (255, 255, 0), 2)
                else:
                    cv2.polylines(self.frame, [points], True, (0, 255, 0), 1)
        return

    def draw_txt(self, fromarray, ar_cmd, ar_val):
        drawtxt = ImageDraw.Draw(fromarray)
        puttxt = ('%3d' % self.fps) + '.fps ' + str(self.detects) + '.detects'
        self.draw_bold_text(drawtxt, puttxt, 10, 10, COLOR_BLACK, COLOR_GREEN)
        if ar_cmd == MODE_MANUAL:
            puttxt = ar_cmd
            color = COLOR_BLACK
        else:
            puttxt = 'AUTO : ' + self.mode + ' ' + self.sub_mode + ' ' + ar_cmd + ' ' + str(ar_val)
            color = COLOR_RED
        self.draw_bold_text(drawtxt, puttxt, 10, 560, color, COLOR_WHITE)
        puttxt = 'dist '
        for i in range(4):
            puttxt += ('%3d' % self.marker_distances[i]) + ' '
        puttxt += 'cm'
        self.draw_bold_text(drawtxt, puttxt, 10, 590, COLOR_BLACK, COLOR_GREEN)
        puttxt = ' gap '
        for i in range(4):
            puttxt += ('%3d' % self.marker_diff_cm[i][0]) + ',' + ('%3d' % self.marker_diff_cm[i][1]) + ' '
        puttxt += 'cm'
        self.draw_bold_text(drawtxt, puttxt, 10, 620, COLOR_BLACK, COLOR_GREEN)
        puttxt = ' dir '
        for i in range(4):
            puttxt += ('%3d' % self.marker_degree[i]) + ' '
        puttxt += 'degree'
        self.draw_bold_text(drawtxt, puttxt, 10, 650, COLOR_BLACK, COLOR_GREEN)
        puttxt = 'Z-tilt '
        for i in range(4):
            puttxt += ('%3d' % self.marker_ztilt[i]) + ' '
        puttxt += 'degree'
        self.draw_bold_text(drawtxt, puttxt, 10, 680, COLOR_BLACK, COLOR_GREEN)
        return

    def _detect(self):
        corners, ids, rejects = cv2.aruco.detectMarkers(self.gray_frame, self.ar_dict)
        self.detects = len(corners)
        if self.detects == 0:
            # Nothing detected: disable markers not seen during the last second.
            for i in range(4):
                dt = int(round(time.time() * 1000)) - self.marker_time[i]
                self.marker_enable[i] = (dt <= 1000)
            return
        detect = [False, False, False, False]
        for i, corner in enumerate(corners):
            points = corner[0].astype(np.int32)
            p0, p1, p2, p3 = points[0], points[1], points[2], points[3]
            id = int(ids[i][0])
            if id < 4 and len(points) == 4:
                # Enable the marker flag and remember its corner points.
                detect[id] = True
                self.marker_id[id] = True
                self.marker_pointss[id] = points
                # Distance to the marker from its apparent size
                # (apparent size = sum of the four outer edge lengths).
                normsum = float(cv2.norm(p0, p1) + cv2.norm(p1, p2)
                                + cv2.norm(p2, p3) + cv2.norm(p3, p0))
                dist = 100.0 / (normsum / PARAM_1)
                self.marker_distances[id] = int(dist)
                # Centre of the marker corner points.
                cx = 0
                cy = 0
                for ii in range(4):
                    cx += self.marker_pointss[id][ii][0]
                    cy += self.marker_pointss[id][ii][1]
                cx /= 4
                cy /= 4
                # Offset from the screen centre to the marker centre.
                dcx = cx - (FRAME_W / 2)
                dcy = cy - (FRAME_H / 2)
                # Centimetres per pixel at the marker.
                cmpdot = (PARAM_2 * 4) / normsum
                # Offset [cm] to the marker in space.
                dcm_x_f = cmpdot * float(dcx)
                dcm_x = int(round(cmpdot * dcx))
                dcm_y = int(round(cmpdot * dcy))
                dcm_y += int(round(cmpdot * PARAM_3))
                self.marker_diff_cm[id][0] = dcm_x
                self.marker_diff_cm[id][1] = dcm_y
                # Bearing to the marker: arctangent of (lateral offset / distance).
                if dist != 0 and dcm_x != 0:
                    deg = 0
                    if dcm_x > 0:
                        deg = int(round(math.degrees(math.atan(dcm_x_f / dist * -1))))
                        deg *= -1
                    else:
                        dcm_x_f = abs(dcm_x_f)
                        deg = int(round(math.degrees(math.atan(dcm_x_f / dist * -1))))
                    self.marker_degree[id] = deg
                # Z-tilt of the marker, estimated from the distortion of its square.
                x0, y0 = self.marker_pointss[id][0][0], self.marker_pointss[id][0][1]
                x1, y1 = self.marker_pointss[id][1][0], self.marker_pointss[id][1][1]
                x2, y2 = self.marker_pointss[id][2][0], self.marker_pointss[id][2][1]
                x3, y3 = self.marker_pointss[id][3][0], self.marker_pointss[id][3][1]
                deg1 = self._get_2point_degree(x0, y0, x1, y1)
                deg2 = self._get_2point_degree(x3, y3, x2, y2)
                deg = deg1 + deg2
                self.marker_ztilt[id] = deg
        # Record the detection time of each marker and refresh the enable flags.
        for i in range(4):
            if detect[i] == True:
                self.marker_time[i] = int(round(time.time() * 1000))
        for i in range(4):
            dt = int(round(time.time() * 1000)) - self.marker_time[i]
            self.marker_enable[i] = (dt <= 1000)
        return

    def _detect2(self):
        # If the chosen marker is close enough, try to read its barcode once.
        if self.marker_enable[self.choise_marker] == True:
            if self.marker_distances[self.choise_marker] < BARREAD_DISTANCE:
                if self.code_flag == False:
                    # self._try_read_barcode()
                    self.log_ar_code(self.choise_marker)
        return

    def _try_read_barcode(self):
        decoded = decode(self.gray_frame)
        if len(decoded) > 0:
            rcode = str(decoded[0].type) + ':' + str(decoded[0].data)
            rrect = decoded[0].rect
            self.code_latest = rcode
            self.code_latest_rect = rrect
            self.beep.on()
            self.code_latest_view = 60
            self.code_flag = True
            print('found code:', self.code_latest)

    def get_latest_barcode(self):
        return self.code_latest

    def _marker_sel(self):
        # Choose the first marker that has ever been seen.
        for i in range(4):
            if self.marker_id[i] == True:
                return i
        return 0

    def get_command(self):
        # A queued command takes priority over the state machine.
        if self.next_cmd != None:
            cmd = self.next_cmd
            val = self.next_cmd_val
            self.next_cmd = None
            self.next_cmd_val = 0
            return cmd, val
        cmd = 'stay'
        val = 0
        if self.mode == MODE_SEARCH_UD:
            self.choise_marker = self._marker_sel()
            if self.marker_enable[self.choise_marker] == True:
                self.mode = MODE_TO_DIR
            else:
                if self.now_height_cm > ALTITUDE_MAX:
                    self.sub_mode = SUB_MODE_DOWN
                if self.now_height_cm < ALTITUDE_MIN:
                    self.sub_mode = SUB_MODE_UP
                if self.sub_mode == SUB_MODE_UP:
                    cmd = 'up'
                    val = MOVE_MIN
                    # self.next_cmd = 'rotateRight'
                    # self.next_cmd_val = 30
                elif self.sub_mode == SUB_MODE_DOWN:
                    cmd = 'down'
                    val = MOVE_MIN
                    # self.next_cmd = 'rotateRight'
                    # self.next_cmd_val = 30
        elif self.mode == MODE_TO_DIR:
            deg = self.marker_degree[self.choise_marker]
            if abs(deg) > MIN_DEGREE:
                if deg > 0:
                    cmd = 'rotateRight'
                    val = deg
                else:
                    cmd = 'rotateLeft'
                    val = deg * (-1)
            else:
                self.mode = MODE_TO_FRONT
        elif self.mode == MODE_TO_FRONT:
            dcm_x = self.marker_diff_cm[self.choise_marker][0]
            dcm_y = self.marker_diff_cm[self.choise_marker][1]
            distcm = self.marker_distances[self.choise_marker]
            if self.marker_enable[self.choise_marker] == False:
                cmd = 'back'
                val = MOVE_MIN_B
            elif dcm_x > MOVE_MIN:
                cmd = 'right'
                val = dcm_x
            elif dcm_x < -1 * (MOVE_MIN):
                cmd = 'left'
                val = dcm_x * (-1)  # assumed by symmetry with the 'right' branch; the source is truncated here
experimental intrinsic Q-factor (without doping), and the total Q-factor (with doping) of LD cavities of different lengths. (c) A tilted SEM image of the fabricated PhC holes. (d) Passive (left) and active (right) PhC cavities. (e) Simulated Q-factor for an optimized L7 cavity as a function of the p-doping offset from the center of the cavity. The blue (red) shaded region shows the expected p-doping offset for a passive (active) cavity. The dashed lines are a guide to the eye.}
\label{fig:Q-factor}
\end{figure*}

Figure \ref{fig:Q-factor}c depicts a tilted SEM image of the PhC holes, highlighting the surface roughness of the etched holes. SEM images of a passive and an active cavity are shown in the left and right parts of Figure \ref{fig:Q-factor}d, respectively. The Zn- and Si-dopants provide enough contrast to visualize the doping profiles. The n-doping profile closely follows the DUV-mask design, as does the p-doping profile of the passive InP sample. However, the p-doping profile of the active sample is extended and exhibits random wavy patterns attributed to the lower quality of the InP regrown after dry etching, which affects the diffusion of the p-dopants. The extended p-region has a lower density of dopants, which is evident from the contrast in the SEM image. In Figure \ref{fig:Q-factor}e, 3D FDTD simulations show the calculated $Q_{abs}$ for different p-doping offsets from the center of the cavity and different p-doping absorption coefficients. As the p-doping profile gets closer to the cavity center, the Q-factor drops, resembling a tri-exponential decay that reflects the overlap of the mode profile with the p-doping region. The absorption coefficient of the p-doping region is calculated as 120 cm$^{-1}$ using the p-doping offset extracted from the SEM image of the passive InP sample shown in Figure \ref{fig:Q-factor}d. Consequently, we can extrapolate the $Q_{abs}$ of an active PhC cavity using the SEM image of the active sample and the p-doping absorption coefficient. The shaded red and blue regions of Figure \ref{fig:Q-factor}e show the expected range of the p-doping offset for the active and passive samples, respectively. As a result, the $Q_{abs}$ of an active laser cavity is estimated as 8000. Using the conventional semiconductor laser notation \cite{Coldren1995DiodeLA}, this corresponds to an average internal loss $\langle a_i \rangle$ of 17.1 cm$^{-1}$ (details on the simulation model are provided in Note S3, Supporting Information). Overall, the Q-factor of a laser cavity is limited by the absorption of the p-doping region. We should note that although the Q-factor increases exponentially with the offset of the p-doping profile, the injection efficiency of the laser would drop massively due to the low mobility of the holes, as discussed in Section \ref{section:opticalPumping}. According to our measurements on LD lasers, the total Q-factor of a cavity should exceed 4000 to achieve lasing.

\subsection{Thermal Characteristics}
Another important characteristic of lasers intended for inter- and intra-chip communication is their behavior at high temperatures and, in particular, the temperature dependence of the threshold current, which ultimately affects the power consumption, the output power, and the service life of the laser. In the following experiment, the thermal properties of lasers based on one and three QWs are investigated by adjusting the stage temperature. The temperature is varied from 20 \textdegree{}C to 79 \textdegree{}C via a thermoelectric cooler (TEC).
The L-I curves of a standard 3QW-L7 laser for four different heat sink temperatures are shown in \textbf{Figure \ref{fig:thermal}}a. Lasing is achieved at up to 79 \textdegree{}C, which is the upper limit of the TEC used.
\begin{figure*}
\centering\includegraphics[width=0.85\textwidth]{Fig5_thermal.pdf}
\caption{Thermal characteristics of PhC lasers with one and three QWs. (a) L-I curves of a 3QW-based L7 laser for different heat sink temperatures. (b) The peak wavelength as a function of temperature. (c) Temperature increase of the active region for various pumping levels. (d) Threshold current dependence on heat sink temperature.}
\label{fig:thermal}
\end{figure*}
The laser wavelength depends linearly on the heat sink temperature, redshifting by 0.1 nm/K for both one and three QWs, as shown in Figure \ref{fig:thermal}b. This linear increase is due to the increased refractive index \cite{Geng2018_thermal_effect_in_refractive_index}. Subsequently, one can calculate the temperature increase in the active region of the laser under varying pumping levels. The active-region temperature increase is depicted in Figure \ref{fig:thermal}c; it was approximately 8.5 \textdegree{}C at 150 \textmu A, with an average slope of 84 K/mA for both lasers. The relatively high thermal conductivity is one of the advantages of the BH technology, since the poorly thermally conductive InGaAsP active region (4.2 W m$^{-1}$K$^{-1}$) is embedded in the much more thermally conductive InP membrane (68 W m$^{-1}$K$^{-1}$) \cite{Matsuo2010_optical_BH}. The heat generated under varying pumping is dominated not by ohmic dissipation but by the absorption of the increased optical power density circulating in the cavity. This was confirmed by the comparison with the lasing wavelength under optical pumping, discussed in Section \ref{section:opticalPumping}. Moreover, less powerful lasers operate at lower effective temperatures. One example is the L3 laser shown in Figure \ref{fig.L3}, which exhibited a much smaller active-region temperature increase, with a slope of 43 K/mA. As the temperature of the heat sink rises, the injection efficiency and the gain drop \cite{Coldren1995DiodeLA, Piprek2000_thermal_properties_Gain_Auger_Leakage}, while the Auger-recombination-induced losses increase \cite{Fuchs1993_Auger_QWs}. This leads to an increase in the laser threshold current. The sensitivity of the threshold current to temperature can be quantified using a characteristic temperature $T_0$ via the commonly used empirical relation \cite{Pankove1968}:
\begin{equation}
I_{th}(T)=I_0 \exp\left( \frac{T}{T_0} \right)
\label{eq:exponential_threshold}
\end{equation}
The evolution of the laser threshold current with heat sink temperature is shown in Figure \ref{fig:thermal}d for the 1QW- and 3QW-based lasers. A good fit with Equation \ref{eq:exponential_threshold} was found, giving a characteristic temperature $T_0$ of 35 K for both 1QW and 3QW lasers. This value is at the low end for InGaAsP-based QW lasers \cite{Coldren1995DiodeLA,OGorman1992}, which is attributed to the poor heat dissipation of the free-floating PhC membrane. The thermal properties of these devices can be greatly improved if the PhC slab is surrounded by a low-index material such as SiO$_2$ or a polymer acting as a heatsink \cite{Bazin2014_ranieriThermal}.
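As a quick worked illustration of what this fit implies (using only the numbers quoted above), the empirical relation of Equation \ref{eq:exponential_threshold} predicts a threshold increase between the lowest and highest stage temperatures of
\begin{equation*}
\frac{I_{th}(79\,^{\circ}\mathrm{C})}{I_{th}(20\,^{\circ}\mathrm{C})} = \exp\!\left(\frac{79-20}{35}\right) \approx 5.4,
\end{equation*}
which illustrates why a larger $T_0$, i.e., better heat sinking, is desirable for uncooled operation.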
\subsection{Injection Efficiency - Comparison between Optical and Electrical Pumping}
\label{section:opticalPumping}
One of the main effects limiting the efficiency of the laterally doped 2D PhC nanolasers is the low injection efficiency, estimated to be on the order of 1-10\% from data of previously demonstrated lasers \cite{Takeda2021, Matsuo2013}. The lateral doping geometry offers the possibility of both electrical and optical pumping, and thus a comparison between the two pumping schemes was performed to understand the factors limiting the efficiency. In \textbf{Figure \ref{fig:optPumping}}(a), the L-I-V curves and the L-L curve for the electrical and optical pumping of a 1QW-L5 laser are shown. In the optical pumping scheme, a 1310 nm pump laser was coupled to a single-mode fiber, while the same 50x objective was used for pumping and collecting. The signal was separated from the pump using a wavelength division multiplexer and was sent to an OSA to record the optical spectrum and the output power. The optical pump power was normalized to match the laser threshold in both pumping schemes. We observe, however, that the output power is much lower after the threshold in the case of electrical injection. This effect is attributed to a drop in the injection efficiency as the applied voltage and current increase, and it is not related to heating, since the spectral evolution of the laser peak is very similar for both pumping schemes. The wavelength evolution is shown in Figure \ref{fig:optPumping}b, demonstrating that the heating after the threshold is mainly due to the optical field circulating in the cavity and not to ohmic heating.
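Threshold current and slope efficiency can be estimated from such L-I curves by a linear fit of the above-threshold region; the following minimal sketch illustrates the procedure on synthetic data (the numbers are placeholders, not our measurements).
\begin{verbatim}
# Minimal sketch, synthetic data only: estimate threshold current and slope
# efficiency by fitting the above-threshold part of an L-I curve.
import numpy as np

I = np.linspace(0, 300, 61)                        # drive current (uA), assumed grid
L = np.where(I > 120, 0.4 * (I - 120), 0.0)        # synthetic output power (uW)
L = L + np.random.default_rng(0).normal(0, 0.5, I.size)  # measurement noise

above = I > 180                                    # clearly above threshold (assumed)
slope, intercept = np.polyfit(I[above], L[above], 1)
I_th = -intercept / slope                          # x-intercept of the linear fit

print("slope efficiency ~ %.3f uW/uA, threshold ~ %.0f uA" % (slope, I_th))
\end{verbatim}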
\section{Introduction}
\IEEEPARstart{F}{or} a long time, steganography and steganalysis have developed in constant struggle with each other. Steganography seeks to hide as much secret information as possible in a given cover while changing the cover as little as possible, so that the stego object remains close to the cover in terms of visual quality and statistical characteristics [1,2,3]. Meanwhile, steganalysis uses signal processing and machine learning theory to analyze the statistical differences between stego and cover. It improves detection accuracy by increasing the number of features and enhancing the classifier performance [4].
\par Currently, the existing steganalysis methods include specific steganalysis algorithms and universal steganalysis algorithms. Early steganalysis methods aimed at the detection of specific steganography algorithms [5], whereas general-purpose steganalysis algorithms usually rely on statistical features and machine learning [6]. The commonly used statistical features include the binary similarity measure feature [7], DCT [8,9] and wavelet coefficient features [10], co-occurrence matrix features [11] and so on. In recent years, higher-order statistical features based on the correlation between neighboring pixels have become the mainstream in steganalysis. These features improve the detection performance by capturing complex statistical characteristics associated with image steganography, such as SPAM [12], Rich Models [13], and their several variants [14,15]. However, those advanced methods are based on rich models that include tens of thousands of features. Dealing with such high-dimensional features inevitably leads to increased training time, overfitting and other issues. Besides, the ability of a feature-based steganalyzer to detect the subtle changes introduced by steganography largely depends on the feature construction, which requires a great deal of human intervention and expertise.
\par Benefiting from the development of deep learning, convolutional neural networks (CNNs) perform well in various steganalysis detectors [16,17,18,19]. A CNN can automatically extract complex statistical dependencies from images and improve the detection accuracy. Considering the GPU memory limitation, existing steganalyzers are typically trained on relatively small images (usually $256\times 256$). But real-world images are of arbitrary size. This raises the problem of how an arbitrarily sized image can be steganalyzed by a CNN-based detector with a fixed-size input. In traditional computer vision tasks, the input image is usually resized directly to the required size. However, this is not good practice for steganalysis, because the embedding signal is a very weak, nearly independent pixel-level perturbation; resizing before classification would compromise the detector accuracy.
\par In this paper, we propose a new CNN structure named ``Zhu-Net'' to improve the accuracy of spatial-domain steganalysis. The proposed CNN performs well in both detection accuracy and compatibility, and shows some distinctive characteristics compared with other CNNs, which are summarized as follows:
\par (1) In the preprocessing layer, we modify the size of the convolution kernels and use the 30 basic filters of SRM [13] to initialize them, which reduces the number of parameters and better captures local features. The kernels are then optimized during training to achieve better accuracy and to accelerate the convergence of the network.
\par (2) We use two separable convolution blocks to replace the traditional convolution layer. Separable convolution can be used to extract the spatial and channel correlations of the residuals, increase the signal-to-noise ratio, and noticeably improve the accuracy.
\par (3) We use spatial pyramid pooling [20] to deal with arbitrarily sized images in the proposed network. Spatial pyramid pooling can map feature maps to a fixed length and extract features through multi-level pooling.
\par We design experiments to compare the proposed CNN with Xu-Net [17], Ye-Net [19], and Yedroudj-Net [21]. The proposed CNN shows excellent detection accuracy, which even exceeds that of the most advanced hand-crafted feature sets, such as SRM [13].
\par The rest of the paper is organized as follows. In Section II, we present a brief review of popular image steganalysis methods based on convolutional neural networks (CNNs) in the spatial domain. The proposed CNN is described in Section III, which is followed by experimental results and analysis in Section IV. Finally, the concluding remarks are drawn in Section V.
\section{Related Works}
\par The usual ways to improve CNN structures for steganalysis include using truncated linear units, modifying the topology by mimicking the Rich Models extraction process, and using deeper networks such as ResNet [22], DenseNet [23], and others.
\par Tan et al. [24] used a CNN with four convolution layers for image steganalysis. Their experiments showed that a CNN with randomly initialized weights usually cannot converge, and that initializing the first layer's weights with the KV kernel can improve accuracy. Qian et al. [25] proposed a steganalysis model using a standard CNN architecture with a Gaussian activation function, and further showed that transfer learning is beneficial for a CNN model detecting a steganography algorithm at low payloads. The performance of these schemes is comparable to or better than the SPAM scheme [12], but still worse than the SRM scheme [13]. Xu et al. [17] proposed a CNN structure with some techniques used for image classification, such as batch normalization (BN) [26], 1×1 convolution, and global average pooling. They also performed pre-processing with a high-pass filter and used an absolute-value (ABS) activation layer. Their experiments showed better performance. By improving the Xu-CNN, they achieved a more stable performance [27]. In the JPEG domain, Xu et al. [18] proposed a network based on decompressed images and achieved better detection accuracy than traditional methods. By simulating the traditional steganalysis scheme of hand-crafted features, Fridrich et al. [28] proposed a CNN structure with histogram layers, which are formed by a set of Gaussian activation functions. Ye et al. [19] proposed a CNN structure with a group of high-pass filters for pre-processing and adopted a set of hybrid activation functions to better capture the embedding signals. With the help of selection-channel knowledge and data augmentation, their model obtained significant performance improvements over the classical SRM. Fridrich [29] proposed a different network architecture to deal with steganalyzed images of arbitrary size by means of manual feature extraction; their scheme feeds statistics of the feature maps to a fully-connected-network classifier.
\par Generally, there are two disadvantages of the existing networks.
\par (1) A CNN is composed of two parts: the convolution layers and the fully connected layers (ignoring the pooling layers, etc.).
The function of the convolution layers is to convolve the input and output the corresponding feature maps. A convolution layer does not need a fixed-size input image, and its output feature maps can be of any size. The fully connected layer, however, requires a fixed-size input. Hence, it is the fully connected layer that imposes the fixed-size constraint on the network. The two existing solutions are as follows.
\begin{itemize}
\item Resizing the input image directly to the desired size. However, the relationships between image pixels are fragile and nearly independent in the steganalysis task. Detecting the presence of steganographic embedding changes really means detecting a very weak noise signal added to the cover image. Therefore, directly resizing the image before feeding it into the CNN greatly degrades the detection performance of the network.
\item Using a fully convolutional network (FCN), because convolutional layers do not require a fixed image size.
\end{itemize}
\par In this paper, we propose a third solution: mapping the feature maps to a fixed size before sending them to the fully connected layer, as in SPP-Net [20]. The proposed network maps feature maps to a fixed length by using an SPP module, so that images of arbitrary size can be steganalyzed (see the sketch below).
\par (2) The accuracy of CNN-based steganalysis relies heavily on the signal-to-noise ratio of the feature maps. A CNN favors a high signal-to-noise ratio when detecting the small differences between stego and cover signals. Many steganalyzers therefore extract image residuals to increase the signal-to-noise ratio. However, some existing schemes directly convolve the extracted residuals without considering their cross-channel correlations, and thus do not make good use of the residuals.
\par In this paper, we increase the signal-to-noise ratio in three ways
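As a purely illustrative, PyTorch-style sketch of such an SPP module (the pyramid levels (1, 2, 4) are an assumption for illustration and not necessarily the configuration used in the proposed network):
\begin{verbatim}
# Illustrative sketch of spatial pyramid pooling: map a feature map of
# arbitrary spatial size to a fixed-length vector before the FC layers.
import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(k) for k in levels)

    def forward(self, x):
        # x: (batch, channels, H, W) with arbitrary H and W.
        return torch.cat([p(x).flatten(start_dim=1) for p in self.pools], dim=1)

spp = SpatialPyramidPooling()
for h, w in [(256, 256), (384, 512)]:
    out = spp(torch.randn(1, 32, h, w))
    print(out.shape)   # always (1, 32 * (1 + 4 + 16)), regardless of H and W
\end{verbatim}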
# 1911 Encyclopædia Britannica/Weighing Machines

WEIGHING MACHINES. Mechanical devices for determining weights or comparing the masses of bodies may be classified as (a) equal-armed balances, (b) unequal-armed balances, (c) spring balances and (d) automatic machines. Equal-armed balances may be divided into (1) scale-beams or balances in which the scale-pans are below the beam; (2) counter machines and balances on the same principle, in which the scale-pans are above the beam. Unequal-armed balances may be divided into (1) balances consisting of a single steelyard; (2) balances formed by combinations of unequal-armed levers and steelyards, such as platform machines, weighbridges, &c.

Equal-armed Balances. Scale-beams are the most accurate balances, and the most generally used. When constructed for purposes of extreme accuracy they will turn with the one-millionth part of the load weighed, though to ensure such a result the knife-edges and their bearings must be extremely hard (either hardened steel or agate) and worked up with great care. The beam must be provided with a small ball of metal which can be screwed up and down a stem on the top of the beam for the purpose of accurately adjusting the position of the centre of gravity, and there should be a small adjustable weight on a fine screw projecting horizontally from one end of the beam for the purpose of accurately balancing the arms.

The theory of the scale-beam is stated by Weisbach in his Mechanics of Machinery and Engineering, as follows:—In fig. 1, D is the fulcrum of the balance, S the centre of gravity of the beam alone without the scales, chains or weights; A and B the points of suspension of the chains. If the length of the arms AC = BC = l, CD = a, SD = s, the angle of deviation of the balance from the horizontal = φ, the weight of the beam alone = G, the weight on one side = P, that on the other = P + Z, and lastly the weight of each scale with its appurtenances = Q, then ${\displaystyle \tan \phi ={\frac {{\text{Z}}l}{\{2({\text{P}}+{\text{Q}})+{\text{Z}}\}a+{\text{G}}s}}}$ From this it is inferred that the deviation, and therefore the sensitiveness, of the balance increases with the length of the beam, and decreases as the distances, a and s, increase; also, that a heavy balance is, ceteris paribus, less sensitive than a light one, and that the sensitiveness decreases continually the greater the weight put upon the scales. In order to increase the sensitiveness of a balance, the line AB joining the points of suspension and the centre of gravity of the balance must be brought nearer to each other. Finally, if a is made extremely small, so that practically ${\displaystyle \tan \phi ={\frac {{\text{Z}}l}{{\text{G}}s}}}$, the sensitiveness is independent of the amount weighed by the balance. Weisbach also shows that if Gy² is the moment of inertia of the beam, the time, t, of a vibration of the balance is ${\displaystyle t=\pi {\sqrt {\frac {2({\text{P}}+{\text{Q}})(l^{2}+a^{2})+{\text{G}}y^{2}}{g\{2({\text{P}}+{\text{Q}})a+{\text{G}}s\}}}}}$ This shows that the time of a vibration increases as P, Q and l increase, and as a and s diminish. Therefore with equal weights a balance vibrates more slowly the more sensitive it is, and therefore weighing by a sensitive balance is a slower process than with a less sensitive one.
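As a purely illustrative numerical example (the figures are not from the original article): take l = 100 mm, a = 0.05 mm, s = 2 mm, G = 50 grammes, P = 100 grammes, Q = 20 grammes and an excess weight Z = 1 milligramme. The numerator Zl is then 0.1 gramme-millimetre, and the denominator {2(P + Q) + Z}a + Gs is about 12 + 100 = 112 gramme-millimetres, so that tan φ ≈ 0.1/112 ≈ 0.0009 and the deviation φ is roughly 0.05°, or about 3 minutes of arc — small, but readily visible on a long pointer.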
The conditions which must be fulfilled by a scale-beam in proper adjustment are:—(1) The beam must take up a horizontal position when the weights in the two scale-pans are equal, from nothing to the full weighing capacity of the machine. (2) The beam must take up a definite position of equilibrium for a given small difference of weight in the scale-pans. The sensitiveness, i.e. the angle of deviation of the beam from the horizontal after it has come to rest, due to a given small difference of weight in the scale-pans, should be such as is suited to the purposes for which the balance is intended. Bearing in mind that with ordinary trade balances there is always a possibility of the scale-pans and chains getting interchanged, these conditions require: (a) that the beam without the scale-pans and chains must be equally balanced and horizontal; (b) that the two scale-pans with their chains must be of equal weight; (c) that the arms of the beam must be exactly equal in length, i.e. the line joining the end knife-edges must be exactly bisected by a line drawn perpendicular to it from the fulcrum knife-edge. By testing the beam with the scale-pans attached and equal weights in the pans, and noting carefully the position which it takes up; and then interchanging the scale-pans, &c., and again noting the position which the beam takes up, a correct inference can be drawn as to the causes of error; and if, after slightly altering or adjusting the knife-edges and scale-pans in the direction indicated by the experiment, the operation is repeated, any required degree of accuracy may be obtained by successive approximations. The chief reason for testing balances with weights in the scale-pans rather than with the scale-pans empty is that the balance might be unstable with the weights though stable without them. This is not an infrequent occurrence, and arises from the tendency on the part of manufacturers to make balances so extremely sensitive that they are on the verge of instability.

In fig. 2 let ABCD be the beam of a scale-beam, Z the fulcrum knife-edge, and X, Y the knife-edges on which the scales are hung. In order to ensure a high degree of sensitiveness, balances are sometimes constructed so that Z is slightly below the line joining X and Y, and is only slightly above H, the centre of gravity of the beam with the scale-pans and chains attached. The addition of weights in the scales will have the effect of raising the point H till it gets above Z, and the balance, becoming unstable, will turn till it is brought up by a stop of some kind.

Fig. 3 represents a precision balance constructed to weigh with great accuracy. The beam is of bronze in a single deep casting, cored out in the middle so as to allow the saddle at the top of the stand to pass through the beam and afford a continuous bearing for the fulcrum knife-edge. The knife-edge and its bearing are both of steel or agate, and the bearing surface is flat. The end knife-edges also are of steel or agate, and have continuous bearing on flat steel or agate surfaces at the upper part of the suspension links. To relieve the knife-edges from wear when the balance is not being used a triangular frame is provided, which is lifted and lowered by a cam action at the bottom, and moves vertically in guides fixed on the stand.
By its upward movement the tops of the screw studs near its ends are first received by the projecting studs on each side of the suspension links, and the suspension links are lifted off the end knife-edges; and next, as the sliding frame continues its upward motion, the horizontal studs at the two ends of the beam are received in the forks at the ends of the sliding frame, and by them the fulcrum of the beam is lifted off its bearing.

Fig. 3.—Precision Balance. (From Airy, "On Weighing Machines," Institution of Civil Engineers, 1892.)

To keep the beam truly in its place, which is very necessary, as all the bearings are flat, the recesses for the ends of the studs are formed so as to draw the beam without strain into its true position every time that it is thrown out of gear by the sliding frame. The end knife-edges are adjusted and tightly jammed into exact position by means of wedge pieces
is an algebra and $X\subset A$, then $\langle X\rangle$ is the two-sided ideal generated by $X$. Let $G$ be a finite group. We denote by $e$ the identity element of $G$, by $\ku G$ its group algebra and by $\ku^G$ the function algebra on $G$. The usual basis of $\ku G$ is $\{g:g\in G\}$ and $\{\delta_g:g\in G\}$ is its dual basis in $\ku^G$, i.e. $\delta_g(h)=\delta_{g,h}$ for all $g,h\in G$. If $M$ is a $\ku^G$-module and $g\in G$, the isotypic component of weight $g$ is $M[g]=\delta_g\cdot M$. We write $\supp M=\{g\in G:M[g]\neq0\}$ and $M^\times=\bigoplus_{g\neq e}M[g]$. The symmetric group in $n$ letters is denoted by $\Sn_n$ and $\sgn:\Sn_n\to\Z_2$ denotes the morphism given by the sign. Let $H$ be a Hopf algebra. Then $\Delta$, $\e$, $\mathcal{S}$ denote respectively the comultiplication, the counit and the antipode. We use Sweedler's notation for comultiplication and coaction but dropping the summation symbol. We denote by $\{H_{[i]}\}_{i\geq0}$ the coradical filtration of $H$ and by $\gr H=\oplus_{n\geq0}\gr^n H=\bigoplus_{n\geq0}H_{[n]}/H_{[n-1]}$ the associated graded Hopf algebra of $H$ with $H_{[-1]}=0$. Assume $\mathcal{S}$ is bijective and let $\ydh$ be the category of Yetter-Drinfeld modules over $H$. If $V\in\ydh$, then the dual object $V^*\in\ydh$ is defined by $\langle h\cdot f,v\rangle=\langle f,\mathcal{S}(h)\cdot v\rangle$ and $f\_{-1}\langle f\_{0},v\rangle=\mathcal{S}^{-1}(v\_{-1})\langle f,v\_{0}\rangle$ for all $v\in V$, $f\in V^*$ and $h\in H$, where $\langle\,,\,\rangle$ denotes the standard evaluation. \subsection{Galois objects} Let $H$ be a Hopf algebra with bijective antipode and $A$ be a right $H$-comodule algebra with right $H$-coinvariants $A^{\co H}=\ku$. If there exist a convolution-invertible $H$-colinear map $\gamma:H\to A$, then $A$ is called a right \emph{cleft object}. The map $\gamma$ can be chosen so that $\gamma(1)=1$, in which case it is called a {\it section}. In turn, $A$ is called a right \emph{$H$-Galois object} if the following linear map is bijective: $$ \can:A\ot A\longmapsto A\ot H,\quad a\ot b\mapsto ab\_0 \ot b\_{1}. $$ Analogously, left $H$-Galois objects are defined. Let $L$ be another Hopf algebra. An $(L,H)$-bicomodule algebra is an $(L,H)$-biGalois object if it is simultaneously a left $L$-Galois object and a right $H$-Galois object. Assume $A$ is a right $H$-Galois object. There is an associated Hopf algebra $L(H,A)$ such that $A$ is a $(L(A,H),H)$-biGalois object, see \cite[Section 3]{S}. $L(A,H)$ is a subalgebra of $A\ot A^{\op}$. Moreover, if $L$ is a Hopf algebra such that $A$ is $(L,H)$-biGalois then $L\cong L(A,H)$. More precisely, if $\delta$, $\delta_L$ stand for the coactions of $L(A,H)$ and $L$ in $A$, there is a Hopf algebra isomorphism $F:L(A,H)\to L$ such that $\delta_L=(F\ot \id)\delta$ and \begin{equation}\label{eqn:F} F\left(\sum a_i\ot b_i\right)\ot 1_{A}=\sum\lambda_L(a_i)(1\ot b_i), \quad \sum a_i\ot b_i\in L(A,H). \end{equation} Thus, one can use Galois objects to find new examples of Hopf algebras. Furthermore, $L(H,A)$ is a cocycle deformation of $H$ \cite[Theorem 3.9]{S}. \section{Nichols algebras and Racks}\label{subsec:relacion entre nichols} From now on $\cC$ denotes a category of (left, right or left-right) Yetter-Drinfeld modules over a finite-dimensional Hopf algebra $H$. Then $\cC$ is a braided monoidal category. Let $c$ be the canonical braiding of $\cC$. See {\it e.~ g.} \cite{K} for details about braided monoidal categories. \smallbreak Let $V\in\cC$. 
The tensor algebra $T(V)$ is an algebra in $\cC$. Also, $T(V)\ot T(V)$ is an algebra with multiplication $(m\ot m)\circ(\id\ot\, c\ot\id)$. Hence $T(V)$ becomes a Hopf algebra in $\cC$ extending by the universal property the following maps $$ \Delta(v)=v\ot1+1\ot v,\quad\varepsilon(v)=0\quad\mbox{and}\quad\cS(v)=-v,\quad v\in V. $$ Let $\cJ(V)$ be the largest Hopf ideal of $T(V)$ generated as an ideal by homogeneous elements of degree $\geq 2$. \begin{fed}\cite[Proposition 2.2]{AS2} The {\it Nichols algebra} of $V$ (in $\cC$) is $\B(V)=T(V)/\cJ(V)$. \end{fed} See \cite{AS2} for details about Nichols algebras. Let $n\in\N$; we denote by $\mJ^n(V)$, resp. $\B^n(V)$, the homogeneous component of degree $n$ of $\cJ(V)$, resp. of $\B(V)$. We set $\mJ_n(V)=\langle\bigoplus_{l=2}^n \mJ^l(V)\rangle$ and $\widehat{\B_n}(V)=T(V)/\mJ_n(V)$. Let $A$ be Hopf algebra such that $\gr A$ is isomorphic to $\B(V)\# H$. Then $A$ is called a {\it lifting of $\B(V)$ over $H$}. The {\it infinitesimal braiding of $A$} is $V\in\ydh$ with the braiding of $\ydh$. Recall from \cite[Proposition 2.4]{AV} that there exists a {\it lifting map} $\phi:T(V)\#H\rightarrow A$, that is an epimorphism of Hopf algebras such that \begin{align}\label{eq:properties of A and phi} \phi_{|H}=\id,&& \phi_{|V\#H}\mbox{ is injective}&&\mbox{ and }&&\phi((\ku\oplus V)\#H)=A_{[1]}. \end{align} \bigbreak We recall another characterization of $\cJ(V)$, see {\it e.~ g.} \cite{AG1, AS2}. Fix $n\in\N$. Let $\Bb_n$ be the {\it Braid group}: It is generated by $\{\sigma_i:1\leq i< n\}$ subject to the relations $ \sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ and $\sigma_i\sigma_j=\sigma_{j}\sigma_i$ for all $1\leq i, j< n$ such that $|i-j|>1$. The projection $ \Bb_n\twoheadrightarrow\Sn_n$, $\sigma_i\mapsto(i\,i+1)$, $1\leq i< n$, admits a set-theoretical section $s:\Sn_n\rightarrow\Bb_n$ defined by $s(i\,i+1)=\sigma_i$, $1\leq i< n$, and $s(\tau)=\sigma_{i_1}\cdots\sigma_{i_{\ell}}$, if $\tau=(i_1\,i_1+1)\cdots(i_{\ell}\,i_{\ell}+1)$ with $\ell$ minimum; this is the {\it Matsumoto section}. The {\it quantum symmetrizer} is: $$ \bS_n=\sum_{\tau\in\Sn_n}s(\tau)\in\ku\Bb_n. $$ The group $\Bb_n$ acts on $V^{\ot n}$ via the assignment $\sigma_i\mapsto c_{i,i+1}$, $1\leq i< n$, where $c_{i,i+1}:V^{\ot n}\longrightarrow V^{\ot n}$ is the morphism $$ \id\ot\, c\ot\id:V^{\ot i-1}\ot V^{\ot2}\ot V^{\ot n-i-1}\longrightarrow V^{\ot i-1}\ot V^{\ot2}\ot V^{\ot n-i-1}. $$ Then the homogeneous components of $\cJ(V)$ are given by $$ \cJ^k(V)=\ker\bS_k,\quad k\in\N. $$ \subsection{Correspondence between Nichols algebras in braided equivalent categories} Let $H$, $\cC$ be as above. Let $H'$ be a finite-dimensional Hopf algebra, $\cC'$ be a category of Yetter-Drinfeld modules over $H'$. Assume there is a functor $(F,\eta):\cC\rightarrow\cC'$ of braided monoidal categories, {\it i. ~e.} $F:\cC\rightarrow\cC'$ is a functor and $\eta:\ot\circ F^2\rightarrow F\circ\ot$ is a natural isomorphism such that the diagrams \begin{align}\label{eq:F y eta} \xymatrix{ F(U)\ot F(V)\ot F(W)\ar@{->}[rr]^{\eta\,\ot\, \id}\ar@{->}[d]_{\id\ot\,\eta} && F(U\ot V)\ot F(W)\ar@{->}[d]^{\eta}&\\ F(U)\ot F(V\ot W)\ar@{->}[rr]_{\eta} && F(U\ot V\ot W),& } \end{align} \begin{align}\label{eq:c y eta} \xymatrix{ F(U)\ot F(V)\ar@{->}[rr]^{c_{F(U),F(V)}}\ar@{->}[d]_{\eta} && F(V)\ot F(U)\ar@{->}[d]^{\eta}&\\ F(U\ot V)\ar@{->}[rr]_{F(c_{U,V})} && F(V\ot U), } \end{align} commute for each $U,V,W\in\cC$. Fix $V\in\cC$. 
For $m,n\in\N$, set $\eta_{m,n}=\eta_{V^{\ot m}, V^{\ot n}}$ and \begin{align*} \eta_n&=\eta_{n-1,1}(\eta_{n-2,1}\ot\id)\cdots(\eta_{2,1}\ot\id)(\eta\ot\id):F(V)^{\ot n}\longrightarrow F(V^{\ot n}). \end{align*} By abuse of notation, we still write $\eta=\eta_{1,1}=\eta_2$. By \eqref{eq:F y eta}, it holds that \begin{align}\label{eq:asociatividad de los etas} \eta_{m+n+k}=\eta_{m,n+k}\,(\id\ot\eta_{n,k})\,(\eta_m\ot\eta_n\ot\eta_{ k}), \quad m,n,k\in\N. \end{align} Note that $\Bb_n$ acts on $F(V^{\ot n})$ via $\sigma_i\mapsto F(c_{i,i+1})$. Then the commutative diagram \eqref{eq:c y eta} implies that $\eta$ is an isomorphism of $\Bb_2$-modules. Moreover, combining \eqref{eq:F y eta} and \eqref{eq:c y eta} with the fact that $\eta$ is a natural isomorphism, we obtain that $\eta_n:F(V)^{\ot n}\longrightarrow F(V^{\ot n})$ is an isomorphism of $\Bb_n$-modules in $\cC'$. As a consequence we have the next lemma. \begin{lem}\label{thm:generators of nichols algebras via functors general} Assume $(F,\eta):\cC\rightarrow\cC'$ is exact. Let $V\in\cC$ with $\dim V<\infty$. The ideals defining the Nichols algebras $\B(V)$ and $\B(F(V))$ are related by $$ \cJ^n(F(V))=\eta_n^{-1}F(\cJ^n(V))\,\mbox{ for all }n\in\N. $$ If $F$ preserves dimensions, then $\dim \B^n(V)=\dim \B^n(F(V))$ for all $n\in\N$. \end{lem} \begin{proof} Recall that $\cJ^n(F(V))$ is the kernel of $\bS_n$ acting on $F(V)^{\ot n}$, $n\in\N$. Since $F$ is exact and $\eta_n$ is an isomorphism, the theorem follows. \end{proof} We can apply the above result to the categories $\ydh$ and $\ydhdual$. In fact, by \cite[Proposition 2.2.1]{AG1} they are braided equivalent monoidal categories via the functor $(F,\eta)$ defined as follows: $F(V)=V$ as a vector space, \begin{align}\label{prop:equiv de cat gdual} \begin{split} f\cdot v&=\langle f,\cS(v\_{-1})\rangle v\_{0},\quad\delta(v)=f_i\ot\cS^{-1}(h_i)v\quad\mbox{and}\\ \noalign{\smallbreak} \eta:\, &F(V)\ot F(W)\longmapsto F(V\ot W),\quad v\ot w\mapsto w_{(-1)}v\ot w_{(0)} \end{split} \end{align} for every $V,W \in \ydh$, $f \in H^*$, $v \in V$, $w\in W$. Here $\{h_i\}$ and $\{f_i\}$ are dual bases of $H$ and $H^*$. \begin{lem}\label{lem:generators of nichols algebras via functors} Let $V\in\ydh$ of finite dimension and $M\subset V^{\ot n}$ in $\ydh$. Let $N=\bigoplus_{m\in\N} N^m$ with $N^m\subset V^{\ot m}$ in $\ydh$, $m\in\N$. Then \begin{enumerate}\renewcommand{\theenumi}{\alph{enumi}}\renewcommand{\labelenumi}{ (\theenumi)} \item\label{eq:lem:generators of nichols algebras via functors tensor} $F(V)^{\ot m}\ot\eta_n^{-1}F(M)\ot F(V)^{\ot k}=(\eta_{m+n+k})^{-1}F(V^{\ot m}\ot M\ot V^{\ot k})$. \smallbreak \item $\langle\eta_n^{-1}F(M)\rangle=\sum_{m,k}(\eta_{m+n+k})^{-1}F(V^{\ot m}\ot M\ot V^{\ot k}).$ \smallbreak \item\label{eq:lem:generators of nichols algebras via functors cocientes} Let $\overline{M}\subset T(V)/\langle N\rangle$. In $T(F(V))/\langle\bigoplus_m\eta_m^{-1}F(N^m)\rangle$ it holds that $\overline{\eta_n^{-1}F(M)}=\eta_n^{-1}F(\overline{M})$. \end{enumerate} \end{lem} \begin{proof} (a) Let $x\in V^{\ot m}$, $r\in M$ and $y\in V^{\ot k}$. By \eqref{eq:asociatividad de los etas}, there exist $x'\in V^{\ot m}$, $r'\in M$ and $y'\in V^{\ot k}$ such that $(\eta_{m+n+k})^{-1}(x\ot r\ot y)=\eta_m^{-1}(x')\ot \eta_n^{-1}(r')\ot \eta_{k}^{-1} (y')$. Also by \eqref{eq:asociatividad de los etas}, there exist $x''\in V^{\ot m}$, $r''\in M$ and $y''\in V^{\ot k}$ such that $\eta_{m+n+k}(x\ot r\ot y)=x''\ot r''\ot y''$. 
Since $(\eta_{m+n+k})^{\pm1}_{|V^{\ot m}\ot M\ot V^{\ot k}}$ are injective morphisms, the statement follows. (b) and (c) are straightforward. \end{proof} Lemma \ref{lem:generators of nichols algebras via functors} \eqref{eq:lem:generators of nichols algebras via functors cocientes} is useful for finding deformations of Nichols algebras. The next lemma is a consequence of Lemma \ref{lem:generators of nichols algebras via functors} \eqref{eq:lem:generators of nichols algebras via functors tensor}. \begin{lem}\label{thm:generators of nichols algebras via functors} Let $M=\bigoplus_{m\in\N} M^m$ with $M^m\subset V^{\ot m}$ in $\ydh$, $m\in\N$. Assume that $M$ generates $\cJ(V)$ as an ideal. Then \begin{enumerate}\renewcommand{\theenumi}{\alph{enumi}}\renewcommand{\labelenumi}{ (\theenumi)} \item\label{eq:generadors thm:generators of nichols algebras via functors} $\bigoplus_{m\in\N}\eta_m^{-1}F(M^m)\in \ydhdual$ generates $\cJ(F(V))$ as an ideal. \smallbreak \item\label{eq:pre generatos thm:generators of nichols algebras via functors} $\cJ_k(F(V))=\langle\bigoplus_{l=2}^k\eta_l^{-1}F(\cJ^l(V))\rangle$ for all $k\in\N$. \qed \end{enumerate} \end{lem} \subsection{Racks}\label{subsec:YD realization} A {\it rack} is a nonempty set $X$ with an operation $\rhd:X\times X\rightarrow X$ such that $$\phi_i:X\longrightarrow X,\, j\mapsto i\rhd j,$$ is a bijective map and $\phi_i(j\rhd k)=\phi_i(j)\rhd\phi_i(k)$ for all $i,j,k\in X$.
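As a concrete illustration of the characterization $\cJ^k(V)=\ker\bS_k$ recalled above, the short numerical sketch below (ours, not part of the text) builds the quantum symmetrizer for a {\it diagonal} braiding $c(x_a\ot x_b)=q_{ab}\,x_b\ot x_a$ from Matsumoto lifts of reduced words, and reads off $\dim\cJ^n(V)$ and $\dim\B^n(V)$ as the nullity and rank of $\bS_n$. The braiding matrix and the dimension of $V$ are assumptions chosen for the example only.
\begin{verbatim}
# Minimal sketch (ours): quantum symmetrizer for a diagonal braiding
# c(x_a (x) x_b) = q[a][b] * (x_b (x) x_a); then J^n(V) = ker(S_n).
import itertools
import numpy as np

def braid_generator(i, n, q):
    """Matrix of sigma_i -> c_{i,i+1} on the n-th tensor power of V."""
    d = len(q)
    C = np.zeros((d**n, d**n), dtype=complex)
    for col, t in enumerate(itertools.product(range(d), repeat=n)):
        s = list(t)
        s[i], s[i+1] = s[i+1], s[i]                 # place permutation ...
        row = np.ravel_multi_index(s, (d,)*n)
        C[row, col] += q[t[i]][t[i+1]]              # ... times the braiding coefficient
    return C

def reduced_word(perm):
    """A reduced word of adjacent transpositions sorting perm to the identity."""
    w, word = list(perm), []
    done = False
    while not done:
        done = True
        for i in range(len(w) - 1):
            if w[i] > w[i+1]:
                w[i], w[i+1] = w[i+1], w[i]
                word.append(i)
                done = False
    return word

def quantum_symmetrizer(n, q):
    """S_n = sum over the symmetric group of the braid lifts of reduced words."""
    d = len(q)
    gens = [braid_generator(i, n, q) for i in range(n - 1)]
    S = np.zeros((d**n, d**n), dtype=complex)
    for perm in itertools.permutations(range(n)):
        M = np.eye(d**n, dtype=complex)
        for i in reduced_word(perm):   # Matsumoto: any reduced word gives the same lift;
            M = M @ gens[i]            # summing over all permutations, the inverse
        S += M                         # convention of the word is immaterial.
    return S

q = [[-1, -1], [-1, -1]]               # assumed diagonal braiding, dim V = 2
for n in (2, 3):
    S = quantum_symmetrizer(n, q)
    rank = np.linalg.matrix_rank(S)
    print(f"n={n}: dim J^n(V) = {S.shape[0] - rank}, dim B^n(V) = {rank}")
\end{verbatim}
For this braiding the output reproduces the exterior algebra of $V$: $\dim\B^2(V)=1$ and $\B^3(V)=0$.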
them with \eqref{c4}, we repeatedly use the definition \eqref{star-dr-def} to make appear the dressing operation. We then show that the discrepancies cancel each other. The integration variable $\theta$ in the formula \eqref{c4} corresponds to the coordinate of the internal vertex closest to the root of each tree. These vertices are always of degree at least 3, therefore we can factorize a factor $f(\theta)[1-f(\theta)]$ from their weights. After this factorization, the contribution from the trees are (we omit the dependence on $\theta$) \begin{itemize} \item Tree (a) \begin{align*} \frac{Y^2-4Y+1}{(Y+1)^2}s(q^\text{dr})^4=\frac{Y^2+6Y+6}{(Y+1)^2}s(q^\text{dr})^4-5(1-f^2)s(q^\text{dr})^4 \end{align*} \item Tree (b) \begin{align*} &3s\lbrace[(1-f)s(o^\text{dr})^2]^{*\text{dr}}\rbrace^2=3s \lbrace [(1-f)s(o^{\text{dr}})^2]^{\text{dr}}\rbrace^{2}\\ +&3s(1-f)^2(o^\text{dr})^4-6(1-f)(o^\text{dr})^2[(1-f)s(o^\text{dr})^2]^\text{dr} \end{align*} \item Tree (c) \begin{align*} &12sq^\text{dr}\lbrace(1-f)sq^\text{dr}[(1-f)s(q^\text{dr})^2]^{*\text{dr}}\rbrace^{*\text{dr}}=12sq^\text{dr}\lbrace(1-f)sq^\text{dr}[(1-f)s(q^\text{dr})^2]^{\text{dr}}\rbrace^{\text{dr}}\\ -&12(1-f)(q^\text{dr})^2[(1-f)s(q^\text{dr})^2]^\text{dr}-12sq^\text{dr}[(1-f)^2(q^\text{dr})^3]^\text{dr}+12(1-f)^2s(q^\text{dr})^4 \end{align*} \item Tree (d) \begin{align*} &6(1-2f)(q^\text{dr})^2[(1-f)s(q^\text{dr})^2]^{*\text{dr}}=6(f-2)(q^\text{dr})^2[(1-f)s(q^\text{dr})^2]^{\text{dr}}\\ -&6s(f-2)(1-f)(q^\text{dr})^4+18(1-f)(q^\text{dr})^2[(1-f)s(q^\text{dr})^2]^{\text{dr}}-18s(1-f)^2(q^\text{dr})^4 \end{align*} \item Tree (e) \begin{align*} &4sq^\text{dr}[(1-f)(1-2f)(q^\text{dr})^3]^{*\text{dr}}=4sq^\text{dr}[(1-f)(f-2)(q^\text{dr})^3]^\text{dr}\\ -&4s(q^\text{dr})^4(1-f)(f-2)-12s(q^\text{dr})^4(1-f)^2+12sq^\text{dr}[(1-f)^2(q^\text{dr})^3]^\text{dr} \end{align*} \end{itemize} The discrepancies indeed cancel each other. \subsection{Comments on the conjecture} There are two plausible ways to prove our conjecture. First, one can try to derive the matrix elements of the product of total currents. One can then repeat the same steps of section 2 to perform their summation. The correct matrix elements must guarantee that the resulting diagrams have energy derivative at their root and sign of the effective velocity at their odd internal vertices. Concerning these two properties, the former is expected while the latter is more puzzling. Let us elaborate on this point. In our proof of the current average, it was understood that the form factor of a current is very similar to that of the corresponding charge: both are given by trees, the only difference being the operator at the root. It is then natural that any average involving currents, if admits combinatorial structure of trees, would have the energy derivative at the roots. As for the sign of the effective velocity, a naive guess would be to assign such sign for each bare propagator and for each external vertex. Most of them will cancel each other except for internal vertices of odd degrees. The flaw in this argument is that the weights of graph components should involve only bare quantities, like the ones in \eqref{Feynmp}. Only after the graphs are summed over do we have renormalized (dressed) quantities, see \eqref{reFeynman}. The effective velocity is a dressed quantity and as such cannot be included in the weight of bare propagators. 
In most cases, however, the sign of the effective velocity coincides with that of the rapidity and the above modification could in principle be implemented. Second, one can regard the combinatorial structure of the charge cumulants as a result of successive derivatives of the free energy \eqref{GGE-free-energy}. Simply speaking, these derivatives generate branches and joints (internal vertices) of the trees. If one can prove the existence of a similar "free energy" whose derivatives lead to cumulants of the total transport, it is natural that the same combinatorial structure would arise. Such a free energy should not be confused with the generating function \eqref{gen-function}: what we seek is the derivative with respect to the GGE chemical potentials, not the auxiliary variable $\lambda$. This approach seems possible in view of the following identity, proven in \cite{Doyon:2019osx}, \begin{align} \int_0^t ds\langle J_i(0,s) \mathcal{O}(0,0)\rangle ^\text{c}=-\sum_{j}\sgn(A)_{ij}\frac{\partial}{\partial\beta_j}\langle \mathcal{O}(0,0)\rangle \end{align} for any local observable $\mathcal{O}$. Here $A$ is the flux Jacobian matrix, and the sign is defined as the sign matrix of its eigenvalues. If one can show that this identity is still valid when the local operator $\mathcal{O}$ is replaced by the product of the total currents, then one would be able to obtain their cumulants from successive derivatives of the current average. \subsection{Summary and outlook} Our systematic treatment not only reduces the computational complexity but also improves the conceptual understanding of these cumulants. First, the simple combinatorial structure of the cumulants of total currents potentially translates into an analytic property of the full counting statistics. It would be interesting to find a new relation in addition to the one established in \cite{10.21468/SciPostPhys.8.1.007}. Second, such a structure provides hints about what the corresponding matrix elements would look like. For the current average, this line of thought has been exploited in the recent work \cite{PhysRevX.10.011054}. Explicit expressions for these matrix elements would have a significant impact on the understanding of related quantities, for instance the Drude weight. Last but not least, the observed similarity between the two families of cumulants suggests that one could think of a "free energy" that generates the time-integrated currents in the same fashion as the usual TBA free energy generates the conserved charges. In future work, we would like to explore the extent of this combinatorial structure in dynamical correlation functions and related quantities. The study of large-scale correlation functions in GHD has been addressed in \cite{Doyon:2017vfc}. For the charge-charge and charge-current correlation functions, the same combinatorial structure continues to hold, with the inclusion of a space-time propagator. The situation is more subtle for the current-current correlator and the Drude weight. These quantities involve the inverse of a dressed quantity and it is currently not clear how such an inversion could be represented in our formalism. \chapter*{Conclusion} In this thesis we propose a new method to compute thermodynamic observables in integrable systems. The main idea is to use the matrix-tree theorem to express the Gaudin determinants as a sum over graphs. We have found two types of applications of this graph expansion. First, it can be used to directly evaluate the cluster expansion of thermodynamic quantities.
In this context, the Gaudin determinant appears as the Jacobian of the change of variables from mode numbers to rapidities. This change of variables is the only approximation in our formalism and it is exact to all orders in the inverse volume. The new method is thus more powerful than the standard TBA, which is insensitive to all corrections of order 1 and lower. We have applied it to a wide class of observables in theories with a diagonal S-matrix, confirming its versatility. There are, however, situations where it cannot be implemented. First, when a complete set of states is not known or is complicated. This happens for theories with a non-diagonal S-matrix, and the standard TBA is better adapted to this situation as it only requires information about states that contribute to the thermodynamic limit. Nevertheless, it is possible to interpret known TBA equations with strings in terms of diagrams. We have used this interpretation to study the boundary entropy of the corresponding theory and, although we have not obtained a complete answer, we have made several important observations. Second, when the action of the observable on unphysical states cannot be determined. These unphysical states are inserted into the cluster expansion to compensate for the strict ordering between mode numbers. For the observables that we have considered, the action on these states is a natural generalization of the action on physical states. For more involved examples, however, this task might not be straightforward, or might even be impossible. Last but not least, there could be exotic Gaudin determinants for which a graph expansion is not known. The second type of application is to use the diagrammatic representation to replace the algebraic manipulations of the Gaudin determinants. We have used this idea to derive the equations of state in GHD; however, we cannot draw a general conclusion about when the diagrammatic representation is more useful than a direct calculation. \vspace{3mm} There are several directions to explore with the new method: \begin{itemize} \item Looking for situations where corrections of order $1/L$ or lower are needed. If the new method can be implemented in this case, its advantage over the standard TBA would be truly confirmed. \item Computing finite-size corrections in the hexagon form factor approach. There might be simple setups where the form factors can be organized in a nice way and the exact summation can be carried out. \item Interpreting known quantities in terms of diagrams and finding the connection between quantities with similar diagrammatic structure. \item Obtaining finite-particle matrix elements and form factors from the thermodynamic expression by applying the steps in reverse. \item Explaining the origin of Gaudin determinants in spin chain scalar products \cite{Korepin:1982gg}. One could for instance investigate the formation
the RBC membrane are modeled by the Skalak model \cite{skalak73}, involving as parameters the shear modulus $\kappa_\mathrm{S}$ and the area expansion modulus $\kappa_\mathrm{A}$ \cite{daddi16}. The membrane resists bending according to the Helfrich model \cite{helfrich73}. {Membrane viscosity can in principle be included in our model by adding an imaginary part to the shear modulus $\kappa_\mathrm{S}$. Yet, since membrane viscosity is a damping term akin to the already included fluid viscosity, we do not expect our results to change significantly if it were to be included. As we shall see below, the anomalous diffusion on which we focus in the present paper comes from the membrane elasticity providing a memory to the system.} With the Skalak and Helfrich models it follows that the linearized tangential and normal fluid stress jumps across the interface are related to the membrane displacement field at $z_\mathrm{m}$ by \cite{daddi16} \begin{subequations} \begin{align} [\sigma_{z\alpha}] &= -\frac{\kappa_\mathrm{S}}{3} \left( \Delta_{\parallel} u_\alpha + (1+2C) e_{,\alpha} \right) \,, \quad \alpha \in \{ x,y \} \, , \label{tangentialCondition}\\ ~[\sigma_{zz}] &= \kappa_\mathrm{B} \Delta_{\parallel}^2 u_z \, , \label{normalCondition} \end{align} \end{subequations} where $[g] = g(z_{\mathrm{m}}^{+}) - g(z_{\mathrm{m}}^{-})$ denotes the jump of a quantity $g$ across the membrane located at $z_{\mathrm{m}}$. Furthermore, $C := \kappa_\mathrm{A}/\kappa_\mathrm{S}$ is the ratio of the area expansion modulus to the shear modulus, $\Delta_{\parallel} = \partial_{,xx} + \partial_{,yy}$ is the Laplace-Beltrami operator along the membrane and $e=u_{x,x}+u_{y,y}$ is the dilatation. A comma in indices denotes a derivative. The components $\sigma_{z\alpha}$ of the stress tensor are given by \begin{equation} \sigma_{z\alpha} = -p \, \delta_{z\alpha} + \eta (v_{z,\alpha} + v_{\alpha,z}) \, , \quad \alpha \in \{x,y,z\} \, . \end{equation} The Stokes equations can conveniently be solved using a two-dimensional Fourier transform technique \cite{felderhof06, bickel07, daddi16}. Moreover, the dependence of the membrane shape on the motion history suggests a temporal Fourier mode analysis. Here we use the common convention of a negative exponent in the forward Fourier transforms. As both spatial and temporal transformations will be performed, we shall reserve the tilde for the spatially transformed functions, while a function and its temporal Fourier transform will be distinguished uniquely by their arguments. Continuing, it is convenient to adopt the orthogonal coordinate system in which the Fourier-transformed vectors are decomposed into longitudinal, transverse and normal components \cite{bickel07, thiebaud10, daddi16}, denoted by $\tilde{v}_l$, $\tilde{v}_t$ and $\tilde{v}_z$, respectively. For a given vectorial quantity $\vect{\tilde{Q}}$, the passage from the new orthogonal basis to the usual Cartesian basis can be performed via the orthogonal transformation \begin{equation} \left( \begin{array}{c} \tilde{Q}_x \\ \tilde{Q}_y \end{array} \right) = \frac{1}{q} \left( \begin{array}{cc} q_x & q_y\\ q_y & -q_x \end{array} \right) \left( \begin{array}{c} \tilde{Q}_l \\ \tilde{Q}_t \end{array} \right) \, , \label{transformation} \end{equation} where $q_x$ and $q_y$ are the components of the wavevector $\vect{q}$ and $q:= |\vect{q}|$. Note that the component $\tilde{Q}_z$ along the direction normal to the membranes is left unchanged.
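The transformation \eqref{transformation} is its own inverse: the $2\times2$ matrix is symmetric and orthogonal, so the same matrix maps $(\tilde{Q}_l,\tilde{Q}_t)$ to $(\tilde{Q}_x,\tilde{Q}_y)$ and back. The following short Python sketch (ours, purely illustrative; the wavevector and component values are arbitrary) makes this explicit.
\begin{verbatim}
# Illustrative sketch (ours): longitudinal/transverse decomposition of Eq. (transformation).
import numpy as np

def lt_to_xy(q_vec, Q_l, Q_t):
    """Map longitudinal/transverse components to Cartesian ones for wavevector q_vec."""
    qx, qy = q_vec
    M = np.array([[qx,  qy],
                  [qy, -qx]]) / np.hypot(qx, qy)
    return M @ np.array([Q_l, Q_t])

q_vec = (0.3, 0.8)                          # arbitrary wavevector components
Qx, Qy = lt_to_xy(q_vec, Q_l=1.0 + 2.0j, Q_t=-0.5j)

# The matrix is an involution, so applying it again recovers (Q_l, Q_t):
M = np.array([[q_vec[0], q_vec[1]], [q_vec[1], -q_vec[0]]]) / np.hypot(*q_vec)
assert np.allclose(M @ M, np.eye(2))
print(M @ np.array([Qx, Qy]))               # recovers the original (Q_l, Q_t)
\end{verbatim}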
After applying these transformations to Eqs.~\eqref{eq:Stokes} and \eqref{eq:Incompress}, we can eliminate the pressure and obtain two decoupled ordinary differential equations for $\tilde{v}_t$ and $\tilde{v}_z$, such that \cite{bickel07, daddi16} \begin{subequations} \begin{align} q^2 \tilde{v}_t - \tilde{v}_{t,zz} &= \frac{F_t}{\eta} \delta (z-z_0) \, , \label{transverseEquation}\\ \tilde{v}_{z,zzzz} - 2q^2 \tilde{v}_{z,zz} + q^4 \tilde{v}_z &= \frac{q^2 F_z}{\eta} \delta(z-z_0) \nonumber \\ \hphantom{\tilde{v}_{z,zzzz} - 2q^2 \tilde{v}_{z,zz} + q^4 \tilde{v}_z} & \hphantom{{}={}} + \frac{iq F_l}{\eta} \delta' (z-z_0) \, , \label{normalEquation} \end{align} \label{systemOfEquations} \end{subequations} where $\delta'$ stands for the derivative of the Dirac delta function. The incompressibility equation~\eqref{eq:Incompress} allows for the determination of $\tilde{v}_l$ from $\tilde{v}_z$ such that \begin{equation} \tilde{v}_l = \frac{i \tilde{v}_{z,z}}{q} \, . \label{longitudinalFromNormal} \end{equation} To keep the mathematical expressions tractable, we will only consider the case in which the two membranes have the same elastic and bending properties. Indeed, this is usually encountered in blood vessels, where the RBCs possess similar physical properties. After some algebra it can be shown that the stress jump due to shear and area expansion from Eq.~\eqref{tangentialCondition} imposes the following discontinuities at $z_\mathrm{m}$ \cite{daddi16}: \begin{subequations} \begin{align} [\tilde{v}_{t,z}] &= \left. -iB\alpha q^2 \tilde{v}_t \right|_{z = z_\mathrm{m}} \, , \label{transverseVeloPrime}\\ ~[\tilde{v}_{z,zz}] &= \left. -{4i\alpha q^2} \tilde{v}_{z,z} \right|_{z = z_\mathrm{m}} \, ,\label{normalVeloSecond} \end{align} \end{subequations} where $\alpha := \kappa_\mathrm{S}/(3 B\eta\omega)$, with $B := 2/(1+C)$, is a characteristic length for shear and area expansion. The normal stress jump given by Eq.~\eqref{normalCondition} leads to \begin{equation} [\tilde{v}_{z,zzz}] = \left. 4i \alpha_\mathrm{B}^3 q^6 \tilde{v}_z \right|_{z = z_\mathrm{m}} \, ,\label{normalVeloThird} \end{equation} where $\alpha_\mathrm{B} := (\kappa_\mathrm{B}/(4\eta\omega))^{1/3}$ is a characteristic length for bending. \subsection{Solutions} The basic approach for solving the system of equations \eqref{systemOfEquations} and \eqref{longitudinalFromNormal} to obtain the particle mobility was detailed in an earlier work \citep{daddi16}. Here we only outline the major steps and differences. Since the system is isotropic with respect to the $x$ and $y$ directions, the mobility tensor only contains diagonal components. The normal-normal component $\tilde{\G}_{zz}$ can be obtained by solving Eq.~\eqref{normalEquation} in which only the normal force $F_z$ is considered, i.e.\ $F_l=0$. By applying the appropriate boundary conditions at $z_\mathrm{m}$ and $z_0$, the integration constants are readily determined. At $z=z_\mathrm{m}$, the normal velocity $\tilde{v}_z$ and its first derivative are continuous, whereas the second and third derivatives are discontinuous because of shearing and bending, as prescribed in Eqs.~\eqref{normalVeloSecond} and \eqref{normalVeloThird}, respectively. At the point force position, i.e.\ at $z=z_0$, the normal velocity and its first and second derivatives are continuous, while the Dirac delta function imposes the discontinuity of the third derivative (see Eq.~\eqref{normalEquation}).
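For orientation, the two characteristic lengths defined above are easily evaluated numerically. The sketch below (ours; the material parameters are assumed order-of-magnitude values, not taken from the text) computes $\alpha=\kappa_\mathrm{S}/(3B\eta\omega)$ and $\alpha_\mathrm{B}=(\kappa_\mathrm{B}/(4\eta\omega))^{1/3}$ at a single frequency; note the scalings $\alpha\sim\omega^{-1}$ and $\alpha_\mathrm{B}\sim\omega^{-1/3}$ that reappear in the dimensionless frequencies introduced below.
\begin{verbatim}
# Hedged numerical sketch (ours); all parameter values are assumptions.
import numpy as np

kappa_S = 5e-6                 # shear modulus [N/m]          -- assumed
kappa_A = 0.5                  # area expansion modulus [N/m] -- assumed
kappa_B = 2e-19                # bending modulus [J]          -- assumed
eta     = 1.2e-3               # fluid viscosity [Pa s]       -- assumed
omega   = 2 * np.pi * 10.0     # angular frequency, f = 10 Hz -- assumed

C = kappa_A / kappa_S          # ratio of area-expansion to shear modulus
B = 2.0 / (1.0 + C)
alpha   = kappa_S / (3.0 * B * eta * omega)               # shear/area-expansion length
alpha_B = (kappa_B / (4.0 * eta * omega)) ** (1.0 / 3.0)  # bending length

print(f"alpha   = {alpha:.3e} m")
print(f"alpha_B = {alpha_B:.3e} m")
\end{verbatim}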
For the motion parallel to the membranes, it is sufficient to consider a force $F_x$ and solve for the Green's function component $\tilde{\G}_{xx}$. The latter can be expressed by employing Eq.~\eqref{transformation} via \begin{equation} {\tilde{\G}}_{xx} (q,\phi,\omega) = {\tilde{\G}}_{tt} (q,\omega) \sin^2 \phi + {\tilde{\G}}_{ll} (q,\omega) \cos^2 \phi \, , \label{G_xx_qPhiOmega} \end{equation} where $\phi := \arctan (q_y/q_x)$. Accordingly, the determination of $\tilde{\G}_{xx}$ requires two steps. First, the transverse-transverse component $\tilde{\G}_{tt}$ is determined from solving Eq.~\eqref{transverseEquation}. The transverse velocity $\tilde{v}_t$ is continuous at the membranes whereas shearing imposes the discontinuity of the first derivative as prescribed by Eq.~\eqref{transverseVeloPrime}. At $z=z_0$, the transverse velocity is continuous while its first derivative is discontinuous because of the Dirac delta function (see Eq.~\eqref{transverseEquation}). Second, the normal velocity component $\tilde{v}_z$ is determined as an intermediate step from solving first Eq.~\eqref{normalEquation} by only considering the longitudinal force $F_l$, i.e.\ $F_z = 0$. In this situation, the Dirac delta function imposes the discontinuity of the second derivative at $z_0$ whereas the third derivative is continuous. Afterward, the velocity component $\tilde{v}_l$ is immediately recovered thanks to the incompressibility equation~\eqref{longitudinalFromNormal}, giving access to the longitudinal-longitudinal component~$\tilde{\G}_{ll}$. What remains for the determination of the particle mobility is to apply the spatial inverse Fourier transform by integrating over $\phi$ and the wavenumber $q$. In the point particle approximation, the mobility correction can readily be calculated by subtracting the bulk term and taking the limit when $\vect{r}$ tends to $\vect{r}_0$, as described by Eq.~\eqref{mobilityFromGreensFunction}. {For convenience, we define the subscripts $\perp$ and $\parallel$ to denote the tensorial components $zz$ and $xx$, respectively. The $yy$ component of the mobility tensor is identical to the $xx$ component.} Moreover, we define $k_\perp^{\sigma} (\beta, \beta_\mathrm{B})$ and $k_\parallel^{\sigma} (\beta, \beta_\mathrm{B})$, two frequency dependent complex quantities which are related to the first order correction in the mobility via \begin{equation} \frac{\Delta \mu_\alpha (z_0, \omega)}{\mu_0} = -k_\alpha^{\sigma} (\beta, \beta_\mathrm{B}) \frac{a}{z_0} \, , \quad \alpha \in \{ \perp, \parallel \} \, , \label{DeltaMuDefinition} \end{equation} where $\beta := 2z_0/\alpha \sim \omega$ and $\beta_\mathrm{B} := 2z_0/\alpha_\mathrm{B} \sim \omega^{1/3} $ are two dimensionless frequencies related to the shear and bending effects, respectively. Analytical expressions for $k_\alpha^{\sigma} (\beta, \beta_\mathrm{B})$ can be obtained with computer algebra software, but they are not listed here due to their complexity and lengthiness. \footnote{See Supplemental Material at [URL will be inserted by publisher] for a Maple script (Maple 17 or later) providing the particle mobility corrections in both directions of motion.} {These expressions are the basis for the computation of the Brownian motion and therefore constitute one of the central results of our work.} {We proceed to investigate the limiting case of Eq.~\eqref{DeltaMuDefinition} in which both shearing and bending modulus tend to infinity and therefore $\beta$ and $\beta_\mathrm{B}$ both tend to zero. 
In this case, which physically represents a hard wall, the general expression for $k_{\alpha}^{\sigma}$ as it appears in Eq.~\eqref{DeltaMuDefinition} reduces to} \begin{widetext} \begin{subequations} \begin{align} k_{\perp}^{\sigma} (0,0) &= \int_{0}^{\infty} \frac{3}{4\Gamma} \left( \phi_{+}^{1}e^{2\sigma u}-\phi_{-}^{1}e^{-2\sigma u}
+ G_nG_{n+1})$~\cite{Gao2020DeepComfort}. The complexity of the critic network is ${\cal O}\left(\sum_{n=2}^{L_c-1}(G_{n-1}G_n + G_nG_{n+1})\right) $. As a result, the overall computational complexity of the TD3 model is ${\cal O}\left( \sum_{m=2}^{L_a-1}(J_{m-1}J_m + J_mJ_{m+1}) + \sum_{n=2}^{L_c-1}(G_{n-1}G_n + G_nG_{n+1})\right) $~\cite{Gao2020DeepComfort}. We further analyze the convergence of the proposed PSD-TD3 algorithm. Specifically, the algorithm satisfies the following conditions: (i) the network parameters $\theta$ and $\theta'$ (whose subscripts are suppressed for brevity) are upper bounded since they are sequentially compact following the Arzela-Ascoli theorem \cite{billingsley2013convergence}; (ii) the state and action spaces are compact as the sampled states and actions are bounded by the maximum transmit power of the BS and the phase shifts of the RIS; (iii) the reward function, i.e., \eqref{eq-reward}, is continuous; and (iv) the training networks are feedforward FCNNs with twice continuously differentiable activation functions, such as Rectified Linear Units (ReLUs) and sigmoid. According to \cite[Lemma~2]{redder2022asymptotic}, the proposed algorithm can asymptotically converge if we adopt a sequence of learning rates that is square summable but not summable, i.e., $\sum_t \eta_a (t) = \infty$ and $\sum_t \eta_a (t)^2 < \infty$. Here, $t$ is the time step, and $\eta_a (t)$ is a time-varying learning rate of the actor network. \section{Simulation Results}\label{sec-sim} In the considered system, the BS is placed at $(D_0,0,H_b)$, the jammer is placed at $(x_J,y_J,0)$, and the first element of the RIS has the coordinates $(0,\delta,\delta+H_r)$, as depicted in Fig.~\ref{fig-sysmodel}. We set $D_0 = 2$ m, $H_b = 10$ m, $H_r =10$ m, $x_J =50$ m, and $y_J = 150$ m. The RIS is a URA with element spacing of $\delta$. We assume $d_0 = \delta = \frac{\lambda}{2}$. We use $(\iota,\kappa)$ to index the RIS elements, where $\iota \in \{1,\cdots,{N_y}\}$ and $\kappa \in \{1,\cdots,{N_z}\}$. The coordinates of the $(\iota,\kappa)$-th reflecting element are $(0,\iota\times\delta,\kappa\times\delta+H_r)$. The users are uniformly scattered within a square area centered at $(100, 100, 0)$~m with a side length of 100~m. The sides of the area are parallel to the $x$- and $y$-axes. The location of the $m$-th user is $(x_{m},y_{m},0)$, $\forall {m} \in {\cal M}$. By default, $M=4$. We consider Rayleigh fading for the BS-user (BS-UE) and jammer-UE links, and Rician fading for the BS-RIS, jammer-RIS, and RIS-UE links. The channel gains of the BS-UE (or jammer-UE), BS-RIS (or jammer-RIS), and RIS-UE links are given by \begin{align}\label{eq-bs-ue} \!h^{d}_{m,k}& =\! \sqrt{\epsilon_o \left(d^{d}_{m} \right)^{-\alpha_{d}}} \tilde{h}^d, \forall m,k,&\\ \!h^{br}_{\iota,\kappa}& = \!\sqrt{\epsilon_o \left(d^{br}_{\iota,\kappa} \right)^{-\alpha_{br}}}\left(\! \sqrt{\frac{K_1}{1\!+\!K_1}} h^{br}_{los} \!+\! \sqrt{\frac{1}{1\!+\!K_1}}h^{br}_{nlos} \!\right), \forall \iota,\kappa, & \label{eq-bs-irs}\\ \!h^{ru}_{\iota,\kappa,m} &=\! \sqrt{\epsilon_o \left(d^{ru}_{\iota,\kappa,m} \right)^{-\alpha_{ru}}}\left(\! \sqrt{\frac{K_2}{1\!+\!K_2}} h^{ru}_{los} \!+\! \sqrt{\frac{1}{1\!+\!K_2}}{h}^{ru}_{nlos} \!\right), \forall \iota,\kappa,m, & \label{eq-irs-ue}\\ \!h^{Jd}_{m,k}& =\! \sqrt{\epsilon_o \left(d^{Jd}_{m} \right)^{-\alpha_{Jd}}} \tilde{h}^{Jd}, \forall m,k,& \label{eq-jammer-ue}\\ \!h^{Jr}_{\iota,\kappa}& = \!\sqrt{\epsilon_o \left(d^{Jr}_{\iota,\kappa} \right)^{-\alpha_{Jr}}}\left(\!
\sqrt{\frac{K_3}{1\!+\!K_3}} h^{Jr}_{los} \!+\! \sqrt{\frac{1}{1\!+\!K_3}}h^{Jr}_{nlos} \!\right), \forall \iota,\kappa, & \label{eq-jammer-irs} \end{align} where $\epsilon_o$ is the path loss at the reference distance $d_0 =1$ m with $\alpha_{d}$, $\alpha_{br}$, $\alpha_{ru}$, $\alpha_{Jd}$, and $\alpha_{Jr}$ being the path loss exponents of the BS-RIS, BS-UE, RIS-UE, jammer-UE, and jammer-RIS links, respectively; $d^{br}_{\iota,\kappa} = \sqrt{\left(H_r+\kappa\delta -H_b \right)^2+\iota^2\delta^2+D_0^2}$ is the distance from the BS to the $(\iota,\kappa)$-th reflecting element of the RIS, and $d^{d}_{m} = \sqrt{\left(D_0 - x_{m}\right)^2 + y_{m}^2 + H_b^2}$ is the distance from the BS to the $m$-th user, and $d^{ru}_{\iota,\kappa,m} = \sqrt{x_m^2 + \left(H_r + \kappa\delta \right)^2 +(y_m -\iota\delta)^2}$ is the distance from the $(\iota,\kappa)$-th reflecting element of the RIS to the $m$-th user, $d^{Jr}_{\iota,\kappa} = \sqrt{\left(H_r+\kappa\delta \right)^2+\left(\iota\delta - y_J\right)^2+x_J^2}$ is the distance from the jammer to the $(\iota,\kappa)$-th reflecting element of the RIS, $d^{Jd}_{m} = \sqrt{\left(x_J - x_{m}\right)^2 +\left(y_J - y_{m}\right)^2}$ is the distance from the jammer to the $m$-th user. In \eqref{eq-bs-irs},~\eqref{eq-irs-ue}, and \eqref{eq-jammer-irs}, $K_1$, $K_2$ and $K_3$ are the Rician factors of the BS-RIS, RIS-UE, and jammer-RIS links. $h^{br}_{los} = e^{-j \frac{2\pi \delta}{\lambda}\phi^{br}_{\iota, \kappa}}$, $h^{ru}_{los} = e^{-j \frac{2\pi \delta}{\lambda}\phi^{ru}_{\iota, \kappa, m}}$, and $h^{Jr}_{los} = e^{-j \frac{2\pi \delta}{\lambda}\phi^{Jr}_{\iota, \kappa}}$ are the deterministic Line-of-Sight (LoS) components of the BS-RIS, RIS-UE, and jammer-RIS links, respectively, where $\phi^{br}_{\iota, \kappa} = \arccos\left(\frac{\iota \delta}{d^{br}_{\iota, \kappa}} \right) $ is the angle-of-arrival (AoA) of the signal from the BS to the $(\iota, \kappa)$-th reflecting element of the RIS, $\phi_{ru} = \arccos\left(\frac{y_m - \iota \delta}{d^{ru}_{\iota,\kappa,m}} \right) $ is the angle-of-departure (AoD) of the signal from the $(\iota, \kappa)$-th reflecting element of the RIS to the $m$-th user, and $\phi^{Jr}_{\iota, \kappa} = \arccos\left(\frac{y_J - \iota \delta}{d^{Jr}_{\iota, \kappa}} \right) $ is the AoA of the signal from the jammer to the $(\iota, \kappa)$-th reflecting element of the RIS. $\tilde{h}^d$, $h^{br}_{nlos}$, $h^{ru}_{nlos}$, $\tilde{h}^{Jd}$, and $h^{Jr}_{nlos}$ are random scattering components modeled by zero-mean and unit-variance CSCG variables. The other parameters of the considered system are provided in Table \ref{tab.pm}. 
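As an illustration of the channel model, the following Python snippet (ours) draws one random realization of the BS-RIS coefficient $h^{br}_{\iota,\kappa}$ of \eqref{eq-bs-irs}. The geometry, path-loss and Rician parameters follow the simulation setup and Table~\ref{tab.pm}, while the carrier wavelength is an assumed value, since it is not specified in the excerpt above.
\begin{verbatim}
# Illustrative sketch (ours); the wavelength is an assumed value.
import numpy as np

rng = np.random.default_rng(0)

lam      = 0.1                    # carrier wavelength [m] -- assumed
delta    = lam / 2                # element spacing, d_0 = delta = lambda/2
D0, Hb, Hr = 2.0, 10.0, 10.0      # BS/RIS geometry [m]
eps_o    = 10 ** (-30 / 10)       # path loss at 1 m: -30 dB
alpha_br = 2.5                    # BS-RIS path loss exponent
K1       = 1.0                    # Rician factor of the BS-RIS link

def h_br(iota, kappa):
    """One random realization of h^{br}_{iota,kappa} from Eq. (eq-bs-irs)."""
    d = np.sqrt((Hr + kappa * delta - Hb) ** 2 + (iota * delta) ** 2 + D0 ** 2)
    phi = np.arccos(iota * delta / d)                      # AoA at the RIS element
    h_los = np.exp(-1j * 2 * np.pi * delta / lam * phi)    # deterministic LoS part
    h_nlos = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)  # CSCG
    return np.sqrt(eps_o * d ** (-alpha_br)) * (
        np.sqrt(K1 / (1 + K1)) * h_los + np.sqrt(1 / (1 + K1)) * h_nlos)

print(abs(h_br(iota=3, kappa=5)))  # magnitude of one BS-RIS channel coefficient
\end{verbatim}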
\begin{table}[t] \caption{The parameters of the considered system} \begin{center} \begin{tabular}{ll} \toprule[1.5pt] Parameters & Values \\ \hline Maximum transmit power of the BS, $P_{\max}$ & 5 -- 35 dBm \\ Transmit power of the jammer, $P_J$ & 10 dBm \\ Number of subchannels, $K$ & 16, 32 \\ Number of users, $M$ & 4 \\ Number of modulation levels, $L$ & 4 \\ Set of modulation-coding rates & \{0,2,4,6\} bits/symbol\\ Path loss at $d_0 =1$ m, $\epsilon_o$ & -30 dB \\ Path loss exponents, $\alpha_{br}$, $\alpha_{d}$, $\alpha_{ru}$ & 2.5, 3.0, 2.2 \\ Rician factors, $K_1$, $K_2$, $K_3$ & 1, 3, 1 \\ Noise power density, $\sigma^2$ & -169 dBm/Hz \\ Bandwidth, $B_w$ & 100~MHz \\ BER requirements, $\{\varrho_0^{(1)},\varrho_0^{(2)}\}$ & $\{10^{-6}, 10^{-2}\}$ \\ Coefficients of modulation and coding, $\beta_1$, $\beta_2$ & 0.2, -1.6~\cite{Goldsmith1998} \\ \toprule[1.5pt] \end{tabular} \end{center} \label{tab.pm} \end{table} The TD3-based network is implemented as a feedforward neural network with two hidden layers of 128 and 64 nodes, respectively. Rectified Linear Units (ReLUs) are used as the activation functions between the layers of the actor and critic networks. The output layer of the actor uses the sigmoid($\cdot$) function to bound the output actions within $[0, 2\pi)$ for the RIS configuration. The state and action are taken as the input to the first layer of the critic networks. The learning rates of both the actor and critic networks are $10^{-3}$. The exploration noise used to train the TD3 actor and the policy noise used to smooth the target actor are both generated from zero-mean Gaussian noise (GN) with variance $0.2$. The maximum value of the exploration noise is $0.5$. The update frequency of the actor networks is $2$. The TD3-based network is trained on a server with an Nvidia Tesla P100 SXM2 16GB GPU. The network hyperparameters are summarized in Table~\ref{table_hyper_td3}. \begin{table}[t] \renewcommand{\arraystretch}{1.0} \caption{The hyperparameters of the TD3-based algorithm} \begin{center} \begin{tabular}{ll} \toprule[1.5pt] Parameters & Values \\ \hline Discounting factor for future reward, $\gamma$ & 0.99 \\ Learning rate for actor and critic networks, $\eta_a$, $\eta_c$ & $1 \times 10^{-3}$ \\ Decaying rate for actor and critic networks, $\rho_{\tau}$ & $5 \times 10^{-3}$\\ Size of experience replay buffer & $1 \times 10^5$\\ Number of episodes, $T_{ep}$ & 400 \\ Total number of steps in each episode, $T_s$ & 200 \\ Mini-batch size, $N_{batch}$ & 16\\ Policy delay update frequency & 2 \\ Maximum value of the Gaussian noise, $\sigma_m^2$ & 0.5 \\ Variance of the exploration noise, $\sigma_e^2$ & 0.2 \\ Variance of the policy noise, $\sigma_a^2$ & 0.2 \\ \toprule[1.5pt] \end{tabular} \end{center} \label{table_hyper_td3} \end{table} As discussed earlier, no existing algorithm is directly comparable to the proposed PSD-TD3 algorithm. We therefore construct a DDPG-based alternative to the PSD-TD3 algorithm, referred to as PSD-DDPG, where the DDPG is employed to configure the RIS. We also develop a DQN-TD3 algorithm, where the selections of the user, subchannel, and modulation-coding mode are done using a DQN, and the TD3 is used to configure the RIS. Moreover, we consider the case where the RIS is randomly configured, while the selections of the user, data stream, subchannel, and modulation-coding mode are optimized, as described in Section~\ref{subsec-optimization}. These three benchmarks are used to evaluate the proposed PSD-TD3 algorithm.
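For concreteness, a minimal PyTorch sketch of an actor network consistent with the description above is shown below: two hidden layers with 128 and 64 ReLU units and a sigmoid output scaled to $[0,2\pi)$ for the RIS phase shifts. The class name, state dimension and batch size are our own illustrative choices and are not taken from the paper.
\begin{verbatim}
# Minimal actor sketch (ours), consistent with the architecture described above.
import torch
import torch.nn as nn

class RISActor(nn.Module):
    def __init__(self, state_dim: int, num_elements: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_elements), nn.Sigmoid(),   # outputs in (0, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return 2 * torch.pi * self.net(state)            # phase shifts in [0, 2*pi)

actor = RISActor(state_dim=32, num_elements=40)          # N = 40 RIS elements
phases = actor(torch.randn(16, 32))                      # a mini-batch of 16 states
print(phases.shape, float(phases.min()), float(phases.max()))
\end{verbatim}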
\begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{fig_reward.pdf} \caption{The per-episode and average rewards of the proposed PSD-TD3 algorithm and its DDPG-based alternative under $N=40$, 60, and 80 (the top three subfigures), and the rewards and the BS transmit power of the DQN-TD3 algorithm under $N=40$ (the bottom two subfigures).} \label{fig-reward} \end{figure} We train the proposed algorithm only for one value of the maximum BS transmit power $P_{\max}$, i.e., $P_{\max} =30$ dBm, and test the resulting model under other $P_{\max}$ values to show the generalizability of the algorithm. Likewise, we train the algorithm only for one value of the transmit power of the jammer $P_J$,
could have defined the Helmholtz equation this way from the very beginning. The conventional Helmholtz dynamics represents so-called generalized Hamiltonian dynamics, which cannot be described in terms of Poisson brackets. The formula \rf{dH} cannot be solved for the velocity field because the matrix \begin{equation} \Omega_{\alpha\beta}(\rho) = e_{\alpha\beta\gamma}\Omega_{\gamma}(\rho) \end{equation} cannot be inverted (there is a zero mode, $ \Omega_{\beta}(\rho) $). The physical meaning is gauge invariance, which allows us to perform gauge transformations in addition to the Lagrange motion of the fluid element. Formally, the inversion of the $ \Omega-$matrix can be performed in the subspace orthogonal to the zero mode. The inverse matrix $\Omega^{\beta\gamma}$ in this subspace satisfies the equation \begin{equation} \Omega_{\alpha\beta} \Omega^{\beta\gamma} = \delta_{\alpha\gamma} - \frac{\Omega_{\alpha}\Omega_{\gamma}}{\Omega_{\mu}^2} \end{equation} which has the unique solution \begin{equation} \Omega^{\beta\gamma} = - e_{\alpha\beta\gamma} \frac{\Omega_{\alpha}}{\Omega_{\mu}^2} \end{equation} This solution leads to our Poisson brackets. The vorticity 2-form simplifies in the Clebsch variables \begin{equation} \Omega^a = e^{a b c} \partial_b\phi_1(\rho) \partial_c\phi_2(\rho) \label{CL} \end{equation} \begin{equation} \Omega = d \phi_1 \wedge d\phi_2 = \mbox{inv} \end{equation} The Clebsch variables provide the bridge between the Lagrange and the Euler dynamics. By construction they are conserved, as they parametrize the conserved vorticity. The Euler Clebsch fields $ \Phi_i(r) $ can be introduced by solving the equation $ r = X(\rho) $ \begin{equation} \Phi_i(r) = \phi_i\left(X^{-1}(r)\right) \end{equation} These Euler Clebsch fields were studied by Yakhot and Zakharov \cite{YZ93}, where the conservation law of Clebsch "particles" was observed and some estimates of the Kolmogorov cascade leading to the famous $\nicefrac{5}{3}$ law were presented. The difference between that work and our independent work in \cite{TSVS} was that we introduced the Clebsch field only inside cells, rather than in the whole space. This will be discussed in more detail below. Unlike the vorticity field, the Clebsch variables cannot be defined globally in the whole space. The inverse map $ \rho = X^{-1}(r) $ is defined separately for each cell; therefore one cannot write $ v_{\alpha} = \Phi_1 \partial_{\alpha} \Phi_2 + \partial_{\alpha} \Phi_3 $ everywhere in space. Rather, one should add the contributions from all cells to the Biot-Savart integral, as we did before. For us, the most important consequence of this generalized Hamiltonian dynamics for $X_\alpha(\rho)$ is the conservation of the volume element \begin{equation} J(X,\rho) = \JAC{X_1,X_2,X_3} \end{equation} \begin{equation} \partial_t J(X,\rho) = J(X,\rho)\partial_\alpha v_\alpha(X) =0 \end{equation} This means that the volume of each vortex cell is conserved by the dynamics \begin{equation} V_i = \int_{D_i} d^3\rho J(X,\rho) = \mbox{inv}; \end{equation} The volume is parametrically invariant and can be written as an integral of a 3-form \begin{equation} V_i = \int_{D_i} d X\wedge d X \wedge d X = \int_{D_i} d^3 \rho e_{\mu\nu\lambda} \partial_1 X_\mu \partial_2 X_\nu \partial_3 X_\lambda \end{equation} Parametric invariance comes about because the transformation of the Jacobian $J(X,\rho)$ cancels with the transformation of the volume element.
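The pseudo-inverse formula above is easy to check numerically: the sketch below (ours, a pure consistency check on an arbitrary vorticity vector) verifies that $\Omega_{\alpha\beta}\Omega^{\beta\gamma}$ reproduces the transverse projector $\delta_{\alpha\gamma}-\Omega_{\alpha}\Omega_{\gamma}/\Omega_{\mu}^2$.
\begin{verbatim}
# Consistency check (ours) of the Omega pseudo-inverse.
import numpy as np

e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0   # Levi-Civita symbol

Omega = np.array([0.7, -1.3, 2.1])            # arbitrary vorticity vector
Om2 = Omega @ Omega

Omega_lower = np.einsum('abc,c->ab', e, Omega)         # Omega_{ab} = e_{abc} Omega_c
Omega_upper = -np.einsum('abc,a->bc', e, Omega) / Om2  # Omega^{bc} = -e_{abc} Omega_a / |Omega|^2

lhs = Omega_lower @ Omega_upper
rhs = np.eye(3) - np.outer(Omega, Omega) / Om2
assert np.allclose(lhs, rhs)
print("projector identity verified")
\end{verbatim}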
Note that the induced metric in 3D space \begin{equation} G_{a b} = \partial_a X_\mu \partial_b X_\mu \end{equation} while being parametrically covariant, is not conserved in the Helmholtz dynamics, unlike the volume element $J(X,\rho)$. So we cannot build 3D invariant functionals other than the volume of the cell. However, boundary invariants do exist, as we shall now see. In fact, this volume is the so-called Wess-Zumino term in string theory \begin{equation} V_i = \int_{D_i} d X\wedge d X \wedge d X = \int_{\partial D_i} d X\wedge d X \wedge X \end{equation} which depends only on the boundary values of the $X$ field. Let us introduce a parametric equation for the boundary of the cell in $\rho$ space \begin{align} &\partial D: \rho_a = Y_a(\xi)\\ &\xi = \{u,v\} \end{align} with the induced metric on the boundary: \begin{equation} g_{i j} = \partial_i Y_a \partial_j Y_a \end{equation} This equation of the boundary $Y_a(\xi)$ of each cell is a motion invariant, unlike the physical coordinates $X_\mu(\rho \in \partial D) = X(Y_a(.))$. Now we can define the internal area \begin{equation} S_i = \int_{\partial D_i} d^2 \xi \sqrt{g} \end{equation} which is another conserved quantity. In fact, the volume is a functional of the boundary values of $X^B(\xi) = X(Y(\xi))$: \begin{equation} V_i = \int_{\partial D_i} d^2 \xi e_{\mu\nu\lambda} \partial_1 X^B_\mu(\xi) \partial_2 X^B_\nu(\xi) X^B_\lambda(\xi) \end{equation} The net volume and net area, being both motion and parametric invariants, could serve in place of a Hamiltonian in the Gibbs distribution \begin{equation} P = \exp\left(-\beta \sum_i V_i - \gamma \sum_i S_i\right) \end{equation} These vortex cells look like an ideal gas, but there is an implicit interaction via excluded volume. As argued in \cite{TSVS}, the cells can neither intersect nor self-intersect, as this would produce a singularity in the inverse function $\rho(X)$. Such cells are described by the self-avoiding surfaces enclosing them. In addition to the shapes and locations of the vorticity drops, there is of course the vorticity inside the cells, parametrized by Clebsch variables. As we observed in the earlier work \cite{TSVS} (see also below), there are no parametrically invariant volume integrals of vorticity in three dimensions. However, we can construct surface invariants using the induced metric. The lowest-order local effective Lagrangian for these variables compatible with parametric invariance, the translation invariance $ \phi_i \Rightarrow \phi_i + a_i$ and the $U(1)$ symmetry of phase transformations of the complex field $\psi = \phi_1 + \imath\, \phi_2$ is the Clebsch Hamiltonian \begin{equation} H_C = \frac{1}{4\pi}\sum_D \int_{\partial D} d^2 \xi \sqrt{g} g^{i j} \partial_i\bar{\psi} \partial_j\psi \end{equation} Note that we did not add any potential term for the Clebsch field, as it would violate the above-mentioned translation invariance. The normalization of the Clebsch Hamiltonian is a matter of convention.
The normal direction is given by \begin{equation} n_a \propto e_{a b c} \partial_1 Y_b(\xi) \partial_2 Y_c(\xi) \end{equation} The zero-flux condition thus reads \begin{equation} \left[n_a e_{a b c } \partial_b \bar{\psi}\partial_c\psi\right]_{\partial D} =0 \end{equation} or, equivalently, \begin{equation} \oint_{\delta C \in \partial D} \bar{\psi} d \psi=0 \end{equation} for every infinitesimal loop $\delta C$ at the boundary $\partial D$ of the cell. This can be achieved with the complex field $\psi$ taking values on a circle $|\psi| = R$ at the boundary of the cell (see below). So the vorticity does not flow outside the cells; it is confined inside, much like the gluon field is confined to the world sheet of the flux tube connecting quarks. This analogy inspired my conjectures about the minimal surface in turbulence. Now we can move this analogy one step further. As the vorticity is parametrized by the Clebsch field, which is defined only inside the cells, what we have in this theory can be called Clebsch confinement, and the Clebsch field plays the role of the "quarks of turbulence". The charge of this field corresponds to the unbroken $U(1)$ symmetry, and all observables are neutral with respect to this charge. Moreover, as we shall see, all observables can be expressed in terms of the surface values of the Clebsch field. We neglected viscosity here, but it plays an important role behind the scenes, just as the interaction of particles in a gas leads to the Boltzmann distribution, which depends only on the sum of their kinetic energies. The physics behind this distribution is the energy exchange between a particle and the thermostat made of the others. This energy exchange results from the interaction, but the specific form of this interaction does not affect the resulting Boltzmann distribution. In our case this would be the volume and vorticity exchange between cells due to viscosity effects: cells will split and reconnect in the course of the dynamics thanks to viscosity, and this would convert the microcanonical distribution $\delta\left(V- V_0\right)$ into the exponential distribution $\exp(-\beta V)$. Viscosity would influence the relaxation time $T \sim \frac{1}{\nu}$ but will drop out of the equilibrium distribution, in the same way as the interaction strength drops out of the Gibbs-Boltzmann distribution for an ideal gas. Obviously, the viscosity effects work at the cell boundary. The normal vorticity vanishes there, but not the vorticity components along the boundary. With the Jacobian vanishing in (\ref{Omega(r)}), this would lead to a singularity in the vorticity $\omega_{\alpha\beta}$ in physical space. Therefore, the dissipation term $\nu \Delta \omega$ in the vorticity equation would go to infinity at the boundary, which means it cannot be neglected and results in a smooth transition to $\omega=0$ outside within a thin layer. Note that there are not enough variables in the Clebsch field to make all three components of vorticity vanish at the boundary, so the Neumann boundary conditions $n_a \partial_a \phi_{1,2}=0$ would not help. The vorticity flux outside will not vanish for Neumann boundary conditions, so they are not acceptable. The
math.IT In error-correcting codes, locality refers to several different ways of quantifying how easily a small amount of information can be recovered from encoded data. In this work, we study a notion of locality called the s-Disjoint-Repair-Group Property (s-DRGP). This notion can interpolate between two very different settings in coding theory: that of Locally Correctable Codes (LCCs) when s is large---a very strong guarantee---and Locally Recoverable Codes (LRCs) when s is small---a relatively weaker guarantee. This motivates the study of the s-DRGP for intermediate s, which is the focus of our paper. We construct codes in this parameter regime which have a higher rate than previously known codes. Our construction is based on a novel variant of the lifted codes of Guo, Kopparty and Sudan. Beyond the results on the s-DRGP, we hope that our construction is of independent interest, and will find uses elsewhere. • ### Weak Decoupling, Polynomial Folds, and Approximate Optimization over the Sphere(1611.05998) April 22, 2017 cs.DS We consider the following basic problem: given an $n$-variate degree-$d$ homogeneous polynomial $f$ with real coefficients, compute a unit vector $x \in \mathbb{R}^n$ that maximizes $|f(x)|$. Besides its fundamental nature, this problem arises in diverse contexts ranging from tensor and operator norms to graph expansion to quantum information theory. The homogeneous degree $2$ case is efficiently solvable as it corresponds to computing the spectral norm of an associated matrix, but the higher degree case is NP-hard. We give approximation algorithms for this problem that offer a trade-off between the approximation ratio and running time: in $n^{O(q)}$ time, we get an approximation within factor $O_d((n/q)^{d/2-1})$ for arbitrary polynomials, $O_d((n/q)^{d/4-1/2})$ for polynomials with non-negative coefficients, and $O_d(\sqrt{m/q})$ for sparse polynomials with $m$ monomials. The approximation guarantees are with respect to the optimum of the level-$q$ sum-of-squares (SoS) SDP relaxation of the problem. Known polynomial time algorithms for this problem rely on "decoupling lemmas." Such tools are not capable of offering a trade-off like our results as they blow up the number of variables by a factor equal to the degree. We develop new decoupling tools that are more efficient in the number of variables at the expense of less structure in the output polynomials. This enables us to harness the benefits of higher level SoS relaxations. We complement our algorithmic results with some polynomially large integrality gaps, albeit for a slightly weaker (but still very natural) relaxation. Toward this, we give a method to lift a level-$4$ solution matrix $M$ to a higher level solution, under a mild technical condition on $M$. • ### Subspace Designs based on Algebraic Function Fields(1704.05992) April 20, 2017 math.CO, cs.CC Subspace designs are a (large) collection of high-dimensional subspaces $\{H_i\}$ of $\F_q^m$ such that for any low-dimensional subspace $W$, only a small number of subspaces from the collection have non-trivial intersection with $W$; more precisely, the sum of dimensions of $W \cap H_i$ is at most some parameter $L$. The notion was put forth by Guruswami and Xing (STOC'13) with applications to list decoding variants of Reed-Solomon and algebraic-geometric codes, and later also used for explicit rank-metric codes with optimal list decoding radius. 
Guruswami and Kopparty (FOCS'13, Combinatorica'16) gave an explicit construction of subspace designs with near-optimal parameters. This construction was based on polynomials and has close connections to folded Reed-Solomon codes, and required large field size (specifically $q \ge m$). Forbes and Guruswami (RANDOM'15) used this construction to give explicit constant degree "dimension expanders" over large fields, and noted that subspace designs are a powerful tool in linear-algebraic pseudorandomness. Here, we construct subspace designs over any field, at the expense of a modest worsening of the bound $L$ on total intersection dimension. Our approach is based on a (non-trivial) extension of the polynomial-based construction to algebraic function fields, and instantiating the approach with cyclotomic function fields. Plugging in our new subspace designs in the construction of Forbes and Guruswami yields dimension expanders over $\F^n$ for any field $\F$, with logarithmic degree and expansion guarantee for subspaces of dimension $\Omega(n/(\log \log n))$. • ### Promise Constraint Satisfaction: Algebraic Structure and a Symmetric Boolean Dichotomy(1704.01937) April 6, 2017 cs.LO, cs.DM, cs.CC A classic result due to Schaefer (1978) classifies all constraint satisfaction problems (CSPs) over the Boolean domain as being either in $\mathsf{P}$ or $\mathsf{NP}$-hard. This paper considers a promise-problem variant of CSPs called PCSPs. A PCSP over a finite set of pairs of constraints $\Gamma$ consists of a pair $(\Psi_P, \Psi_Q)$ of CSPs with the same set of variables such that for every $(P, Q) \in \Gamma$, $P(x_{i_1}, ..., x_{i_k})$ is a clause of $\Psi_P$ if and only if $Q(x_{i_1}, ..., x_{i_k})$ is a clause of $\Psi_Q$. The promise problem $\operatorname{PCSP}(\Gamma)$ is to distinguish, given $(\Psi_P, \Psi_Q)$, between the cases $\Psi_P$ is satisfiable and $\Psi_Q$ is unsatisfiable. Many natural problems including approximate graph and hypergraph coloring can be placed in this framework. This paper is motivated by the pursuit of understanding the computational complexity of Boolean promise CSPs. As our main result, we show that $\operatorname{PCSP}(\Gamma)$ exhibits a dichotomy (it is either polynomial time solvable or $\mathsf{NP}$-hard) when the relations in $\Gamma$ are symmetric and allow for negations of variables. We achieve our dichotomy theorem by extending the weak polymorphism framework of Austrin, Guruswami, and H\aa stad [FOCS '14] which itself is a generalization of the algebraic approach to study CSPs. In both the algorithm and hardness portions of our proof, we incorporate new ideas and techniques not utilized in the CSP case. Furthermore, we show that the computational complexity of any promise CSP (over arbitrary finite domains) is captured entirely by its weak polymorphisms, a feature known as Galois correspondence, as well as give necessary and sufficient conditions for the structure of this set of weak polymorphisms. Such insights call us to question the existence of a general dichotomy for Boolean PCSPs. • ### Efficiently list-decodable punctured Reed-Muller codes(1508.00603) April 2, 2017 cs.IT, math.IT, cs.CC The Reed-Muller (RM) code encoding $n$-variate degree-$d$ polynomials over ${\mathbb F}_q$ for $d < q$, with its evaluation on ${\mathbb F}_q^n$, has relative distance $1-d/q$ and can be list decoded from a $1-O(\sqrt{d/q})$ fraction of errors. 
In this work, for $d \ll q$, we give a length-efficient puncturing of such codes which (almost) retains the distance and list decodability properties of the Reed-Muller code, but has much better rate. Specifically, when $q =\Omega( d^2/\epsilon^2)$, we give an explicit rate $\Omega\left(\frac{\epsilon}{d!}\right)$ puncturing of Reed-Muller codes which has relative distance at least $(1-\epsilon)$ and efficient list decoding up to a $(1-\sqrt{\epsilon})$ error fraction. This almost matches the performance of random puncturings, which work with the weaker field size requirement $q= \Omega( d/\epsilon^2)$. We can also improve the field size requirement to the optimal (up to constant factors) $q =\Omega( d/\epsilon)$, at the expense of a worse list decoding radius of $1-\epsilon^{1/3}$ and rate $\Omega\left(\frac{\epsilon^2}{d!}\right)$. The first of the above trade-offs is obtained by substituting, for the variables, functions with carefully chosen pole orders from an algebraic function field; this leads to a puncturing for which the RM code is a subcode of a certain algebraic-geometric code (which is known to be efficiently list decodable). The second trade-off is obtained by concatenating this construction with a Reed-Solomon-based multiplication-friendly pair, and using the list recovery property of algebraic-geometric codes. • ### Repairing Reed-Solomon Codes (1509.04764) Aug. 9, 2016 cs.IT, math.IT, cs.CC We study the performance of Reed-Solomon (RS) codes for the \emph{exact repair problem} in distributed storage. Our main result is that, in some parameter regimes, Reed-Solomon codes are optimal regenerating codes among MDS codes with linear repair schemes. Moreover, we give a characterization of MDS codes with linear repair schemes which holds in any parameter regime, and which can be used to give non-trivial repair schemes for RS codes in other settings. More precisely, we show
# 1 Opening items

## 1.1 Module introduction

Trigonometric functions have a wide range of applications in physics; examples include the addition and resolution of vectors (such as forces), the description of simple harmonic motion and the formulation of quantum theories of the atom. Trigonometric functions are also important for solving certain differential equations, a topic which is considered in some detail elsewhere in FLAP. In Section 2 of this module we begin by looking at the measurement of angles in degrees and in radians. We then discuss some basic ideas about triangles, including Pythagoras’s theorem, and we use right–angled triangles to introduce the trigonometric ratios (sinθ, cosθ and tanθ) and the reciprocal trigonometric ratios (secθ, cosecθ and cotθ). In Section 3 we extend this discussion to include the trigonometric functions (sin(θ), cos(θ) and tan(θ)) and the reciprocal trigonometric functions (cosec(θ), sec(θ) and cot(θ)). These periodic functions generalize the corresponding ratios since the argument θ may take on values that are outside the range 0 to π/2. Subsection 3.2 discusses the related inverse trigonometric functions (arcsin(x), arccos(x) and arctan(x)), paying particular attention to the conditions needed to ensure they are defined. We end, in Section 4, by showing how the sides and angles of any triangle are related by the sine rule and the cosine rule and by listing some useful identities involving trigonometric functions.

Study comment Having read the introduction you may feel that you are already familiar with the material covered by this module and that you do not need to study it. If so, try the following Fast track questions. If not, proceed directly to Subsection 1.3 (Ready to study?).

## 1.2 Fast track questions

Study comment Can you answer the following Fast track questions? If you answer the questions successfully you need only glance through the module before looking at Subsection 5.1 (Module summary) and Subsection 5.2 (Achievements). If you are sure that you can meet each of these achievements, try Subsection 5.3 (Exit test). If you have difficulty with only one or two of the questions you should follow the guidance given in the answers and read the relevant parts of the module. However, if you have difficulty with more than two of the Exit questions you are strongly advised to study the whole module.

Question F1 State Pythagoras’s theorem.

Answer F1 Pythagoras’s theorem states that the square of the hypotenuse in a right–angled triangle is equal to the sum of the squares of the other two sides.

Question F2 Define the ratios sinθ, cosθ and tanθ in terms of the sides of the right–angled triangle. What are the exact values (i.e. don’t use your calculator) of sin(45°) and tan(π/4)?

Figure 10 A right–angled triangle with two equal-length sides, an isosceles triangle. Figure 8 The labelling of sides in a right–angled triangle.

Answer F2 With the notation of Figure 8 (Subsection 2.2), we have: $\sin\theta = \dfrac{\text{opposite}}{\text{hypotenuse}}\\ \cos\theta = \dfrac{\text{adjacent}}{\text{hypotenuse}}\\ \tan\theta = \dfrac{\text{opposite}}{\text{adjacent}}$ From Pythagoras’s theorem, the hypotenuse of the 45° triangle in Figure 10 (Subsection 2.2) has length $\sqrt{2\os}$, and therefore: $\sin(45°) = \dfrac{\text{opposite}}{\text{hypotenuse}} = \dfrac{1}{\sqrt{2\os}}$ An angle of π/4 radians is equivalent to 45° and therefore we have: $\tan\left(\dfrac{\pi}{4}\right) = 1$

Question F3 Write down the sine and cosine rules for a triangle.
Calculate the angles of a triangle which has sides of length 2 m, 2 m and 3 m.

Answer F3 The sine rule: $\dfrac{a}{\sin\hat {\rm A}} = \dfrac{b}{\sin\hat {\rm B}} = \dfrac{c}{\sin\hat {\rm C}}$ The cosine rule: $a^2 = b^2 + c^2 - 2bc\cos\hat{\rm A}$, where the notation is given in Figure 28 (Subsection 4.1). Suppose b = c = 2 m and a = 3 m. Then using the cosine rule we get: $3^2 = 2^2 + 2^2 - 2\times2\times2\cos\hat{\rm A}$ and so: $\cos\hat{\rm A} = -1/8$ There are an infinite number of solutions to this equation for A^, but only one is relevant to our triangle problem: A^ = arccos(−1/8) = 97.2° Since sides b and c of the triangle are equal, we have: B^ = C^ = ½ (180° − 97.2°) = 41.4° (A short numerical check of this calculation is given after Answer R3 below.)

Question F4 Sketch graphs of the functions cosec(θ) and sec(θ) for −π < θ < π, and cot(θ) for −3π/2 < θ < 3π/2.

Answer F4 The graphs for these functions are given in Figures 21, 22 and 23. $\sec(\theta) = \dfrac{1}{\cos(\theta)}\quad\theta\ne(2n+1)\pi/2$ $\cot(\theta) = \dfrac{\cos(\theta)}{\sin(\theta)}\quad\theta\ne n\pi$ Figure 23 Graph of cot(θ). Figure 22 Graph of sec(θ).

Question F5 Sketch graphs of the functions arccosec(x) and arccot(x) over the range −10 < x < 10.

Answer F5 The graphs for these functions are given in Figures 26a and 26c.

## 1.3 Ready to study?

Study comment In order to study this module you will need to understand the following terms: constant, decimal places, function, power, reciprocal and variable. An understanding of the terms codomain, domain and inverse function would also be useful but is not essential. You will need to be able to solve a pair of simultaneous equations, manipulate arithmetic and algebraic expressions – including those involving squares, square roots, brackets and ratios – and to plot and interpret graphs. If you are uncertain about any of these terms, you should consult the Glossary, which will also indicate where in FLAP they are developed. The following questions will help you to decide whether you need to review some of these topics before embarking on this module.

Question R1 Given that x = a/h, y = b/h, and $a^2 + b^2 = h^2$, rewrite the following expressions in terms of a, b and h, simplifying each as far as possible: (a) 1/x, (b) x/y, (c) $x^2 + y^2$.

Answer R1 (a) 1/x = h/a, (b) x/y = (a/h)/(b/h) = a/b, (c) $x^2 + y^2 = a^2/h^2 + b^2/h^2 = (a^2 + b^2)/h^2 = 1$. (If you had difficulty with any part of this question, consult algebra and arithmetic in the Glossary.)

Question R2 Solve the following simultaneous equations: 2x + 5y = 6 and 3x − 10y = 9

Answer R2 Multiplying the first of the given equations by 2, the pair becomes 4x + 10y = 12 and 3x − 10y = 9. Adding the corresponding sides of the two equations in order to eliminate y, we find 7x = 21, so x = 3. Substituting x = 3 into either of the original equations shows that y = 0. (If you had difficulty with any part of this question, consult module MATH 1.4.)

Question R3 What do we mean when we say that the position of an object is a function of time?

Answer R3 A function is a rule that assigns a single value from a set called the codomain to each value from a set called the domain. Thus, saying that the position of an object is a function of time implies that at each instant of time the object has one and only one position. (If you had difficulty with any part of this question, consult functions in the Glossary.)
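The cosine-rule calculation in Answer F3 can be checked with a few lines of Python (this check is ours and is not part of FLAP):

```python
# Numerical check of Answer F3: triangle with sides a = 3 m and b = c = 2 m.
import math

a, b, c = 3.0, 2.0, 2.0
cos_A = (b**2 + c**2 - a**2) / (2 * b * c)      # cosine rule solved for cos(A)
A = math.degrees(math.acos(cos_A))              # arccos(-1/8)
B = C = (180.0 - A) / 2                         # isosceles triangle: B = C

print(f"cos A = {cos_A}")                       # -0.125
print(f"A = {A:.1f} deg, B = C = {B:.1f} deg")  # 97.2 deg and 41.4 deg
```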
# 2 Triangles and trigonometric ratios ## 2.1 Angular measure: degrees and radians When two straight lines pass through a common point, the angle between the lines provides a measure of the inclination of one line with respect to the other or, equivalently, of the amount one line must be rotated about the common point in order to make it coincide with the other line. The two units commonly used to measure angles are degrees and radians (discussed below) and we will use both throughout this module. Greek letters, α (alpha), β (beta), γ (gamma), ... θ (theta), ϕ (phi) ... are often used to represent the values of angles, but this is not invariably the case. A degree is defined as the unit of angular measure corresponding to 1/360th of a circle and is written as 1°. In other words, a rotation through 360° is a complete revolution, and an object rotated through 360° about a fixed point is returned to its original
the open and closed system is that the total substrate concentration, $s_T$, is a conserved quantity when the reaction is closed. Therefore, (\ref{MA}) with $k_0=0$ is equipped with the additional conservation law $s_T=s+c+p$, whereas with $k_0>0$ one has only one conservation law,~(\ref{econ}). It is well known that further simplification of (\ref{mmc}) is possible via a QSS reduction. The most common reduction is the sQSSA, in which (\ref{mmc}) is approximated with a differential-algebraic equation consisting of the algebraic equation obtained by setting the right-hand side of equation~(\ref{MMclosed_cdot}) equal to zero (``$\dot{c}=0$'') along with the differential equation~(\ref{MMclosed_sdot}). This reduces to the single differential equation \begin{subequations}\label{csQSSA} \begin{align} \dot{s} &= - \cfrac{k_2e_Ts}{K_M+s}, \quad K_M :=\cfrac{k_{-1}+k_2}{k_1},\\ c &= \cfrac{e_Ts}{K_M+s},\label{eq:c_of_s} \end{align} \end{subequations} where $K_M$ is the Michaelis constant. The legitimacy of the sQSSA (\ref{csQSSA}) for the closed Michaelis--Menten reaction mechanism (\ref{mmc}) is well-understood. Following an early effort by Briggs and Haldane~\cite{BH1925}, Heineken, Tsuchiya, and Aris \cite{Heineken1967} were perhaps the first to prove with some degree of rigor that (\ref{csQSSA}) is valid provided $e_T \ll s_0$. The qualifier, $e_T\ll s_0$, was justified via singular perturbation analysis. Defining $\bar{s}:=s/s_0$, $\bar{c}:=c/e_T$, and $T:=k_1e_Tt$ generates the singularly perturbed dimensionless form of (\ref{mmc}) \begin{subequations} \begin{align} \bar{s}' &= -\bar{s}+\bar{c}(\bar{s}+\kappa-\lambda)\\ \mu\bar{c}'&= \bar{s}-\bar{c}(\bar{s}+\kappa), \end{align} \end{subequations} where prime denotes differentiation with respect to $T$, $\lambda:=k_2/k_1s_0$, $\kappa:=K_M/s_0$, and $\mu:=e_T/s_0$. Consequently, the sQSSA (\ref{csQSSA}) is justified via Tikhonov's theorem~\cite{Tikhonov1952}. Throughout the years, refinements and variations of the condition $\mu\ll 1$ have been made. Perhaps most famously, Segel \cite{Segel1988} and Segel and Slemrod \cite{Segel1989} extended the results of Heineken et al.~\cite{Heineken1967} and demonstrated that (\ref{csQSSA}) is valid whenever $e_T\ll K_M +s_0$. Embedded in Segel's estimate is the more restrictive condition, $e_T\ll K_M$, which is independent of the initial substrate concentration, and is nowadays the almost universally accepted qualifier that justifies (\ref{csQSSA}) \cite{EILERTSEN2020}. While the QSS reductions of the closed Michaelis--Menten reaction are well-studied, analyses pertaining to the validity of the QSSA in open reaction environments are somewhat sparse \cite{Othmer2020,GAO2011,Stoleriu2004,Thomas2011}. The question we address is therefore: when is further reduction of (\ref{mmo}) possible? The trajectories illustrated in Figure~\ref{fig:traj} show that there are certainly conditions under which the QSSA estimate of the enzyme-substrate complex, given by equation~(\ref{eq:c_of_s}) which applies equally to the open system, is close to a slow invariant manifold, i.e.\ an invariant manifold (here, a trajectory) that attracts nearby trajectories and along which the equilibrium point is eventually approached from almost all initial conditions~\cite{Fraser1988,GK2003,RF91a}. 
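As a quick numerical illustration of this behaviour (this check is ours and not part of the original analysis; the parameter values are those quoted for panel (a) of Figure~\ref{fig:traj}), the following Python sketch integrates the open mass-action equations and compares the complex concentration with the QSSA expression~(\ref{eq:c_of_s}):
\begin{verbatim}
# Illustrative check: with k1 = 1, eT = 1, k_{-1} = 1, k2 = 3, k0 = 2.5,
# trajectories of the open system approach the QSSA curve c = eT*s/(KM + s)
# and the equilibrium point (s, c) = (20, 5/6) given by Lemma 1.
import numpy as np
from scipy.integrate import solve_ivp

k0, k1, km1, k2, eT = 2.5, 1.0, 1.0, 3.0, 1.0
KM = (km1 + k2) / k1

def mmo(t, y):
    s, c = y
    dsdt = k0 - k1 * (eT - c) * s + km1 * c
    dcdt = k1 * (eT - c) * s - (km1 + k2) * c
    return [dsdt, dcdt]

sol = solve_ivp(mmo, (0.0, 500.0), [0.0, 0.0], dense_output=True, rtol=1e-9)
t = np.linspace(5.0, 500.0, 400)          # skip the initial fast transient
s, c = sol.sol(t)
print(np.max(np.abs(c - eT * s / (KM + s))))  # small compared with eT
print(s[-1], c[-1])                            # close to (20, 0.8333)
\end{verbatim}
After the fast transient the solution hugs the curve defined by~(\ref{eq:c_of_s}) and eventually settles at the equilibrium point, as in Figure~\ref{fig:traj}(a).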
\begin{figure} \centering \includegraphics[scale=0.8]{pset1_traj.pdf} \includegraphics[scale=0.8]{pset2_traj.pdf} \caption{Trajectories of the open Michaelis--Menten equations~(\ref{mmo}) for (a) $k_1=1$, $e_T=1$, $k_{-1}=1$ $k_2=3$ and $k_0=2.5$ (in arbitrary units), i.e.\ under conditions where there is an equilibrium point in the first quadrant, marked by a dot; and (b) with parameters as in (a), except $k_0=3.5$, under which conditions there is not an equilibrium point in the first quadrant, and the $s$ component of the solution grows without bound. The arrows show the direction of the flow. The dashed curve in both figures is defined by the QSSA equation~(\ref{eq:c_of_s}). \label{fig:traj}} \end{figure} We thus ask under what condition is the open sQSSA \begin{equation}\label{osQSSA} \dot{s} = k_0 - \cfrac{k_2e_Ts}{K_M+s}, \end{equation} permissible? At first glance, it seems rather intuitive to postulate that the open sQSSA~(\ref{osQSSA}) is valid under the same condition that legitimizes the closed sQSSA: $e_T \ll K_M$. In fact, following the earlier work of Segel and Slemrod \cite{Segel1989}, Stoleriu et~al.~\cite{Stoleriu2004} suggest that (\ref{csQSSA}) is applicable whenever \begin{equation}\label{STOLcon} e_T \ll s_0 + K_M\bigg(\cfrac{1}{1-\alpha}\bigg) + \cfrac{k_0}{k_2}, \quad \alpha:=k_0/k_2e_T, \end{equation} holds. The inequality (\ref{STOLcon}) is less restrictive than the Segel and Slemrod condition, since (\ref{STOLcon}) is satisfied as long as $k_0$ is sufficiently close to $k_2e_T$ (Implicitly, the authors assume that $\alpha<1$ in equation~\eqref{STOLcon}). The approach used to derive (\ref{STOLcon}) was based on the traditional method of comparing time scales: a singular perturbation parameter was recovered through scaling analysis of the mass action equations (\ref{mmo}). However, it is possible to derive erroneous conclusions regarding the validity of the QSSA, even when great care is taken in scaling and non-dimensionalization methodology (see, for example \cite{Goeke2012}, Section 4). It thus seems prudent to reexamine the basis for the sQSSA in the open Michaelis--Menten mechanism using tools of singular perturbation theory that go beyond scaling arguments. \section{The Quasi-Steady-State Approximation: Justification from singular perturbation theory}\label{GSPT} In this section we derive the QSSA directly from Fenichel theory. Details covering projection onto the slow manifold can be found in Appendix~\ref{AppendA}. \subsection{The critical manifolds: Tikhonov--Fenichel parameter values}\label{TFPVsubs} To apply Fenichel theory to the open Michaelis--Menten reaction mechanism, we need a curve of non-isolated equilibrium solutions to form in the first quadrant of $\mathbb{R}^2$; see \cite{Goeke2015}. The following Lemma addresses the conditions that ensure the existence of a critical manifold, and records some general qualitative features. \begin{lemma}\label{basicfacts} \begin{enumerate}[(a)] \item System \eqref{mmo} admits an infinite number of stationary points if and only if one of the following conditions holds. \begin{itemize} \item $k_0=k_1=0$; \item $k_0=e_T=0$; \item $k_0=k_2=0$. \end{itemize} \item If the number of stationary points in the plane is finite then it is equal to zero or one. There exists one stationary point if and only if the genericity conditions \begin{equation}\label{genercond} k_1\not=0,\quad k_2\not=0 \text{ and }k_2e_T-k_0\not=0 \end{equation} are satisfied. 
In that case the stationary point is equal to \begin{equation}\label{stapo} P_0:=\left(\widehat s,\,\widehat c\right)=\left(\frac{(k_{-1}+k_2)k_0}{k_1(k_2e_T-k_0)},\frac{k_0}{k_2}\right). \end{equation} This point lies in the first quadrant if and only if \begin{equation} k_2e_T-k_0>0, \label{eq:enzcapcondition} \end{equation} in which case it is an attracting node. The stationary point lies in the second quadrant if and only if $k_2e_T-k_0<0$, in which case it is a saddle point. \item The first quadrant is positively invariant for system \eqref{mmo}, and solutions starting in the first quadrant exist for all $t\geq 0$. When $k_{-1}+k_2>0$ then every solution that starts in the first quadrant enters the (positively invariant) subset defined by $c\leq e_T$ at some positive time. \item System \eqref{mmo} admits no nonconstant closed trajectory. \end{enumerate} \end{lemma} \begin{proof}[Sketch of proof] Parts (a) and (b) are straightforward, as is the first statement in part (c). For the second statement note $\dot s+\dot c\leq k_0$, hence solutions starting in the first quadrant remain in a compact set for all finite $t>0$. Finally, when $c\geq e_T$ then \eqref{mmo} shows that $\dot c\leq-(k_{-1}+k_2)e_T$, hence the second statement of part (c) holds. We turn to the proof of part (d): If there exists a nonconstant closed trajectory then its interior contains a stationary point. Given a degenerate situation from part (a), the variety of stationary points is unbounded, hence would intersect a closed trajectory if it intersects its interior; a contradiction. This leaves the setting with an isolated stationary point, necessarily of index one, which is only possible when the stationary point \eqref{stapo} lies in the first quadrant. By part (c) the closed trajectory must be contained in the strip defined by $c\leq e_T$. But in this strip the divergence of the vector field equals $-\left( k_1(e_T-c)+k_1s+k_{-1}+k_2\right)<0$, and no closed nonconstant trajectory can exist by Bendixson's criterion. \end{proof} \begin{remark} The case $k_0>k_2e_T$, in which the inflow exceeds the enzyme's clearance capacity, is not physiologically irrelevant since the gene coding for a particular enzyme may suffer a mutation that results in an enzyme with reduced catalytic activity, for example. As a rule, the accumulation of a metabolite will eventually become toxic (or possibly oncogenic) to the cell, and the rate at which S accumulates is therefore of interest. Other situations, e.g.\ the existence of an alternative but less efficient pathway for eliminating S, or the permeation of S through the cell membrane, would require more elaborate models for their study. Nevertheless, the model under study here would yield useful initial insights into the cellular effects of a mutation to an enzyme. \end{remark} Lemma (\ref{basicfacts}) ensures the existence of a critical manifold comprised of equilibrium points whenever $k_0$ vanishes along with either $e_T$, $k_1$ or $k_2$ in the singular limit. We note that in the context of the closed reaction (\ref{mmc}), parameters with $e_T=0$ (with all remaining parameters $>0$), respectively to $k_1=0$ (remaining parameters $>0$), respectively to $k_2=0$ (remaining parameters $>0$), are TFPV. Generally, a TFPV $[\widehat k_0\;\widehat e_T\;\widehat k_1\;\widehat k_2\;\widehat k_{-1}]$ is characterized by the property that a generic small perturbation results in the formation of a
human analysis of the mock spectra with the true evaluation discussed above for $\tau_{\rm LL} \ge 2$. These results suggest that $\mlox$\ is overestimated by $\sim20$\% in the lowest of three redshift bins, and by $\sim 40\%$ in the highest redshift bin. By inspecting our fits to the mock spectra and overplotting the known LLS positions (see Fig.\ \ref{f:spectrum} for an example), we determined that the main systematic causing this excess is the `blending bias', as described by \citet{Prochaska_2010}. This refers to the effect from nearby, partial LLSs (with $\tau_{\rm LL} < 2$) that tend to be merged together when identified by their Lyman break alone, and thus are incorrectly identified as a single $\tau_{\rm LL} > 2$ LLS. This merging occurs due to confusion in identifying the redshift of a LLS: often, all the Lyman series lines have a relatively low equivalent width, and so the only constraint on the LLS redshift is the LL break. The position of the break can be affected by overlapping IGM absorption, and by the velocity structure of the LLS, which is difficult to measure accurately using low-resolution spectra like the GMOS data employed here. \begin{figure} \begin{center} \includegraphics[width=0.92\columnwidth]{fig_mocks.pdf} \caption{{\label{f:corr} Comparison of the observed and true $\mlox$\ from the analysis of mock spectra in three redshift bins. Red points show the true value, after summing all components with separations less than 500\,km s$^{-1}$\ into a single LLS (see text). Error bars are $1\sigma$ Poisson uncertainties given the number of systems in each bin. The $\mlox$\ values inferred from the LLSs identified by NC and JO are shown by green and orange points. These values are consistent within the 1$\sigma$ uncertainties plotted, suggesting that any subjective criteria used to identify LLSs that differ between authors do not significantly impact our analysis. The blue points show the true $\mlox$\ value, where we introduce a further merging of systems to approximate the tendency to merge nearby weaker systems into a single stronger system when recovering LLSs in the mocks (see text). The lowest redshift bin and the uncertainties in all bins more closely match the inferred $\mlox$\ from both authors, which suggests this merging is an important systematic effect.% }} \end{center} \end{figure} To further illustrate the importance of this systematic effect, we apply a simple merging scheme to the true list of LLSs, and compare $\mlox$\ for this merged list to our recovered $\mlox$. To do this merging, we step through each LLS with $\tau_{\rm LL} > \tau_1$ in the true list (described above), from lowest to highest redshift, and test whether there is another $\tau_{\rm LL} > \tau_1$ system within 9000\,km s$^{-1}$ (see below and Fig.~\ref{f:dz_LLS_one_sightline} for the justification of this value). If one is found, we merge the two systems by adding their column densities, using the redshift of the highest redshift system for the resulting single combined system. This process is repeated, with the new merged system replacing the previous two separate systems, and continued until every system with $\tau_{\rm LL}> \tau_1$ has been tested in a sightline. $\tau_1=1$ corresponds to the optical depth of a LLS which produces an appreciable Lyman break, detectable in the GMOS spectra.
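For concreteness, the following Python sketch shows one way to implement the merging scheme just described. It is illustrative only; the list-of-tuples data layout and the function name are assumptions of this sketch, not the survey's actual analysis code.
\begin{verbatim}
# Sketch of the LLS merging scheme described above.  Each system is a
# (z, N_HI) pair with tau_LL above the threshold tau_1.
C_KMS = 299792.458   # speed of light in km/s

def merge_lls(systems, dv_max=9000.0):
    """Merge neighbouring systems separated by less than dv_max km/s,
    summing column densities and keeping the higher redshift."""
    systems = sorted(systems)                 # lowest to highest redshift
    i = 0
    while i < len(systems) - 1:
        (z1, n1), (z2, n2) = systems[i], systems[i + 1]
        dv = C_KMS * (z2 - z1) / (1.0 + z1)   # velocity separation
        if dv < dv_max:
            systems[i] = (z2, n1 + n2)        # replace the pair by one system
            del systems[i + 1]                # and re-test against the next
        else:
            i += 1
    return systems
\end{verbatim}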
Using our mocks, we also explore the effect of setting $\tau_1 = 0.5$ or 1.5 instead and find that, while it has a negligible effect for the highest and lowest redshift bins in Fig.\ \ref{f:corr}, it can introduce an uncertainty of up to 8\% in $\mlox$\ for the central redshift bin. This is significantly smaller than the final statistical uncertainty on $\mlox$\ for this bin, but it is large enough that we include its contribution in the final uncertainty budget. To estimate the merging scale, we measured the distribution of redshift differences between LLSs that we observe towards sightlines showing more than one LLS. Fig.\ \ref{f:dz_LLS_one_sightline} shows these distributions for both authors who identified LLSs. Without the blending bias, one expects a roughly flat distribution at small $\delta z$ that then drops with increasing $\delta z$ because the presence of one strong LLS precludes the detection of any other at lower $z$. The low number of pairs with $\delta z < 0.1$, therefore, must be related to the blending bias. We adopt the peak in these distributions as the smallest scale at which we can reliably separate nearby LLSs. The peak for both authors is at $\delta z \sim 0.17$, equivalent to 9270\,km s$^{-1}$\ at $z=4.5$, and we therefore adopt a merging scale of 9000\,km s$^{-1}$. Using a merging scale of 6000 or 12000\,km s$^{-1}$\ instead of 9000\,km s$^{-1}$\ changes the inferred $\mlox$\ by less than 3\%. Finally, we define a redshift path for this merged list in a similar way for the mock spectra, by truncating the redshift path in a sightline when it encounters a strong $\tau_{\rm LL}>2$ LLS. \begin{figure} \begin{center} \includegraphics[width=0.75\columnwidth]{LLS_z_sep1} \caption{{\label{f:dz_LLS_one_sightline} Distribution of absolute redshift differences, $\delta z$, between pairs of LLSs in the same sightline for the GGG spectra. These include systems with $\tau_{\rm LL} < 2$, in addition to $\tau_{\rm LL} > 2$ systems. Results from two authors, NC and JO, are shown. When measuring $\mlox$\ directly from the mock line catalogue we adopt a merging scale for neighbouring LLSs of $9000\,$km s$^{-1}$\ ($\delta z \sim 0.165$ at $z=4.5$, shown as the vertical line in the plot), corresponding to the peaks of the histograms.% }} \end{center} \end{figure} With this new, merged list of LLSs in the mock spectra, we then make a new measurement of $\mlox$, shown by the blue points in Fig.\ \ref{f:corr}. While these blue points do not precisely match the directly measured $\mlox$\ values, they do show similar systematic offsets -- the first and last bins have an increase in $\mlox$\ and there is little change in the central bin. We consider this merging effect, which truncates the search path and merges weaker LLSs into $\tau_{\rm LL} > 2$ systems, to be the main systematic error to correct in our analysis. In the following section we apply the correction derived from this mock spectrum analysis to the real GGG sightlines, and report the final, corrected $\mlox$\ values. \subsection{$\mlox$\ measurements} Table \ref{tab:analy_lls} lists the final set of $\tau_{\rm LL} \ge 2$ LLSs used for the statistical analysis of $\mlox$. We have measured $\mlox$\ in three redshift bins, $z=3.75$--4.4, 4.4--4.7, 4.7--5.4, chosen to have roughly equal numbers of LLSs. The raw $\mrawlox$\ measurements from analysis by NC and JO are shown in Figure~\ref{f:lX_corr}. The two sets of $\mrawlox$\ measurements are in excellent agreement; the largest offset is only $\approx$4\%. 
We have then corrected these $\mrawlox$\ values based on the mock spectrum analysis described in Section \ref{s:true_corr}. Table \ref{tab:lzlX} summarizes the $\mlox$\ measurements including the corresponding values for $\mloz$. We have also estimated $\ell(z)$ across the entire survey ($z \approx 4.4$) to be $\ell(z) = 2.6 \pm 0.4$. \input{tab_lls_analy_sub.tex} \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{lX_corr1} \caption{{\label{f:lX_corr} $\mlox$\ before and after applying the correction factors inferred from mock spectra. The raw, uncorrected $\mrawlox$\ values are shown in blue, offset slightly in redshift for clarity. The left panel shows the results from one author (NC), and the right panel for another (JO). The corrected (true) $\mlox$\ values are shown in black. Error bars represent the $1\sigma$ Poisson uncertainty. The corrected results for JO and NC are in good agreement and the largest discrepancy between the corrected values for each author is in the lowest redshift bin (a 4\% difference). This discrepancy is much smaller than the final statistical error on $\mlox$. }} \end{center} \end{figure} \input{tab_lzlX.tex} Previous work \citep[e.g.][]{Ribaudo11} has shown that $\mloz$\ measurements are well described by a power-law of the form $\ell(z) = \ell_* [(1+z)/(1+z_*)]^\alpha$ where $z_*$ is an arbitrary pivot redshift. We compare this model against the combination of our GGG results and the {\it HST} datasets of \cite{Ribaudo11} and \cite{OMeara13}, the MagE survey by \cite{Fumagalli13}, and the SDSS results from \cite{Prochaska_2010}\footnote{All of the LLS lists, sensitivity functions, and code to calculate $\mloz$\ are available in the {\sc pyigm} repository at \urlstyle{rm}\url{https://github.com/pyigm/pyigm}\,.}. We adopt the maximum likelihood approach to fitting the LLS surveys, described
square vertices in $S$ (this choice is arbitrary). Observe that $w(S) \leq w(S_{\min})$ \item[2.] For each edge which is incident to a square vertex in $S$, assign a factor of $\frac{1}{\sqrt[3]{m}}$ to its circle endpoint and nothing to this square. \item[3.] For each edge which is incident to a square vertex in $(U_{\alpha} \cup V_{\alpha}) \setminus S$, assign a factor of $\frac{1}{\sqrt{n}}$ to the square vertex and nothing to the circle vertex. \item[4.] For all other edges, assign a factor of $\frac{1}{\sqrt[4]{n}}$ to its square endpoint and a factor of $\frac{1}{\sqrt[6]{m}}$ to its circle endpoint. \end{enumerate} Now each square vertex which is not in $S$ is assigned a factor of $\frac{1}{\sqrt{n}}$ and since $\alpha$ is not a spider, all circle vertices are assigned a factor of $\frac{1}{\sqrt{m}}$ or smaller. \end{enumerate} We now make this argument formal. Let ${\mathcal C}_{\alpha}$ and ${\mathcal S}_{\alpha}$ be the set of circle vertices and the set of square vertices in $\alpha$ respectively. We have $ n^{\frac{w(V(\alpha)) - w(S_{\min})}{2}} \leq n^{0.5|{\mathcal S}_{\alpha} \setminus S_{min}| + (0.75 - \frac{\varepsilon}{2})|{\mathcal C}_{\alpha} \setminus S_{min}|}$. So, it suffices to prove that \[|E(\alpha)| - |{\mathcal S}_{\alpha} \setminus S_{min}| - (1.5 - \varepsilon)|{\mathcal C}_{\alpha} \setminus S_{min}| \ge \Omega(\varepsilon |E(\alpha)|)\] Let $Q = U_{\alpha} \cap V_{\alpha}, P = (U_{\alpha} \cup V_{\alpha}) \setminus Q$ and let $P'$ be the set of vertices of $P$ that have degree $1$ and are not in $S_{min}$. Let $E_1$ be the set of edges incident to $P'$ and let $E_2 = E(\alpha) \setminus E_1$. For each vertex $\square{i}$ (resp. $\circle{u}$), let the number of edges of $E_2$ incident to it be $\deg'(\square{i})$ (resp. $\deg'(\circle{u})$). Since $\alpha$ is bipartite, we have that $|E_2| = \sum_{\square{i} \in {\mathcal S}_{\alpha}} \deg'(\square{i}) = \sum_{\circle{u} \in {\mathcal C}_{\alpha}} \deg'(\circle{u})$. We get that \[|E(\alpha)| = |E_1| + |E_2| = |P'| + \frac{1}{2}(\sum_{\square{i} \in {\mathcal S}_{\alpha}} \deg'(\square{i}) + \sum_{\circle{u} \in {\mathcal C}_{\alpha}} \deg'(\circle{u}))\] We also have $|S_{\alpha} \setminus S_{min}| \le |P'| + |{\mathcal S}_{\alpha} \cap W_{\alpha}| + |{\mathcal S}_{\alpha} \cap (P \setminus P')| \le |P'| + \frac{1}{2} \sum_{\square{i} \in {\mathcal S}_{\alpha}} \deg'(\square{i})$ because each square vertex outside $P' \cup Q$ has degree at least $2$ and is not incident to any edge in $E_1$. So, it suffices to prove \[\frac{1}{2}\sum_{\circle{u} \in {\mathcal C}_{\alpha}} \deg'(\circle{u}) - (1.5 - \varepsilon)|{\mathcal C}_{\alpha} \setminus S_{min}| \ge \Omega(\varepsilon |E(\alpha)|)\] Now, observe that each $\circle{u} \in {\mathcal C}_{\alpha}$ is incident to at most two edges in $E_1$. This is because if it were adjacent to at least $3$ edges in $E_1$, then either $\circle{u}$ is adjacent to at least two vertices of degree $1$ in $U_{\alpha}$ or $\circle{u}$ is adjacent to at least two vertices of degree $1$ in $V_{\alpha}$. However, this cannot happen since $\alpha$ is not a spider. This implies that $\deg'(\circle{u}) \ge \deg(\circle{u}) - 2$. Note moreover that if $\circle{u} \in {\mathcal C}_{\alpha} \setminus S_{min}$, we have that $\deg'(\circle{u}) \ge \deg(\circle{u}) - 1$. 
This is because, building on the preceding argument, $\deg'(\circle{u}) = \deg(\circle{u}) - 2$ can only happen if there exist $\square{i} \in U_{\alpha}, \square{j} \in V_{\alpha}$ such that $(\square{i}, \circle{u}), (\square{j}, \circle{u}) \in E_1$. But then, note that we have $\square{i}, \square{j} \not\in S_{min}$ by definition of $P'$ and also, $\circle{u} \not\in S_{min}$ by assumption. This means that there is a path from $U_{\alpha}$ to $V_{\alpha}$ which does not pass through $S_{min}$, which is a contradiction. Finally, we set $\varepsilon$ small enough such that the following inequalities are true, both of which follow from the fact that $\deg(\circle{u}) \ge 4$ for all $\circle{u} \in {\mathcal C}_{\alpha}$.
\begin{enumerate}
\item For any $\circle{u} \in {\mathcal C}_{\alpha} \cap S_{min}$, we have $\frac{\deg(\circle{u}) - 2}{2} \ge \frac{\varepsilon}{10}\deg(\circle{u})$.
\item For any $\circle{u} \in {\mathcal C}_{\alpha} \setminus S_{min}$, we have $\frac{\deg(\circle{u}) - 1}{2} - 1.5 + \varepsilon \ge \frac{\varepsilon}{10}\deg(\circle{u})$.
\end{enumerate}
Using this, we get
\begin{align*}
\frac{1}{2}\sum_{\circle{u} \in {\mathcal C}_{\alpha}} \deg'(\circle{u})& - (1.5 - \varepsilon)|{\mathcal C}_{\alpha} \setminus S_{min}| \\
&\ge \sum_{\circle{u} \in {\mathcal C}_{\alpha} \cap S_{min}} \frac{\deg(\circle{u}) - 2}{2} + \sum_{\circle{u} \in {\mathcal C}_{\alpha} \setminus S_{min}} \frac{\deg(\circle{u}) - 1}{2} - (1.5 - \varepsilon)|{\mathcal C}_{\alpha} \setminus S_{min}|\\
&\ge \sum_{\circle{u} \in {\mathcal C}_{\alpha} \cap S_{min}} \frac{\varepsilon}{10}\deg(\circle{u}) + \sum_{\circle{u} \in {\mathcal C}_{\alpha} \setminus S_{min}} \left(\frac{\deg(\circle{u}) - 1}{2} - 1.5 + \varepsilon\right)\\
&\ge \sum_{\circle{u} \in {\mathcal C}_{\alpha} \cap S_{min}} \frac{\varepsilon}{10}\deg(\circle{u}) + \sum_{\circle{u} \in {\mathcal C}_{\alpha} \setminus S_{min}} \frac{\varepsilon}{10}\deg(\circle{u})\\
&= \sum_{\circle{u} \in {\mathcal C}_{\alpha}} \frac{\varepsilon}{10}\deg(\circle{u}) = \Omega(\varepsilon|E(\alpha)|)
\end{align*}
\end{proof}
Since ${\mathcal L}_{bool} \subseteq {\mathcal L}$, the above result extends to non-trivial non spider shapes in ${\mathcal L}_{bool}$ too.
\begin{corollary}
If $\alpha \in {\mathcal L}_{bool}$ is not a trivial shape and not a spider, then
\[\frac{1}{n^{|E(\alpha)|/2}} n^{\frac{w(V(\alpha)) - w(S_{\min})}{2}} \le \frac{1}{n^{\Omega(\varepsilon |E(\alpha)|)}}\]
\end{corollary}
\begin{corollary}\label{cor:non_spider_killing}
If $\alpha \in {\mathcal L}$ is not a trivial shape and not a spider, then w.h.p.
\[\frac{1}{n^{|E(\alpha)|/2}}\norm{M_{\alpha}} \le \frac{1}{n^{\Omega(\varepsilon |E(\alpha)|)}}\]
\end{corollary}
\begin{proof}
Using the norm bounds in~\cref{lem:gaussian-norm-bounds}, we have
{\footnotesize\begin{align*}
\norm{M_\alpha} \leq 2\cdot\left(\abs{V(\alpha)} \cdot (1+\abs{E(\alpha)}) \cdot \log(n)\right)^{C\cdot (\abs{V_{rel}(\alpha)} + \abs{E(\alpha)})} \cdot n^{\frac{w(V(\alpha)) - w(S_{\min}) + w(W_{iso})}{2}}
\end{align*}}
We have $W_{iso} = \emptyset$. Observe that since there are no degree $0$ vertices in $V_{rel}(\alpha)$, we have that $|V_{rel}(\alpha)| \le 2|E(\alpha)|$ and since we also have $|V(\alpha)|\cdot (1+\abs{E(\alpha)})\cdot \log n \le n^{O(\tau)}$, the factor $2\cdot(\abs{V(\alpha)} \cdot (1+\abs{E(\alpha)}) \cdot \log(n))^{C\cdot (\abs{V_{rel}(\alpha)} + \abs{E(\alpha)})}$ can be absorbed into $\frac{1}{n^{\Omega(\varepsilon |E(\alpha)|)}}$. The result follows from~\cref{lem:charging}.
\end{proof} This says that nontrivial non-spider shapes have $\operatorname{o}_n(1)$ norm (ignoring the extra factor $\eta$ for the moment). We now demonstrate how to use this norm bound to control the total norm of all non-spiders in a block of ${\mathcal M}$,~\cref{cor:non-spider-sum}. We will first need a couple propositions which will also be of use to us later after we kill the spiders. \begin{proposition}\label{prop:edge-shape-count} The number of proper shapes with at most $L$ vertices and exactly $k$ edges is at most $L^{8(k+1)}$. \end{proposition} \begin{proof} The following process captures all shapes (though many will be constructed multiple times): \begin{itemize} \item Choose the number of square and circle variables in each of the four sets $U \cap V, U \setminus (U \cap V), V \setminus (U \cap V), W$. This contributes a factor of $L^{8}$. \item Place each edge between two of the vertices. This contributes a factor of $L^{2 k}$. \end{itemize} \end{proof} \begin{proposition}\label{prop:coefficient-bound} $\abs{\lambda_\alpha} \leq \eta^{\abs{U_\alpha} + \abs{V_\alpha}} \cdot \frac{\abs{E(\alpha)}^{3\cdot \abs{E(\alpha)}}}{n^{\abs{E(\alpha)}/2}}$ where we assume by convention that $0^0 = 1$. \end{proposition} \begin{proof} \noindent\textbf{(Gaussian setting)} Recall that the coefficients $\lambda_\alpha$ are either zero or are defined by the formula \[\lambda_\alpha = \eta^{\abs{U_\alpha} + \abs{V_\alpha}}\cdot \left( \prod_{\circle{u}\in V(\alpha)} h_{\deg(\circle{u})}(1)\right) \cdot \frac{1}{ n^{\abs{E(\alpha)}/2}} \cdot \frac{1}{\alpha!}\] The sequence $h_k(1)$ satisfies the recurrence $h_0(1) = h_1(1) = 1, h_{k + 1}(1) = h_k(1) - kh_{k - 1}(1)$. We can prove by induction that $\abs{h_k(1)} \le k^k$ and hence, \[\prod_{\circle{u}\in V(\alpha)} \abs{h_{\deg(\circle{u})}(1)} \le \prod_{\circle{u}\in V(\alpha)} (\deg(\circle{u}))^{\deg(\circle{u})} \le \abs{E(\alpha)}^{\abs{E(\alpha)}}.\] \noindent\textbf{(Boolean setting)} In the boolean setting the coefficients $\lambda_\alpha$ are defined by \[\lambda_\alpha = \eta^{\abs{U_\alpha} + \abs{V_\alpha}} \cdot \left(\prod_{\circle{u} \in V(\alpha)} e(\deg(\circle{u})) \right)\] Using~\cref{cor:bound_on_coeff_e_k}, we have that $\abs{e(k)} \le k^{3k} \cdot n^{-k/2}$. Thus, \[ \abs{\lambda_\alpha} = \eta^{\abs{U_\alpha} + \abs{V_\alpha}} \cdot \prod_{\circle{u} \in V(\alpha)} \abs{e(\deg(\circle{u}))} \le \eta^{\abs{U_\alpha} + \abs{V_\alpha}} \cdot \frac{\abs{E(\alpha)}^{3\abs{E(\alpha)}}}{n^{\abs{E(\alpha)}/2}}. \] \end{proof} \begin{corollary}\label{cor:non-spider-sum} For $k, l \in \{0, 1, \dots , D/2\}$, let ${\mathcal B}_{k,l} \subseteq {\mathcal L}$ denote the set of nontrivial, non-spiders $\alpha \in {\mathcal L}$ on the $(k,l)$ block i.e. $\abs{U_\alpha} = k, \abs{V_\alpha} = l$. 
The total norm of the non-spiders in ${\mathcal B}_{k, l}$ satisfies \[\sum_{\alpha \in {\mathcal B}_{k, l}} \abs{\lambda_\alpha} \norm{M_\alpha} = \eta^{k + l} \cdot \frac{1}{n^{\Omega(\varepsilon)}} \] \end{corollary} \begin{proof} \begin{align*} \sum_{\alpha \in {\mathcal B}_{k, l}} \abs{\lambda_\alpha} \norm{M_\alpha} & \leq \sum_{\alpha \in {\mathcal B}_{k, l}}\eta^{k+l} \cdot \frac{\abs{E(\alpha)}^{3\abs{E(\alpha)}}}{n^{\abs{E(\alpha)}/2}} \norm{M_\alpha} && \text{(\cref{prop:coefficient-bound})}\\ & \leq \eta^{k+l} \cdot\sum_{\alpha \in {\mathcal B}_{k, l}}\left(\frac{\abs{E(\alpha)}^3}{n^{\Omega(\varepsilon)}}\right)^{\abs{E(\alpha)}} && \text{(\cref{cor:non_spider_killing})}\\ & \leq\eta^{k+l} \cdot \sum_{\alpha \in {\mathcal B}_{k, l}}\left(\frac{n^{3\tau}}{n^{\Omega(\varepsilon)}}\right)^{\abs{E(\alpha)}} && (\alpha \in {\mathcal L})\\ & \leq \eta^{k+l} \cdot \sum_{\alpha \in {\mathcal B}_{k, l}}\frac{1}{n^{\Omega(\varepsilon\abs{E(\alpha)})}}\\ & \leq \eta^{k+l} \cdot\sum_{i=1}^\infty \frac{n^{O(\tau i)}}{n^{\Omega(\varepsilon i)}}\\ & = \eta^{k+l} \cdot \frac{1}{n^{\Omega(\varepsilon)}} \end{align*} where the last inequality used \cref{prop:edge-shape-count} and the fact $|E(\alpha)| \ge 1\text{ for }\alpha \in {\mathcal B}_{k, l}$. \end{proof} \subsection{Properties of $e(k)$} In this section, we establish some properties of the $e(k)$ used in the analysis. Recall that $e(k) = {\mathbb E}_{x \in \mathcal{S}(\sqrt{n})}\left[x_1\dots x_k\right]$ where $\mathcal{S}(\sqrt{n}) \coloneqq \set{x \in \set{\pm 1}^n \mid \sum_{i=1}^n x_i = \sqrt{n}}$. \begin{claim}\label{claim:e2} $e(2)=0$. \end{claim} \begin{proof} Fix $y \in \mathcal{S}(\sqrt{n})$. Note that $(\sum_{i=1}^n y_i)^2 = n$ implying $\sum_{i < j} y_i y_j = 0$. Using this fact, we get $$ {\mathbb
action can be generalized to include Hamiltonian functions which belong to the set $\ee{\mathcal{C}}^\infty(M)$. This definition, at least for $b^m$-symplectic manifolds, can be found in \cite[def. 2.6]{MatveevaMiranda}. \begin{definition}[$E$-Hamiltonian action] Let $(M, E)$ be an $E$-manifold and consider a symplectic form $\omega \in \ee{\Omega}^2(M)$. We say that an action $\rho\colon G \times M \longrightarrow M$ is \emph{$E$-Hamiltonian} if there exists a moment map $\mu \in \ee{\mathcal{C}}^\infty(M) \otimes \mathfrak{g}^*$ fulfilling \begin{equation} \iota_{X^\sharp} \omega = - \diff \langle \mu, X \rangle \end{equation} for each $X \in \mathfrak{g}$. \end{definition} We are ready now to state a version of the Marsden-Weinstein reduction by the Hamiltonian action of a Lie group $G$ in the context of $E$-symplectic manifolds. This result has been extracted and adapted from \cite[theorem 3.11]{MarreroReductionGut}. Afterwards, we prove a version of the well-known shifting trick over $E$-symplectic manifolds. The proof is essentially the same as for regular manifolds, and all we have to do is check that the construction is compatible with the $E$-structures. \begin{theorem} \label{thm:ERed} Let $(M, E)$ be an $E$-manifold with symplectic form $\omega \in \ee{\Omega}^2(M)$. Consider a proper and free group $E$-action $\rho\colon G \times M \longrightarrow M$ which is Hamiltonian with moment map $\mu\colon M \longrightarrow \mathfrak{g}^*$. If $\alpha \in \mathfrak{g}^*$ is a regular value of $\mu$, $\mu^{-1}(\alpha)/G_\alpha$ is an $E$-symplectic manifold with symplectic form $\omega_\mathrm{red}$ given by \begin{equation} \label{eq:RedForm} \pi^* \omega_\mathrm{red} = i^* \omega. \qedhere \end{equation} \end{theorem} \begin{theorem}[$E$-Shifting trick] \label{thm:EShiftTrick} Let $(M, E)$ be an $E$-symplectic manifold with symplectic form $\omega \in \ee{\Omega}^2(M)$ and a free and proper $E$-Lie group action $\rho\colon G \times M \longrightarrow M$ with moment map $\mu\colon M \longrightarrow \mathfrak{g}^*$ and assume $\alpha \in \mathfrak{g}^*$ is a regular value of $\mu$. If we endow $(\mathcal{O}(-\alpha), \omega_{\mathcal{O}(- \alpha)})$ with the natural coadjoint action of $G$ and the moment map $i \colon \mathcal{O}(-\alpha) \longhookrightarrow \mathfrak{g}^*$, there is an isomorphism of $E$-symplectic manifolds \begin{equation} \label{eq:ShiftTrick} \mu^{-1}(\alpha)/G_\alpha \simeq (\mu + i)^{-1}(0)/G. \end{equation} In particular, the Marsden-Weinstein reduction can always be performed at $0 \in \mathfrak{g}^*$. \end{theorem} \begin{proof} The first part of the proof relies on an interesting observation on its own, which claims that the reduction by coadjoint orbits is equivalent to the reduction by a point. Given a regular value $\alpha \in \mathfrak{g}^*$, the natural inclusion of $E$-manifolds $i\colon \mu^{-1}(\alpha) \longhookrightarrow \mu^{-1}(\mathcal{O}(\alpha))$ induces an $E$-diffeomorphism $\mu^{-1}(\alpha)/G_\alpha \simeq \mu^{-1}(\mathcal{O}(\alpha))/G$. We will also show that the uniqueness of the reduced form following equation \eqref{eq:RedForm} implies that the map of classes $[i]\colon \mu^{-1}(\alpha)/G_\alpha \longrightarrow \mu^{-1}(\mathcal{O}(\alpha))/G$ is an $E$-symplectomorphism. 
Let now $\pi_\alpha\colon \mu^{-1}(\alpha) \longrightarrow \mu^{-1}(\alpha)/G_\alpha$ and $\pi_{\mathcal{O}(\alpha)}\colon \mu^{-1}(\mathcal{O}(\alpha)) \longrightarrow \mu^{-1}(\mathcal{O}(\alpha))/G$ be the respective quotient maps; similarly, let $i_\alpha\colon \mu^{-1}(\alpha) \longhookrightarrow M$ and $i_{\mathcal{O}(\alpha)}\colon \mu^{-1}(\mathcal{O}(\alpha)) \longhookrightarrow M$ be the respective inclusions and let $\omega_{\alpha}$ and $\omega_{\mathcal{O}(\alpha)}$ be the reduced symplectic forms. It is obvious that $i_\alpha = i_{\mathcal{O}(\alpha)} i$, which implies $i_\alpha^* \omega = i^* i_{\mathcal{O}(\alpha)}^* \omega$. The characterization of the reduced $E$-symplectic form following equation \eqref{eq:RedForm} implies $\pi_\alpha^* \omega_\alpha = i^* \pi_{\mathcal{O}(\alpha)}^* \omega_{\mathcal{O}(\alpha)}$. We construct now the commutative diagram \eqref{eq:UniqRedForm}, where all the maps are actually $E$-maps. A direct consequence of the commutativity is that $\pi_\alpha^* [i]^* \omega_{\mathcal{O}(\alpha)} = i^* \pi_{\mathcal{O}(\alpha)}^* \omega_{\mathcal{O}(\alpha)}$. The previous results imply now that $\pi_\alpha^* \omega_\alpha = \pi_\alpha^* [i]^* \omega_{\mathcal{O}(\alpha)}$ but, as the reduced form is unique, we conclude $\omega_\alpha = [i]^* \omega_{\mathcal{O}(\alpha)}$. This shows that $[i]$ is an $E$-symplectomorphism. \begin{equation} \label{eq:UniqRedForm} \begin{tikzcd} {\mu^{-1}(\alpha)} & {\mu^{-1}(\mathcal{O}(\alpha))} \\ {\mu^{-1}(\alpha)/G_\alpha} & {\mu^{-1}(\mathcal{O}(\alpha))/G} \arrow["{[i]}"', from=2-1, to=2-2] \arrow["i", from=1-1, to=1-2] \arrow["{\pi_\alpha}"', from=1-1, to=2-1] \arrow["{\pi_{\mathcal{O}(\alpha)}}", from=1-2, to=2-2] \end{tikzcd} \end{equation} After this isomorphism of $E$-manifolds, we consider the product manifold $M \times \mathcal{O}(-\alpha)$, which is endowed with the product $E$-symplectic form of the $E$-form $\omega$ and the symplectic form on the coadjoint orbit: $\omega \oplus \omega_{\mathcal{O}(-\alpha)}$. The diagonal action of $G$ is Hamiltonian with moment map $\mu + i$ following lemma \ref{lem:ProdHamAct}. A straightforward computation shows \begin{align*} (\mu + i)^{-1}(0) &= \{ (p, \beta) \in M \times \mathcal{O}(- \alpha) \mid \mu(p) + \beta = 0 \} \\ &= \mu^{-1}(\mathcal{O}(\alpha)), \end{align*} and, in particular, the isomorphism respects the corresponding $E$-structures. This observation, together with the first result, proves equation \eqref{eq:ShiftTrick}. \end{proof} \section{Gauge theory of $E$-manifolds} \label{sec:GaugeTheory} In classical gauge theories, the configuration space of a point-mass particle in terms of positions and momenta does not give a complete characterization of the physical properties of a system. A standard example is the configuration space of classical electromagnetism, in which particles are described by an additional scalar called the \emph{electric charge}. The natural configuration space becomes a real line bundle $\mathbf{L}$ over the space-time $M$. In a general setting, the new configuration space is described by a fibre bundle $\tau\colon B \longrightarrow M$ with typical fibre $Q$. This fibre is understood to represent the internal degrees of freedom of the particle. Additionally, in classical gauge theories we consider the action of a Lie group $G$ on the total configuration space by gauge transformations.
Following the previous interpretation, gauge transformations represent diffeomorphisms of the total configuration space $B$ which do not change the physical description of a particle in the base manifold $M$. The geometric idea behind a gauge transformation is represented in Figure~\ref{fig:gauge-local}. Having recalled the geometric description of gauge theories, we start by defining gauge theories over $E$-manifolds using the pullback structure over a fibre bundle. \begin{definition} Let $(M, E_M)$ be an $E$-manifold. A \emph{gauge theory} over $M$ is a pair $(B, G)$, where $\tau\colon B \longrightarrow M$ is a fibre bundle with the pullback structure $E_B$ and a $G$-structure on $B$. \end{definition} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{images/Fibre_bundle.png} \caption{On the left, the induced pullback structure described locally for a $b$-manifold. In this case, the $b$-structure coincides with that generated by the pullback hypersurface $\pi^{-1}(Z)$. On the right, a schematic representation of a gauge transformation. The transformation may change from fibre to fibre smoothly, but a point of a fibre has to be mapped to the same fibre.} \label{fig:gauge-local} \end{figure} The equations of motion are specified using a $G$-invariant connection on the bundle $B$, called a gauge field. Such connections were already described in \cite{NestTsyganEMan}; we give here a different characterization as invariant splittings of an ``$E$-Atiyah sequence''. \begin{definition} Let $(M, E_M)$ be an $E$-manifold and consider a gauge theory $(B, G)$ over $M$. A \emph{gauge field} on $(B, G)$ is a $G$-invariant splitting of the following short exact sequence: \begin{equation} \label{eq:EAtiyahSeq} \begin{tikzcd} 0 & {\ker \diff \tau} & {\ee{T} B} & {\tau^* \ee{T} M} & 0. \arrow[from=1-1, to=1-2] \arrow["\iota", from=1-2, to=1-3] \arrow["{\diff \tau}", from=1-3, to=1-4] \arrow[from=1-4, to=1-5] \end{tikzcd} \qedhere \end{equation} \end{definition} The equations of motion of a classical particle under the action of a gauge field were described by Wong in \cite{Wong}, and are named after him as \emph{Wong's equations}. Weinstein showed in \cite{WeinsteinUniversal} that the equations of motion by Wong can be described in geometric terms and that, moreover, they become Hamiltonian. In his setting, a gauge field induces a projection from the cotangent bundle $\mathrm{T}^*B$, which can be regarded as the natural phase space of a gauge theory, to the standard cotangent bundle $\mathrm{T}^*M$. The choice of a Hamiltonian function in $\mathrm{T}^*M$ (the kinetic energy, for instance, if $M$ is a Riemannian manifold) induces a Hamiltonian function in $\mathrm{T}^*B$ which generates Wong's equations. Sternberg generalized the minimal coupling procedure of electromagnetism to Yang-Mills theories and showed that it could be described by the introduction of a magnetic term in the canonical symplectic structure of $\mathrm{T}^*M$. Weinstein showed that the choice of a gauge field induces a symplectomorphism of symplectic manifolds which describes Sternberg's minimal coupling. Montgomery considered the symplectic formulation of Yang-Mills theories and showed that the minimal coupling of Weinstein and Wong's equations of motion could be described over more general Poisson manifolds. In particular, he proved that the restriction to a symplectic leaf recovers Weinstein's results. Our goal throughout the rest of the section is to prove analogous statements when the base manifold $M$
compute $Q\subset\P^3$ and $s,t\in \operatorname{Aut}(\P^3)$. \end{proof} The following example for case B2 explains how to compute $\mathcal{S}$ in case the quadrics are doubly ruled. \begin{example}[B2] \label{exm:B2} Suppose that we are given the following birational maps: \begin{gather*} f\c\P^2\dasharrow X\subset\P^3,\quad (x_0:x_1:x_2)\mapsto (x_0^6x_1^2: x_0x_1^5x_2^2: x_1^3x_2^5: x_0^5x_1x_2^2 + 2x_0^5x_2^3), \\ g\c\P^1\times\P^1\dasharrow Y\subset\P^3,\quad (y_0:y_1;y_2:y_3)\mapsto (y_0^3y_1^2y_2^5: y_0^3y_1^2y_2^5 + y_1^5y_2^3y_3^2: \\ y_0^2y_1^3y_3^5: y_0^4y_1y_2^3y_3^2 + y_0^5y_2^2y_3^3 + y_0^2y_1^3y_3^5). \end{gather*} Our goal is to compute the projective isomorphisms $\mathcal{P}(f,g)$. We use \ALG{get} to compute the base points of the linear series associated to~$f$ and $g$. We find that $f$ has simple base points at $p_1:=(0:0:1)$, $p_2:=(0:1:0)$ and $p_3:=(1:0:0)$ with multiplicities $3$, $2$ and $2$, respectively. The infinitely near relations between the remaining 10 base points $p_4,\ldots,p_{13}$ of $f$ are as follows: \begin{gather*} p_7\rightsquigarrow p_6 \rightsquigarrow p_4 \rightsquigarrow p_1, \quad p_9\rightsquigarrow p_8 \rightsquigarrow p_5 \rightsquigarrow p_2, \\ p_{10}\rightsquigarrow p_{11} \rightsquigarrow p_2, \quad p_{13}\rightsquigarrow p_{12} \rightsquigarrow p_3. \end{gather*} The simple base points of $g$ are $q_1:=(0:1;0:1)$, $q_2:=(0:1;1:0)$, $q_3:=(1:0;0:1)$, $q_4:=(1:0;1:0)$ and have each multiplicity $2$. The infinitely near relations between the remaining 8 base points $q_5,\ldots,q_{12}$ of $g$ are as follows: \begin{gather*} q_6\rightsquigarrow q_5\rightsquigarrow q_1,\quad q_8\rightsquigarrow q_7\rightsquigarrow q_2,\quad q_{10}\rightsquigarrow q_9\rightsquigarrow q_3,\quad q_{12}\rightsquigarrow q_{11}\rightsquigarrow q_4. \end{gather*} The multiplicities of the base points are encoded by the classes of~$f$ and $g$: \begin{gather*} [f]=8\,e_0-3\,e_1-3\,e_2-2\,e_3-2\,e_4-2\,e_5-e_6-\ldots-e_{13} \quad\text{and} \\ [g]=5\,\l_0+5\,\l_1-2\,\varepsilon_1-2\,\varepsilon_2-2\,\varepsilon_3-2\,\varepsilon_4-\varepsilon_5-\ldots-\varepsilon_{12}. \end{gather*} We observe that $[f]^2=[g]^2=\deg X=\deg Y=26$. We apply \ALG{set} and find that $h^0([f])=h^0([g])=16$ and thus $({\operatorname{\textbf{c}}}_0(f),{\operatorname{\textbf{c}}}_0(g))=(1,1)$ since $\dim f= \dim g=3<16-1$. We set $(\hat{f},\hat{g}):=({\operatorname{\textbf{r}}}_0(f),{\operatorname{\textbf{r}}}_0(g))$ so that ${\operatorname{\textbf{p}}}(\hat{f})={\operatorname{\textbf{p}}}(\hat{g})=(16,26,1)$. Notice that $[\hat{f}]=[f]$ and $[\hat{g}]=[g]$ as a direct consequence of the definitions. We find that $({\operatorname{\textbf{c}}}_1(\hat{f}),{\operatorname{\textbf{c}}}_1(\hat{g}))=(1,1)$ and $h^0(\hat{f}+\kappa_{\hf})=h^0(\hat{g}+\kappa_{\hg})=12$. We set $(\hat{f},\hat{g}):=({\operatorname{\textbf{r}}}_1(\hat{f}),{\operatorname{\textbf{r}}}_1(\hat{g}))$ so that ${\operatorname{\textbf{p}}}(\hat{f})={\operatorname{\textbf{p}}}(\hat{g})=(12,14,1)$ and \begin{gather*} [\hat{f}]=5\,e_0-2\,e_1-2\,e_2-e_3-e_4-e_5 \quad\text{and}\\ [\hat{g}]=3\,\l_0+3\,\l_1-\varepsilon_1-\varepsilon_2-\varepsilon_3-\varepsilon_4. \end{gather*} We remark that $[f]+\kappa_f=\mc{[f]+\kappa_f}$ and $[g]+\kappa_g=\mc{[g]+\kappa_g}$, unlike as we have seen in \EXM{B4}. We find that $({\operatorname{\textbf{c}}}_1(\hat{f}),{\operatorname{\textbf{c}}}_1(\hat{g}))=(1,1)$ and $h^0(\hat{f}+\kappa_{\hf})=h^0(\hat{g}+\kappa_{\hg})=4$. 
We set $(\hat{f},\hat{g}):=({\operatorname{\textbf{r}}}_1(\hat{f}),{\operatorname{\textbf{r}}}_1(\hat{g}))$ so that ${\operatorname{\textbf{p}}}(\hat{f})={\operatorname{\textbf{p}}}(\hat{g})=(4,2,1)$ and \begin{gather*} [\hat{f}]=2\,e_0-e_1-e_2 \quad\text{and}\quad [\hat{g}]=\l_0+\l_1. \end{gather*} We verify that $h^0(\hat{f}+\kappa_{\hf})=h^0(\hat{g}+\kappa_{\hg})=0$ so that $({\operatorname{\textbf{c}}}_1(f),{\operatorname{\textbf{c}}}_1(g))=(0,0)$. It follows from \THM{B} that $f$ and $g$ are characterized by base case~B2. We find that $\operatorname{\bf F}(\hat{f})=\{a,b\}$ and $\operatorname{\bf F}(\hat{g})=\{u,v\}$ as defined at \PRP{B2}, where $a:=e_0-e_1$, $b:=e_0-e_2$, $u:=\l_0$ and $v:=\l_1$ so that \begin{gather*} \Psi_{a}\times\Psi_{b}\c\P^2\dasharrow\P^1\times\P^1,\quad x\mapsto (x_0:x_1;x_0:x_2), \\ \Psi_{b}\times\Psi_{a}\c\P^2\dasharrow\P^1\times\P^1,\quad x\mapsto (x_0:x_2;x_0:x_1),\quad \text{and} \\ \Psi_{u}\times\Psi_{v}\c \P^1\times\P^1\to\P^1\times\P^1,\quad y\mapsto y. \end{gather*} We consider the following reparametrizations $\P^2\dasharrow\P^1\times\P^1$: \begin{gather*} s_c:x\mapsto ( c_0\,x_0 + c_1\,x_1: c_2\,x_0 + c_3\,x_1; c_4\,x_0 + c_5\,x_2: c_6\,x_0 + c_7\,x_2 ), \\ t_c:x\mapsto ( c_0\,x_0 + c_1\,x_2: c_2\,x_0 + c_3\,x_2; c_4\,x_0 + c_5\,x_1: c_6\,x_0 + c_7\,x_1 ), \end{gather*} and we set $\mathcal{S}:=\{s_c\}_{c\in \mathcal{I}_{\P^1\times\P^1}}\cup\{t_c\}_{c\in \mathcal{I}_{\P^1\times\P^1}}$. It follows from \PRP{B2} that $\mathcal{S}\supseteq\mathcal{R}(f,g)$. Let us first consider the reparametrizations $\{s_c\}_{c\in \mathcal{I}_{\P^1\times\P^1}}$. For general $c\in\mathcal{I}_{\P^1\times\P^1}$ we observe that $\operatorname{cdeg}(g\circ s_c)=10$, although $\operatorname{cdeg}( f)=8$. We use \citep[Algorithm~2]{n-bp} to compute \[ \mathcal{J}':=\set{c\in\mathcal{I}}{ g\circ s_c \text{ has the same base points as } f }, \] and find that $\mathcal{J}'=\mathcal{J}_{0}\cup\mathcal{J}_{1}$, where \begin{gather*} \mathcal{J}_{0}=\set{c\in \mathcal{I}_{\P^1\times\P^1}}{ c_0=c_4=1,~ c_1=c_2=c_5=c_6=0 }, \\ \mathcal{J}_{1}=\set{c\in \mathcal{I}_{\P^1\times\P^1}}{ c_1=c_5=1,~ c_0=c_3=c_4=c_7=0 }. \end{gather*} If we substitute $s_c$ into $g$ for $c\in\mathcal{J}'$, then the greatest common divisor of the components is $x_0^2$. Therefore $\operatorname{cdeg}(g\circ s_c)=8=\operatorname{cdeg}(f)$ as required. Next we enforce that the $4\times 45$ coefficient matrix $M_{g\circ s_c}$ has the same kernel as the coefficient matrix $M_f$ and compute the corresponding index sets: \begin{gather*} \mathcal{J}_0'':=\set{ c\in \mathcal{J}_0}{ M_{g\circ r_c}\cdot \ker M_f=\operatorname{\textbf{0}} }=\set{c\in \mathcal{J}'}{ c_7=2\,c_3 }\quad\text{and} \\ \mathcal{J}_1'':=\set{ c\in \mathcal{J}_1}{ M_{g\circ r_c}\cdot \ker M_f=\operatorname{\textbf{0}} }=\emptyset. \end{gather*} Next, we perform the same procedure for $\{t_c\}_{c\in \mathcal{I}_{\P^1\times\P^1}}$, but in this case we arrive at only empty-sets. Therefore, it follows that $\mathcal{J}$ as defined at~\EQN{J} is equal to~$\mathcal{J}_0''$. We apply \PRP{S} and recover $\mathcal{P}(f,g)$ in terms of a matrix parametrized in terms of $c_3\neq 0$: {\footnotesize% \[ U:= \begin{bmatrix} 1 & 0 & 0 & 0\\ 1 & 4\,c_3^5 & 0 & 0\\ 0 & 0 & 32\,c_3^6 & 0\\ 0 & 0 & 32\,c_3^6 & 4\,c_3 \end{bmatrix}. \] }% Indeed, if we substitute $\chi_{_U}\circ f$ with indeterminate $c_3$ into the equation of $Y\subset \P^3$, then we obtain 0. See \cite{github} for an implementation of this example. 
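As an independent check of this computation (written for this discussion; it is not the implementation referred to in \cite{github}), the following SymPy snippet substitutes $s_c$ with $c_0=c_4=1$, $c_1=c_2=c_5=c_6=0$ and $c_7=2\,c_3$ into $g$, removes the common factor $x_0^2$, and confirms that the result equals $U\cdot f$ up to the overall scalar $c_3^2$:
\begin{verbatim}
import sympy as sp

x0, x1, x2, c3 = sp.symbols('x0 x1 x2 c3')

# f : P^2 --> X from this example
f = [x0**6*x1**2,
     x0*x1**5*x2**2,
     x1**3*x2**5,
     x0**5*x1*x2**2 + 2*x0**5*x2**3]

# g evaluated on s_c with c0 = c4 = 1, c1 = c2 = c5 = c6 = 0, c7 = 2*c3
y0, y1, y2, y3 = x0, c3*x1, x0, 2*c3*x2
g_sc = [y0**3*y1**2*y2**5,
        y0**3*y1**2*y2**5 + y1**5*y2**3*y3**2,
        y0**2*y1**3*y3**5,
        y0**4*y1*y2**3*y3**2 + y0**5*y2**2*y3**3 + y0**2*y1**3*y3**5]
g_sc = [sp.cancel(comp/x0**2) for comp in g_sc]   # remove the common factor

U = sp.Matrix([[1, 0, 0, 0],
               [1, 4*c3**5, 0, 0],
               [0, 0, 32*c3**6, 0],
               [0, 0, 32*c3**6, 4*c3]])
Uf = U*sp.Matrix(4, 1, f)

print([sp.expand(gi - c3**2*ui) for gi, ui in zip(g_sc, Uf)])   # [0, 0, 0, 0]
\end{verbatim}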
\hfill \ensuremath{\vartriangleleft} \end{example}
\begin{remark} The current state is summarized in \ALG{alg} whose correctness follows from \THM{C} and \THM{B}. Recall that in case B3, B4 and B5, the surface~$\operatorname{img} \hat{f}$ is covered by lines or conics. Such surfaces are theoretically well understood, but a complete algorithmic description is left as future work. \hfill \ensuremath{\vartriangleleft} \end{remark}
\begin{algorithm} \caption{ \label{alg:alg} \begin{itemize}[itemsep=0pt,topsep=5pt,leftmargin=5mm]
\item {\bf Input.} Birational maps $f,g\in\mathcal{M}$.
\item {\bf Output.} The set of projective isomorphisms $\mathcal{P}(f,g)$.
\item {\bf Method.} {\it We use the \# symbol for comments.}
\item[] Compute the base points of $f$ and $g$ with \ALG{get}.
\item[] {\bf if\quad}${\operatorname{\textbf{p}}}(f)\neq{\operatorname{\textbf{p}}}(g) ${\bf\quad then\quad}{\bf return\quad}$\emptyset$;
\item[] $(\hat{f},\hat{g}):=(f,g)$;
\item[] {\bf if\quad}$({\operatorname{\textbf{c}}}_0(\hat{f}),{\operatorname{\textbf{c}}}_0(\hat{g}))=(1,1)${\bf\quad then\quad} $(\hat{f},\hat{g}) :=({\operatorname{\textbf{r}}}_0(\hat{f}),{\operatorname{\textbf{r}}}_0(\hat{g}))$;
\item[] {\bf while \quad} $({\operatorname{\textbf{c}}}_1(\hat{f}),{\operatorname{\textbf{c}}}_1(\hat{g}))=(1,1)$ {\bf\quad do:\quad} \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10mm] \item[] $(\hat{f},\hat{g}) :=({\operatorname{\textbf{r}}}_1(\hat{f}),{\operatorname{\textbf{r}}}_1(\hat{g}))$; \end{itemize}
\item[] {\bf if\quad}$({\operatorname{\textbf{c}}}_2(\hat{f}),{\operatorname{\textbf{c}}}_2(\hat{g}))=(1,1)${\bf\quad then\quad} $(\hat{f},\hat{g}) :=({\operatorname{\textbf{r}}}_2(\hat{f}),{\operatorname{\textbf{r}}}_2(\hat{g}))$;
\item[] {\bf if\quad}${\operatorname{\textbf{p}}}(\hat{f})\neq{\operatorname{\textbf{p}}}(\hat{g}) ${\bf\quad then\quad}{\bf return\quad}$\emptyset$;
\item[] {\bf if\quad}$h^0([\hat{f}])=3${\bf\quad and\quad}$[\hat{f}]^2=1${\bf\quad then\quad}{\it\# case B1} \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10mm] \item[] Set $\mathcal{S}$ as defined in \PRP{B1}. \end{itemize}
\item[] {\bf else if\quad}$h^0([\hat{f}])=4${\bf\quad and\quad}$[\hat{f}]^2=2${\bf\quad then\quad}{\it\# case B2} \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10mm] \item[] Set $\mathcal{S}$ as defined in \PRP{B2}. \end{itemize}
\item[] {\bf else if\quad}$h^0([\hat{f}])=[\hat{f}]^2+1${\bf\quad and\quad}$1\leq [\hat{f}]^2\leq 8${\bf\quad then\quad}{\it\# case B3} \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10mm] \item[] {\it \# Not considered in this article.} \end{itemize}
\item[] {\bf else if\quad}$h^0(2\,[\hat{f}]+\kappa_{\hf})\geq 2${\bf\quad and\quad}$\mc{2\,[\hat{f}]+\kappa_{\hf}}^2=0${\bf\quad then\quad}{\it\# case B4} \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10mm] \item[] {\it \# Not considered in this article.} \end{itemize}
\item[] {\bf else if\quad}$h^0([\hat{f}]+\kappa_{\hf})\geq 2${\bf\quad and\quad}$\mc{[\hat{f}]+\kappa_{\hf}}^2=0${\bf\quad then\quad}{\it\# case B5} \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10mm] \item[] {\it \# Not considered in this article.} \end{itemize}
\item[] {\bf else:\quad}{\bf return\quad}$\emptyset$;
\item[] Compute $\mathcal{P}(f,g)$ from $\mathcal{S}\supseteq\mathcal{R}(f,g)$ using \PRP{S}.
\item[] {\bf return\quad}$\mathcal{P}(f,g)$; \end{itemize} \end{algorithm} \section{Applications of the algorithm} \label{sec:A} In this section we outline how to find the projective isomorphism that correspond to (non-) Euclidean isomorphisms between rational surfaces. The \df{M\"obius quadric} is defined as \[ \S^n:=\set{ x\in\P^{n+1} }{ -x_0^2+x_1^2+\ldots+x_{n+1}^2=0 }. \] Let $\pi\c \S^n\dasharrow \P^n$ be the \df{stereographic projection} defined as \[ \pi\c\S^n\dasharrow\P^n,\qquad (x_0:\ldots:x_{n+1}) \mapsto (x_0-x_{n+1}:x_1:\ldots:x_n), \] with inverse $\pi^{-1}\c \P^n\dasharrow \S^n$, \[ z \mapsto (z_0^2+z_1^2+\ldots+z_n^2:2z_0z_1:\ldots:2z_0z_n: -z_0^2+z_1^2+\ldots+z_n^2). \] Suppose that $f,g\in\mathcal{M}$ are birational. The \df{affine isomorphisms} between $\operatorname{img} f$ and $\operatorname{img} g$ are defined as \[ \set{\rho\in\mathcal{P}(f,g)}{\rho(\H_n)=\H_n}, \] where $\H_n:=\set{z\in\P^n}{z_0=0}$. The \df{Euclidean isomorphisms} between $\operatorname{img} f$ and $\operatorname{img} g$ are defined as \[ \set{\rho\in\mathcal{P}(f,g)}{\rho(\mathbb{E}_n)=\mathbb{E}_n}, \] where $\mathbb{E}_n:=\set{z\in\H_n}{z_0^2+\ldots+z_n^2=0}$. We remark that Euclidean isomorphisms are perhaps better known as ``Euclidean similarities''. The \df{M\"obius isomorphisms} between $\operatorname{img} f$ and $\operatorname{img} g$ are defined as \[ \set{\pi \circ \rho \circ \pi^{-1}}{ \rho\in \mathcal{P}(\pi^{-1}\circ f,\pi^{-1}\circ g) \text{ and } \rho(\S^n)=\S^n}. \] Notice that if $\alpha\c \P^n\dasharrow \P^n$ is a M\"obius isomorphism such that $\alpha(\operatorname{img} f)=\operatorname{img} g$, then $\alpha$ is a birational quadratic map such that $\alpha(\mathbb{E}_n)=\mathbb{E}_n$. It is straightforward to recover the affine, Euclidean, or M\"obius isomorphisms from the output $\mathcal{P}(f,g)$ or $\mathcal{P}(\pi^{-1}\circ f,\pi^{-1}\circ g)$ of \ALG{alg}. \begin{example} We continue with \EXM{B1}, where $f$ parametrizes a Roman surface and where the projective isomorphism $\chi_{_U}\in \mathcal{P}(f,f)$ corresponding to $c=(0,1,0,1,0,0,0,0,1)$ is defined as $\chi_{_U}(x)=(x_0:x_1:x_3:x_2)$. We check that $\chi_{_U}(\mathbb{E}_3)=\mathbb{E}_3$ and thus $\chi_{_U}$ is an Euclidean isomorphism. In fact, we verify that $|\mathcal{P}(f,f)|=|\set{\rho\in\mathcal{P}(f,f)}{\rho(\mathbb{E}_3)=\mathbb{E}_3}|=24$ and thus the Roman surface admits 24 Euclidean symmetries, namely the symmetries of a tetrahedron. \hfill \ensuremath{\vartriangleleft} \end{example} \section{The proofs of the theorems} \label{sec:proofs} In this section we prove \THM{C} and \THM{B}. We assume that the reader is familiar with the material of \citep[Sections~II.7 and V.3]{har} and \citep[Chapter~1]{mat}. \begin{definition} \label{def:map2} Suppose that $Z$ is a rational surface and recall that on rational surfaces the numerical- and rational- equivalence relations for divisor classes are the same. Let $c\in N(Z)$ be a class such that $h^0(c)>0$ and let $V:=H^0(Z,c)$ denote the vector space of global sections over the ground field~$\mathbb{F}$. We define $\varphi_c\c Z\to \P^{h^0(c)-1}$ as $\varphi_V$ as defined in \DEF{map}. \hfill \ensuremath{\vartriangleleft} \end{definition} \begin{proof}[Proof of \THM{C}.] We consider the birational morphism $\pi\c\operatorname{bmd} f\to \operatorname{dom} f$ that resolves the base locus of~$f$ so that the composition $\Psi_{[f]}\circ\pi\c\operatorname{bmd} f\to \operatorname{img} f$ is a morphism. 
Notice that $[f]\in N(\operatorname{bmd} f)$ is the divisor class of the pullback of a hyperplane section along this morphism and that $\kappa_f$ is the canonical class of~$\operatorname{bmd} f$. We recall \DEF{map2} and notice that for all $c\in N(\operatorname{bmd} f)$ the map~$\varphi_c$ is up to a choice of basis equivalent to the morphism $\Psi_c\circ\pi$. In particular, we will notice that the diagrams in this proof remain commutative when an arrow for $\pi\c\operatorname{bmd} f\to\operatorname{dom} f$ is included. The case $\mathcal{R}(f,g)=\emptyset$ is trivial and therefore we will assume that $\mathcal{R}(f,g)\neq \emptyset$. Suppose that $\gamma\in\mathcal{R}(f,g)$ is an arbitrary but fixed compatible reparametrization and let $\beta\in\mathcal{P}(f,g)$ be a projective isomorphism such that $\beta\circ f=g\circ\gamma$. First we observe that $\mathcal{R}({\operatorname{\textbf{r}}}_i(f),{\operatorname{\textbf{r}}}_i(g))$ does not depend on the choice of basis of the associated maps ${\operatorname{\textbf{r}}}_i(f)$ and ${\operatorname{\textbf{r}}}_i(g)$ for all $i\in\{0,1,2\}$ (see \DEF{map}). In order to show that ${\operatorname{\textbf{r}}}_0$ is compatible we need to show that the condition~${\operatorname{\textbf{c}}}_0$ is a projective invariant and that $\gamma\in\mathcal{R}({\operatorname{\textbf{r}}}_0(f),{\operatorname{\textbf{r}}}_0(g))$ if $({\operatorname{\textbf{c}}}_0(f),{\operatorname{\textbf{c}}}_0(g))=(1,1)$. Let $\hat{X}:=\operatorname{img}{\operatorname{\textbf{r}}}_0(f)$, $\hat{Y}:=\operatorname{img}{\operatorname{\textbf{r}}}_0(g)$ and recall that ${\operatorname{\textbf{r}}}_0(f)=\Psi_{[f]}$. As a consequence of the definitions, there exists a birational and degree preserving linear projection~$\rho_f\c \hat{X}\to \operatorname{img} f$ so that $\rho_f\circ\Psi_{[f]}=f$. Let $\alpha\c \operatorname{bmd} f\dasharrow \operatorname{bmd} g$ be the birational map that makes the diagram of \FIG{r0a} commutative. \begin{figure}[!ht] \centering \begin{tikzpicture}[node distance=13mm, auto] \usetikzlibrary{arrows.meta} \node (C1) {}; \node (A) [right of=C1] {$\operatorname{bmd} f$}; \node (C3) [right of=A] {}; \node (S) [right of=C3] {}; \node (C5) [right of=S] {}; \node (C6) [right of=C5] {}; \node (C7) [right of=C6] {}; \node (B) [right of=C7] {$\operatorname{bmd} g$}; \node (X) [below of=A] {$\hat{X}$}; \node (E2) [right of=X] {}; \node (hX) [right of=E2] {$\operatorname{img} f$}; \node (E4) [right of=hX] {}; \node (hY) [right of=E4] {$\operatorname{img} g$}; \node (E6) [right of=hY] {}; \node (Y) [right of=E6] {$\hat{Y}$}; \node (Df) [below of=E2] {$\operatorname{dom} f$}; \node (Dg) [below of=E6] {$\operatorname{dom} g$}; \draw[->] (A) to node [swap] {$\varphi_{[f]}$} (X); \draw[->] (B) to node {$\varphi_{[g]}$} (Y); \draw[->, dashed] (Df) to node {${\operatorname{\textbf{r}}}_0(f)$} (X); \draw[->, dashed] (Dg) to node [swap] {${\operatorname{\textbf{r}}}_0(g)$} (Y); \draw[->, dashed] (Df) to node {$\gamma$} (Dg); \draw[->, dashed] (Df) to node [swap] {$f$} (hX); \draw[->, dashed] (Dg) to node {$g$} (hY); \draw[->] (X) to node {$\rho_f$} (hX); \draw[->] (Y) to node [swap] {$\rho_g$} (hY); \draw[->, dashed] (A) to node {$\alpha$} (B); \draw[->] (hX) to
from vibrations; now spring systems are often used[3]. Additionally, mechanisms for reducing eddy currents are implemented.

Maintaining the tip position with respect to the sample, scanning the sample in raster fashion and acquiring the data is computer controlled[6]. The computer is also used for enhancing the image with the help of image processing as well as performing quantitative morphological measurements.

## Other STM Related Studies

Many other microscopy techniques have been developed based upon STM. These include Photon Scanning Tunneling Microscopy (PSTM), which uses an optical tip to tunnel photons[2]; Scanning Tunneling Potentiometry (STP), which measures electric potential across a surface[2]; and spin polarized scanning tunneling microscopy (SPSTM), which uses a ferromagnetic tip to tunnel spin-polarized electrons into a magnetic sample[7].

Other STM methods involve manipulating the tip in order to change the topography of the sample. This is attractive for several reasons. Firstly the STM has an atomically precise positioning system which allows very accurate atomic scale manipulation. Furthermore, after the surface is modified by the tip, it is a simple matter to then image with the same tip, without changing the instrument. IBM researchers developed a way to manipulate xenon atoms adsorbed on a nickel surface[2]. This technique has been used to create electron "corrals" with a small number of adsorbed atoms, which allows the STM to be used to observe electron Friedel oscillations on the surface of the material. Aside from modifying the actual sample surface, one can also use the STM to tunnel electrons into a layer of e-beam photoresist on a sample, in order to do lithography. This has the advantage of offering more control of the exposure than traditional electron beam lithography.

Recently groups have found they can use the STM tip to rotate individual bonds within single molecules. The electrical resistance of the molecule depends on the orientation of the bond, so the molecule effectively becomes a molecular switch.
## Early Invention

An early, patented invention, based on the above-mentioned principles, and later acknowledged by the Nobel committee itself, was the Topografiner of R. Young, J. Ward, and F. Scire from NIST (the National Institute of Standards and Technology of the USA)[8].

## References

1. G. Binnig and H. Rohrer, "Scanning tunneling microscopy", IBM Journal of Research and Development 30(4) (1986); reprinted 44(1/2), Jan/Mar (2000).
2. C. Bai, Scanning Tunneling Microscopy and Its Applications, 2nd edition, Springer Verlag, New York (1999).
3. C. Julian Chen, Introduction to Scanning Tunneling Microscopy (1993).
4. D. A. Bonnell and B. D. Huey, "Basic principles of scanning probe microscopy", in Scanning Probe Microscopy and Spectroscopy: Theory, Techniques, and Applications, 2nd edition, ed. D. A. Bonnell, Wiley-VCH, New York (2001).
5. J. Bardeen, "Tunneling from a many particle point of view", Phys. Rev. Lett. 6(2), 57-59 (1961).
6. K. Oura, V. G. Lifshits, A. A. Saranin, A. V. Zotov, and M. Katayama, Surface Science: An Introduction, Springer-Verlag, Berlin (2003).
7. R. Wiesendanger, I. V. Shvets, D. Bürgler, G. Tarrach, H.-J. Güntherodt, and J. M. D. Coey, "Recent advances in spin-polarized scanning tunneling microscopy", Ultramicroscopy 42-44 (1992).
8. R. Young, J. Ward, and F. Scire, "The Topografiner: An Instrument for Measuring Surface Topography", Rev. Sci. Instrum. 43, 999 (1972).
\section{Introduction} The presence of an inflationary stage of the Universe is strongly supported by cosmic microwave background observations~\cite{Akrami:2018odb}. These observations only probe roughly the last 60 e-foldings of the inflationary era, and we know little about the stage of the Universe before, or at the very beginning of, inflation. From the theoretical viewpoint, the Universe must have been in another phase before inflation. In Ref.~\cite{Borde:2001nh}, it is shown that the spacetime region covered by the world lines of comoving observers who experience accelerated expansion is past incomplete. This past incompleteness suggests the presence of a boundary of the inflationary spacetime. If the boundary is inextendible, the inflationary spacetime is said to have an initial singularity; otherwise the boundary is only a coordinate singularity. Since a singularity is expected to signal a breakdown of classical gravity, it is important to ask whether the past boundary of an inflationary universe is a singularity or not. In Ref.~\cite{Yoshida:2018ndv}, the past extendibility of accelerating flat Friedmann--Lema\^itre--Robertson--Walker (FLRW) universes was investigated by checking for a parallelly propagated curvature singularity~\cite{Hawking:1973uf, Ellis:1977pj}, which is defined by the divergence of the components of the Riemann tensor with respect to a basis that is parallel transported along the incomplete geodesics. It was found that the extendibility depends on whether $\lim_{t \rightarrow -\infty} \dot{H}/a^2 $ is finite, where $a$ is the scale factor, $H$ is the Hubble parameter and $t$ is the proper time of the comoving observers. By applying this method, for example, the Starobinsky inflation model is found to have an initial parallelly propagated curvature singularity, assuming inflation persists on the slow-roll trajectory toward $t \rightarrow - \infty$. See also Refs.~\cite{FernandezJambrina:2007sx, Fernandez-Jambrina:2016clh}, where the initial singularity of other types of inflation, such as power-law inflation, was studied. The analysis of extendibility in Ref.~\cite{Yoshida:2018ndv} has recently been extended to universes with spatial curvature or anisotropies in Ref.~\cite{Nomura:2021lzz}. Interestingly, this singularity of an inflationary universe is quite different from the big-bang singularity in the sense that no curvature invariant diverges. Thus the singularity occurs even when the typical energy scale read off from the Hubble parameter is much smaller than the string scale. It is therefore not clear that this kind of singularity implies a violation of the low-energy effective-theory description of the inflation model. The purpose of this paper is to study the propagation of strings in an inflationary universe to answer this question. The equations of motion for a string in de Sitter spacetime were solved exactly in Refs.~\cite{Combes:1993rw, deVega:1993rm}. Recently, this analysis was revisited to check the consistency between the Higuchi bound and the modified Regge trajectory \cite{Noumi:2019ohm, Lust:2019lmq, Kato:2021rdz, Parmentier:2021nwz}. String propagation in an expanding universe has been studied in many contexts \cite{Sanchez:1989cw,Gasperini:1990xg,Larsen:1995vr,deVega:1995bq, Tolley:2005us}. In contrast to the exact de Sitter case, one needs to take some approximation to study strings in a general expanding universe. In this paper, we focus on the method of the Penrose limit \cite{Penrose1976}, as was done in \cite{Tolley:2005us}.
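As a simple benchmark for the extendibility criterion described above (an illustrative aside; the general analysis follows in Sec.~\ref{sec2}), consider exact de Sitter space with $a(t) = e^{Ht}$ for constant $H>0$. The affine parameter of a radial null geodesic obeys $du = a\,dt$, so
\begin{align}
\int_{-\infty}^{t} a(t')\, dt' = \frac{e^{Ht}}{H} < \infty , \qquad \lim_{t\rightarrow -\infty} \frac{\dot{H}}{a^{2}} = 0 ,
\end{align}
i.e.\ the flat chart is null geodesically past incomplete, while the finite value of $\dot{H}/a^{2}$ indicates that its past boundary is merely a coordinate singularity: the flat chart covers only half of global de Sitter space. The nontrivial case for strings is when $\dot{H}/a^{2}$ instead diverges, and the Penrose limit is the tool we use to study it.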
The Penrose limit is a way to obtain a plane wave spacetime by blowing up an infinitesimal neighborhood of a null geodesic. It has been studied actively as a way to construct nontrivial supersymmetric solutions \cite{Blau:2002dy} from known supersymmetric spacetimes, because the Penrose limit does not decrease the number of symmetries \cite{Geroch:1969ca, Blau:2002mw}. A class of spacetimes with a singularity is expected to exhibit a kind of universality~\cite{Blau:2003dz,Blau:2004yi}; as we will see later, the initial singularity of an inflationary universe lies outside this universality. String propagation in a plane wave spacetime with a singularity has been studied actively~\cite{Horowitz:1989bv,deVega:1990kk,Horowitz:1990sr,deVega:1990ke,David:2003vn,Craps:2008bv}. An important feature found in Ref.~\cite{David:2003vn} is that quantum string propagation can be ill-defined even when the singularity is weak in the sense of Tipler's classification. Indeed, we will see later that the consistency of string propagation can be judged by Krolak's classification, not Tipler's. Note that the singular behavior becomes milder for a point-like shock wave spacetime \cite{Aichelburg:1970dh, Veneziano:1987df, Amati:1988ww, deVega:1988ts, deVega:1990nr, Sanchez:2003ek}, as opposed to a plane wave. This paper is organized as follows. In Sec.~\ref{sec2}, we examine the presence or absence of an initial singularity of an inflationary universe based on Ref.~\cite{Yoshida:2018ndv} and discuss its past completion and Penrose limit. Under the Penrose limit, a singular/non-singular inflationary universe reduces to a singular/non-singular plane wave spacetime, respectively. In Sec.~\ref{sec3}, the propagation of a test string in a plane wave spacetime is discussed. There we find that a string can propagate through the initial singularity of the Universe if the strength of the singularity is sufficiently weak. In Sec.~\ref{sec4}, we study the strength of the singularity for three models of inflation: Starobinsky inflation, and general and quadratic hilltop inflation. The final section is devoted to a summary and discussion. \section{Inflationary Universe and its past completion} \label{sec2} Here, we review the presence or absence of an initial singularity of an inflationary universe in Sec.~\ref{sec21}. Then we construct a past completion in Sec.~\ref{sec22} and take the Penrose limit to obtain a singular or non-singular plane wave spacetime in Sec.~\ref{sec23}. \subsection{Initial singularity of inflation} \label{sec21} We consider a $3+1$ dimensional flat FLRW universe, \begin{align} g_{\mu\nu} d x^{\mu} d x^{\nu} &= - d t^2 + a(t)^2 (d r^2 + r^2 d \Omega^2) = a(\eta)^2 \left( - d \eta^2 + dr^2 + r^2 d \Omega^2 \right), \label{FLRW} \end{align} with $d \Omega^2 := d \theta^2+\sin^2\theta d \phi^2$. Here $a$ is the scale factor and $t$ represents the comoving time, while $\eta$ represents the conformal time, $d t = a\, d \eta$. We assume that the scale factor $a(t)$ approaches zero toward an initial time $t_{i}$: $\lim_{t \rightarrow t_{i}} a(t) = 0$. To discuss the (in)completeness and extendibility of null geodesics as $a \rightarrow 0$, we consider null geodesics defined by $\eta + r = \text{constant}$. The affine parameter $u$ of such geodesics can be defined by \begin{align} du = a^2 d\eta = a\, dt. \end{align} Thus the incompleteness can be checked by \begin{align} \int^{t_{i}} a(t)\, dt = \text{finite} \quad \Leftrightarrow \quad \text{null incomplete}.
\end{align} In the following, we focus on the incomplete case and fix the integration constant by $u(t_{i}) = 0$. The incompleteness of the null geodesics implies two possibilities. One is that the whole of the spacetime is not spanned by the flat FLRW coordinates \eqref{FLRW} and the end point of null geodesics corresponds to a coordinate singularity. A simple example of the extendible case is the exact de Sitter space, where flat chart only covers a half of the entire closed de Sitter space. The other possibility is that the end point of null geodesics corresponds to a true singularity. The tangent vector $k^\mu =dx^\mu/du$ of such null geodesics is given by \begin{align} k^{\mu} \partial_{\mu} = \frac{1}{a^2} \partial_{\eta} - \frac{1}{a^2} \partial_{r} = \frac{1}{a} \partial_{t} - \frac{1}{a^2} \partial_{r}. \end{align} Because of the relation $\nabla_{[\mu} k_{\nu]} = 0$, we can introduce a scalar potential for $k$ by $k_{\mu} dx^{\mu} = - d v $, that is, we introduce $v$ by \begin{align} dv := - k_{\mu} dx^{\mu} = \frac{1}{a} dt + dr = d\eta + dr. \end{align} Following, we fix an integration constant by $v = \eta + r$. Note that a curve $v = \text{constant}$ corresponds to a null geodesic. By using $u$ and $v$ coordinates instead of $t$ and $r$, we can rewrite \eqref{FLRW} as \begin{align} g_{\mu\nu}dx^{\mu} dx^{\nu} &= - 2 du dv + a^2(u) dv^2 + a^2(u) (v - \eta(u))^2 d \Omega^2. \label{g=} \end{align} In this coordinates, the Ricci tensor can be represented as \begin{align} R_{\mu\nu} dx^{\mu} dx^{\nu} &= - 2 \frac{a''(u)}{a(u)} du^2 + \left(3 a'(u){}^2 + a(u) a''(u)\right)g_{\mu\nu}dx^{\mu} dx^{\nu} \notag\\ &= - 2 \frac{\dot{H}}{a^2} du^2 + \left( 3 H^2 + \dot{H} \right) g_{\mu\nu}dx^{\mu} dx^{\nu}, \end{align} where the prime stands for derivatives with respect to $u$ and $H$ is the Hubble parameter defined by $H := \dot{a}/ a = a'$. The expression appeared in $R_{uu}$ plays an important role in this paper and let us call it $A(u)$: \begin{align} A(u) :=- \frac{1}{2} R_{uu}=\frac{a''(u)}{a(u)} = \frac{\dot{H}}{a^2}. \label{Au} \end{align} One can see that the
\section{Introduction} Matrix integrals appear in diverse fields of mathematics and theoretical physics (see \cite{Rossi:1996hs,Morozov:2009jv} for reviews). In particular they play important roles in random matrix theory (RMT) \cite{Guhr:1997ve,Mehtabook,Akemann:2011csh}. The list of most well-studied unitary matrix integrals includes, but is not limited to, the Brezin-Gross-Witten integral \cite{Brezin:1980rk,Gross:1980he} (also known as the Leutwyler-Smilga integral \cite{Leutwyler:1992yt}), the Harish-Chandra-Itzykson-Zuber integral \cite{Harish-Chandra:1957dhy,Itzykson:1979fi}, and the Berezin-Karpelevich integral \cite{Berezin1958,Guhr:1996vx,Jackson:1996jb}. They are relevant to quantum gravity, lattice gauge theory, Quantum Chromodynamics (QCD), quantum chaos and disordered mesoscopic systems. In QCD, due to spontaneous breaking of chiral symmetry, the low-energy physics may be described by a non-linear sigma model. In the so-called $\epsilon$-regime \cite{Gasser:1987ah,Leutwyler:1992yt}, exact zero modes of the Nambu-Goldstone modes dominate the partition function and the infinite-dimensional path integral reduces to a finite-dimensional integral over a coset space. It is known since olden times that there are three patterns of chiral symmetry breaking in QCD, depending on the representation of quarks \cite{Peskin:1980gc}. In QCD-like theories with quarks in \cc{real} representations of the gauge group, the pattern is $\text{SU}(2N_f)\to\text{SO}(2N_f)$ where $N_f$ is the number of Dirac fermions \cite{Smilga:1994tb,Kogut:2000ek}. The basic degrees of freedom at low energy are expressed through $U^T U$ where $U$ is a matrix field that lives on $\text{U}(2N_f)$. When a quark chemical potential $\mu$ is added, the low-energy sigma model attains additional terms \cite{Kogut:2000ek}. In this paper, we are interesting in evaluating the $\epsilon$-regime partition function of \cc{QCD with real quarks} when the chemical potential is different for each flavor, i.e., $(\mu_1,\cdots,\mu_{N_f})$ are all distinct. This case was not covered in \cite{Kogut:2000ek}. We will show that the partition function has a pfaffian form. We also argue that the integral formula has an application to the symmetry crossover between the Gaussian Orthogonal Ensemble (GOE) and the Gaussian Unitary Ensemble (GUE) in RMT. Moreover, since $U^TU$ is an element of the Circular Orthogonal Ensemble in RMT \cite{Dyson:1962es}, our result may have a potential application in this direction as well. This paper is organized as follows. In section~\ref{sc:main} the main analytical results of this paper are summarized. In section~\ref{sc:appl} some applications are illustrated. In section~\ref{sc:deriv} we give a derivation of the integral formulae presented in section~\ref{sc:main}. \cc{In section~\ref{sc:added} we give a formula (without proof) for a related unitary matrix integral that has applications to QCD-like theories with quarks in pseudoreal representations of the gauge group.} Finally in section~\ref{sc:conc} we conclude. \section{\label{sc:main}Main results} For an arbitrary Hermitian $N\times N$ matrix $H$ with mutually distinct eigenvalues $\{e_k\}$ and an arbitrary nonzero $\gamma\in\mathbb{C}$, we have \ba & \int_{\text{U}(N)} \!\!\!\! 
\mathrm{d}{U} \exp \kkakko{\gamma^2 \Tr(U^T U H U^\dagger U^* H^*)} \notag \\ = \; & \kkakko{\prod_{k=1}^{N} \Gamma\mkakko{\frac{k+1}{2}}} \frac{\exp\mkakko{\displaystyle \gamma^2\sum_{k=1}^{N}e_k^2}}{\gamma^{N(N-1)/2}\Delta_N(e)} \notag \\ & \times \underset{1\leq i,j\leq N}{\Pf}\Big[\erf\mkakko{\gamma(e_i-e_j)}\Big] \label{eq:main1} \ea for even $N$ and \ba & \int_{\text{U}(N)} \!\!\!\! \mathrm{d}{U} \exp \kkakko{\gamma^2\Tr(U^T U H U^\dagger U^* H^*)} \notag \\ =\; & \kkakko{\prod_{k=1}^{N} \Gamma\mkakko{\frac{k+1}{2}}} \frac{\exp\mkakko{\displaystyle \gamma^2\sum_{k=1}^{N}e_k^2}}{\gamma^{N(N-1)/2}\Delta_N(e)} \notag \\ & \times \Pf \kkakko{\begin{array}{c|c} \big\{\erf\mkakko{\gamma(e_i-e_j)}\big\}_{1\leq i,j\leq N} & \{1\}_{1 \leq i\leq N} \vspace{1pt}\\\hline \{ -1 \}_{1\leq j \leq N} & 0 \end{array}} \label{eq:main2} \ea for odd $N$, where $\mathrm{d} U$ denotes the normalized Haar measure, $\displaystyle\Delta_N(e)\equiv \prod_{i<j}(e_i-e_j)$ is the Vandermonde determinant, $\Pf$ denotes a pfaffian, and $\erf(x)$ is the error function. We performed an intensive numerical check of these formulae for $N$ up to $10$ by estimating the left hand sides of the formulae by Monte Carlo methods and verified their correctness. We used a Python library \textsf{pfapack} \cite{wimmer2012algorithm} for an efficient calculation of a pfaffian. As a side remark we note that a similar pfaffian formula has been obtained for a unitary matrix integral considered in \cite{Kanazawa:2018kbo}. \section{\label{sc:appl}Applications} \subsection{\label{sc:adjq}\cc{QCD with real quarks}} It follows from a slight generalization of \cite{Kogut:2000ek} that the static part of the partition function for the low-energy limit of massless QCD with $N_f$ flavors \cc{of quarks in real representations} with chemical potential $\{\mu_f\}_{f=1}^{N_f}$ is given by \ba Z = \int_{\text{U}(2N_f)}\hspace{-6mm} \mathrm{d} U \exp \ckakko{V_4 F^2 \Tr[U^T U B (U^T U)^\dagger B + B^2]} \ea where $V_4$ is the Euclidean spacetime volume, $F$ is the pion decay constant, and \ba B\equiv \diag(\mu_1,\cdots,\mu_{N_f},-\mu_1,\cdots,-\mu_{N_f}) \ea is the chemical potential matrix. The same sigma model is expected to arise in the large-$N$ limit of the chiral symplectic Ginibre ensemble \cite{Akemann:2005fd} which has exactly the same symmetry as \cc{QCD with real quarks}. Let us define the dimensionless variables \ba \left\{\begin{array}{rl} \widehat{\mu}_f & \equiv \sqrt{V_4} F \mu_f \\ \widehat{\mu}_{N_f+f} & \equiv - \widehat{\mu}_f \end{array} \quad \text{for}~f=1,2,\cdots,N_f\,. \right. \ea Then a straightforward application of \eqref{eq:main1} yields \ba Z \propto \frac{ \exp\mkakko{\displaystyle 4\sum_{f=1}^{N_f}\widehat{\mu}_f^2} \underset{1\leq f,g\leq 2N_f}{\Pf}\big[\erf\mkakko{\widehat{\mu}_f - \widehat{\mu}_g}\big] }{ \displaystyle \kkakko{\prod_{1\leq f<g\leq N_f}(\widehat{\mu}_f-\widehat{\mu}_g)^2(\widehat{\mu}_f+\widehat{\mu}_g)^2} \prod_{f=1}^{N_f}\widehat{\mu}_f } \ea which completely fixes the chemical potential dependence of the effective theory in the $\epsilon$-regime. \subsection{\label{sc:0923234}GOE-GUE crossover} In the classical papers \cite{Pandey:1982br,Mehta:1983ns}, Mehta and Pandey solved the random matrix ensemble intermediate between GOE and GUE. Their ingenious approach was to consider the random matrix \ba H = S + \alpha T\,, \label{eq:MPrm} \ea where $S$ is a Gaussian real symmetric matrix and $T$ is a Gaussian Hermitian matrix. As $\alpha$ grows from zero, the level statistics evolve from GOE to GUE. 
The transition occurs at the scale $\alpha^2\sim 1/N$ where $N$ is the matrix size. Mehta and Pandey successfully derived the joint probability distribution function of eigenvalues of $H$ by using the Harish-Chandra-Itzykson-Zuber integral \cite{Harish-Chandra:1957dhy,Itzykson:1979fi}. An alternative approach to the GOE-GUE transition would be to consider the random matrix \ba H = S + i \alpha A\,, \label{eq:werm} \ea where $A$ is a Gaussian real anti-symmetric matrix. [Actually the ensemble \eqref{eq:MPrm} is essentially equivalent to \eqref{eq:werm}, because the sum of two Gaussian real symmetric matrices is again a Gaussian real symmetric matrix with a modified variance.] Then the Gaussian weight for $S$ and $A$ reads as \ba & \quad \exp(-\Tr S^2 + \Tr A^2) \notag \\ & = \exp\kkakko{ -\frac{1+\alpha^2}{2\alpha^2}\Tr(H^2) + \frac{1-\alpha^2}{2\alpha^2} \Tr (HH^T) }. \ea Upon diagonalization $H=UEU^\dagger$ we end up with the unitary integral which exactly has the form \eqref{eq:main1} and \eqref{eq:main2}. Carrying out the integrals, we immediately arrive at the joint eigenvalue density derived by Mehta and Pandey \cite{Pandey:1982br,Mehta:1983ns}. Our integral thus provides a way to solve the transitive ensemble without recourse to the Harish-Chandra-Itzykson-Zuber integral. \section{\label{sc:deriv}Derivation of the formulae} The derivation proceeds in three steps. \subsection{Step 1:~the heat equation} We adopt the method of heat equation \cite{Itzykson:1979fi}. Let us assume $t>0$ and consider a function \ba z_N(t,H,U) & \equiv \frac{1}{t^{\alpha}} \exp\kkakko{-\frac{1}{t}\Tr(H-P^\dagger H^TP)^2} \ea where we wrote $P=U^T U$ for brevity. Then \ba \frac{\partial}{\partial t}z_N(t,H,U) & = \kkakko{-\frac{\alpha}{t} + \frac{1}{t^2}\Tr(H-P^\dagger H^TP)^2} \notag \\ & \quad \times z_N(t,H,U)\,. \label{eq:zdert} \ea The Laplacian over Hermitian matrices is defined by \ba \Delta_H & \equiv \sum_{i}\frac{\partial^2}{\partial H_{ii}^2} + \frac{1}{2} \sum_{i<j}\kkakko{\frac{\partial^2}{\partial(\mathrm{Re}\,H_{ij})^2}+\frac{\partial^2}{\partial(\mathrm{Im}\,H_{ij})^2}} \hspace{-2mm} \\ & = \sum_{i,j}\frac{\partial}{\partial H_{ij}}\frac{\partial}{\partial H_{ji}}\,. \ea Then we have (assuming that repeated indices are summed) \ba & \quad \Delta_H z_N(t,H,U) \notag \\ & = \frac{1}{t^\alpha}\frac{\partial}{\partial H_{ij}}\frac{\partial}{\partial H_{ji}} \exp\kkakko{-\frac{1}{t}\Tr(H-P^\dagger H^TP)^2} \\ & = \kkakko{-\frac{4N(N-1)}{t} + \frac{16}{t^2}\Tr(H-P^\dagger H^T P)^2 } z_N(t,H,U)\,. \label{eq:zderH} \ea Comparison of \eqref{eq:zdert} and \eqref{eq:zderH} indicates that, if we set $\displaystyle \alpha=N(N-1)/4$, then \ba \mkakko{\frac{\partial}{\partial t} - \frac{1}{16}\Delta_H} z_N(t,H,U) = 0 \ea holds. Then it is obvious that \ba Z_N(t,H) & \equiv \int_{\text{U}(N)} \!\!\!\! \mathrm{d}{U}~ z_N(t,H,U) \\ & = \frac{1}{t^\alpha}\exp\mkakko{-\frac{2}{t}\Tr H^2} \notag \\ & \quad \times \int_{\text{U}(N)} \!\!\!\! \mathrm{d}{U} \exp \kkakko{\frac{2}{t}\Tr(PHP^\dagger H^T)} \label{eq:ZNdefu} \ea also satisfies the same differential equation as $z_N(t,H,U)$. Using the invariance of the Haar measure it is easy to verify that $Z_N(t,H)$ depends on $H$ only through its eigenvalues $\{e_1,\cdots,e_N\}$. An important property of $Z_N(t,H)$ is its translational invariance. Namely, $Z_N(t,H)=Z_N(t,H+a\mathbbm{1}_N)$ for an arbitrary $a\in\mathbb{R}$, which can be easily verified from the definition \eqref{eq:ZNdefu}. 
This means that $Z_N(t,H)$ depends on $\{e_k\}$ only through the differences $\{e_k-e_\ell \}$. For us it is beneficial to transform the Laplacian into the ``polar coordinate'' \cite{ZinnJustin:2002pk} \ba \Delta_H = \frac{1}{\Delta_N(e)}\sum_{k=1}^{N}\frac{\partial^2}{\partial e_k^2}\Delta_N(e) + \Delta_X \ea where $X$ denote the angular variables. Then \ba \mkakko{\frac{\partial}{\partial t}-\frac{1}{16}\sum_{k=1}^{N}\frac{\partial^2}{\partial e_k^2}}[\Delta_N(e)Z_N(t,H)] = 0\,. \label{eq:differ3242} \ea To obtain the basic building block of $Z_N(t,H)$, we shall explicitly work out the $N=2$ case in the next subsection. \subsection{\boldmath Step 2:~the $N=2$ case} Let us look at $N=2$, for which \ba Z_2(t,H) & = \frac{1}{\sqrt{t}}\exp\kkakko{-\frac{2}{t}(e_1^2+e_2^2)} \notag \\ & \quad \times \int_{\text{SU}(2)} \!\!\!\! \mathrm{d}{U} \exp\ckakko{\frac{2}{t}\Tr[(U^TU)E(U^TU)^\dagger E]} \ea where $E=\diag(e_1,e_2)$ and we dropped the $\text{U}(1)$ phase of $U$ because it decouples. Using the parametrization $U=x_0\mathbbm{1}_2 + i x_k \sigma_k$ with $x_0^2+x_1^2+x_2^2+x_3^2=1$, it is easily verified that \ba & \quad \Tr[(U^TU)E(U^TU)^\dagger E] \notag \\ & = e_1^2+e_2^2-4(e_1-e_2)^2 (x_0x_1+x_2x_3)^2, \ea hence \ba Z_2(t,H) = \frac{1}{\sqrt{t}} \int_{\text{SU}(2)} \!\!\!\! \mathrm{d}{U} \exp\ckakko{-\frac{8}{t}(e_1-e_2)^2 (x_0x_1+x_2x_3)^2}\,. \ea Next we employ the Hopf coordinate of $S^3$: \ba & (x_0,x_1,x_2,x_3) \notag \\ = \; & (\cos \xi_1\sin \eta, \cos \xi_2 \cos\eta, \sin \xi_1 \sin \eta, \sin \xi_2 \cos\eta) \ea which yields \ba Z_2(t,H) & = \frac{1}{2\pi^2\sqrt{t}} \int_0^{2\pi}\mathrm{d} \xi_1
see that in all these anti-diffusive cases $| \hat \nu_{\rm turb}| < {\rm Pr} = 0.3$, so the system overall still behaves diffusively overall (i.e. with ${\rm Pr} + \hat \nu_{\rm turb} > 0$). In fact, so far we have found that $| \hat \nu_{\rm turb}| \ll {\rm Pr}$ in all such runs, so the anti-diffusive effect is in practice negligible. Whether this would continue to be the case in stellar interiors remains to be determined, but is quite likely since the turbulent velocities are expected to scale as $\sqrt{{\rm Pr}/(R_0-1)}$ (in the pure fingering case at least) so the Reynolds stresses should scale as ${\rm Pr}/(R_0-1)$ \citep[see, e.g.][]{Brownal2013,SenguptaGaraud2018}. For large $R_0$ approaching marginal stability, which is the region of parameter space where anti-diffusive behavior may be expected \citep{Xieal2019}, the Reynolds stress is therefore expected to be very small. For this reason, we now set aside the mathematically interesting but in practice probably irrelevant scenario of anti-diffusive fingering convection. \begin{figure}[h] \epsscale{0.6} \plotone{Dnu.pdf} \caption{Comparison between two methods of measuring $\hat \nu_{\rm turb}$ from the simulations presented in Table \ref{tab:data1}. The horizontal axis shows the results from the method using the mean flow using (\ref{eq:nuturb1}), and the vertical axis shows the results using the stress-strain relation using (\ref{eq:linearfit}). Error bars shown are computed as discussed in the main text. } \label{fig:Dnu} \end{figure} A more practical result can be obtained by comparing the turbulent viscosity to the turbulent compositional diffusivity computed from the simulations, which is related to the flux $\hat F_C$ defined earlier in equation (\ref{eq:FCdef}) via \begin{equation} \hat \kappa_{C,{\rm turb}} = \frac{- \langle wC \rangle}{\kappa_T C_{0z}} = R_0 \hat F_C. \end{equation} Note that the second term in that expression is the ratio of two dimensional quantities, while the third is non-dimensional. Figure \ref{fig:DnuvsDC} shows the time-average of $\hat \kappa_{C,{\rm turb}}$ (during the statistically stationary state) against $\hat \nu_{\rm turb}$. We see that the data follows some interesting trends, and falls into two categories. Simulations that are clearly fingering-dominated, such as those with low density ratio ($R_0 = 1.5$ and $R_0 = 1.75$), or intermediate density ratio ($R_0 = 2$) but low shear, satisfy the relationship $\hat \nu_{\rm turb} \simeq 0.25 \hat \kappa_{C,{\rm turb}}$ (red line). By contrast, simulations that are clearly shear-dominated, such as those with high density ratio and high shear, satisfy the relationship $\hat \nu_{\rm turb} \simeq \hat \kappa_{C,{\rm turb}}$ (blue line). For some values of the density ratio (especially $R_0 = 2$), the data spans both limits, and continuously moves from one to the other as the Richardson number decreases (i.e. moving from right to left on the figure). The only data points that do not fall on either lines (or in between them) correspond to simulations with high density ratio and low shearing rates. In almost all cases, these also correspond to parameters where anti-diffusive behavior is observed, so $\hat \nu_{\rm turb}$ cannot remain proportional to $\hat \kappa_{C,{\rm turb}}$ in that limit\footnote{By construction, $\hat \kappa_{C,{\rm turb}}$ has to be positive, since $\langle \hat w \hat C \rangle$ has to be negative, so it cannot remain proportional to $\hat \nu_{\rm turb}$ when $\hat \nu_{\rm turb}$ changes sign.}. 
Since this behavior is unlikely to persist at stellar values of the Prandtl number, these data points a probably not significant for astrophysical purposes. \begin{figure}[h] \epsscale{0.6} \plotone{DnuvsDC.pdf} \caption{Comparison between $\hat \nu_{\rm turb}$ and $\hat \kappa_{C,{\rm turb}}$ for a wide range of simulations, in statistically stationary state. Cases with $R_0 < 3$ are for ${\rm Pr} = \tau = 0.3$, from the simulations described in Section \ref{sec:num} and Table \ref{tab:data1}. The $R_0 = 10$ and $R_0 = 20$ sets of simulations are for ${\rm Pr} = \tau = 0.03$, and are described in Section \ref{sec:simplemodel} and Table \ref{tab:data2}. For fixed values of $R_0$, increasing shear (lower Ri) reduces $\hat \kappa_{C,{\rm turb}}$. Vertical errorbars on $\hat \nu_{\rm turb}$ are computed as described in the main text. Horizontal errobars on $\hat \kappa_{C,{\rm turb}}$ are not shown as they would be smaller than the symbol size. The blue line shows the relationship $\hat \nu_{\rm turb} \simeq \hat \kappa_{C,{\rm turb}}$, and the red line shows $\hat \nu_{\rm turb} \simeq 0.25 \hat \kappa_{C,{\rm turb}}$. } \label{fig:DnuvsDC} \end{figure} These results are important because they relate the momentum transport to the compositional transport. Hence if one can somehow be derived from observations (of, e.g. surface abundances or subsurface velocity profiles), the other can be indirectly inferred. \section{Model} \label{sec:simplemodel} We now propose a simple model to explain the trends observed in the previous section, which extends the work of \citet{Brownal2013} in the presence of a large-scale shear. For pedagogical purposes, we first briefly recall the salient properties of the original \citet{Brownal2013} model for mixing by fingering convection, and then add the shear. Given that the saturation of the fingering instability occurs as a result of parasitic shear instabilities that develop between up-flowing and down-flowing fingers, a key ingredient of the \citet{Brownal2013} model lies in assuming that this saturation occurs when the growth rate of the parasitic instabilities $\hat \sigma$ equals a universal constant times the growth rate of the original fingering instability $\hat \lambda_f$, i.e. when $ \hat \lambda_f = C_B \hat \sigma$. Furthermore, from dimensional analysis (or exact computations), it can be shown that $\hat \sigma = \eta \hat w_f \hat l_f$, where $\eta$ is a known constant (whose value is irrelevant). Hence the model predicts a simple relationship between the vertical velocity within the fingers $\hat w_f$, their growth rate $\hat \lambda_f$, and their wavenumber $\hat l_f$, namely \begin{equation} \hat \lambda_f = C_B \eta \hat w_f \hat l_f. \label{eq:parasitic} \end{equation} This idea was successfully verified by \citet{SenguptaGaraud2018}. Next, using the fact that \begin{equation} \hat \lambda_f \hat C_f + R_0^{-1} \hat w_f = - \tau \hat l_f^2 \hat C_f , \end{equation} from equation (\ref{eq:nondimcomposition}) in the absence of shear, \citet{Brownal2013} estimated the amplitude of the compositional perturbation $\hat C_f$ to be \begin{equation} \hat C_{f} = - \frac{R_0^{-1} \hat w_{f} }{ \hat \lambda_f + \tau \hat l_f^2} . 
\end{equation} Finally, by dimensional analysis the compositional flux must be proportional to $\hat w_f \hat C_f$, which implies that \begin{equation} \hat F_C = | \langle \hat w \hat C \rangle | = K_B | \hat w_f \hat C_f | = K_B \frac{R_0^{-1} \hat w^2_{f} }{ \hat \lambda_f + \tau \hat l_f^2} = \frac{K_B}{(C_B \eta)^2} \frac{R_0^{-1} \hat \lambda^2_{f} }{ \hat \lambda_f \hat l_f^2 + \tau \hat l_f^4}, \label{eq:brownflux} \end{equation} where $K_B$ is, again, a universal constant. Note how all these universal constants now conveniently combine into a single one, that can be calibrated against the data. The model was successfully validated by \citet{Brownal2013} against DNSs of unsheared fingering convection with the constant $K_B/(C_B\eta)^2 \simeq 49$. In the presence of shear, the model must be modified to account for the fact that the fingers become tilted. This changes both their intrinsic velocity (which acquires a horizontal component) and their wavenumber (which decreases as a result of the tilt), see Figure \ref{fig:tilt}. To include this in the model, we begin with a thought experiment in which the fingers initially develop as they would without the shear (i.e. they grow at the same rate $\hat \lambda_f$ and have the same wavenumber $\hat l_f$ as the unsheared fingers), but are then subject to a homogeneous shear flow with constant shearing rate $\hat S$, whose effect is to tilt them away from the vertical. \begin{figure}[h] \epsscale{0.6} \plotone{Tilt.pdf} \caption{Illustration of fingering convection in the presence of a uniform shear. The originally vertical finger, whose wavenumber is $\hat l_f$, is tilted by the shear, causing its width to decrease. The tilt also causes the addition of a horizontal velocity component, so the total velocity shear across two fingers would increase relative to the non-tilted one. Saturation by parasitic instabilities therefore occurs {\it earlier} in tilted fingers, reducing the vertical velocities and the turbulent flux. } \label{fig:tilt} \end{figure} With this assumption, a fluid parcel that would be flowing purely vertically in the absence of shear acquires
# Talk:Just intonation WikiProject Tunings, Temperaments, and Scales ## Key of examples Not that there's anything wrong with it, but is there any reason for the examples being changed from C major to F major? Just curious. --Camembert (22 August 2003) ## Outline My proposed outline: 1. introduction: Just intonation is any musical tuning in which the frequencies of notes are related by whole number ratios. Another way of considering just intonation is as being based on lower members of the harmonic series. Any interval tuned in this way is called a just interval. Intervals used are then capable of greater consonance and greater dissonance, however ratios of extrodinarily large numbers, such as 1024:927, are rarely purposefully included just tunings. 2. Why JI, Why ET 1. JI is good 1. "A fifth isn't a fifth unless its just"-Lou Harrison 2. Why isn't just intonation used much? 1. Circle of fifths: Loking at the Circle of fifths, it appears that if one where to stack enough perfect fifths, one would eventually (after twelve fifths) reach an octave of the original pitch, and this is true of equal tempered fifths. However, no matter how just perfect fifths are stacked, one never repeats a pitch, and modulation through the circle of fifths is impossible. The distance between the seventh octave and the twelfth fifth is called a pythagorean comma. 2. Wolf tone: When one composes music, of course, one rarely uses an infinite set of pitches, in what Lou Harrison calls the Free Style or extended just intonation. Rather one selects a finite set of pitches or a scale with a finite number, such as the diatonic scale below. Even if one creates a just "chromatic" scale with all the usual twelve tones, one is not able to modulate because of wolf intervals. The diatonic scale below allows a minor tone to occur next to a semitone which produces the awkward ratio 32/27 for Bb/G. 3. Just tunings 1. Limit: Composers often impose a limit on how complex the ratios used are: for example, a composer may write in "7-limit JI", meaning that no prime number larger than 7 features in the ratios they use. Under this scheme, the ratio 10:7, for example, would be permitted, but 11:7 would not be, as all non-prime numbers are octaves of, or mathematically and tonally related to, lower primes (example: 12 is an octave of 6, while 9 is a multiple of 3). 2. Diatonic Scale: It is possible to tune the familiar diatonic scale or chromatic scale in just intonation but many other justly tuned scales have also been used. 4. JI Composers: include Glenn Branca, Arnold Dreyblatt, Kyle Gann, Lou Harrison, Ben Johnston, Harry Partch, Terry Riley, LaMonte Young, James Tenney, Pauline Oliveros, Stuart Dempster, and Elodie Lauten. 5. conclusion http://www.musicmavericks.org/features/essay_justintonation.html Hyacinth (30 January 2004) ## Just tuning I was going to merge the content below from Just tuning, but which "one possible scheme of implementing just intonation frequencies" does the table show? Hyacinth 10:29, 1 Apr 2005 (UTC) It shows the normally used just intonation scale - I don't think that it has a special name. It can be constructed by 3 triads of 4:5:6 ratio that link to each other, e.g. F-A-C, C-E-G, G-B-D will make the scale of C. Yes, this should definitely be included. (3 April 2005) Please Wikipedia:Sign your posts on talk pages. Thanks. 
Hyacinth 22:03, 3 Apr 2005 (UTC) I am no expert, and I've not done any Wikipedia changes either, so forgive me if I'm wrong in what I'm doing (content) or how I'm doing it (method), but the main text gives 6/5 as a minor third, and I currently disagree. Scholes' Oxford Companion to Music, eighth edition, in the section on intervals, says a minor third is a semitone below a major third, ie 15/16 * 5/4 = 75/64 and not 6/5 as stated in the main text. The Oxford Companion to Music also states that by going up a semitone an interval becomes an augmented interval and so a major tone (a second) would become an augmented second as follows: 9/8 * 16/15 = 6/5. Thus 6/5 is an augmented second, and 75/64 is a minor third. Ivan Urwin There are many semitones. A minor third is a chromatic semitone (25/24) smaller than a major third. I can also give you a reductio ad absurdum for your reasoning. If 6/5 is an augmented second, then 5/4 * 6/5 = 3/2 is not a perfect fifth, but a doubly augmented fourth. Then if 4/3 is a perfect fourth, 4/3 * 3/2 = 2/1 is not an octave, but an augmented seventh. —Keenan Pepper 00:32, 14 April 2006 (UTC) Why not split the difference, and call a semitone the twelfth root of 2, or 1.059463...? right between 16/15 at 1.066667 and 25/24 at 1.041667. Even better, one could call a minor third 300 cents, or the fourth root of 2, again smack between those silly over-simplified integer ratios. Of course I'm kidding; thanks, KP! Ivan, I'm not familiar with your Scholes reference; how completely does it treat the differences between just tunings and various temperaments? I'm guessing that's where the oversimplification may lie. Just plain Bill 01:49, 14 April 2006 (UTC) Okay, I think I missed a key word in the Scholes Oxford Guide to Music text, approximately like this ... If an inverval be chromatically increased a semitone, it becomes augmented. I am looking at this as a mathematician and so the musical terminology throws me somewhat: dividing by 5 being called thirds and dividing by 3 being called fifths, etc. If I were to rewrite the Scholes text as shown below and use 25/24 as a definitiion for a 'chromatic semitone' as per Keenan Pepper's remarks, then I'd agree. If an inverval be increased by a chromatic semitone, it becomes augmented. The way this arose was me looking at the ratios with my mathematical background. Prime factorisation of integers is unique. The only primes less than 10 are 2,3,5, and 7. I gather that 7 is used for the 'blue' note in blues, and that most western music just uses or approximates ratios based on 2, 3 and 5. With 2 being used to determine octaves and with notes an octave apart being named similarly, that brings practical ratios down to just determining the power of 3 and the power of 5. I was making a 2 dimensional table of the intervals, and putting names to the numbers with the help of a borrowed book, but it appears the complex terminology for simple mathematics got the better of me. I am happy to drop my remarks and delete all this in a few days time (including Keenan's and Bill's remarks), but I'll give you chance to read it and object before I do. Maybe some moderator will do that. Maybe you two are the moderators. Whatever. Anyway, thanks guys. Ivan Urwin You're mostly right about the primes. See Limit (music). The labeling of intervals as "seconds", "thirds", etc. corresponds to the number of steps they map to in 7-per-octave equal temperament. 
3/2 maps to four steps, so it's a "fifth", 5/4 maps to two steps, so it's a "third", 16/15 maps to one step, so it's a "second", and 25/24 maps to zero steps, so it's a kind of "unison" or "prime". Intervals separated by 25/24 have the same name, for example the 6/5 "minor third" and the 5/4 "major third". Conversations on talk pages are usually never deleted, only archived when they become too long, so don't worry about that. —Keenan Pepper 05:04, 15 April 2006 (UTC) I'm lost here (not a moderator, by the way, just another netizen) when you speak of "determining the power of 3 and the power of 5." The interval of a fifth is just the musical pitch space between the first and fifth notes of a scale.
typical ANN formalism requires that each property is either an input or an output of the network, and all inputs must be provided to obtain a valid output. \change{In our example composition would be inputs, whereas ultimate tensile strength and hardness are outputs. To exploit the known relationship between ultimate tensile strength and hardness, and allow either the hardness and ultimate tensile strength to inform missing data in the other property}, we treat all properties as both inputs and outputs of the ANN. \change{We have a single ANN rather than an exponentially large number of them (one for each combination of available composition and properties).} We then adopt an expectation-maximization algorithm\cite{Krishnan08}. This is an iterative approach, where we first provide an estimate for the missing data, and then use the ANN to iteratively correct that initial value. The algorithm is shown in \figref{fig:2-01b-sketch2}. For any material $\textbf{x}$ we check which properties are unknown. In the non-trivial case of missing entries, we first set missing values to the average of the values present in the data set. An alternative approach would be to adopt a value suggested by that of a local cluster. With estimates for all values of the neural network we then iteratively compute \begin{align} \textbf{x}^{n+1}=\gamma\textbf{x}^n+(1-\gamma)\textbf{f}(\textbf{x}^n)\punc{.} \end{align} The converged result is then returned instead of $\textbf{f}\textbf(\textbf{x})$. The function $\textbf{f}$ remains fixed on each iteration of the cycle. We include a softening parameter $0\le\gamma\le1$. With $\gamma=0$ we ignore the initial guess for the unknowns in $\textbf{x}$ and determine them purely by applying $\textbf{f}$ to those entries. However, introducing $\gamma>0$ will prevent oscillations and divergences of the sequence, typically we set $\gamma=0.5$. \subsection{Functional properties} \label{sec:frameworkgraph} Many material properties are functional graphs, for example to capture the variation of the yield stress with temperature\cite{Ritchie73}. To handle this data efficiently, we promote the two varying quantities to become interdependent vectors. This will reduce the amount of memory space and computation time used by a factor roughly proportional to the number of entries in the vector quantities. \change{It also allows the tool to model functional properties on the same footing as the main model, rather than as a parameterization of the curve such as mean and gradient. The graph is represented by a series of points indexed by variable $\ell$.} Let $\textbf{x}$ be a point from a training data set. Let $x_{1,\ell}$ and $x_{2,\ell}$ be the varying graphical properties, and let all other properties $x_3,x_4,\ldots$ be normal scalar quantities. When $\textbf{f}(\textbf{x})$ is computed, the evaluation of the vector quantities is performed individually for each component of the vector, \begin{align} y_{1,l}=f_1(x_{1,\ell},x_{2,\ell},x_3,x_4,\ldots)\punc{.} \end{align} When evaluating the scalar quantities, we aim to provide the ANN with information of the $x_2(x_1)$ dependency as a whole, instead of the individual data points (i.e. parts of the vectors $x_{1,\ell}$, and $x_{2,\ell}$). It is reasonable to describe the curve in terms of different moments with respect to some basis functions for modeling the curve. For most expansions, the moment that appears in lowest order is the average $\anglemean{x_1}$, or $\anglemean{x_2}$ respectively. 
We therefore evaluate the scalar quantities by computing, \begin{align} y_3=f_3(\anglemean{x_1},\anglemean{x_2},x_3,x_4,\ldots)\punc{.} \end{align} This can be extended by defining a function basis for expansion, and include their higher order moments. \change{This approach automatically removes the bias due to differeing numbers of points in the graphs.} \subsection{Training process}\label{sec:frameworktrain} The ANN has to first be trained on a provided data set. Starting from random values for $A_{ihj}$, $B_{hj}$, $C_{hj}$, and $D_j$, the parameters are varied following a random walk, and the new values are accepted, if the new function $\textbf{f}$ models the fixed-point equation $\textbf{f}(\textbf{x})=\textbf{x}$ better. This is quantitatively measured by the error function, \begin{align} \delta=\sqrt{\frac{1}{N}\sum_{x\in X}\sum_{j=1}^I\left[ f_j(\textbf{x})-x_j\right]^2}\punc{.} \label{eq:rms} \end{align} The optimization proceeds by a steepest descent approach\cite{Floudas08}, where the number of optimization cycles $C$ is a run-time variable. In order to calculate the uncertainty in the ANN's prediction, $\textbf{f}^\sigma (\textbf{x})$, we train a whole suite of ANNs simultaneously, and return their average as the overall prediction and their standard deviation as the uncertainty\cite{Steck03}. We choose the number of models $M$ to be between $4$ and $64$, since this should be sufficient to extract the mean and uncertainty. In \secref{sec:Testing} we show how the uncertainty reflects the noise in the training data and uncertainty in interpolation. Moreover, on systems that are not uniquely defined, knowledge of the full distribution of models will expose the degenerate solutions. \subsection{Alternative approaches} \label{sec:comparetoGP} ANNs like the one proposed in this paper (with one hidden layer and a bounded transfer function; see \eqnref{eq:2-01-mapping}) can be expressed as a Gaussian process using the construction first outlined by Neal \cite{NEAL96} in 1996. Gaussian processes were considered as an alternative to building the framework in this paper, but were rejected for two reasons. Firstly, the ANNs have a lower computational cost, which scales linearly with the number of entries $N$, and therefore ANNs are feasible to train and run on large-scale databases. The cost for Gaussian processes scales as $N^3$, and therefore does not provide the required speed. Secondly, materials data tends to be clustered. Often, experimental data is easy to produce in one region of the parameter space, and hard to produce in another region. Gaussian processes can only define a unique length-scale of correlation and consequently fail to model clustered data whereas ANNs perform well. \section{Testing and validation}\label{sec:Testing} Having developed the ANN formalism, we proceed by testing it on exemplar data. We will take data from a range of models to train the ANN, and validate its results. We validate the ability of the ANN to capture functional relations between materials properties, handle incomplete data, and calculate graphical quantities. In \secref{sec:BasicTests}, we interpolate a set of 1-dimensional functional dependencies (cosine, logarithmic, quadratic), and present a method to determine the optimal number of hidden nodes. In \secref{sec:ErroneousData}, we demonstrate how to determine erroneous entries in a data set, and to predict the number of remaining erroneous entries. \secref{sec:FragmentedData} provides an example of the ANN performing on incomplete data sets. 
Finally, in \secref{sec:FunctionalData}, we present a test for the ANN's graphing capability. \subsection{One-dimensional tests}\label{sec:BasicTests} \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{3A-01a-plot-cos} \includegraphics[width=0.49\linewidth]{3A-01b-plot-log} \includegraphics[width=0.49\linewidth]{3A-01c-plot-qua} \includegraphics[width=0.49\linewidth]{3A-02-plot-quadratic-summary} \caption{Training an ANN on toy-model data for (a) a cosine function, (b) a logarithmic function with unequally distributed data, and (c) a quadratic function with Gaussian noise. (d) For the quadratic function, the performance with different number of hidden nodes is tested, and the rms (\eqnref{eq:rms}), the reduced rms (\eqnref{eq:rmsred}), and the cross-validation rms are computed and plotted.} \label{fig:toymodeltest} \end{figure*} \begin{table} \caption{\change{The results of cross-validation testing for the three models (a) a cosine function, (b) a logarithmic function with unequally distributed data, and (c) a quadratic function with Gaussian noise. The second column gives the error when the ANN is trained on all of the data, and the third column the error in points unseen in training during cross-validation.}} \begin{tabular}{lcc} \bf{Data set}& \bf{Error all}& \bf{Error cross-validation}\\ \hline Cosine&\num{0.06}&\num{0.07}\\ Logarithm&\num{0.05}&\num{0.06}\\ Quadratic&\num{1.2}&\num{1.4} \end{tabular} \label{tab:3-01-cross} \end{table} The ANN was trained on a (a) cosine function, (b) logarithmic function with unequally distributed data, and (c) quadratic function with results shown in \figref{fig:toymodeltest}. All of the data is generated with Gaussian distributed noise to reflect experimental uncertainty in real-world material databases. The cosine function is selected to test the ability to model a function with multiple turning points, and was studied with $H=3$ hidden nodes. The logarithmic function is selected because it often occurs in physical examples such as precipitate growth, and is performed with $H=1$. The quadratic function is selected because it captures the two lowest term in a Taylor expansion, and is performed with $H=2$. \figref{fig:toymodeltest} shows that the ANN recovers the underlying functional dependence of the data sets well. The uncertainty of the model is larger at the boundaries, because the ANN has less information about the gradient. The uncertainty also reflects the Gaussian noise in the training data, as can be observed from the test with the $\log$ function, where we increased the Gaussian noise of the generated data from left to right in this test. For the test on the $\sin$ function, the ANN has a larger uncertainty for maxima and minima, because these have higher curvature, and are therefore harder to fit. \change{The correct modeling of the smooth curvature of the cosine curve could not be captured by simple linear interpolation.} The choice of the number of hidden nodes $H$ is critical: Too few will prevent the ANN from modeling the data accurately; too many hidden nodes leads to over-fitting. To study the effect of changing the number of hidden nodes, we
# DIY Metrics: Normalizing by Possession

As promised, today we're going to talk about normalizing by possession instead of time on court. First, a bit of motivation. Different teams play at different paces. Some teams try to score a lot in transition, some teams try to slow the ball down and make sure they get good shots in the half-court. Part of this is related to a team's defense and how quickly they get rebounds in the hands of players who can push the ball. While folks make fun of Russell Westbrook for "stealing" rebounds, it's part of a deliberate strategy by the Thunder to push the ball up court quickly. All this adds up to possessions being arguably a more useful tool by which to compare team and player performance. Scoring 105 points on 110 possessions is a lot different from scoring 105 points on 95 possessions, even if both took the same 48 minutes of game time.

The problem is that it's a lot harder to do. First, you have to agree on what a "possession" is (not as obvious as it seems), and then you've got to actually go into the data and break the game up by those specific tags. Oh, and players can conceivably enter and leave the game within a possession. So there's that. Luckily, rather than trying to reinvent the wheel, (Wait a minute, isn't this whole series an attempt to reinvent the wheel?) we can borrow some stuff. First, definitions:

## What is a 'possession'?

According to our original data source, NBAStuffer, a possession concludes whenever a player/team:

1. attempts a field goal,
2. misses a shot and does not get the offensive rebound,
3. turns the ball over (some sources add "turnovers that are assigned to teams" for a more precise possession calculation),
4. goes to the line for two or three shots and either makes the last shot or does not get the rebound of a missed last shot.

This is a little different from the NBA's official definition:

Section XVIII-Team Possession: A team is in possession when a player is holding, dribbling or passing the ball. Team possession ends when the defensive team gains possession or there is a field goal attempt.

BasketballReference has weird, convoluted formulas because they're trying to use aggregated stats rather than play-by-play numbers to calculate possession-based statistics, but I think they're trying to do basically what's listed above from NBAStuffer. So we'll just stick with those four bullets above for our purposes.

## Getting a bit more precise

In order to translate our definition of a possession above into something we can practically use with our data, we need to be a bit more precise. Take the first two statements from above:

1. attempts a field goal,
2. misses a shot and does not get the offensive rebound,

These are kinda the same thing; you can't "miss a shot and not get the offensive rebound" unless you… attempt a field goal. Here's my translation of conditions where a "possession" ends:

1. Made field goal
2. Missed field goal without offensive rebound
3. Turnover
4. Two- or three-shot trips to the line with a make on the last shot OR a miss with no offensive rebound on the last shot.

For our data set, two of these are easy, and the other two are challenging.
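As a quick illustration of the motivation above, here's the per-possession arithmetic in R (toy numbers, not from our data set):

pts <- 105
possessions <- c(110, 95)
round(pts / possessions * 100, 1)
# [1]  95.5 110.5

Same 105 points, but per 100 possessions that's the difference between a rough offensive night and a great one.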
To code this information into our data, we adjust our data cleaning step a bit, specifically by adding in a new function:

```r
possession_change <- function(tb){
  # This function identifies whether a particular event resulted in a
  # change-of-possession
  tb %>%
    mutate(possessionchange = map2_lgl(event_type, isoreb,
                                       function(a, b) {
                                         a == "turnover" | a == "shot" | !b
                                       }))
}
```

possession_change makes use of an existing column (event_type) that already tells us if there was a turnover (event_type == "turnover") or a made shot (event_type == "shot", and don't ask me why it's coded as "shot" instead of "made shot" or something clearer like that). So that's numbers 1 and 3 on our list above.

And the other two are related. They're both basically "missed shots that don't result in an offensive rebound". possession_change accounts for this using the column, isoreb, which I mean to be read as "is oreb" rather than "iso reb". To generate this column, I also added the following function:

```r
id_orebs <- function(teamtbl, thatteam){
  # this is currently written to interface with team-specific dataframes.
  # There's no good reason for this, and it would be more efficient to
  # re-write and create this column for the full data table.
  ftmiss <- teamtbl %>%
    filter(event_type == "free throw",
           type %in% c("Free Throw 1 of 1",
                       "Free Throw 2 of 2",
                       "Free Throw 3 of 3")) %>%
    mutate(miss = map_chr(description, str_match, "MISS"),
           shooterhome = (player %in% c(o1, o2, o3, o4, o5)),
           hometeam == thatteam) %>%
    filter(miss == "MISS")

  omiss <- teamtbl %>%
    filter(event_type == "miss") %>%
    mutate(miss = "MISS",
           shooterhome = (player %in% c(o1, o2, o3, o4, o5)),
           hometeam == thatteam)

  ...

  teamtbl %>%
    left_join(
      (bind_rows(ab, bb) %>%
         select(isoreb, old_id) %>%
         rename(mplay_id = old_id))
    )
}
```

The ellipsis in that function is doing a lot of work. There's another 100 lines of code in there that I've omitted. It's probably the ugliest function I've written thus far on this project, and if I wasn't trying to get this post done on a (self-imposed) deadline, I'd probably re-factor it into a couple (or 10) smaller functions that are more intelligible. I'll include the full version at the end, but the gist of it is: For missed last free throws and missed field goals, this function creates a column of logicals that tells you whether the miss resulted in an offensive rebound.

The tricky bit is that the missed free throw or missed shot was coded on a single row of the data frame, and the rebound was on a subsequent line. Usually (but not always) this was the exact next line. Regardless, I had to recover this information and feed it back into the previous line. Doing this involved getting a consistent ID for each event (mplay_id) and then pairing up missed shots with the subsequent rebound information. old_id helped me do that while keeping track of the location of the original rows. Check out the end of the post if you really want to get into the details. (Or see a from-the-wild example of bad code that could be improved!)
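If you just want the flavor of that pairing step without wading through the full function, here's a stripped-down sketch. It assumes the rebound always sits on the very next row (which, again, isn't true of the real data), and the column names are made up for illustration:

```r
library(dplyr)

toy <- tibble::tibble(
  event_type = c("miss", "rebound", "miss", "rebound"),
  team       = c("TOR",  "TOR",     "TOR",  "BOS")
)

toy %>%
  mutate(
    next_event = lead(event_type),
    next_team  = lead(team),
    # A miss keeps the possession alive only if the very next event is a
    # rebound by the same (offensive) team.
    isoreb = event_type == "miss" &
      next_event == "rebound" &
      next_team == team
  )
```

The real id_orebs is messier precisely because that assumption fails: the rebound can land a few rows later, which is why the post pairs events up through mplay_id/old_id instead.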
The chief result is the two extra lines in get_team_events, which you might recall from previous editions of this series is the principal work-horse function for cleaning the data:

```r
get_team_events <- function(whichteam, tb){
  out <- NULL
  for(i in seq_along(whichteam$team)){
    thatteam <- whichteam$team[i]
    teamtbl <- tb %>%
      mutate(currentteam = thatteam) %>%
      filter(hometeam == currentteam | awayteam == currentteam) %>%
      fiveplayers() %>%
      mutate(fiveplayers = pmap(list(p1, p2, p3, p4, p5), make_player_list),
             p1 = map_chr(fiveplayers, get_player, 1),
             p2 = map_chr(fiveplayers, get_player, 2),
             p3 = map_chr(fiveplayers, get_player, 3),
             p4 = map_chr(fiveplayers, get_player, 4),
             p5 = map_chr(fiveplayers, get_player, 5)) %>%
      select(-fiveplayers) %>%
      arrange(game_id) %>%
      id_orebs(thatteam)

    thisteamsplayers <- get_this_teams_players(teamtbl, thatteam)

    out <- teamtbl %>%
      possession_change() %>%
      pos_count() %>%
      mutate(netpoints = pmap_dbl(list(points, currentteam, team),
                                  get_net_points)) %>%
      select(-team) %>%
      rename(team = currentteam) %>%
      bind_rows(out)
  }

  out %>%
    group_by(team) %>%
    nest(.key = "team_events")  # .key needs a single (quoted) name
}
```

## Plots, but using net ratings!

Now that we've got our net ratings, let's plot them! We can basically re-use our code from two weeks ago to make cool visuals using net rating instead of time-normalized plus/minus:

```r
bm <- tmp %>%
  unnest() %>%
  group_by(team, p1, p2, p3, p4, p5) %>%
  summarise(pts = sum(netpoints, na.rm = T),
            npos = length(unique(possessioncount)),
            netrtg = pts / npos * 100)

bm %>%
  filter(npos > 200) %>%
  ggplot(aes(x = netrtg, y = 1, size = npos, color = team)) +
  geom_jitter(width = 0) +
  scale_color_nba() +
  theme_bw() +
  ylab("") +
  theme(legend.position = "top") +
  xlab("Net Rating") +
  scale_y_continuous(labels = NULL) +
  guides(size = guide_legend(title = "Possessions Played")) +
  facet_wrap(~team)
```

Good stuff!

### How to ID ORebs

You've been warned…

```r
id_orebs <- function(teamtbl, thatteam){
  # this is currently written to interface with team-specific dataframes.
  # There's no good reason for this, and it would be more efficient to
  # re-write and create this column for the full data table.
  ftmiss <- teamtbl %>%
    filter(event_type == "free throw",
           type %in% c("Free Throw 1 of 1",
                       "Free Throw 2 of 2",
                       "Free Throw 3 of 3"))
```
oscillators \maths{\hat{a}_{\mathrm{in}}^{\vphantom{\dag}}(k)} and \maths{\hat{a}_{\mathrm{in}}^{\dag}(-k)} with Bogoliubov-type amplitudes \maths{u_{T}(k,\zeta)} and \maths{v_{T}(k,\zeta)} depending on the nature of the quench and on the time \maths{T} elapsed after its occurrence (see, e.g., Ref.~\cite{Carusotto2010}). Furthermore, it should be stressed that the phonon limit \maths{|k|\,\xi\ll1} is implicitly considered in the latter equations, precisely because we made use of Eq.~\eqref{Eq:PlaneWaveEvolutionDisorder} to derive them. Correspondingly, the in-fiber Bogoliubov amplitudes \maths{u(k,\zeta)} and \maths{v(k,\zeta)} defined through Eq.~\eqref{Eq:BogoliubovAmplitudesDisorderApprox} must be evaluated in this limit. As one anticipates from Eqs.~\eqref{Eq:BogoliubovAmplitudeQuenchU} and \eqref{Eq:BogoliubovAmplitudeQuenchV}, this is fully true as soon as \maths{T\to\infty}. Indeed, in this limit, the integral over the wavenumber \maths{k} in Eq.~\eqref{Eq:InputOutputRelation} is dominated by the Bogoliubov modes with \maths{\tilde{E}_{k}\to0}, and so with \maths{|k|\to0} since \maths{\tilde{E}_{k}\propto|k|} as \maths{|k|\to0}. In the proper units, Eqs.~\eqref{Eq:InputOutputRelation}--\eqref{Eq:BogoliubovAmplitudeQuenchV} are actually valid when \begin{equation} \label{Eq:LargeT-LowK} |k|\,\xi\ll\frac{\hbar}{\mu\,T}\ll1, \end{equation} to which we restrict ourselves from now on. Coming back to the original coordinates \maths{z} and \maths{t}, this amounts to considering a large optical-fiber length \maths{L=v_{1}\,T} as well as small angular-frequency detunings \maths{\Delta=-v_{1}\,k}, precisely such that \maths{(\xi/v_{1})\,|\Delta|\ll(\hbar\,v_{1}/\mu)\,L^{-1}\ll1}. \section{Postquench coherence} \label{Sec:PostquenchCoherence} Assuming that the electric field measured at \maths{z=0^{-}} (i.e., just before the quench) is a perfect monochromatic plane wave, one has \maths{\langle\hat{E}_{1}(x,y,0^{-},t)\rangle=\mathcal{E}_{1}\,e^{-i\omega_{1}t}}, where \maths{\langle{\cdots}\rangle=\langle\mathrm{vac}|{\cdots}|\mathrm{vac}\rangle} stands for the expectation value in the vacuum state \maths{|\mathrm{vac}\rangle} of \maths{\delta\hat{\mathcal{E}}_{1}(x,y,0^{-},t)}. According to Sec.~\ref{Sec:QuantumQuench}, this can be translated into \maths{\hat{a}_{\mathrm{in}}(k)\,|\mathrm{vac}\rangle=0,\,\forall k}. Therefore, one initially has \begin{equation} \label{Eq:InitialCondition1} \langle\hat{a}_{\mathrm{in}}^{\vphantom{\dag}}(k)\,\hat{a}_{\mathrm{in}}^{\vphantom{\dag}}(k')\rangle=\langle\hat{a}_{\mathrm{in}}^{\dag}(k)\,\hat{a}_{\mathrm{in}}^{\vphantom{\dag}}(k')\rangle=0 \end{equation} and, making use of the same-\maths{\tau} commutation relation in free space \eqref{Eq:CommutationRelationA}, \begin{equation} \label{Eq:InitialCondition2} \langle\hat{a}_{\mathrm{in}}^{\vphantom{\dag}}(k)\,\hat{a}_{\mathrm{in}}^{\dag}(k')\rangle=2\pi\,\delta(k-k'). \end{equation} In this section, we analyze the consequences of the quench at \maths{\tau=0} through the coherence function \begin{equation} \label{Eq:AutocorrelationFunction} g^{(1)}(\zeta-\zeta')=\overline{\langle\hat{\Psi}^{\dag}(\zeta,T^{+})\,\hat{\Psi}(\zeta',T^{+})\rangle} \end{equation} of the field \maths{\hat{\Psi}(\zeta,T^{+}=L^{+}/v_{1})} just exiting the fiber, where light is imaged (see Fig.~\ref{Fig:Setup}). In Eq.~\eqref{Eq:AutocorrelationFunction}, the overbar refers to disorder averaging.
Thus defined, \maths{g^{(1)}} only depends on \maths{|\zeta-\zeta'|} since \maths{\rho_{\mathrm{out}}=\mathrm{const}} and \maths{\bar{\rho}_{0}=\mathrm{const}}. Coming back to the original space and time variables \maths{z} and \maths{t}, this means that it only depends on \maths{|t-t'|}. Note that imaging the signal at \maths{z>L} amounts to calculating the \maths{g^{(1)}} function of the field \maths{\hat{\Psi}(\zeta,\tau>T)}. The latter is given in Eqs.~\eqref{Eq:CommutationRelationA}--\eqref{Eq:MatterFieldFreeSpaceTer-d} but can alternatively be obtained from Kirchhoff's diffraction formula for nonmonochromatic waves \cite{Goodman1968}, as sketched in Ref.~\cite{Larre2016}. \subsection{General formulas} \label{SubSec:GeneralFormulas} In the nonlinear optical fiber, the 1D quantum fluid of light is weakly interacting. In this case, the coherence function \eqref{Eq:AutocorrelationFunction} is expressed in terms of the density and the phase quantum fluctuations \maths{\hat{\rho}_{1}(\zeta,T^{+})} and \maths{\hat{\varphi}_{1}(\zeta,T^{+})} of the field \maths{\hat{\Psi}(\zeta,T^{+})} in the following form \cite{Mora2003, Larre2013}: \begin{align} \notag &\left.\ln\!\bigg[\frac{g^{(1)}(\zeta-\zeta')}{\rho_{\mathrm{out}}}\bigg]\right. \\ \notag &\left.{\quad}=-\frac{1}{8}\,\overline{\bigg\langle\mathopen{:\,}\bigg[\frac{\hat{\rho}_{1}(\zeta,T^{+})}{\rho_{\mathrm{out}}}-\frac{\hat{\rho}_{1}(\zeta',T^{+})}{\rho_{\mathrm{out}}}\bigg]^{2}\mathopen{\,:}\bigg\rangle}\right. \\ \label{Eq:AutocorrelationFunctionBis} &\left.\hphantom{{\quad}=}-\frac{1}{2}\,\overline{\langle\mathopen{:\,}[\hat{\varphi}_{1}(\zeta,T^{+})-\hat{\varphi}_{1}(\zeta',T^{+})]^{2}\mathopen{\,:}\rangle}.\right. \end{align} This formula involves the background density \maths{\rho_{0}(T^{+})=\rho_{\mathrm{out}}} of the outgoing optical beam \maths{\boldsymbol{1}} and the normal ordering \maths{\mathopen{:\,}{\cdots}\mathopen{\,:}} with respect to the Bogoliubov-type quantum field \maths{\hat{\gamma}(\zeta,T^{+})=\hat{\gamma}_{\mathrm{out}}(\zeta)} as a function of which \maths{\hat{\rho}_{1}(\zeta,T^{+})} and \maths{\hat{\varphi}_{1}(\zeta,T^{+})} are defined [see Eq.~\eqref{Eq:MatterFieldFreeSpaceTer-d}]. To obtain Eq.~\eqref{Eq:AutocorrelationFunctionBis}, we proceed in two steps. First, we evaluate the average \maths{\langle{\cdots}\rangle} over the quantum fluctuations of the incoming field. To do so, we use the fact that \maths{\hat{\rho}_{1}(\zeta,T^{+})} is small and that the latter and \maths{\hat{\varphi}_{1}(\zeta,T^{+})} are Gaussianly distributed at the here-considered Bogoliubov level \cite{Mora2003}. Second, we evaluate the average \maths{\overline{{\cdots}}} over the classical fluctuations of the disordered potential. To do so, we take advantage of the fact that the quantum average \maths{\langle\mathopen{:\,}[{\cdots}]^{2}\mathopen{\,:}\rangle} involving the phase fluctuations in Eq.~\eqref{Eq:AutocorrelationFunctionBis} is---due to the normal ordering---as small as the one involving the density fluctuations \cite{Larre2013} (indeed, this correlator closely resembles the two-point correlation function of the velocity field, which is weakly fluctuating). In this case, the approximation \maths{\ln\overline{\exp X}\simeq\overline{X}} holds, which eventually yields Eq.~\eqref{Eq:AutocorrelationFunctionBis}. Note that a Popov approach \cite{Popov1972, Popov1983} for calculating the quantum averages would have yielded the same result \cite{Larre2013}.
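Schematically, and leaving aside the normal-ordering subtleties discussed in Refs.~\cite{Mora2003, Larre2013}, these two steps rest on elementary cumulant identities. For a zero-mean fluctuation operator \maths{\hat{A}} with Gaussian statistics,
\begin{equation}
\langle e^{\hat{A}}\rangle=e^{\frac{1}{2}\langle\hat{A}^{2}\rangle},
\end{equation}
which reduces the quantum average of the exponentiated phase and (small) density fluctuations to the quadratic correlators of Eq.~\eqref{Eq:AutocorrelationFunctionBis}, while for a weakly fluctuating classical quantity \maths{X},
\begin{equation}
\ln\overline{e^{X}}=\overline{X}+\tfrac{1}{2}\,\big(\overline{X^{2}}-\overline{X}^{2}\big)+\cdots\simeq\overline{X},
\end{equation}
which justifies interchanging the logarithm and the disorder average.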
Inserting the input-output relations \eqref{Eq:ContinuityPoyntingVectorClassicalLevel} and \eqref{Eq:InputOutputRelation} into Eq.~\eqref{Eq:AutocorrelationFunctionBis} and making use of Eqs.~\eqref{Eq:InitialCondition1} and \eqref{Eq:InitialCondition2}, we obtain, after multiplying by \maths{(c_{0}/v_{1})\,\rho_{\mathrm{in}}\,\xi}, \begin{align} \notag &\left.\frac{c_{0}}{v_{1}}\,\rho_{\mathrm{in}}\,\xi\;\ln\!\bigg[\frac{g^{(1)}(\zeta-\zeta')}{\rho_{\mathrm{in}}}\bigg]\right. \\ \label{Eq:AutocorrelationFunctionTer} &\left.{\quad}=-\int\frac{dk\,\xi}{2\pi}\,\frac{\overline{|v_{T}(k,\zeta)\,e^{ik\zeta}-v_{T}(k,\zeta')\,e^{ik\zeta'}|^{2}}}{2}.\right. \end{align} Note that this equation involves the second quench-modified Bogoliubov amplitude, \maths{v_{T}(k,\zeta)}, but not the first one, \maths{u_{T}(k,\zeta)}. This is due to the fact that we assumed all the fluctuation modes of the incident quantum field to be in the vacuum state. If we were in a configuration where \maths{\langle\hat{a}_{\mathrm{in}}^{\dag}(k)\,\hat{a}_{\mathrm{in}}^{\vphantom{\dag}}(k')\rangle\neq0}, the \maths{g^{(1)}} function would present a \maths{u_{T}(k,\zeta)} dependence, as for a weakly interacting dilute atomic Bose gas at thermal equilibrium \cite{Dalfovo1999, Pitaevskii2016}. Plugging Eq.~\eqref{Eq:BogoliubovAmplitudeQuenchV} supplemented by Eq.~\eqref{Eq:BogoliubovAmplitudesDisorderApprox} into Eq.~\eqref{Eq:AutocorrelationFunctionTer} and making use of Eq.~\eqref{Eq:DensityCorrelations}, we then get \begin{align} \notag &\left.\frac{c_{0}}{v_{1}}\,\rho_{\mathrm{in}}\,\xi\;\ln\!\bigg[\frac{g^{(1)}(\zeta-\zeta')}{\rho_{\mathrm{in}}}\bigg]\right. \\ \label{Eq:AutocorrelationFunctionQuater} &\left.{\quad}=-I_{T}(X_{1};\zeta-\zeta')-\frac{3}{2}\,\frac{G(0)}{\bar{\rho}_{0}^{2}}\,I_{T}[X_{2}(\zeta-\zeta');\zeta-\zeta'],\right. \end{align} where we introduced the short-hand notations \begin{align} \label{Eq:X1} X_{1}&=1, \\ \label{Eq:X2} X_{2}(\zeta-\zeta')&=\frac{1}{3}\,\bigg[1+2\,\frac{G(\zeta-\zeta')}{G(0)}\bigg]. \end{align} Here, we made an expansion at the second order in \maths{|\delta\rho_{0}(\zeta)|/\bar{\rho}_{0}\sim\mathcal{V}/\mu\ll1}: The first term in the right-hand side of Eq.~\eqref{Eq:AutocorrelationFunctionQuater} is of the order of \maths{(\mathcal{V}/\mu)^{0}=1} while the second one is of the order of \maths{(\mathcal{V}/\mu)^{2}}. These terms involve the position- and time-dependent integral \begin{align} \notag I_{T}(X;\zeta-\zeta')&\left.=\int\frac{dk\,\xi}{2\pi}\,\frac{\sin^{2}(k\,\zeta_{T}/2)}{k^{2}\,\xi^{2}}\right. \\ \label{Eq:TrapezeFunction} &\left.\hphantom{=}\times[1-X\cos(k\,|\zeta-\zeta'|)],\right. \end{align} In Eq.~\eqref{Eq:TrapezeFunction}, the integrand was evaluated in the large-\maths{T}, small-\maths{k} limit \eqref{Eq:LargeT-LowK}. In this form, the integral simply reduces to a two-step trapezoidal function of \maths{|\zeta-\zeta'|} that is linear up to \begin{equation} \label{Eq:LightConeBoundary} |\zeta-\zeta'|=\zeta_{T}=2\,\tilde{s}\,T=2\,s\,T\,\bigg(1+\frac{\Delta\tilde{s}}{s}\bigg) \end{equation} and that stays constant above: \begin{align} \notag &\left.I_{T}(X;\zeta-\zeta')\right. \\ \label{Eq:TrapezeFunctionBis} &\left.{\quad}= \begin{dcases} \frac{X}{4}\,\frac{|\zeta-\zeta'|}{\xi}+\frac{1-X}{4}\,\frac{\zeta_{T}}{\xi}, & |\zeta-\zeta'|<\zeta_{T}, \\ \frac{1}{4}\,\frac{\zeta_{T}}{\xi}, & |\zeta-\zeta'|>\zeta_{T}. \end{dcases} \right. 
\end{align} Inserting the explicit expression \eqref{Eq:TrapezeFunctionBis} for \maths{X=X_{1}} [Eq.~\eqref{Eq:X1}] and \maths{X=X_{2}(\zeta-\zeta')} [Eq.~\eqref{Eq:X2}] into Eq.~\eqref{Eq:AutocorrelationFunctionQuater}, we obtain a closed analytical expression for the coherence function \eqref{Eq:AutocorrelationFunction}, as detailed below. \subsubsection{Case where \texorpdfstring{\maths{|\zeta-\zeta'|<\zeta_{T}}}{Lg}} \label{SubSubSec:CaseWhere<} The \maths{g^{(1)}} function depends on \maths{|\zeta-\zeta'|} and is given by \begin{align} \notag &\left.\frac{c_{0}}{v_{1}}\,\rho_{\mathrm{in}}\,\xi\;\ln\!\bigg[\frac{g^{(1)}(\zeta-\zeta')}{\rho_{\mathrm{in}}}\bigg]\right. \\ \notag &\left.{\quad}=-\frac{1}{2}\,\frac{\tilde{\theta}_{\mathrm{eff}}}{\mu}\,\frac{|\zeta-\zeta'|}{\xi}-\frac{1}{4}\,\frac{G(0)}{\bar{\rho}_{0}^{2}}\,\frac{\zeta_{T}}{\xi}\right. \\ \label{Eq:AutocorrelationFunctionQuinquies} &\left.\hphantom{{\quad}=}+\frac{1}{4}\,\bigg(1-\frac{|\zeta-\zeta'|}{\zeta_{T}}\bigg)\,\frac{G(\zeta-\zeta')}{\bar{\rho}_{0}^{2}}\,\frac{\zeta_{T}}{\xi}.\right. \end{align} Since \maths{\mu} is an energy, the quantity \maths{\tilde{\theta}_{\mathrm{eff}}} may be referred to as a temperature in units of the Boltzmann constant. It weakly deviates from its disorder-free counterpart \maths{\theta_{\mathrm{eff}}=\mu/2} as \begin{subequations} \label{Eq:EffectiveTemperature} \begin{align} \label{Eq:EffectiveTemperature-a} \tilde{\theta}_{\mathrm{eff}}&\left.=\theta_{\mathrm{eff}}+\Delta\tilde{\theta}_{\mathrm{eff}},\right. \\ \notag \frac{\Delta\tilde{\theta}_{\mathrm{eff}}}{\theta_{\mathrm{eff}}}&\left.=\frac{1}{2}\,\frac{G(0)}{\bar{\rho}_{0}^{2}}=\frac{\sqrt{\pi}}{4}\,\bigg(\frac{\mathcal{V}}{\mu}\bigg)^{2}\,\frac{\sigma}{\xi}\,\bigg[e^{\sigma^{2}/\xi^{2}}\,\bigg(1-2\,\frac{\sigma^{2}}{\xi^{2}}\bigg)\right. \\ \label{Eq:EffectiveTemperature-b} &\left.\hphantom{=\frac{1}{2}\,\frac{G(0)}{\bar{\rho}_{0}^{2}}=}\times\mathrm{erfc}\bigg(\frac{\sigma}{\xi}\bigg)+\frac{2}{\sqrt{\pi}}\,\frac{\sigma}{\xi}\bigg].\right. \end{align} \end{subequations} The last equality in Eqs.~\eqref{Eq:EffectiveTemperature-b} follows from Eqs.~\eqref{Eq:DensityCorrelationsResult}. When \maths{\sigma/\xi\ll1} or \maths{\sigma/\xi\gg1}, the disorder-induced relative correction to \maths{\theta_{\mathrm{eff}}} reduces to \begin{equation} \label{Eq:EffectiveTemperatureAsymptoticResult} \frac{\Delta\tilde{\theta}_{\mathrm{eff}}}{\theta_{\mathrm{eff}}}\simeq \begin{dcases} \displaystyle{\frac{\sqrt{\pi}}{4}\,\bigg(\frac{\mathcal{V}}{\mu}\bigg)^{2}\,\frac{\sigma}{\xi}}, & \displaystyle{\frac{\sigma}{\xi}\ll1}, \\ \displaystyle{\frac{1}{2}\,\bigg(\frac{\mathcal{V}}{\mu}\bigg)^{2}\,\bigg(1-\frac{1}{\sigma^{2}/\xi^{2}}\bigg)}, & \displaystyle{\frac{\sigma}{\xi}\gg1}. \end{dcases} \end{equation} From Eq.~\eqref{Eq:AutocorrelationFunctionQuinquies}, one may extract the behavior of the \maths{g^{(1)}} function at very short \maths{|\zeta-\zeta'|}: \begin{align} \notag &\left.\frac{c_{0}}{v_{1}}\,\rho_{\mathrm{in}}\,\xi\;\ln\!\bigg[\frac{g^{(1)}(\zeta-\zeta')}{\rho_{\mathrm{in}}}\bigg]\right. \\ \notag &\left.{\quad}\simeq-\frac{1}{4}\,\bigg[1+\frac{3}{2}\,\frac{G(0)}{\bar{\rho}_{0}^{2}}\bigg]\,\frac{|\zeta-\zeta'|}{\xi}\right. \\ \label{Eq:AutocorrelationFunctionVeryShortRanges} &\left.\hphantom{{\quad}\simeq}+\frac{1}{8}\,\frac{\zeta_{T}}{\xi}\,\frac{\partial^{2}(G/\bar{\rho}_{0}^{2})}{\partial|\zeta-\zeta'|^{2}}(0)\,(\zeta-\zeta')^{2}.\right. 
\end{align} \subsubsection{Case where \texorpdfstring{\maths{|\zeta-\zeta'|>\zeta_{T}}}{Lg}} \label{SubSubSec:CaseWhere>} The \maths{g^{(1)}} function stays locked to the value it takes at \maths{|\zeta-\zeta'|=\zeta_{T}} [Eq.~\eqref{Eq:AutocorrelationFunctionQuinquies} for \maths{|\zeta-\zeta'|=\zeta_{T}}] and then no longer depends on \maths{|\zeta-\zeta'|}: \begin{equation} \label{Eq:LongRangePlateau} {\frac{c_{0}}{v_{1}}\,\rho_{\mathrm{in}}\,\xi\;\ln\!\bigg[\frac{g^{(1)}(\zeta-\zeta')}{\rho_{\mathrm{in}}}\bigg]=-\frac{1}{4}\,\bigg[1+\frac{3}{2}\,\frac{G(0)}{\bar{\rho}_{0}^{2}}\bigg]\,\frac{\zeta_{T}}{\xi}.} \end{equation} According to Eqs.~\eqref{Eq:AutocorrelationFunctionVeryShortRanges} and \eqref{Eq:LongRangePlateau}, the curve's points of abscissas \maths{|\zeta-\zeta'|=0} and \maths{|\zeta-\zeta'|=\zeta_{T}} belong to the straight line of slope \maths{-\frac{1}{4}\,[1+\frac{3}{2}\,G(0)/\bar{\rho}_{0}^{2}]}. \subsection{Prethermalization in disorder} \label{SubSec:PrethermalizationInDisorder} In Fig.~\ref{Fig:AutocorrelationFunction}, we plot \maths{(c_{0}/v_{1})\,\rho_{\mathrm{in}}\,\xi\;\ln[g^{(1)}(\zeta-\zeta')/\rho_{\mathrm{in}}]} as a function of \maths{|\zeta-\zeta'|/\xi} for different values of (i) \maths{\mu\,T/\hbar\gg1} but fixed values of (ii) \maths{\mathcal{V}/\mu\ll1} and (iii) \maths{\sigma/\xi\ll\mu\,T/\hbar}. The condition (i) is the limit of long postquench duration discussed in the last paragraph of Sec.~\ref{SubSec:InputOutputRelations}. The condition (ii) is the limit of weak disorder assumed from the third paragraph of Sec.~\ref{SubSec:GrossPitaevskiiClassicalField}. In the limit (iii) finally, the system feels the presence of a sufficient number of random scatterers after the occurrence of the quench so to consider the effect of the disordered potential relevant. \begin{figure}[t!] \includegraphics[width=\linewidth]{Coherence.pdf} \caption{(Color online) Solid curves: Normalized coherence function of the disordered 1D quantum fluid of light as a function of \maths{|\zeta-\zeta'|/\xi} for different values of the dimensionless time \maths{\mu\,T/\hbar} elapsed after the occurrence of the quench, as given in Eqs.~\eqref{Eq:AutocorrelationFunctionQuinquies} and \eqref{Eq:LongRangePlateau} supplemented by Eqs.~\eqref{Eq:DensityCorrelationsResult}, \eqref{Eq:SoundSpeedRelativeShiftResult}, \eqref{Eq:LightConeBoundary}, and \eqref{Eq:EffectiveTemperature} for \maths{\mathcal{V}/\mu=0.5} and \maths{\sigma/\xi=1}; the abscissas of the curves' siding edges equal \maths{\zeta_{T}/\xi=(2\,\mu\,T/\hbar)\,(1+\Delta\tilde{s}/s)} [cf.~Eqs.~\eqref{Eq:LightConeBoundary} and use \maths{s/\xi=\mu/\hbar}]. Dashed curves: Corresponding behaviors in the strict absence of disorder, that is, when \maths{\mathcal{V}=0}; in this case, \maths{\Delta\tilde{s}=0} and the abscissas of the curves' siding edges equal \maths{2\,\mu\,T/\hbar}.} \label{Fig:AutocorrelationFunction} \end{figure} At \maths{\tau=0^{+}}, right after the quench, one may show that the coherence function of \maths{\hat{\Psi}(\zeta,\tau)} equals \maths{(\rho_{\mathrm{in}})^{-}} for all \maths{|\zeta-\zeta'|}. This means that the beam of light \maths{\boldsymbol{1}} remains as fully coherent as before entering the fiber. The \maths{g^{(1)}} function starts being affected by the quench a significant duration \maths{T} after its occurrence. Focusing on one of the solid curves of Fig.~\ref{Fig:AutocorrelationFunction}, three regimes depending on \maths{|\zeta-\zeta'|} may be identified. 
At very short ranges, \maths{g^{(1)}} displays a nontrivial \maths{|\zeta-\zeta'|} dependence given in Eq.~\eqref{Eq:AutocorrelationFunctionVeryShortRanges}. Afterwards and up to \maths{|\zeta-\zeta'|=\zeta_{T}}, its natural logarithm linearly decays, which corresponds to an exponential decay for \maths{g^{(1)}}. This interesting regime is entirely described by the first row in the right-hand side of Eq.~\eqref{Eq:AutocorrelationFunctionQuinquies} and is discussed in detail below. For \maths{|\zeta-\zeta'|>\zeta_{T}} finally, the \maths{g^{(1)}} function no longer depends on \maths{|\zeta-\zeta'|}. Its constant value is given in Eq.~\eqref{Eq:LongRangePlateau} and is also subjected to a discussion in the next paragraphs. As \maths{T} increases, \maths{\zeta_{T}} is pushed to larger values of \maths{|\zeta-\zeta'|} and the long-range, \maths{|\zeta-\zeta'|>\zeta_{T}}, plateau of the \maths{g^{(1)}} function decreases, which we will discuss later. This evolution continues until the system reaches, in the limit \maths{\mu\,T/\hbar=\infty}, a state where
of regions with vanishing cosmological term String effective actions in ten dimensions Expression of massless vector bosons in ten dimensions in terms of spinor bilinears and the Fierz rearrangements of four-fermion terms Classical solutions of a quadratic gravity theory Uncertainty relations in string theory 1997 Scalar field theory and the generalization of momentum-space Feynman rules to curved space Product rules for horospherical functions describing generalized plane waves on the four-dimensional hyperboloid Equivalence of different vacua in anti-de Sitter space scalar field theory Momentum shifts in curved space and the bosonic string Hamiltonian Interactions between phases of different $\Lambda$ and the decay of the of the cosmological term Upper bound for the superstring amplitude given a genus-independent limits upon different degenerations of the Riemann surface Counting of the number of ends of infinite-genus surfaces and the contribution to the scattering amplitude Application of the Ostrogradski formalism to the canonical quantization of a quadratic gravity theory and the Wheeler-DeWitt equation 1998 Quantum cosmology of higher-derivative gravity theories Exponential bound for superstring amplitudes and summability of superstring theory Mathematical applications of string theory Spin structures on Riemann surfaces and the even perfect numbers Rationality condition for the existence of odd perfect numbers 1999 The applications of number-theoretical theorems on primes to the problem of the extent of the sequence of Mersenne primes Boundary conditions for the sixth-order differential equation for the quantum cosmological wavefunction of the quadratic gravity theory Congruence conditions for the compositeness of Mersenne numbers Point symmetries of higher-order partial differential equations One-parameter family of vacua in maximally symmetric space-times and the one-parameter family of string vacua 2000 Effect of higher-order curvature terms on string quantum cosmology Bounce solutions to the string effective equations of motion in the minisuperspace of Friedmann-Robertson-Walker metrics Catalan's conjecture Proof of non-existence of odd perfect numbers for a large class of odd integers Gauge symmetries defined by submanifolds of fibres admitting parallelisms 2001 Integral transforms of higher-order partial differential equations Difference-differential equations of mixed type Calculation of higher-order corrections to the quantum cosmological wavefunction resulting from the additional curvature terms Reduction of odd-perfect number conjecture to three conditions on the primes and exponents of the odd integer Analysis and proof of the exponential bounds for the primitive-element products in the superstring measure Bound on the number of integer solutions to the three auxiliary equations in in the theorem on odd perfect numbers 2002 Boundary conditions and the physical interpretation of the quantum cosmological wavefunction Third-order contributions to the deviation in the expectation value of the scale factor and dilaton from the expressions for their classical time-dependence Proof of the existence of a lower bound for the number of Goldbach partitions of an even integer Projections of fibre parallelism onto group submanifolds Consistency of theories with the number of vector bosons different from the dimension of the fibre of a principal bundle with the energy-dependence of the strong-interaction coupling Effect of spatial conformal isometries on the field equations for the 
hypersurface three-metric Products of Mersenne numbers Sequence where $p$ is a Mersenne prime index Expansion of the two-loop string-integrand in powers of the coordinate in the neighbourhood of the compactification divisor and the exponential bound for the superstring amplitudes at arbitrary genus Numerical solution of the third-order differential equation for the wavefunction when the derivatives with respect to the dilaton are set to zero 2003 Demonstration of singular solution to the third-order differential equation in the scale factor in the $a\to 0$ limit Application of integral transform techniques to the sixth-order partial differential equation for the wavefunction and the necessity of a scalar for the consistency of the quantum cosmology of the quadratic gravity theory Identification of the domain of superstring perturbation theory with the space of effectively closed surfaces having boundary with zero capacity Unitarity and the contribution of infinite-genus surfaces to the vacuum amplitude Solution to the condition on the partitioning of the interval yielding the generalized Cantor set of ends of an $O_G$ surface Coefficients of the normalized superstring amplitudes and the equality of the base of the exponential with the unified gauge coupling Estimate of the path integral with the Neveu-Schwarz-Ramond superstring action by summation over dominant contributions and the string coupling 2004 Odd perfect number conjecture Generation of Mersenne primes and the sequence of even perfect numbers Variants of Yang-Mills theory based on exotic structures on seven spheres Generalized gauge theory defined by the Chern class Derivation of the dependence of the string tension on the compactification Slope of the pomeron trajectory Decrease of the cosmological term based on the interaction of domains of different $\Lambda$ Recursion relation for the number of Goldbach partitions of an even integer Goldbach partitions of twice an even integer 2005 Functional relations for roots of polynomials Real formulation of the scattering matrix in quantum field theory Complexity of curves and the geometrical characterization of dynamical processes Supersymmetric extension of the quadratic gravity theory derived from the heterotic string effective action Physical consistency of supersymmetric theories in a cosmological background Interacting supersymmetric field theories in curved space with $N=2$ supersymmetry The precise value for the string coupling including non-perturbative effects 2006 Truncation and renormalizability of superstring and heterotic string effective actions Classification of Riemann surfaces and string theory Resolution of the integration region problem for supermoduli space integral Genus-dependence of superstring amplitudes Geometrical complexity of curves Energy in the very early universe Coset space of the unified field theory A prime generating algorithm with improved efficiency A numerical computation of the decrease of the cosmological term from the Planck era to present times 2007 Conditions for supersymmetry of space-times with an eight-dimensional coset space in an ansatz for the metric of the Freund-Rubin kind Valence quark momentum distribution in nucleons String theoretical description of hadron jets Effect of vacuum polarization on the propagation of charged particles and light A condition on the masses of composite particles Mass mixing between generations of quarks and neutrinos Finite algorithm for the solution to an algebraic equation Coulomb energy term in the atomic mass formula and the charge distribution based on the quark distribution of the nucleons Physical interpretation of the Breit-Wigner formula Computation of the partial width of $Z^0\to \ell^+ \ell^{-}$ Meson multiplets and the mass of the $\eta^\prime$ meson 2008 A computation of the percentage of $s{\bar s}$ quarks in the sea of quark-anti-quark pairs in a nucleus with equal numbers of protons and neutrons Connection between algebraic roots and existence of a proper normal subgroup of the Galois group of roots Automorphism group of the spinor space of the standard model and the wreath product of the isometry group of the coset space of the unified field theory and the symmetric permutation group $S_3$ Localization of the proof of the equivalent of Fermat's theorem in a function field Derivation of the Cabibbo angle through coupling of quarks to gluons Theoretical basis for the fractional charges of quarks Equivalence between isometry group of $(D-1)$-dimensional de Sitter group and $D$-dimensional Lorentz group and sequence of reductions of dimension Superfield representations of N=2 anti-de Sitter supersymmetry Standard topology of the coset space ${{G_2\times SU(2)\times U(1)}\over {SU(3)\times U(1)^\prime \times U(1)^{\prime\prime}}}$ and presence of $N=1$ supersymmetry Relation between gauge coupling and string tension and scale of supersymmetry breaking Separation of transcendental factors from the diagonalization of the coefficient matrix of the linear system equivalent to an algebraic equation and independence of transcendental numbers of ${\Bbb Q}$ CPT theorem in curved space, non-conservation of PT and the weak interactions during the creation of the imbalance between matter and anti-matter Reduced homology group of the empty set and the quantum creation of matter Serre spectral sequences and the Hopf fibrations Solution to the Tijdeman-Zagier-Beal conjecture Even-power polynomials and the product of composition-irreducible polynomials Preparation of composite states of ultrarelativistic particles Lagrangian derivation of the geodesic equation and the existence of complete trajectories within the distribution defined by quantum variations surrounding a geodesic congruence 2009 Residual symmetry groups in the infrared limit of the strong interactions Extrinsic complexity of various curves Low-curvature limit of field equations in quadratic gravity theories Derivation of the momentum-space propagator for particles with finite lifetimes Superficial degree of divergence of diagrams in S-matrix expansion of the quartic sector of the heterotic string effective action The role of instantons in field theory and superstring theory Virtual string states in diagrams with arbitrarily large genus and the uncertainty principle Scale invariance of the differential equation in the scale factor for the quantum cosmological wavefunction and the Weyl hypothesis $E_6$ invariance of the string vertex algebra and interactions Supersymmetric generalization of the Bieberbach conjecture New tensor term with vanishing
import os import sys __all__ = [ 'lexsort','sort', 'argsort','argmin', 'argmax', 'searchsorted'] from pnumpy._pnumpy import getitem, lexsort32, lexsort64 import numpy as np from numpy import asarray, array, asanyarray from numpy import concatenate #array_function_dispatch = functools.partial( # overrides.array_function_dispatch, module='numpy') # functions that are now methods def _wrapit(obj, method, *args, **kwds): try: wrap = obj.__array_wrap__ except AttributeError: wrap = None result = getattr(asarray(obj), method)(*args, **kwds) if wrap: if not isinstance(result, mu.ndarray): result = asarray(result) result = wrap(result) return result def _wrapfunc(obj, method, *args, **kwds): bound = getattr(obj, method, None) if bound is None: return _wrapit(obj, method, *args, **kwds) try: return bound(*args, **kwds) except TypeError: # A TypeError occurs if the object does have such a method in its # class, but its signature is not identical to that of NumPy's. This # situation has occurred in the case of a downstream library like # 'pandas'. # # Call _wrapit from within the except clause to ensure a potential # exception has a traceback chain. return _wrapit(obj, method, *args, **kwds) def sort(a, axis=-1, kind=None, order=None): """ Return a sorted copy of an array. Parameters ---------- a : array_like Array to be sorted. axis : int or None, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis. kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional Sorting algorithm. The default is 'quicksort'. Note that both 'stable' and 'mergesort' use timsort or radix sort under the covers and, in general, the actual implementation will vary with data type. The 'mergesort' option is retained for backwards compatibility. .. versionchanged:: 1.15.0. The 'stable' option was added. order : str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. Returns ------- sorted_array : ndarray Array of the same type and shape as `a`. Threading --------- Up to 8 threads See Also -------- ndarray.sort : Method to sort an array in-place. argsort : Indirect sort. lexsort : Indirect stable sort on multiple keys. searchsorted : Find elements in a sorted array. partition : Partial sort. Notes ----- The various sorting algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The four algorithms implemented in NumPy have the following properties: =========== ======= ============= ============ ======== kind speed worst case work space stable =========== ======= ============= ============ ======== 'quicksort' 1 O(n^2) 0 no 'heapsort' 3 O(n*log(n)) 0 no 'mergesort' 2 O(n*log(n)) ~n/2 yes 'timsort' 2 O(n*log(n)) ~n/2 yes =========== ======= ============= ============ ======== .. note:: The datatype determines which of 'mergesort' or 'timsort' is actually used, even if 'mergesort' is specified. User selection at a finer scale is not currently available. All the sort algorithms make temporary copies of the data when sorting along any but the last axis. 
Consequently, sorting along the last axis is faster and uses less space than sorting along any other axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. Previous to numpy 1.4.0 sorting real and complex arrays containing nan values led to undefined behaviour. In numpy versions >= 1.4.0 nan values are sorted to the end. The extended sort order is: * Real: [R, nan] * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] where R is a non-nan real value. Complex values with the same nan placements are sorted according to the non-nan part if it exists. Non-nan values are sorted as before. .. versionadded:: 1.12.0 quicksort has been changed to `introsort <https://en.wikipedia.org/wiki/Introsort>`_. When sorting does not make enough progress it switches to `heapsort <https://en.wikipedia.org/wiki/Heapsort>`_. This implementation makes quicksort O(n*log(n)) in the worst case. 'stable' automatically chooses the best stable sorting algorithm for the data type being sorted. It, along with 'mergesort' is currently mapped to `timsort <https://en.wikipedia.org/wiki/Timsort>`_ or `radix sort <https://en.wikipedia.org/wiki/Radix_sort>`_ depending on the data type. API forward compatibility currently limits the ability to select the implementation and it is hardwired for the different data types. .. versionadded:: 1.17.0 Timsort is added for better performance on already or nearly sorted data. On random data timsort is almost identical to mergesort. It is now used for stable sort while quicksort is still the default sort if none is chosen. For timsort details, refer to `CPython listsort.txt <https://github.com/python/cpython/blob/3.7/Objects/listsort.txt>`_. 'mergesort' and 'stable' are mapped to radix sort for integer data types. Radix sort is an O(n) sort instead of O(n log n). .. versionchanged:: 1.18.0 NaT now sorts to the end of arrays for consistency with NaN. Examples -------- >>> a = np.array([[1,4],[3,1]]) >>> np.sort(a) # sort along the last axis array([[1, 4], [1, 3]]) >>> np.sort(a, axis=None) # sort the flattened array array([1, 1, 3, 4]) >>> np.sort(a, axis=0) # sort along the first axis array([[1, 1], [3, 4]]) Use the `order` keyword to specify a field to use when sorting a structured array: >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), ... ('Galahad', 1.7, 38)] >>> a = np.array(values, dtype=dtype) # create a structured array >>> np.sort(a, order='height') # doctest: +SKIP array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), ('Lancelot', 1.8999999999999999, 38)], dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) Sort by age, then height if ages are equal: >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), ('Arthur', 1.8, 41)], dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) """ if axis is None: # flatten returns (1, N) for np.matrix, so always use the last axis a = asanyarray(a).flatten() axis = -1 try: # attempt a parallel sort sort(a, kind=kind) return a except Exception: pass else: a = asanyarray(a).copy(order="K") # normal numpy code a.sort(axis=axis, kind=kind, order=order) return a def lexsort(*args, **kwargs): """ Perform an indirect stable sort using a sequence of keys. 
Given multiple sorting keys, which can be interpreted as columns in a spreadsheet, lexsort returns an array of integer indices that describes the sort order by multiple columns. The last key in the sequence is used for the primary sort order, the second-to-last key for the secondary sort order, and so on. The keys argument must be a sequence of objects that can be converted to arrays of the same shape. If a 2D array is provided for the keys argument, it's rows are interpreted as the sorting keys and sorting is according to the last row, second last row etc. Parameters ---------- keys : (k, N) array or tuple containing k (N,)-shaped sequences The `k` different "columns" to be sorted. The last column (or row if `keys` is a 2D array) is the primary sort key. axis : int, optional Axis to be indirectly sorted. By default, sort over the last axis. Returns ------- indices : (N,) ndarray of ints Array of indices that sort the keys along the specified axis. Threading --------- Up to 8 threads See Also -------- argsort : Indirect sort. ndarray.sort : In-place sort. sort : Return a sorted copy of an array. Examples -------- Sort names: first by surname, then by name. >>> surnames = ('Hertz', 'Galilei', 'Hertz') >>> first_names = ('Heinrich', 'Galileo', 'Gustav') >>> ind = np.lexsort((first_names, surnames)) >>> ind array([1, 2, 0]) >>> [surnames[i] + ", " + first_names[i] for i in ind] ['Galilei, Galileo', 'Hertz, Gustav', 'Hertz, Heinrich'] Sort two columns
The Applied Algebra Seminar A Monday afternoon research seminar The seminar is currently organized by Laura Colmenarejo and Nantel Bergeron. During 2017-18, the seminar takes place from 15:00-16:00 in Ross Building room N638. If you come by bus, the route 196A, 196B takes you to campus from Downsview subway station. If you come by car, you can find the available parking lots here. The seminar has been running since 1997. The topics of talks have typically been any mixture of algebra with any other field: combinatorics, geometry, topology, physics, etc. Further down this page you will find links to the seminar webpages for previous years. The audience usually consists of 6–12 people, including several graduate students and post-docs. For this reason, speakers are encouraged to devote a portion of their talk to the suggestion of open problems and the directions for research in their area. If you are interested in speaking at the seminar, contact Laura Colmenarejo or Nantel Bergeron. You may also be interested in the Algebraic Combinatorics Seminar at the Fields Institute. ### Schedule Dates are listed in reverse-chronological order. Unless otherwise indicated, all talks will take place on Monday from 15:00-16:00 in N638 Ross Building (York University). Date Speaker Title (click titles for abstract) 26 Mar. 2018 Carolina Benedetti (Universidad de Los Andes) Volumes of flow polytopes In this talk we give a happy ending to a conjecture posed last year in this very seminar, related to volumes of flow polytopes. In order to do this, we introduce new families of combinatorial objects whose enumeration computes volumes of flow polytopes. These objects provide an interpretation of Baldoni and Vergne's volume formula. A highlight of our model is an elegant formula for the flow polytope of a graph we call the caracol graph, hence, solving the aforementioned conjecture. As by-products of our work, we uncover a new triangle of numbers that interpolates between Catalan numbers and the number of parking functions. This is joint work with R. Gonzalez, C. Hanusa, P. Harris, A. Khare, A. Morales, M. Yip. 23 Mar. 2018 at YorkU, 2:30 pm! François Bergeron (UQAM) Why should we care about (diagonal) coinvariant spaces? One of the most striking results of finite group representation theory (in characteristic zero) is the characterization of groups generated by reflexions in terms of the simplicity of the structure of their space of invariant polynomials. Indeed, the Chevalley-Shephard-Todd theorem (1954-55) states that this ring is a polynomial ring if and only if the group is generated by reflexions. The group coinvariant space is obtained by considering polynomials modulo the ideal generated by constant term free symmetric polynomials. Chevalley-Shephard-Todd's theorem is equivalent to saying that this coinvariant (vector) space is of dimension equal to the order of the group, exactly in the case of reflexion groups. We will explain in a broadly accessible manner (including for graduate students) how this striking result has sparked many lines of inquiry, including extensions to groups that are not generated by reflexions such as in the seminal work of Garsia and Haiman on diagonal coinvariant spaces (started in the 1990's). More recent extensions along these lines make apparent deep ties with many areas such as: Rectangular Catalan Combinatorics, Homology of $(m,n)$-Torus Knots, Algebraic Geometry (Hilbert Scheme of points in the plane), and Theoretical Physics. 12 Mar. 2018 Angele Hamel (Wilfrid Laurier U.) 
Chromatic Symmetric Functions and H-Free Graphs Chromatic symmetric functions are defined in terms of colourings of particular graphs. Some key conjectures in this area concern whether chromatic symmetric functions of claw-free graphs can be written in terms of other symmetric functions with positive coefficients. Here we extend the claw-free idea to consider the e-positivity question for chromatic symmetric functions of H-free graphs with H ={claw, F}, where F is a four-vertex graph. We settle the question for all cases except H={claw, co-diamond}, and we provide some partial results in that case. This is joint work with Chinh Hoang and Jake Tuero.​ 5 Mar. 2018 Aaron Lauve (Loyola U. Chicago) Transfer of Structure for Hopf Algebras Having arrived on the combinatorial scene in the late '70s, there is by now an overabundance of Hopf algebras built on combinatorial gadgets: partitions, compositions, permutations, planar binary trees, Dyck paths, Feynman graphs, rooted forests, and posets only begins to scratch the surface. And many boast both (co)commutative and non(co)commutative versions. The reason is plain enough to state: if your favorite gadgets carry with them natural ways to break and combine, then you should treat this as a notion of coproduct and product and build yourself a Hopf algebra. One reward for your effort will be Formulas! I'll give a few examples. In this talk, reporting on joint work with Mitja Mastnak, we take a step back and ask for some way of organizing the study of this menagerie. We introduce the notion of Hopf algebra coverings and highlight the transfer of structure they provide. In particular, we advertise a way to find primitives and antipode in any graded connected cocommutative Hopf algebra. 26 Feb. 2018 Cameron Marcott (U. of Waterloo) Adding gauge data to the positive grassmannian The Grassmannian Gr(n,k) is the set of k dimensional subspaces of an n dimensional space. We often represent such a subspace as the row span of a n x k matrix. Introducing an extra column vector to such a matrix, we obtain a point in Gr(n+1,k). This talk will explore some interesting features of this construction. 19 Feb. 2018 No seminar! 12 Feb. 2018 Nat Thiem A q-partition algebra and splatter combinatorics The partition algebra is a classical diagram algebra arising out of Schur—Weyl duality in representation theory. While there are numerous approaches to constructing a q-deformation of the partition algebra, they each seem to have had different obstructions. This talk will review the partition algebra, its possible generalizations, and then focus on one particular version that has seen some recent progress. We recover some possibly new combinatorial objects called splatters, which are a generalization of set partitions. This work is joint work with Tom Halverson. 05 Feb. 2018 Nantel Bergeron (York U.) Hypergraphical polytope and antipode On The Geometry of Springer Varieties There are quite a number of subvarieties that can be defined to sit inside flag varieties. One of such is the family of Hessenberg varieties. In this talk, I will give combinatorial and geometric descriptions of the computation of the Betti numbers of a member of this family known as Springer varieties and discuss the current problem in this direction. 22 Jan. 
2018 Mike Zabrocki (York University) Products of characters of the symmetric groups In joint work with Rosa Orellana, we defined a basis of the symmetric functions (arXiv:1605.06672) indexed by a partition $\lambda$ which are the irreducible characters indexed by the partition $(n -|\lambda|, \lambda)$ of the symmetric group $S_n$ (as permutation matrices). In a recent paper (arXiv:1709.08098) we were able to find combinatorial interpretations for certain products of symmetric functions expanded in the character basis in terms of multiset valued tableaux. I'll explain the relationship to representation theory and how this is related to finding a combinatorial interpretation for the reduced Kronecker product and the $Gl_n$ to $S_n$ restriction problem. 15 Jan. 2018 Alexander Nenashev (York U.) Homological algebra for pointed sets Although the category of pointed sets is not additive, it turns out to be possible to develop quite a good homological algebra for it. We introduce such notions as complexes and their homology objects and prove analogues of some classical statements of homological algebra in the context of pointed sets. This includes the long exact sequence in homology, the 5-Lemma, and the Mayer-Vietoris sequence. 11 Dec. 2017 at Fields, 2pm! Alejandro Morales (UMass Amherst) Volume and lattice point formulas for flow polytopes The Lidskii formulas of Baldoni and Vergne show that the Kostant partition function of type A flow polytopes can be deduced from a formula for their volume. The formulas that Baldoni and Vergne give for the number of lattice
all). Pursuing an MBA entered my head as a result of years of introspection, some real-world IT/management consulting experience, and the advice of my best friend, who attends an ultra-elite law school and opened my eyes to the realities of the hiring world. I realized that, if I wanted to be a consultant at a firm worth a damn, I needed a name-brand MBA. Nonetheless, the path that so many MBA-hopefuls seem to have taken is unconscionable to me, both during and immediately following their undergraduate degrees. It seems mind-numbingly repressive and creatively stifling. Yet my mind tells me that this is just part of playing the game in the real world. Law school might not really train people to be lawyers, but it is still a hoop to jump through. The GMAT doesn't test your business acumen, but you still have to jump through that hoop. If I want to be a consultant for 3-5 years or so while moonlighting as a performer, then somebody with my background seems to need an MBA to unlock the gate. See the dilemma? Perhaps my aversion to the real world is a disease that needs a good dose of face-the-facts. Thanks again so much for the reply! _________________ Each moment of time ought to be put to proper use, either in business, in improving the mind, in the innocent and necessary relaxations and entertainments of life, or in the care of the moral and religious part of our nature. -William Andrus Alcott Current Student Joined: 02 Nov 2010 Posts: 55 Followers: 0 Kudos [?]: 5 [0], given: 0 Re: What Are You Giving Up For Your MBA? [#permalink]  09 Apr 2011, 04:18 Speaking only for myself, I seriously started thinking about an MBA about 2 years ago after numerous professional experiences that made me feel the necessity of an MBA degree in order to gain a specific skillset/experiences to pursue a new career path from where I currently am. I'd expect the majority of people who pursue MBAs are similar - it's a means to an end rather than the goal itself...which is probably why career goal essays are so popular Manager Status: Planning to retake. Affiliations: Alpha Psi Omega Joined: 25 Oct 2010 Posts: 89 Concentration: General Management, Entrepreneurship GMAT 1: 650 Q42 V37 GRE 1: 1310 Q630 V680 GPA: 3.16 Followers: 1 Kudos [?]: 17 [0], given: 14 Re: What Are You Giving Up For Your MBA? [#permalink]  09 Apr 2011, 04:28 woomba wrote: I'd expect the majority of people who pursue MBAs are similar - it's a means to an end rather than the goal itself...which is probably why career goal essays are so popular If the MBA felt more like a means to a beginning than a means to an end then it would be more appealing. Starting an elite-MBA seems to be akin to retiring from young adulthood. It would be a lot easier to hang up the old ratty t-shirt and jeans uniform if my overarching aspiration were simply "work at company X, go to school X, do job X and be happy." Then again, I still don't see how that makes sense, unless somebody's primary motivation is money/prestige. (If all you wanted were education, then any number of unranked MBA programs would serve that purpose.) Money, prestige, and upward social mobility are things that we all want. The question, then, is how much of our life and other interests are we willing to give up to obtain them? _________________ Each moment of time ought to be put to proper use, either in business, in improving the mind, in the innocent and necessary relaxations and entertainments of life, or in the care of the moral and religious part of our nature. 
-William Andrus Alcott Joined: 26 Dec 2008 Posts: 2451 Location: Los Angeles, CA Followers: 80 Kudos [?]: 548 [6] , given: 0 Re: What Are You Giving Up For Your MBA? [#permalink]  09 Apr 2011, 10:16 6 KUDOS I think you may be over stating how "focused" MBA types are. Especially during the applications process, where it's hard for others to distinguish "applications/essay narrative" and what one *really* feels and believes. A lot of applicants whether on online message boards or on campus interviews/events tend to be discussing "applications narrative" i.e. what makes a good story, whether it's really true or not. The reality is, most applicants (including those who are successful at getting in) aren't as sure as they say they are. In other words, a lot more applicants feel more like you than you may realize. A lot of folks that I've seen go into it with some hazy ideas of what they'd like, but nothing really too specific. And what they come up with for their essays/interviews is usually more specific than what they actually believe anyhow (because they know they have to come up with a *specific* story). The fact is, most people (whether they are MBAs or not) aren't really sure, but will try and figure it out as it comes, taking everything in stride. The dynamics that you're talking about isn't unique at all. Most of us struggle with it. Just replace "standup comedy" with "time with my spouse and kids", "restoring vintage cars", "playing in my band" or whatever. I've been out of b-school for 10 years now (10 year reunion next month...). With most of our classmates, I can safely say that many of us probably still struggle with that. Taking the paycheck vs doing what we really want to do. What does happen though is that as time goes on, you feel less dependent on your "full-time career" as your sole source of fulfillment and happiness. You can certainly be happy if your "passion" ends up being a full-time career. But if it doesn't, plenty of folks still find a way to be happy - it just may mean that they don't get it from their "day job". In my opinion, that's always been the danger of the "follow your dreams" mantra. Yes, if you can do so, great. But if you can't or wont, don't feel horrible about not doing so (i.e. don't assume that you won't be happy if you don't follow your dreams). It's easy to fall into that trap of feeling that you will be unhappy and unfulfilled if you don't end up having a career that you love. The fact is, most people don't work their dream jobs - they have found a compromise of some sort -- a job that they like *enough* that keeps them afloat, with the capacity to do other things as well that they enjoy (or to spend time with loved ones). And even those who do get their "dream job", after a while it morphs into just a job -- some will become miserable, whereas others will find happiness elsewhere while they continue in their "dream job that is now only a job". As a standup, I'm sure you've met plenty of comedians -- some of whom are happy and blessed about being able to do it, and others who are f*cking miserable curmudgeons. And for the office people, there are happy accountants and miserable ones. So much of it is about learning how to compromise between competing priorities. And when you learn how to compromise in a way that allows you to move forward, you'll learn that your happiness has more to do with making the most of what you have, and less to do with going after what you want. You may still have wants and ambitions, but you're not defined by them. 
_________________
Alex Chu | [email protected] | http://www.mbaapply.com

Re: What Are You Giving Up For Your MBA? (Current Student, UCLA Anderson FEMBA Class of 2014, 10 Apr 2011)

I've had a variation of this same mental debate. I just turned 27 last month and will be starting a 3-year part-time MBA program this fall. So work during the day, school
from collections import defaultdict
from typing import Callable
from typing import DefaultDict
from typing import List
from typing import Optional
from typing import Tuple
from typing import Union

import numpy as np
from scipy.interpolate import griddata

from optuna._experimental import experimental
from optuna.logging import get_logger
from optuna.study import Study
from optuna.study import StudyDirection
from optuna.trial import FrozenTrial
from optuna.trial import TrialState
from optuna.visualization._utils import _check_plot_args
from optuna.visualization.matplotlib._matplotlib_imports import _imports
from optuna.visualization.matplotlib._utils import _is_log_scale
from optuna.visualization.matplotlib._utils import _is_numerical

if _imports.is_successful():
    from optuna.visualization.matplotlib._matplotlib_imports import Axes
    from optuna.visualization.matplotlib._matplotlib_imports import Colormap
    from optuna.visualization.matplotlib._matplotlib_imports import ContourSet
    from optuna.visualization.matplotlib._matplotlib_imports import plt

_logger = get_logger(__name__)


@experimental("2.2.0")
def plot_contour(
    study: Study,
    params: Optional[List[str]] = None,
    *,
    target: Optional[Callable[[FrozenTrial], float]] = None,
    target_name: str = "Objective Value",
) -> "Axes":
    """Plot the parameter relationship as contour plot in a study with Matplotlib.

    Note that, if a parameter contains missing values, a trial with missing values is not plotted.

    .. seealso::
        Please refer to :func:`optuna.visualization.plot_contour` for an example.

    Warnings:
        Output figures of this Matplotlib-based
        :func:`~optuna.visualization.matplotlib.plot_contour` function would be different from
        those of the Plotly-based :func:`~optuna.visualization.plot_contour`.

    Example:

        The following code snippet shows how to plot the parameter relationship as contour plot.

        .. plot::

            import optuna


            def objective(trial):
                x = trial.suggest_float("x", -100, 100)
                y = trial.suggest_categorical("y", [-1, 0, 1])
                return x ** 2 + y


            sampler = optuna.samplers.TPESampler(seed=10)
            study = optuna.create_study(sampler=sampler)
            study.optimize(objective, n_trials=30)

            optuna.visualization.matplotlib.plot_contour(study, params=["x", "y"])

    Args:
        study:
            A :class:`~optuna.study.Study` object whose trials are plotted for their target
            values.
        params:
            Parameter list to visualize. The default is all parameters.
        target:
            A function to specify the value to display. If it is :obj:`None` and ``study`` is
            being used for single-objective optimization, the objective values are plotted.

            .. note::
                Specify this argument if ``study`` is being used for multi-objective
                optimization.
        target_name:
            Target's name to display on the color bar.

    Returns:
        A :class:`matplotlib.axes.Axes` object.

    Raises:
        :exc:`ValueError`:
            If ``target`` is :obj:`None` and ``study`` is being used for multi-objective
            optimization.
    """

    _imports.check()
    _check_plot_args(study, target, target_name)
    _logger.warning(
        "Output figures of this Matplotlib-based `plot_contour` function would be different from "
        "those of the Plotly-based `plot_contour`."
    )
    return _get_contour_plot(study, params, target, target_name)


def _get_contour_plot(
    study: Study,
    params: Optional[List[str]] = None,
    target: Optional[Callable[[FrozenTrial], float]] = None,
    target_name: str = "Objective Value",
) -> "Axes":
    # Calculate basic numbers for plotting.
    trials = [trial for trial in study.trials if trial.state == TrialState.COMPLETE]

    if len(trials) == 0:
        _logger.warning("Your study does not have any completed trials.")
        _, ax = plt.subplots()
        return ax

    all_params = {p_name for t in trials for p_name in t.params.keys()}

    if params is None:
        sorted_params = sorted(all_params)
    elif len(params) <= 1:
        _logger.warning("The length of params must be greater than 1.")
        _, ax = plt.subplots()
        return ax
    else:
        for input_p_name in params:
            if input_p_name not in all_params:
                raise ValueError("Parameter {} does not exist in your study.".format(input_p_name))
        sorted_params = sorted(set(params))
    n_params = len(sorted_params)

    plt.style.use("ggplot")  # Use ggplot style sheet for similar outputs to plotly.
    if n_params == 2:
        # Set up the graph style.
        fig, axs = plt.subplots()
        axs.set_title("Contour Plot")
        cmap = _set_cmap(study, target)
        contour_point_num = 1000

        # Prepare data and draw contour plots.
        if params:
            x_param = params[0]
            y_param = params[1]
        else:
            x_param = sorted_params[0]
            y_param = sorted_params[1]
        cs = _generate_contour_subplot(
            trials, x_param, y_param, axs, cmap, contour_point_num, target
        )
        if isinstance(cs, ContourSet):
            axcb = fig.colorbar(cs)
            axcb.set_label(target_name)
    else:
        # Set up the graph style.
        fig, axs = plt.subplots(n_params, n_params)
        fig.suptitle("Contour Plot")
        cmap = _set_cmap(study, target)
        contour_point_num = 100

        # Prepare data and draw contour plots.
        cs_list = []
        for x_i, x_param in enumerate(sorted_params):
            for y_i, y_param in enumerate(sorted_params):
                ax = axs[y_i, x_i]
                cs = _generate_contour_subplot(
                    trials, x_param, y_param, ax, cmap, contour_point_num, target
                )
                if isinstance(cs, ContourSet):
                    cs_list.append(cs)
        if cs_list:
            axcb = fig.colorbar(cs_list[0], ax=axs)
            axcb.set_label(target_name)

    return axs


def _set_cmap(study: Study, target: Optional[Callable[[FrozenTrial], float]]) -> "Colormap":
    cmap = "Blues_r" if target is None and study.direction == StudyDirection.MINIMIZE else "Blues"
    return plt.get_cmap(cmap)


def _convert_categorical2int(values: List[str]) -> Tuple[List[int], List[str], List[int]]:
    vocab = defaultdict(lambda: len(vocab))  # type: DefaultDict[str, int]
    [vocab[v] for v in sorted(values)]
    values_converted = [vocab[v] for v in values]
    vocab_item_sorted = sorted(vocab.items(), key=lambda x: x[1])
    cat_param_labels = [v[0] for v in vocab_item_sorted]
    cat_param_pos = [v[1] for v in vocab_item_sorted]
    return values_converted, cat_param_labels, cat_param_pos


def _calculate_griddata(
    trials: List[FrozenTrial],
    x_param: str,
    x_indices: List[Union[str, int, float]],
    y_param: str,
    y_indices: List[Union[str, int, float]],
    contour_point_num: int,
    target: Optional[Callable[[FrozenTrial], float]],
) -> Tuple[
    np.ndarray,
    np.ndarray,
    np.ndarray,
    List[Union[int, float]],
    List[Union[int, float]],
    List[Union[int, float]],
    List[Union[int, float]],
    List[int],
    List[str],
    List[int],
    List[str],
    int,
    int,
]:
    # Extract values for x, y, z axes from each trial.
    x_values = []
    y_values = []
    z_values = []
    for trial in trials:
        if x_param not in trial.params or y_param not in trial.params:
            continue
        x_values.append(trial.params[x_param])
        y_values.append(trial.params[y_param])

        if target is None:
            value = trial.value
        else:
            value = target(trial)

        if isinstance(value, int):
            value = float(value)
        elif not isinstance(value, float):
            raise ValueError(
                "Trial{} has COMPLETE state, but its target value is non-numeric.".format(
                    trial.number
                )
            )
        z_values.append(value)

    # Return empty values when x or y has no value.
    if len(x_values) == 0 or len(y_values) == 0:
        return (
            np.array([]),
            np.array([]),
            np.array([]),
            x_values,
            y_values,
            [],
            [],
            [],
            [],
            [],
            [],
            0,
            0,
        )

    # Add dummy values for grid data calculation when a parameter has one unique value.
    x_values_dummy = []
    y_values_dummy = []
    if len(set(x_values)) == 1:
        x_values_dummy = [x for x in x_indices if x not in x_values]
        x_values = x_values + x_values_dummy * len(x_values)
        y_values = y_values + (y_values * len(x_values_dummy))
        z_values = z_values + (z_values * len(x_values_dummy))
    if len(set(y_values)) == 1:
        y_values_dummy = [y for y in y_indices if y not in y_values]
        y_values = y_values + y_values_dummy * len(y_values)
        x_values = x_values + (x_values * len(y_values_dummy))
        z_values = z_values + (z_values * len(y_values_dummy))

    # Convert categorical values to int.
    cat_param_labels_x = []  # type: List[str]
    cat_param_pos_x = []  # type: List[int]
    cat_param_labels_y = []  # type: List[str]
    cat_param_pos_y = []  # type: List[int]
    if not _is_numerical(trials, x_param):
        x_values = [str(x) for x in x_values]
        (
            x_values,
            cat_param_labels_x,
            cat_param_pos_x,
        ) = _convert_categorical2int(x_values)
    if not _is_numerical(trials, y_param):
        y_values = [str(y) for y in y_values]
        (
            y_values,
            cat_param_labels_y,
            cat_param_pos_y,
        ) = _convert_categorical2int(y_values)

    # Calculate min and max of x and y.
    x_values_min = min(x_values)
    x_values_max = max(x_values)
    y_values_min = min(y_values)
    y_values_max = max(y_values)

    # Calculate grid data points.
    # For x and y, create 1-D array of evenly spaced coordinates on linear or log scale.
    xi = np.array([])
    yi = np.array([])
    zi = np.array([])
    if x_param != y_param:
        if _is_log_scale(trials, x_param):
            xi = np.logspace(np.log10(x_values_min), np.log10(x_values_max), contour_point_num)
        else:
            xi = np.linspace(x_values_min, x_values_max, contour_point_num)
        if _is_log_scale(trials, y_param):
            yi = np.logspace(np.log10(y_values_min), np.log10(y_values_max), contour_point_num)
        else:
            yi = np.linspace(y_values_min, y_values_max, contour_point_num)

        # Interpolate z-axis data on a grid with cubic interpolator.
        # TODO(ytknzw): Implement Plotly-like interpolation algorithm.
        zi = griddata(
            np.column_stack((x_values, y_values)),
            z_values,
            (xi[None, :], yi[:, None]),
            method="cubic",
        )

    return (
        xi,
        yi,
        zi,
        x_values,
        y_values,
        [x_values_min, x_values_max],
        [y_values_min, y_values_max],
        cat_param_pos_x,
        cat_param_labels_x,
        cat_param_pos_y,
        cat_param_labels_y,
        len(x_values_dummy),
        len(y_values_dummy),
    )


def _generate_contour_subplot(
    trials: List[FrozenTrial],
    x_param: str,
    y_param: str,
    ax: "Axes",
    cmap: "Colormap",
    contour_point_num: int,
    target: Optional[Callable[[FrozenTrial], float]],
) -> "ContourSet":
    x_indices = sorted({t.params[x_param] for t in trials if x_param in t.params})
    y_indices = sorted({t.params[y_param] for t in trials if y_param in t.params})
    if len(x_indices) < 2:
        _logger.warning("Param {} unique value length is less than 2.".format(x_param))
        return ax
    if len(y_indices) < 2:
        _logger.warning("Param {} unique value length is less than 2.".format(y_param))
        return ax

    (
        xi,
        yi,
        zi,
        x_values,
        y_values,
        x_values_range,
        y_values_range,
        x_cat_param_pos,
        x_cat_param_label,
        y_cat_param_pos,
        y_cat_param_label,
        x_values_dummy_count,
        y_values_dummy_count,
    ) = _calculate_griddata(
        trials, x_param, x_indices, y_param, y_indices, contour_point_num, target
    )

    cs = None
    ax.set(xlabel=x_param, ylabel=y_param)
    if len(zi) > 0:
        ax.set_xlim(x_values_range[0], x_values_range[1])
        ax.set_ylim(y_values_range[0], y_values_range[1])
        ax.set(xlabel=x_param, ylabel=y_param)
        if _is_log_scale(trials, x_param):
            ax.set_xscale("log")
        if _is_log_scale(trials, y_param):
            ax.set_yscale("log")
        if x_param != y_param:
            # Contour the gridded data.
            ax.contour(xi, yi, zi, 15, linewidths=0.5, colors="k")
            cs = ax.contourf(xi, yi, zi, 15, cmap=cmap.reversed())

            # Plot data points.
            if x_values_dummy_count > 0:
                x_org_len = int(len(x_values) / (x_values_dummy_count + 1))
                y_org_len = int(len(y_values) / (x_values_dummy_count + 1))
            elif y_values_dummy_count > 0:
                x_org_len = int(len(x_values) / (y_values_dummy_count + 1))
                y_org_len = int(len(y_values) / (y_values_dummy_count + 1))
            else:
                x_org_len = len(x_values)
                y_org_len = len(x_values)
            ax.scatter(
                x_values[:x_org_len],
                y_values[:y_org_len],
                marker="o",
                c="black",
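# ---------------------------------------------------------------------------
# Hedged, standalone sketch (not part of the Optuna source above): it only
# illustrates the cubic `scipy.interpolate.griddata` step that
# `_calculate_griddata` applies to the scattered (x, y, objective) samples.
# All names below (x_values, y_values, z_values, contour_point_num) are
# illustrative assumptions chosen to mirror the function above.
if __name__ == "__main__":
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    x_values = rng.uniform(-100.0, 100.0, size=30)     # e.g. a suggested float parameter
    y_values = rng.choice([-1.0, 0.0, 1.0], size=30)   # e.g. a suggested categorical parameter
    z_values = x_values ** 2 + y_values                # objective values

    contour_point_num = 100
    xi = np.linspace(x_values.min(), x_values.max(), contour_point_num)
    yi = np.linspace(y_values.min(), y_values.max(), contour_point_num)

    # Same call pattern as above: broadcast xi/yi into a 2-D evaluation grid.
    zi = griddata(
        np.column_stack((x_values, y_values)),
        z_values,
        (xi[None, :], yi[:, None]),
        method="cubic",
    )
    print(zi.shape)  # (contour_point_num, contour_point_num); NaN outside the convex hull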
When the water on the bottom reaches boiling temperature, water vapor bubbles form there. When they rise, they cool down again and collapse, producing the typical crackling noise that can be heard shortly before the boil. As more heat is supplied, only the small bubbles collapse while the large ones rise. The boiling noise becomes quieter and disappears completely once the water reaches a full boil.

Figure: heating water on Earth (left) and in a spacecraft with the heat source at the bottom; see also the video of boiling water under weightlessness.

Under weightlessness, the vapor bubbles do not rise in the water. Instead, they stay near the bottom of the pot and coalesce into larger bubbles, eventually forming a single large bubble. The lack of convection and the reduced heat conduction through the vapor bubbles make it difficult to boil water quickly in a spacecraft.

#### Sublimation and resublimation

In the temperature range from about 0 K to 273.16 K (−273.15 °C to +0.01 °C) and at pressures from high vacuum up to about 0.006 bar, i.e. below the triple point, water does not exist in liquid form but only as gas and solid. In this region, ice passes directly into the gaseous state without first becoming liquid. This process is known as sublimation or, in the opposite direction, as resublimation. In a vacuum, sublimation takes place down to almost 0 K (−273.15 °C); the upper limit is given by the triple point.

### Specific heat capacity

Liquid water has a very high specific heat capacity of around 4.2 kJ/(kg·K) (at normal pressure it varies between 4.219 and 4.178 kJ/(kg·K) over the range from zero to one hundred degrees Celsius). About 4.2 kilojoules of thermal energy are therefore needed to heat one kilogram of water by one kelvin. This means that water can absorb quite a lot of energy compared to other liquids and solids. The comparatively high specific heat capacity of water is exploited, for example, in heat storage tanks for heating systems.

Heating 1 kg of water from 15 °C to 100 °C requires 4.2 kJ/(kg·K) · 85 K · 1 kg = 357 kJ. Since one kilowatt hour (kWh) is 3.6 MJ, heating one liter of water from tap temperature to 100 °C at normal pressure takes about 0.1 kWh of energy. Evaporating the water then requires roughly six times this amount of energy in addition (see below).

Water vapor (at 100 °C) has a specific heat capacity of 1.870 kJ/(kg·K) and ice (at 0 °C) one of 2.060 kJ/(kg·K). Solid substances usually have a significantly lower specific heat capacity; for example, lead has a specific heat capacity of 0.129 kJ/(kg·K) and copper one of 0.380 kJ/(kg·K).

### Heat of fusion and evaporation

Melting, i.e. converting ice at 0 °C into water at 0 °C, requires an energy of 333.5 kJ/kg. With the same amount of energy, the same quantity of water could be heated from 0 °C to 80 °C. Converting water at 100 °C into steam at 100 °C requires 2257 kJ/kg; converting water at 0 °C into steam at 100 °C therefore takes 100 K · 4.19 kJ/(kg·K) + 2257 kJ/kg = 2676 kJ/kg.

The specific heat of vaporization of water is much higher than that of other liquids. Methanol has a heat of vaporization of only 845 kJ/kg, and mercury only 285 kJ/kg. If one compares molar heats of vaporization, however, mercury, at 57.2 kJ/mol, has a higher value than water, at 40.6 kJ/mol.
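As a quick plausibility check of these figures, the short Python sketch below (not part of the original text; it simply re-evaluates the rounded constants quoted above) reproduces the 357 kJ, 0.1 kWh and 2676 kJ values:

```python
# Rounded material constants quoted in the text (per kilogram of water).
C_WATER = 4.19      # specific heat capacity of liquid water, kJ/(kg*K)
L_FUSION = 333.5    # heat of fusion at 0 degC, kJ/kg
L_VAPOR = 2257.0    # heat of vaporization at 100 degC, kJ/kg


def heating_energy_kj(mass_kg: float, t_start_c: float, t_end_c: float) -> float:
    """Energy to warm liquid water from t_start_c to t_end_c (no phase change)."""
    return C_WATER * (t_end_c - t_start_c) * mass_kg


# 1 kg from 15 degC to 100 degC: about 4.2 * 85 = 357 kJ, i.e. roughly 0.1 kWh.
print(heating_energy_kj(1.0, 15.0, 100.0))            # ~356 kJ
print(heating_energy_kj(1.0, 15.0, 100.0) / 3600.0)   # ~0.099 kWh (1 kWh = 3600 kJ)

# 1 kg of 0 degC water to 100 degC steam: 100 K * 4.19 + 2257 = 2676 kJ.
print(heating_energy_kj(1.0, 0.0, 100.0) + L_VAPOR)   # ~2676 kJ

# Melting 1 kg of 0 degC ice costs 333.5 kJ, enough to warm the melt by ~80 K.
print(L_FUSION / C_WATER)                             # ~79.6 K temperature rise
```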
In meteorology, the heats of fusion and vaporization are of great importance in the context of latent heat.

### Thermal conductivity

Compared to other liquids, water has a high thermal conductivity, but it is very low compared to that of metals. The thermal conductivity of liquid water increases with increasing temperature, while ice conducts heat much better than liquid water. At 20 °C, water has a thermal conductivity of 0.60 W/(m·K). For comparison: copper reaches 394 W/(m·K) and silver 429 W/(m·K); even bismuth, the poorest heat conductor among the metals, reaches 7.87 W/(m·K). The thermal conductivity of water in the form of ice at −20 °C is a considerably higher 2.33 W/(m·K).

### Density and density anomaly

Figure: density of water as a function of temperature.

Water has a density of around one kilogram per liter (one liter corresponds to one cubic decimeter). This round relationship is no coincidence: it goes back to the unit grave, one of the historical roots of today's International System of Units (SI). A grave was defined as the mass of one liter of water at 4 °C.

At normal pressure, water has its greatest density at 3.98 °C and thus exhibits a density anomaly: below 3.98 °C, water expands again as the temperature drops further, and it does so even on the transition to the solid state, a behavior known from only a few substances. In addition to temperature, substances dissolved in the water also influence its density, which can be measured with a hydrometer. Since the dissolved particles are distributed between the water molecules and the increase in volume is small, the density increases as a result. The increase in density roughly corresponds to the mass of dissolved substance per volume and plays an important role in large-scale water movements, for example in the context of the thermohaline circulation or the dynamics of freshwater lenses.

### Smell and taste

In its pure state, water is tasteless and odorless.

### Optical properties

Figure: complex refractive index of water in the range of visible light.
Figure: reflection on the water surface of a pond.

#### Refraction and reflective properties

In the visible range, water has a refractive index of approximately 1.33. If light strikes the interface between air (refractive index ≈ 1) and water, it is therefore refracted towards the normal. The refractive index is low compared to many other materials, so refraction by water is less pronounced than, for example, when light passes from air into most types of glass or into diamond. There are, however, also materials such as methanol with a lower refractive index. The refraction of light leads to optical illusions, so that an object under water is seen in a different place from where it actually is. The same applies to the view from under water into the air above. Animals that specialize in fishing, such as herons, or fish hunting insects above the water, can take this image shift into account and therefore usually hit their prey without any problems.

According to the Fresnel formulas, the reflectivity of the air-water interface is about 2% at normal incidence. As with all materials, this value increases at shallower angles of incidence and approaches 100% at grazing incidence. The reflection behavior, however, depends on the polarization of the light: light polarized parallel to the plane of incidence is generally reflected less strongly than perpendicularly polarized light, so light reflected from the air-water interface is partially polarized.
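The 2% normal-incidence value follows directly from the refractive indices quoted above; a minimal check (assuming n = 1.33 for water and n = 1.00 for air, as stated in the text):

```python
n_air, n_water = 1.00, 1.33

# Fresnel reflectance at normal incidence (polarization-independent in this limit).
R = ((n_water - n_air) / (n_water + n_air)) ** 2
print(f"{R:.3%}")  # about 2.0 %, matching the value quoted above
```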
Due to the relatively low refractive index of water, this effect is less pronounced than with many other (transparent) materials with
light source in terms of intensity noise and linewidth. The excellent beam quality of the SH light is confirmed by a SM fiber coupling efficiency of 83\%. \subsection{Relative intensity noise} The relative intensity noise spectral density $S_{\rm RIN}(\nu)$ of the SH output was measured by shining a beam of $\sim 120\,\mu W$ on a low-noise photodiode (Newport 1801, \linebreak 125-MHz bandwidth) and recording the signal using a digital oscilloscope (Pico Technology PicoScope 4424) in AC mode, yielding the relative power fluctuations $\varepsilon(t)$ after normalization, where $I(t)/\langle I \rangle_T = 1 + \varepsilon(t)$ with $I(t)$ the intensity and $\langle I \rangle_T$ its temporal average. The definition of $S_{\rm RIN}(\nu)$ is \begin{equation} S_{\rm RIN}(\nu) = \lim\limits_{t_{\rm m} \rightarrow \infty}\,{\frac{1}{t_{\rm m}} \left\langle\left| \int_{0}^{t_{\rm m}} \! \varepsilon(t) \mathrm{e}^{{\rm i} 2\pi\nu t} \, \mathrm{d}t \right|^2 \right\rangle }\,\mathrm{} \end{equation} with the measurement time $t_{\rm m}$ and $\langle ... \rangle$ denoting temporal averaging. It was realized employing a time-dis\-crete Fourier transformation method and averaging over 100 spectra. The result is shown in Fig.\,\ref{noise}. The broad peak at $\simeq 100\,{\rm kHz}$ can be attributed to the laser relaxation oscillations. The structure in the $10\,{\rm kHz}$ region can be attributed to the locking system. Above $300\,\rm kHz$ $S_{\rm RIN}$ drops to the photon shot-noise level, as indicated by the spectrum of a noncoherent source producing an equivalent photocurrent (spectrum B in Fig.\,\ref{noise}). The narrow peaks at $1\,\rm MHz$ and harmonics stem from the phase modulation of the pump light, see Section\,\ref{cavlock}. The square root of the integral of $S_{\rm RIN}(\nu)$ from $1\,{\rm kHz}$ to $5\,{\rm MHz}$ ($1\,{\rm kHz}$ to $0.9\,{\rm MHz}$) yields a RMS relative intensity noise of $1.1\times10^{-3}$ ($0.8\times10^{-3}$). \begin{figure} \includegraphics[width=1\columnwidth]{noise_averaged.eps} \caption{(Color online) The SH relative intensity noise spectrum (A), noise for an equivalent photocurrent from a non-coherent source (B) and noise of the detection circuit with no photocurrent (C).} \label{noise} \end{figure} \subsection{Absorption spectroscopy of ultracold atoms} The laser setup was used as an absorption imaging light source for our lithium quantum gas experiment described elsewhere \citep{Nascimb`ene2009}. A sample of around $1.2\times10^5$ $^7$Li atoms above Bose-Einstein condensation threshold was prepared in an elongated optical dipole trap. Putting a $700\,{\rm G}$ magnetic offset field, the internal electronic states of the atoms are to be described in the Paschen-Back regime. The corresponding lift of degeneracy for the $F=2\rightarrow F'=3$ transition frequencies results in a cycling transition, rendering this method insensitive to constant homogeneous stray fields. By applying a laser frequency detuning $\delta$ with respect to atomic resonance using the offset lock as described in section \ref{sll}, one detects a different atom number $N(\delta)$ while assuming constant trap conditions according to \begin{equation} \frac{N(\delta)}{N(0)} = \left[1+\left(\frac{2\delta}{\Gamma}\right)^2\right]^{-1}\,\mathrm{,} \label{natoms} \end{equation} where $\Gamma$ is the measured linewidth of the transition and $N(0)$ the atom number detected at resonance. 
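In practice, $\Gamma$ and $N(0)$ are extracted by a least-squares fit of eq.\,(\ref{natoms}) to the measured atom numbers. The short Python sketch below illustrates such a fit; it is a minimal example only (the detunings and atom numbers are synthetic placeholders, not our measured data) and uses \texttt{scipy.optimize.curve\_fit} as one convenient choice of fitter.
\begin{verbatim}
# Minimal sketch: fit N(delta) = N0 / (1 + (2*delta/Gamma)^2) to (delta, N) data.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(delta, n0, gamma):
    return n0 / (1.0 + (2.0 * delta / gamma) ** 2)

# Synthetic placeholder data (detuning in MHz, detected atom number).
delta = np.linspace(-15.0, 15.0, 11)
n_atoms = lorentzian(delta, 1.2e5, 6.1) * (1 + 0.03 * np.random.randn(delta.size))

popt, pcov = curve_fit(lorentzian, delta, n_atoms, p0=[1e5, 5.0])
n0_fit, gamma_fit = popt
gamma_err = np.sqrt(np.diag(pcov))[1]
print(f"Gamma = 2*pi x ({gamma_fit:.1f} +/- {gamma_err:.1f}) MHz")
\end{verbatim}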
\begin{figure} \includegraphics[width=1\columnwidth]{Linewidth.eps} \caption{(Color online) In-situ absorption imaging of ultra-cold atoms in an optical dipole trap. The laser was detuned by $\delta$ from the atomic resonance using the offset lock described in Section\,\ref{sll}, varying the detected atom number (circles). A Lorentzian of width $\Gamma_{\rm Fit} = 2\pi\times(6.1 \pm 0.4)\,{\rm MHz}$ is fitted to the data (solid line).} \label{linewidth} \end{figure} The results are presented in Fig.\,\ref{linewidth}. A least-squares fit according to eq.\,(\ref{natoms}) results in a linewidth of $\Gamma_{\rm fit} = 2\pi\times(6.1 \pm 0.4)\,{\rm MHz}$, a value compatible with the natural linewidth of $2\pi\times(5.872 \pm 0.002)\,{\rm MHz}$ of \citep{McAlexander1996}. Within our experimental resolution we infer that the laser linewidth is much smaller than the natural linewidth of the atomic transition. Assuming a Lorentzian lineshape for the laser, the linewidth can be given as $200^{+400}_{-200}\,{\rm kHz}$, compatible with zero. \subsection{Long-term stability} Fig.\,\ref{f:longterm} shows a long-term stability plot of the laser system under laboratory conditions. The system remained locked during the measurement time of 8.5\,hours. The SH output power drops by 7\% and shows small modulations of a period of $\simeq 15\,\rm min$. This is attributable to slight angular tilts when the cavity's slow PZT (M$_4$/M$_3'$) is driven. This effect, changing the alignments, is confirmed by monitoring the laser output power, which drops by 5\% in the same time interval and displays the same modulations. \begin{figure} \includegraphics[width=1\columnwidth]{longterm.eps} \caption{(Color online) Long-term stability of the SH output power, experiencing a drop of 7\% over the measurement time. The system remained offset-locked to the lithium resonance.} \label{f:longterm} \end{figure} \section{Intra-cavity doubling} \label{icd} We also implemented the more direct approach of intra-cavity doubling a 1342-nm laser. This concept simplifies the optical design since only one cavity is needed. It was achieved by using a setup similar to the one presented in Fig.\,\ref{laser}. All cavity mirrors are highly reflective at $1342\,\rm nm$ and mirror M$_3$ is also transmitting at $671\,\rm nm$. A nonlinear crystal is put in the waist between mirrors M$_3$ and M$_4$. For the Faraday rotator, we have tried various arrangements, using either Gadolinium Gallium Garnet (GGG) or TGG as the Faraday material and we have used either a rotatory power plate (made either of TeO$_2$ or of crystalline quartz) or a half-wave plate to compensate the Faraday rotation. Although theory \citep{Biraben1979} favors the use of a rotatory power with respect to a half-wave plate, we have found that the wave plate was more convenient, with a slightly larger output power. \subsection{Infrared power} In a setup involving intra-cavity frequency doubling, it is essential to have very low parasitic losses $\mathcal{L}_{\rm par}$ \citep{Smith1970}. We start by evaluating these losses by measuring the emitted infrared laser power as a function of the output mirror transmission $\mathcal{T}_{\rm oc}$ for a fixed absorbed pump power $P_{\rm abs} =13\,\rm W$ for which thermal effects in the Nd:YVO$_4$ crystal remain small. The data is presented in Fig.\,\ref{rigrod}. 
Accurate values of the transmission coefficient of the various output mirrors have been obtained with an absolute uncertainty near $0.03$\% by measuring successively the direct and transmitted power of a laser beam in an auxiliary experiment. According to \citep{Chen1997,Rigrod1965,Laporta1991,Chen1996,Chen1999}, the output power $P_{\rm out}$ is given by \begin{eqnarray} \label{eqrigrod} P_{\rm out} = P_{\rm sat} \mathcal{T}_{\rm oc} \left[\frac{G_0}{\mathcal{T}_{\rm oc}+\mathcal{L}_{\rm par}} -1\right]\,\rm , \end{eqnarray} \noindent where $P_{\rm sat}$ is the gain medium saturation power and $G_0$ the laser gain. We performed a nonlinear curve fit yielding $\mathcal{L}_{\rm par} = (0.0101 \pm 0.0006)$, $P_{\rm sat} = (26.3 \pm 2.0) \,\rm W$ and $G_0 = (0.150 \pm 0.006)$. The measured losses $\mathcal{L}_{\rm par}$ of $\sim1\%$ are in accordance with expectations for a cavity made of four mirrors (three high reflection mirrors plus the output mirror), three AR-coated crystals and a Brewster plate. In Annex~\ref{Aaddmat} we relate the values measured for $P_{\rm sat}$ and $G_0$ to the parameters of the lasing crystal and the laser cavity. We find good agreement with literature values. \begin{figure} \includegraphics[width=1\columnwidth]{rigrodplot.eps} \caption{ Output power $P_{\rm out}$ of the laser emitting at $1342\,\rm nm$ as a function of the mirror transmission $\mathcal{T}_{\rm oc}$. The data points are experimental while the curve is the best fit using eq.\,(\ref{eqrigrod}).} \label{rigrod} \end{figure} \subsection{Doubling and frequency behavior} Several Nd:YVO$_4$ lasers emitting at 671\,nm have been built based on intra-cavity frequency doubling using a LBO (lithium triborate, LiB$_3$O$_5$) crystal \citep{Agnesi2002,Agnesi2004,Lue2010,Ogilvy2003,Yao2004,Yao2005,Zhang2005,Zheng2002,Zheng2004}. The largest achieved power was $5.5\,\rm W$ but none of these lasers have run in SLM operation. We have tried frequency doubling with both LBO and BIBO (bismuth triborate, BiB$_3$O$_6$) crystals installed in the small waist of the laser cavity (see Fig. \ref{laser}). The BIBO crystal gave slightly more power but with a substantially more astigmatic laser mode. Therefore we use a LBO crystal of $15\,\rm mm$ length and $3 \times 3 \,\rm mm^2$ cross-section. We apply type~I SH generation, with critical phase matching at $\theta = 86.1^{\circ}$ and $\phi =0^{\circ}$. The crystal is AR coated with a {specified} residual reflection equal to $0.15$\% at 1342\,nm and $0.5$\% at 671\,nm. The non-linear optical coefficient of LBO is $d_{\rm eff} = 0.817 \,\rm pm.V^{-1}$. Using the expressions given in ref.~\citep{Kontur2007} and the SNLO software \citep{SNLO} to evaluate the crystal properties, we have calculated the expected optimum conversion coefficient $\eta$ for this crystal. We get $\eta = 7.3 \times 10^{-5}\,\rm W^{-1}$ with an optimum waist in the crystal equal to $w_0= 29\,\rm \mu m$. We use a slightly larger laser waist of $\simeq 45\,\rm \mu m$ for which theory predicts $\eta = 4.9 \times 10^{-5}\,\rm W^{-1}$. We have measured $\eta$ by running the laser with a weakly IR-transmitting mirror M$_2$, with a coupling transmission value $\mathcal{T}_{\rm oc} = (0.55\pm0.03)\%$, and by measuring simultaneously
no singularity for all distances including $r=0$. This way we ensure that at the origin $(x,y)=(0,0)$ the system in Cartesian coordinates is well defined. In addition to ensure the existence of a steady state, the spatial density of searchers shall be normalizable. This will set another condition on $\kappa(r)$. The time evolution of the angle $z$ between the direction of motion $\theta$ and the position of the home $\beta$ is now given by: \begin{equation} \dot{z} = -\left(\frac{v_0}{r}-\kappa(r)\right) \sin(z). \label{eq:dotzU} \end{equation} It follows for the deterministic trajectories, that \begin{equation} \sin(z(r))\exp\left(-\frac{U(r)}{v_0}\right)r=X=\sin(z_0)\exp\left(-\frac{U(r_0)}{v_0}\right)r_0 \label{eq:sinzvrU} \end{equation} holds wherein we have defined \begin{equation} U(r)=\int^r {\rm d} r^\prime \kappa(r^\prime). \label{eq:kappaofr} \end{equation} Again a particle with an initial angle $z_0=0$ or $z_0=\pi$ will move on a straight line. There also exists a minimal and a maximal distance from the home, with $\sin(z)=\pm1$. Previously, in Equation \eqref{eq:sinzvr} we could fix $r_0$ such that the integral of motion $X(z_0)$ only depended on the initial angle $z_0$. The value of $X(z_0)$ represented a trajectory in the $(r,z)$ plane, or in the $(r,X)$ plane in an unique way. This might now be no longer true. Now the variable $X(r_0,z_0)$ might be dependent on the initial position and the initial angle, if one wants to uniquely identify a trajectory in the $(r,z)$ plane and have $z_0$ as parameter. One could, however, fix the initial angle $z_0=\pm\pi/2$ and then the variable $X(r_0)$ would still be dependent on only one parameter and be unique for the trajectories in the $(r,z)$ plane, but we chose the initial angle $z_0$ to be a parameter. Figure \ref{fig:capU} displays such a case. The upper left plot shows the steady state pdf. It is obtained from \eqref{eq:fpe_full} by replacing $\kappa$ with $\kappa(r)$ and following the same steps as in Section \ref{sec:spat_dist}: \begin{equation} P_0(r,z)=cr\exp\left(-\frac{U(r)}{v_0} \right), \label{eq:p0U} \end{equation} which again is independent of the noise. It reads in Cartesian coordinates \begin{equation} P_0(x,y)=c\exp\left(-\frac{U(\sqrt{x^2+y^2})}{v_0} \right), \label{eq:pxy0U} \end{equation} with $c$ being the normalization constant. This sets the second condition on $\kappa(r)$ as we require the normalization to be possible. \begin{figure}[h] \includegraphics[width=0.4\linewidth]{rvU.pdf} \includegraphics[width=0.44\linewidth]{r_z_planeU.pdf} \includegraphics[width=0.34\linewidth]{dat1_mr2p4rm4.pdf} \includegraphics[width=0.45\linewidth]{rx.pdf} \caption{Example for distance dependent coupling strength $\kappa(r)$. (a) Steady state spatial distribution $P_0$. (b) Corresponding deterministic trajectories in the $(r,z)$ plane, with initial condition $r_0$, as indicated by broken lines and initial angle $z_0$ according to the colorbar. (c) Sample trajectory in the $(x,y)$ plane without noise corresponding to the blue separatrix in the $(r,z)$ plane. (d) Trajectories in the $(r,X)$ plane with initial condition $r_0$, as indicated by broken lines and initial angle $z_0$ according to the colorbar.} \label{fig:capU} \end{figure} The noise still acts perpendicular on the deterministic trajectory $X$ causing a switching and the ensemble average \,$<X>$\, still follows the exponential decay from Equation \eqref{eq:avXt}, with the same relaxation time as given before \eqref{eq:tau}. 
The overdamped dynamics for the radial transition pdf $P(r,t|r_0,t_0)$,
\begin{eqnarray}
&&\frac{\partial}{\partial t}P=D_{{\rm eff }} \frac{\partial}{\partial r}\left(\frac{\partial}{\partial r}P-\left(\frac{1}{r}-\frac{\kappa(r)}{v_0} \right)P\right)\,,
\label{eq:fpe_overU}
\end{eqnarray}
has \eqref{eq:p0U} as its asymptotic steady-state pdf. As above, the radial relaxation again slows down with increased noise strength. As in the situation with constant $\kappa$, only $\tau$ or $D_{{\rm eff}}$ expresses the influence of the various $\alpha$-values of the noise.

Figure \ref{fig:capU} gives an example of a space-dependent coupling $\kappa(r)$. We chose $\kappa(r)=r^2-4r+4$ such that the steady-state pdf exhibits two maxima, as can be seen in panel (a). In panel (b), the deterministic trajectories are shown in the $(r,z)$ plane for two spatial initial conditions $r_{0,i}$ ($i=0,1$), indicated by the two broken lines, and initial angles $z_0$ according to the colorbar. Due to the quadratic term in the coupling, trajectories are significantly shorter than in the initial model, since the coupling to the home-pointing direction increases with the distance from the home.

Extremal points of the radial probability follow from ${\rm d}P_0/{\rm d}r=0$, i.e., for the considered coupling, from
\begin{equation}
1-\frac{r}{v_0} \kappa(r)=0.
\label{eq:ext}
\end{equation}
Solutions are easily found: the probability is minimal around $r_1=1$ and maximal at the two distances $r_{2,3}=(3\pm \sqrt{5})/2$, as shown in Figure \ref{fig:capU}(a). In the $(r,z)$ plane (see panel (b) of the same figure), these distances correspond to the fixed points of the deterministic flow of trajectories, located at the angles $z=\pm\pi/2$. The fixed points at $r_1$ are of saddle type, whereas those at the two other distances are centers. They always correspond to circular solutions of the deterministic trajectories in the $(x,y)$ plane, since at those points both the angular and the radial velocities vanish, $\dot{z}=0$ and $\dot{r}=0$. Small changes of the initial angle $z_0$ around the circular solutions at the maxima $r_{2,3}$ correspond to trajectories in the $(x,y)$ plane similar to Figure \ref{fig:x_y_plane} (a). For initial angles $z_0$ close to $0$ or $\pm\pi$, trajectories in the $(x,y)$ plane will be comparable to Figure \ref{fig:x_y_plane} (b), but shorter and with faster turnings. Between these two types of solutions lies a separatrix. Panel (c) shows a trajectory in the $(x,y)$ plane close to the separatrix. This solution exists if the initial angle $z_0=\pm\pi/2$ at the minimum is slightly changed. The resulting trajectory in the $(x,y)$ plane rotates twice around the home before reaching the maximal distance twice. The noise again facilitates switching between trajectories.

If the steady-state pdf of the position can be measured experimentally, it can be fitted to our solution \eqref{eq:pxy0U}, thus determining the coupling strength $\kappa(r)$ to the home. Having found a suitable dependence, the relaxation time $\tau$ can be determined from the ensemble or time average of the variable $X$ by measuring $r(t)$ and $z(t)$. Fitting experimental data thus allows one to determine the relaxation time $\tau$, and therefore the noise strength $\sigma^\alpha$ of the model.
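As a numerical illustration of the condition \eqref{eq:ext}, the short sketch below (assuming $v_0=1$, the value implicit in the quoted roots, together with the example coupling $\kappa(r)=r^2-4r+4$) recovers the extremal radii $r_1=1$ and $r_{2,3}=(3\pm\sqrt{5})/2$:
\begin{verbatim}
# Sketch: extrema of P_0(r) ~ r*exp(-U(r)/v0) for kappa(r) = r^2 - 4r + 4, v0 = 1.
# Condition (eq:ext): 1 - (r/v0)*kappa(r) = 0  <=>  r^3 - 4 r^2 + 4 r - 1 = 0.
import numpy as np

v0 = 1.0
roots = np.roots([1.0, -4.0, 4.0, -1.0])
print(np.sort(roots.real))  # ~[0.382, 1.0, 2.618] = (3-sqrt(5))/2, 1, (3+sqrt(5))/2

# The outer two roots are maxima of the radial density; r = 1 is the minimum.
kappa = lambda r: r**2 - 4.0*r + 4.0
U = lambda r: r**3/3.0 - 2.0*r**2 + 4.0*r       # antiderivative of kappa(r)
P0 = lambda r: r*np.exp(-U(r)/v0)               # unnormalized radial density
print(P0(roots.real))                           # density values at the critical radii
\end{verbatim}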
\section{Conclusions}
\label{sec:concl}
We have laid the foundation for a minimal stochastic model of a local searcher, motivated by experimental observations of the stochastic oscillatory motion of insects around a given home. The main ingredients of this minimal model are the constant speed of the searcher and a stochastic angular dynamics that requires only knowledge of the position angle and the heading direction, and that allows the particle to explore the vicinity of the given home. The specific interaction with this home results in an exploration of the neighborhood around the home and an attraction towards it, depending on the mutual orientation of the position and heading vectors.

The model was formulated with four parameters: $\kappa$ defines the strength of the interaction with the home, $v_0$ is the speed of the searcher, and $\sigma$ stands for the intensity of the noise. Since the turning-angle behavior observed in experiments, for example with fruit flies, can exhibit non-Gaussian statistics, we introduced $\alpha$-stable noise as the source of randomness. The corresponding stability index $0<\alpha \le 2$, as the fourth parameter, allows one to vary the character of the noise source between special types of noise such as Gaussian, Lorentzian, etc.

The introduced model showed qualitative agreement with the behavior of insects. The advantage of our model is its analytical and simple numerical tractability. In consequence, we were able to discuss the typical behavior of the trajectories and the characteristic times. For example, we found the characteristic return times in the noise-free case and obtained an apsidal precession of the oscillatory trajectories reminiscent of celestial motion.

The analysis of the models allowed us to discuss in detail the deterministic properties and the effects originating from the addition of different symmetric white noise sources. The inclusion of noise has a stabilizing effect on the system, since unstable trajectories disappear. Generally, trajectories start to randomize. This is manifested by the noise-dependent relaxation time $\tau$, which is proportional to $1/\sigma^\alpha$. For larger times the stochastic dynamics has forgotten its initial directions and the trajectories have spread over all possible orbits. This investigation has concentrated on the relaxation of the deterministic integral of motion $X$: its first moment, conditioned on the initial values, has decayed at times larger than $\tau$.

At high noise the particles start to perform diffusive motion. As for every active stochastic particle, the corresponding effective diffusion coefficient is inversely proportional to $\sigma$. We derived
GeV}^2 {\bf e_z}$; in comparison to the nucleon mass, the dimensionless ratio $e |{\bf B}|/M_N^2 \sim 0.013$ for the smallest field suggesting the deformations arising from the magnetic field are perturbatively small compared to QCD effects for $|\tilde{n}|\, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } 10$. As $m_u=m_d=m_s$ in these calculations, the up-quark propagator in the $\tilde n=1$ field is the same as the down- or strange-quark propagator in the $\tilde n=-2$ field, with similar relations for the other magnetic field strengths. To optimize the re-use of light-quark propagators in the calculation, different quark electric charges were implemented by using a different magnetic field (with the same charge). Consequently, the $U_Q(1)$ fields that are used in this work correspond to $\tilde n=0,+1,-2,+3,+4,-6,+12$, corresponding to magnetic fields of $e {\bf B} \sim 0, 0.06, -0.12, 0.18, 0.24, -0.36, 0.71~{\rm GeV}^2 {\bf e_z}$, respectively. In the presence of a time-independent and uniform magnetic field, the energy eigenstates of a structureless charged particle with definite angular momentum along the field direction are described by Landau levels and plane waves, rather than by three-momentum plane waves alone. One of the subtle finite-volume (FV) effects introduced into this calculation is the loss of translational invariance in the interaction of charged particles with the background gauge field. We give a brief description of this problem, and relegate the more technical aspects of the discussion to Appendix~\ref{app:CPcors}. For the implementation of the magnetic field using the links in Eq.~(\ref{eq:bkdgfield}), the lack of translational invariance is made obvious by the Wilson loops, \begin{eqnarray} W_1(x_2) & = & \prod\limits_{j=0}^{L/a -1}\ U_1^{(Q)} (x+j a \hat x_1) \ =\ \exp\left[-i q_q \tilde n {6\pi x_2 \over L} \right] \ \ , \nonumber\\ W_2(x_1) & = & \prod\limits_{j=0}^{L/a -1}\ U_2^{(Q)} (x+j a \hat x_2) \ =\ \exp\left[i q_q \tilde n {6\pi x_1 \over L} \right] \ \ \ , \label{eq:wilsonloops} \end{eqnarray} which wrap around the $x_1$ and $x_2$ directions of the lattice geometry, respectively. These exhibit explicit spatial dependence. Further, there are additional effects for charged-particle correlation functions arising from their gauge dependence. Because of the lack of translational invariance, the links employed in Eq.~(\ref{eq:bkdgfield}) define a spatial origin ${\bf x} = {\bf 0}$, where the gauge potential vanishes, ${\bf A}({\bf x})={\bf 0}$. In performing the present calculations, the source points for the correlation functions are not restricted to ${\bf x} = {\bf 0}$ but instead are randomly distributed within the lattice volume, approximately restores translational invariance. In the case of charged-particle correlation functions, this averaging leads to non-trivial effects, because the overlap of a given hadronic operator onto the various Landau levels depends on the source location. This is explicated in Appendix~\ref{app:CPcors}, where additional methods of restoring translational invariance are discussed in the context of a structureless charged particle. Post-multiplication of $U_Q(1)$ links onto the QCD gauge links omits the effects of the external magnetic field on the gluonic degrees of freedom through the fermion determinant. 
The present calculations therefore correspond to a partially-quenched theory in which the sea quark charges are set to zero while the valence quark charges assume their physical values~\cite{Savage:2001dy,Chen:2001yi,Beane:2002vq,Detmold:2006vu}. For a SU(3) symmetric choice of quark masses, as used herein, this does not affect the magnetic moments or the $np \to d \gamma$ transition (linear responses to the field) because the $N_f=3$ charge matrix is traceless \cite{Detmold:2006vu,Beane:2014ora} and couplings to sea quarks explicitly cancel at this order (indeed the isovector nature of the $np \to d \gamma$ transition make it insensitive to disconnected contributions even away from the SU(3) symmetric point). However, the magnetic polarizabilities receive contributions from terms in which the two photons associated with the magnetic field interact with zero, one or two sea quark loops. The terms involving zero and two sea quark loops are correctly implemented, however the square of the light-quark electric-charge matrix is not traceless and terms involving the two photons interacting with one sea quark loop will contribute to isoscalar quantities for any non-zero charge matrix. In the present work, these terms are omitted for computational expediency.\footnote{ Several approaches to these terms have been considered recently~\cite{Lujan:2014kia,Freeman:2014kka} and may be investigated in future studies of nuclei, although significant computational resources are required. } Generally, it has been found that the related disconnected contributions to analogous single-hadron observables are small for the vector current \cite{Deka:2013zha,Green:2015wqa}, and we expect that this behavior persists in nuclei. It is important to remember that this systematic ambiguity is restricted to the case of the isoscalar polarizabilities, and that the isovector and isotensor combinations, such as $\beta_{p} - \beta_{n}$, will be correctly calculated for the SU(3) symmetric case. \subsection{Interpolating Operators and Contractions} In order to probe the energy eigenstates of the systems under consideration, interpolating operators are constructed with the desired quantum numbers. In principle, any choice of operator that has a overlap onto a given eigenstate is acceptable. However, poor choices will have small overlaps onto the state of interest and hence will give rise to ``noisy'' signals with significant contamination from other states with the same quantum numbers. For a vanishing background magnetic field, the energy eigenstates are also momentum eigenstates, and in order to access the ground state energy, it is simplest to choose interpolating operators that project onto states with zero three-momentum. In this work, this approach is followed for both the neutron and proton (despite the proton carrying a positive electric charge). 
The proton correlation functions are of the form \begin{eqnarray} \label{eq:corr_prot} C^{(P,S)}_p(t; {\bf x}_i) = \langle 0 | \widetilde{\cal O}^{(P,S)}_p(t;{\bf 0}) \overline{\cal O}^{(S)}_p({\bf x}_i,0) | 0 \rangle_{\bf B}\,, \end{eqnarray} with interpolating operators that are given by \begin{eqnarray} {\cal O}^{(S)}_p({\bf x},t) & = & \epsilon^{ijk} [\tilde u_+^i({\bf x},t) C \gamma_5 \tilde d_+^j({\bf x},t)] \tilde u_+^k({\bf x},t)\,, \nonumber\\ \widetilde {\cal O}^{(P)}_p(t;{\bf p}) & = & \sum_{{\bf x}} e^{i\bf{p}\cdot{\bf x}}\epsilon^{ijk} [u_+^i({\bf x},t) C \gamma_5 d_+^j({\bf x},t)] u_+^k({\bf x},t) \,, \nonumber\\ \widetilde {\cal O}^{(S)}_p(t;{\bf p}) & = & \sum_{{\bf x}} e^{i\bf{p}\cdot{\bf x}}\epsilon^{ijk} [\tilde u_+^i({\bf x},t) C \gamma_5 \tilde d_+^j({\bf x},t)] \tilde u_+^k({\bf x},t) \ =\ \sum_{{\bf x}} e^{i\bf{p}\cdot{\bf x}} \ {\cal O}^{(S)}_p({\bf x},t) \,, \label{eq:ops} \end{eqnarray} where $\langle\ldots\rangle_{{\bf B}}$ indicates ensemble averaging with respect to QCD in the presence of the $U_Q(1)$ links corresponding to a uniform background magnetic field ${\bf B} = B {\bf e_z}$, and where the spin indices of the operators, carried by the third quark, are suppressed. In Eq.~(\ref{eq:ops}), $\tilde q(x)$ corresponds to a quark field of flavor $q=u,d$ that has been smeared \cite{Albanese:1987ds} in the spatial directions using a Gaussian form, while $q(x)$ corresponds to a local field. Additionally, the subscript $_+$ on the quark fields implies that they are explicitly projected onto positive energy modes, that is $u^i_+({\bf x},t)= (1+\gamma_4)u^i({\bf x},t)$. The superscript $(P,S)$ on an interpolating operator (and hence the correlation function) indicates a point or smeared form, respectively, $C=i\gamma_4\gamma_2$ is the charge conjugation matrix, and $\overline{\cal O}^{(S)}_p = {\cal O}^{(S)\dagger }_p\gamma_4$. Neutron correlation functions are constructed from those of the proton by interchanging $u\leftrightarrow d$. The correlation functions with the quantum numbers of nuclei, constructed using the methods discussed in detail in Refs.~\cite{Detmold:2012eu,Beane:2012vq}, are built recursively using sink-projected nucleon ``blocks'' that involve either smeared or local fields. For the present calculations, zero momentum states are built from zero-momentum blocks, although more complicated constructions can also be considered. As suggested in Ref.~\cite{Tiburzi:2012ks}, in order to study the proton in a magnetic field, it would be more appropriate to use interpolating operators that project onto the lowest-lying Landau level, rather than projecting onto three-momentum plane-waves. This would enhance the overlap of the interpolating operator onto the lowest, close-to-Landau energy eigenstate and suppress the overlap onto higher states. However, it is unclear how to generalize such a framework to nuclei constructed from nucleon blocks. While single-hadron blocks provide a good basis for the construction of correlation functions of nuclei in the absence of background fields, this will not necessarily be the case in a magnetic field. The problem is exemplified by $^3$He, a $j ={1\over 2}$ nucleus comprised of two protons and a neutron. Assuming a compact state (which it has been shown to be at the quark masses used in this work through calculations in multiple lattice
# Ropelength, crossing number and finite type invariants of links

R. Komendarczyk, Tulane University, New Orleans, Louisiana 70118
A. Michaelides, University of South Alabama, Mobile, AL 36688

###### Abstract.

Ropelength and embedding thickness are related measures of geometric complexity of classical knots and links in Euclidean space. In their recent work, Freedman and Krushkal posed a question regarding lower bounds for the embedding thickness of $n$-component links in terms of the Milnor linking numbers. The main goal of the current paper is to provide such estimates, thereby generalizing the known linking number bound. In the process, we collect several facts about finite type invariants and the ropelength/crossing number of knots. We give examples of families of knots where such estimates outperform the well-known knot-genus estimate.

Supported by NSF DMS 1043009 and DARPA YFA N66001-11-1-4132 during the years 2011-2015.

## 1. Introduction

Given an $n$-component link (we assume sufficiently smooth embeddings) in $3$-space,
$$L\colon S^1 \sqcup \ldots \sqcup S^1 \longrightarrow \mathbb{R}^3, \qquad L=(L_1,L_2,\ldots,L_n), \qquad L_i = L|_{\text{the }i\text{'th circle}}, \qquad (1.1)$$
its ropelength $\mathrm{rop}(L)$ is the ratio of the length $\ell(L)$, which is the sum of the lengths of the individual components of $L$, to the reach or thickness $r(L)$, i.e. the largest radius of a tube embedded as a normal neighborhood of $L$. The ropelength within the isotopy class $[L]$ of $L$ is defined as
$$\mathrm{Rop}(L)=\inf_{L'\in[L]}\mathrm{rop}(L'), \qquad \mathrm{rop}(L')=\frac{\ell(L')}{r(L')}, \qquad (1.2)$$
(in [7] it is shown that the infimum is achieved within $[L]$ and that the minimizer is of class $C^{1,1}$). A related measure of complexity, called embedding thickness, was introduced recently in [16], in the general context of the complexity of embeddings. For links, the embedding thickness $\tau(L)$ of $L$ is given by the value of its reach under the assumption that $L$ is a subset of the unit ball in $\mathbb{R}^3$ (note that any embedding can be scaled and translated to fit in the unit ball). Again, the embedding thickness of the isotopy class is given by
$$T(L)=\sup_{L'\in[L]}\tau(L'). \qquad (1.3)$$
For a link $L$, the volume of the embedded tube of radius $\tau(L)$ is $\pi\,\ell(L)\,\tau(L)^2$ [20], and the tube is contained in the ball of radius $2$, yielding
$$\mathrm{rop}(L)=\frac{\pi\,\ell(L)\,\tau(L)^2}{\pi\,\tau(L)^3}\le \frac{\tfrac{4}{3}\pi\, 2^3}{\pi\,\tau(L)^3} \quad\Longrightarrow\quad \tau(L)\le\left(\frac{11}{\mathrm{rop}(L)}\right)^{\frac13}. \qquad (1.4)$$
In turn, a lower bound for the ropelength gives an upper bound for the embedding thickness and vice versa. For other measures of complexity of embeddings, such as distortion or Gromov-Guth thickness, see e.g. [21], [22].

Bounds for the ropelength of knots, and in particular the lower bounds, have been studied by many researchers; we only list a small fraction of these works here [4, 5, 7, 12, 11, 10, 14, 29, 37, 39, 30, 38]. Many of the results are applicable directly to links, but the case of links is treated in more detail by Cantarella, Kusner and Sullivan [7] and in the earlier work of Diao, Ernst, and Janse Van Rensburg [13] concerning the estimates in terms of the pairwise linking number. In [7], the authors introduce a cone surface technique and show the following estimate, for a link $L$ (defined as in (1.1)) and a given component $L_i$ [7, Theorem 11]:
$$\mathrm{rop}(L_i)\ge 2\pi+2\pi\sqrt{Lk(L_i,L)}, \qquad (1.5)$$
where $Lk(L_i,L)$ is the maximal total linking number between $L_i$ and the other components of $L$. A stronger estimate was obtained in [7] by combining the Freedman and He [17] asymptotic crossing number bound for the energy of divergence-free fields and the cone surface technique, as follows:
$$\mathrm{rop}(L_i)\ge 2\pi+2\pi\sqrt{Ac(L_i,L)}, \qquad \mathrm{rop}(L_i)\ge 2\pi+2\pi\sqrt{2\,g(L_i,L)-1}, \qquad (1.6)$$
where $Ac(L_i,L)$ is the asymptotic crossing number (c.f. [17]) and the second inequality is a consequence of the estimate $Ac(L_i,L)\ge 2\,g(L_i,L)-1$, where $g(L_i,L)$ is a minimal genus among surfaces embedded in the complement of the other components and spanning $L_i$, [17, p. 220] (in fact, Estimate (1.6) subsumes Estimate (1.5) since $Ac(L_i,L)\ge Lk(L_i,L)$).
A relation between $Ac(L_i,L)$ and the higher linking numbers of Milnor [32, 33] is unknown and appears difficult. The following question, concerning the embedding thickness, is stated in [16, p. 1424]:

###### Question A.

Let $L$ be an $n$-component link which is Brunnian (i.e. almost trivial in the sense of Milnor [32]). Let $M$ be the maximum value among Milnor's $\bar\mu$-invariants of $L$ with distinct indices. Is there a bound
$$\tau(L)\le c_n\, M^{-\frac1n}, \qquad (1.7)$$
for some constant $c_n$ independent of the link $L$? Is there a bound on the crossing number in terms of $M$?

Recall that the Milnor $\bar\mu$-invariants of $L$ are indexed by a subset $I;j$ of the component indices (where the last index $j$ plays a special role, c.f. [33]). For $n$-component links these are well-known link homotopy invariants (provided all the indices in $I;j$ are different), often referred to simply as Milnor linking numbers or higher linking numbers [32, 33]. The $\bar\mu$-invariants are defined as certain residue classes
$$\bar\mu_{I;j}(L)\equiv \mu_{I;j}(L)\ \mathrm{mod}\ \Delta_\mu(I;j), \qquad \Delta_\mu(I;j)=\gcd\bigl(\Gamma_\mu(I;j)\bigr), \qquad (1.8)$$
where the $\mu_{I;j}(L)$ are coefficients of the Magnus expansion of the $j$th longitude of $L$, and $\Gamma_\mu(I;j)$ is a certain subset of lower order Milnor invariants, c.f. [33]. Regarding $\bar\mu_{I;j}(L)$ as an element of $\mathbb{Z}_d$, with $d=\Delta_\mu(I;j)$ (or of $\mathbb{Z}$ whenever $d=0$), let us define
$$[\bar\mu_{I;j}(L)]:=\begin{cases}\min\bigl(\bar\mu_{I;j},\, d-\bar\mu_{I;j}\bigr) & \text{for } d>0,\\ |\bar\mu_{I;j}| & \text{for } d=0.\end{cases} \qquad (1.9)$$
Our main result addresses Question A for general $n$-component links (without the Brunnian assumption) as follows.

###### Theorem A.

Let $L$ be an $n$-component link and $\bar\mu(L)$ one of its top Milnor linking numbers; then
$$\mathrm{rop}(L)\ge \frac{4}{\sqrt{n}}\Bigl(\sqrt[n-1]{[\bar\mu(L)]}\Bigr)^{\frac34}, \qquad Cr(L)\ge \frac{1}{3(n-1)}\,\sqrt[n-1]{[\bar\mu(L)]}. \qquad (1.10)$$

In the context of Question A, the estimate of Theorem A transforms, using (1.4), as follows:
$$\tau(L)\le\left(\frac{11\sqrt{n}}{4}\right)^{\frac13} M^{-\frac{1}{4(n-1)}}.$$
Naturally, Question A can be asked for knots and links and lower bounds in terms of finite type invariants in general. Such questions have been raised for instance in [6], where the Bott-Taubes integrals [3, 40] have been suggested as a tool for obtaining estimates.

###### Question B.

Can we find estimates for the ropelength of knots/links in terms of their finite type invariants?

In the remaining part of this introduction let us sketch the basic idea behind our approach to Question B, which relies on the relation between the finite type invariants and the crossing number. Note that since $\mathrm{rop}(K)$ is scale invariant, it suffices to consider unit thickness knots, i.e. $K$ together with the unit radius tube neighborhood (i.e. $r(K)=1$). In this setting, $\mathrm{rop}(K)$ just equals the length $\ell(K)$ of $K$. From now on we assume unit thickness, unless stated otherwise. In [4], Buck and Simon gave the following estimates for $\ell(K)$ in terms of the crossing number $Cr(K)$ of $K$:
$$\ell(K)\ge\left(\frac{4\pi}{11}\,Cr(K)\right)^{\frac34}, \qquad \ell(K)\ge 4\sqrt{\pi\,Cr(K)}. \qquad (1.11)$$
Clearly, the first estimate is better for knots with large crossing number, while the second one can be sharper for low crossing number knots (which manifests itself for instance in the case of the trefoil). Recall that $Cr(K)$ is the minimal crossing number over all possible knot diagrams of $K$ within the isotopy class of $K$. The estimates in (1.11) are a direct consequence of the ropelength bound for the average crossing number $acr(K)$ of $K$ (i.e. the average of the crossing numbers of the diagrams of $K$ over all projections, see Equation (4.2)), proven in [4, Corollary 2.1]:
$$\ell(K)^{\frac43}\ge\frac{4\pi}{11}\,acr(K), \qquad \ell(K)^{2}\ge 16\pi\,acr(K). \qquad (1.12)$$
In Section 4, we obtain an analog of (1.11) for $n$-component links in terms of the pairwise crossing number $PCr(L)$ (see (4.10) and Corollary G; generally $PCr(L)\le Cr(L)$, as the individual components can be knotted), as follows:
$$\ell(L)\ge\frac{1}{\sqrt{n-1}}\left(\frac{3}{2}\,PCr(L)\right)^{\frac34}, \qquad \ell(L)\ge\frac{n\sqrt{16\pi}}{\sqrt{n^{2}-1}}\,\bigl(PCr(L)\bigr)^{\frac12}. \qquad (1.13)$$
For low crossing number knots, the Buck and Simon bound (1.11) was further improved by Diao [10] (see [10] for the precise form of this improved bound, referred to below as (1.14)). On the other hand, there are well known estimates for $Cr(K)$ in terms of finite type invariants of knots. For instance,
$$\frac14\,Cr(K)\bigl(Cr(K)-1\bigr)+\frac{1}{24}\ge |c_2(K)|, \qquad \frac18\,\bigl(Cr(K)\bigr)^{2}\ge |c_2(K)|. \qquad (1.15)$$
Lin and Wang [28] considered the second coefficient $c_2(K)$ of the Conway polynomial (i.e. the first nontrivial finite type invariant of knots) and proved the first bound in (1.15). The second estimate of (1.15) can be found in Polyak-Viro's work [36]. Further, Willerton, in his thesis [41], obtained an estimate for the "second", after $c_2(K)$, finite type invariant $V_3(K)$, of type $3$, as
$$\frac14\,Cr(K)\bigl(Cr(K)-1\bigr)\bigl(Cr(K)-2\bigr)\ge |V_3(K)|. \qquad (1.16)$$
In the general setting, Bar-Natan [2] shows that if $V$ is a type $n$ invariant then $|V(K)|$ is bounded by a polynomial of degree $n$ in $Cr(K)$. All these results rely on the arrow diagrammatic formulas for Vassiliev invariants developed in the work of Goussarov, Polyak and Viro [19]. Clearly, combining (1.15) and (1.16) with (1.11) or (1.14) immediately yields lower bounds for ropelength in terms of these Vassiliev invariants. One may take these considerations one step further and extend the above estimates to the case of the $n$th coefficient $c_n(K)$ of the Conway polynomial, with the help of arrow diagram formulas for $c_n$, obtained recently in [8, 9]. In