$Q$ of $Q^\vee$, the counterpart of the zigzag path $a_0 \cdots a_k$ and $\textnormal{opp}_i$ ($i=1,2$) in $Q^\vee$ have the following configuration. For each even $i$, the positive cycle $\tilde{F}^+_i$ in $Q$ mirror to $F_i^{+}$ contains the vertex $v$, while no two such cycles in $Q$ share an arrow. The same happens for the mirror cycles $\tilde{F}^-_i$ to $F_i^-$ (for odd $i$). If we walk around the vertex $v$ in $Q$ counterclockwise, we meet $\tilde{F}^+_{k-1}$, $\tilde{F}^-_{k-2}$, $\tilde{F}^+_{k-3}$, $\cdots$, $\tilde{F}^+_2$, $\tilde{F}^-_1$, $\tilde{F}^+_{0}$ in this order. The arrow $a_i$ in $Q$ sits between the two consecutive cycles $\tilde{F}^{\pm}_{i}$, $\tilde{F}^{\mp}_{i-1}$ in this sequence. (See the right side of Figure \ref{fig:imagewrapgen}.) Now, the path $\textnormal{opp}_1$ in $Q^\vee$ gives rise to a piecewise smooth curve $\gamma$ contained in $\mathbb{L}$ whose corners lie only on arrows of $\tilde{F}_{i}^+$ for even $i$. More precisely, $\gamma$ turns at the middle point of each arrow in $\tilde{F}_{i}^+$ except $a_{i}$ and $a_{i+1}$. (See the dotted curve in the upper right diagram of Figure \ref{fig:imagewrapgen}.) Observe that $\gamma$ intersects both $L_a$ and $L_b$ once. Checking the degrees of the intersections, we see that they should be $v_{t(a)}$ and $v_{h(b)}$, respectively. Therefore, we have a bounded domain in $\Sigma$ enclosed by $\gamma$, $L_a$ and $L_b$. Since $\alpha$ is a minimal Hamiltonian chord from $L_a$ to $L_b$ around the puncture $v$ in $\Sigma$, it lies in this bounded domain. Thus, after chopping off the corner around $v$ along $\alpha$, the domain becomes a holomorphic disc $T_1$ contributing to $m_2^{b,0,0} (v_{t(a)}, \alpha)$, and obviously there are no other contributions due to punctures. Note that the corners of $T_1$ along $\mathbb{L}$ are precisely $\textnormal{opp}_1$. Therefore, this disc defines a map \begin{equation}\label{eqn:m2discspunctured1} \eta : v_{t(a)} \mapsto \pm \textnormal{opp}_1 \cdot v_{h(b)}. \end{equation} A similar procedure produces a holomorphic disc $T_2$ which gives rise to \begin{equation}\label{eqn:m2discspunctured2} \eta(v_{h(a)}) = \pm \textnormal{opp}_2 \cdot v_{t(b)} (=\pm \zeta (h(a)) ). \end{equation} We finally show that the signs in \eqref{eqn:m2discspunctured1} and \eqref{eqn:m2discspunctured2} are the same. We divide the computation of the sign difference into the following two steps. \begin{enumerate} \item[(i)] Ignoring special points, the sign of $T_1$ is positive since its boundary orientation agrees with that of $L_a$, $L_b$ and $\mathbb{L}$. The boundary orientation of $T_2$ agrees with those of the Lagrangians only along $L_a$ and $L_b$. So, the sign of $T_2$ is $(-1)^{|\textnormal{opp}_2|}$ where $|\textnormal{opp}_2|$ is the number of arrows contained in $\textnormal{opp}_2$. Observe that \begin{equation}\label{eqn:morlevelsigndiff1} |\textnormal{opp}_2| \equiv \sum_i |\tilde{F}_i^{-}| \mod 2 \end{equation} where $|\tilde{F}_i^{-}|$ is the number of edges of $\tilde{F}_i^{-}$. \item[(ii)] Now, we take the special points into account. Recall that precisely one special point is placed on the boundary of each negatively oriented polygon (for the potential) with an odd number of edges. Therefore, the sign difference between $T_1$ and $T_2$ due to special points is \begin{equation}\label{eqn:morlevelsigndiff2} (-1)^{\sum_{|\tilde{F}_i^{-}| \equiv 1 } 1} \end{equation} where the sum in the exponent is taken over all $\tilde{F}_i^{-}$ with an odd number of edges.
\end{enumerate} The sign differences \eqref{eqn:morlevelsigndiff1} and \eqref{eqn:morlevelsigndiff2} cancel each other, since $\sum_i |\tilde{F}_i^{-}|$ and the number of cycles $\tilde{F}_i^{-}$ with an odd number of edges have the same parity. This completes the proof. \end{proof}

In what follows we show that the functor is an embedding, assuming that $Q^\vee$ is zigzag consistent, using the following result of Bocklandt on the matrix factorization category. We first recall the zigzag consistency condition introduced by Bocklandt. He showed that the Jacobi algebra of a zigzag consistent dimer is 3-Calabi-Yau \cite[Theorem 4.4]{Bocklandt}. Similar results were obtained by Ishii and Ueda \cite{Ishii-Ueda}. For the construction of generalized mirrors and functors we do not need this condition. However, the condition is needed on the B-side to ensure that the functor is an embedding (see Section \ref{sec:punctured-fctor}).

\begin{defn}[Zigzag consistency \cite{B-consistency}] \label{def:consistent} A dimer model is called zigzag consistent if for each arrow $e$, the zig and the zag rays at $e$ in the universal cover only meet at $e$. \end{defn}

\begin{prop}[\cite{Bocklandt}] \label{prop:basis-hom} Suppose $Q^\vee$ is zigzag consistent. Then for every pair of arrows $a$ and $b$, the set $\{\zeta^{(i)}: i \in \mathbb{Z}\}$ defined in Definition \ref{def:zeta} forms a basis of the cohomology $H^*(\mathrm{Hom}(P_a,P_b),\delta)$. In particular $H^*(\mathrm{Hom}(P_a,P_b),\delta)$ vanishes when $a$ and $b$ do not share a common vertex in $Q$. \end{prop}

As a consequence, when $a$ and $b$ share a common vertex in $Q$, the induced map on cohomology $\mathrm{Hom}(U_a,U_b) \to H^*(\mathrm{Hom}(P_a,P_b),\delta)$ is an isomorphism. When $a$ and $b$ do not share a common vertex in $Q$, the statement remains true since both sides are zero. Hence we have

\begin{cor} Suppose $Q^\vee$ is zigzag consistent. Then the functor $\mathcal{F}: \mathrm{Fuk}(\Sigma - Q_0) \to \mathrm{MF}(\mathrm{Jac}(Q^\vee),W)$ is an embedding. \end{cor}

\section{Mirrors of Calabi-Yau threefolds associated with quadratic differentials} \label{sec:CY3} The mirror construction and the natural functor formulated in this paper have important applications to Calabi-Yau threefolds associated to Hitchin systems constructed by Diaconescu-Donagi-Pantev \cite{DDP}. Using the functor, Fukaya categories of the Calabi-Yau threefolds are transformed to derived categories of certain Calabi-Yau algebras known as Ginzburg algebras \cite{Ginzburg}. In this paper we will focus on the $\mathrm{SL}(2,\mathbb{C})$ case. The Calabi-Yau algebra side has the advantage of being more linear and is rather well understood in the existing literature; see for instance \cite{KS_Ainfty,KS-stab,CEG,Labardini}. In particular, stability conditions and wall-crossing of Donaldson-Thomas invariants of the Fukaya categories can be understood by using Ginzburg algebras and the functor. Kontsevich-Soibelman \cite{KS-stab} developed a deep theory of stability structures on Calabi-Yau algebras in dimension three and their relations with cluster algebras. Recent mathematical studies in this direction were largely motivated by the works of Gaiotto-Moore-Neitzke \cite{GMN10,GMN13,GMN-network}, who studied wall-crossing of BPS states over the moduli spaces of Hitchin systems and the relations with hyper-K\"ahler metrics. Mathematically the BPS states are stable objects of Fukaya categories of the associated (non-compact) Calabi-Yau threefolds.
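For the reader's convenience, we illustrate with a standard toy example (not itself arising from a dimer considered in this paper) how relations of algebras such as $\mathrm{Jac}(Q^\vee)$, or of the quiver algebra $\widehat{\Lambda Q}/(\partial \Phi)$ appearing below, are cut out by cyclic derivatives of a potential. Take the quiver with one vertex and three loops $x,y,z$ and the potential $\Phi_{\mathrm{toy}}=xyz-xzy$. Then
\begin{equation*}
\partial_x\Phi_{\mathrm{toy}}=yz-zy,\qquad \partial_y\Phi_{\mathrm{toy}}=zx-xz,\qquad \partial_z\Phi_{\mathrm{toy}}=xy-yx,
\end{equation*}
so the associated Jacobi algebra
\begin{equation*}
\mathbb{C}\langle x,y,z\rangle/\left(\partial_x\Phi_{\mathrm{toy}},\,\partial_y\Phi_{\mathrm{toy}},\,\partial_z\Phi_{\mathrm{toy}}\right)\;\cong\;\mathbb{C}[x,y,z]
\end{equation*}
is the polynomial ring in three (now commuting) variables. The relations of the mirrors used below arise in the same way, with the dimer superpotential in place of $\Phi_{\mathrm{toy}}$.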
Later Smith \cite{Smith} computed the Fukaya categories for $\mathrm{SL}(2)$ systems, and constructed an embedding by hand from the Fukaya categories to derived categories of Ginzburg algebras. Moreover, Bridgeland-Smith \cite{BS} provided a detailed study of stability conditions of derived categories of Ginzburg algebras. Furthermore, Fock-Goncharov \cite{FG1,FG2} provided a combinatorial approach to study $\mathrm{SL}(m)$ (or $\mathbb{P} \mathrm{GL}(m)$) systems for general $m$ using techniques of cluster algebras.

The approach taken in this paper is constructive and functorial. We apply our general construction to produce the noncommutative mirror $\mathcal{A} = \widehat{\Lambda Q} / (\partial \Phi)$. Then it automatically follows from our general theory that there exists a canonically defined injective functor from the Fukaya category to the mirror derived category (which is in the reverse direction of the embedding by Smith). The functor is the crucial object for studying stability conditions and Donaldson-Thomas invariants for the Fukaya category. The computations in this section rely essentially on \cite{Smith}. From the viewpoint of this paper, the quiver algebra $\mathcal{A}$ can be regarded as a mirror of $X$, in the sense that Lagrangian submanifolds of $X$ (symplectic geometry) are transformed to quiver representations (algebraic geometry).

\begin{remark} The notations for the superpotentials here are different from those in \cite{Smith}. The spacetime superpotential $\Phi$ here was denoted by $W$ in \cite{Smith}. In this paper $W$ denotes the worldsheet superpotential, which vanishes in this case. \end{remark}

\subsection{Non-compact Calabi-Yau threefolds associated with quadratic differentials} We first briefly recall the Calabi-Yau geometry from \cite{DDP, Smith}. Let $S$ be a closed Riemann surface of genus $g(S) > 0$, with a set $M \not= \emptyset$ of marked points. An $\mathrm{SL}(m,\mathbb{C})$ Hitchin system on $(S,M)$ consists of the data $(E,D,\phi)$ where $E$ is a holomorphic vector bundle with a Hermitian metric, $D$ is the metric connection, and $\phi \in \Omega^1(\mathrm{End} E)$ satisfies that $D + \phi$ is a flat connection (with certain asymptotic behavior at the marked points). The set of eigenvalues of the one-form valued skew-Hermitian matrix $\phi$ produces a spectral curve $\Sigma \subset K_S$, whose defining equation takes the form \begin{equation} \label{eq:spectral} \eta^m + \sum_{r=2}^m \phi_r \cdot \eta^{m-r} = 0 \end{equation} for $\eta \in K_S$, where the $\phi_r$ are meromorphic $r$-differentials, that is, sections of $K_S^{\otimes r}$, with poles at $M$. The projection $K_S \to S$ restricts to a ramified $m:1$ covering map $\Sigma \to S$. The \emph{associated (non-compact) Calabi-Yau threefold $X$ is a conic fibration} over the total space of $K_S$ with discriminant locus being $\Sigma
subband delay transform of closure phases. Accepted values for the string are 'rect' or 'RECT' (for rectangular), 'bnw' and 'BNW' (for Blackman-Nuttall), and 'bhw' or 'BHW' (for Blackman-Harris). Default=None sets it to 'rect' (rectangular window)

fftpow [scalar] the power to which the FFT of the window will be raised. The value must be a positive scalar. Default = 1.0

pad [scalar] padding fraction relative to the number of frequency channels for closure phases. Value must be a non-negative scalar. For example, a pad of 1.0 pads the frequency axis with zeros of the same width as the number of channels. After the delay transform, the transformed closure phases are downsampled by a factor of 1+pad. If a negative value is specified, the delay transform will be performed with no padding. Default=None sets the padding factor to 1.0

datapool [string] Specifies which data set is to be Fourier transformed

visscaleinfo [dictionary] Dictionary containing reference visibilities based on which the closure phases will be scaled to units of visibilities. It contains the following keys and values:
    'vis' [numpy array or instance of class InterferometerArray] Reference visibilities from the baselines that form the triad. It can be an instance of class RI.InterferometerArray or a numpy array. If an instance of class InterferometerArray, the baseline triplet must be set in key 'bltriplet' and the value in key 'lst' will be ignored. If the value under this key 'vis' is set to a numpy array, it must be of shape (nbl=3, nlst_vis, nchan). In this case the value under key 'bltriplet' will be ignored. The nearest LST will be looked up and applied after smoothing along LST based on the smoothing parameter 'smooth'
    'bltriplet' [numpy array] Will be used in searching for matches to these three baseline vectors if the value under key 'vis' is set to an instance of class InterferometerArray. However, if the value under key 'vis' is a numpy array, this key 'bltriplet' will be ignored.
    'lst' [numpy array] Reference LST (in hours). It is of shape (nlst_vis,). It will be used only if the value under key 'vis' is a numpy array, otherwise it will be ignored and read from the instance of class InterferometerArray passed under key 'vis'. If the specified LST range does not cover the data LST range, those LST will contain NaN in the delay spectrum
    'smoothinfo' [dictionary] Dictionary specifying smoothing and/or interpolation parameters. It has the following keys and values:
        'op_type' [string] Specifies the interpolating operation. Must be specified (no default). Accepted values are 'interp1d' (scipy.interpolate), 'median' (skimage.filters), 'tophat' (astropy.convolution) and 'gaussian' (astropy.convolution)
        'interp_kind' [string (optional)] Specifies the interpolation kind (if 'op_type' is set to 'interp1d'). For accepted values, see scipy.interpolate.interp1d()
        'window_size' [integer (optional)] Specifies the size of the interpolating/smoothing kernel. Only applies when 'op_type' is set to 'median', 'tophat' or 'gaussian'. The kernel is a tophat function when 'op_type' is set to 'median' or 'tophat'. It refers to the FWHM when 'op_type' is set to 'gaussian'

resample [boolean] If set to True (default), resample the delay spectrum axis to independent samples along the delay axis. If set to False, return the results as is even if they may be oversampled and not all samples may be independent

method [string] Specifies the Fourier transform method to be used.
Accepted values are 'fft' (default) for FFT and 'nufft' for non-uniform FFT

apply_flags [boolean] If set to True (default), weights determined from flags will be applied. If False, no weights from flagging will be applied, and thus even flagged data will be included

Outputs:

A dictionary that contains the oversampled (if resample=False) or resampled (if resample=True) delay spectrum information. It has the following keys and values:
'freq_center' [numpy array] contains the center frequencies (in Hz) of the frequency subbands of the subband delay spectra. It is of size n_win. It is roughly equivalent to redshift(s)
'freq_wts' [numpy array] Contains frequency weights applied on each frequency sub-band during the subband delay transform. It is of size n_win x nchan.
'bw_eff' [numpy array] contains the effective bandwidths (in Hz) of the subbands being delay transformed. It is of size n_win. It is roughly equivalent to width in redshift or along line-of-sight
'shape' [string] shape of the window function applied. Accepted values are 'rect' (rectangular), 'bhw' (Blackman-Harris), 'bnw' (Blackman-Nuttall).
'fftpow' [scalar] the power to which the FFT of the window was raised. The value is a positive scalar with default = 1.0
'npad' [scalar] Number of zero-padded channels before performing the subband delay transform.
'lags' [numpy array] lags of the subband delay spectra after padding in frequency during the transform. It is of size nlags=nchan+npad if resample=True, where npad is the number of frequency channels padded, as specified under the key 'npad'. If resample=False, nlags = number of delays after resampling to only independent delays. The lags roughly correspond to k_parallel.
'lag_kernel' [numpy array] delay transform of the frequency weights under the key 'freq_wts'. It is of size n_win x nlst x ndays x ntriads x nlags. nlags=nchan+npad if resample=True, where npad is the number of frequency channels padded, as specified under the key 'npad'. If resample=False, nlags = number of delays after resampling to only independent delays.
'lag_corr_length' [numpy array] It is the correlation timescale (in pixels) of the subband delay spectra. It is proportional to the inverse of the effective bandwidth. It is of size n_win. The unit size of a pixel is determined by the difference between adjacent pixels in lags under key 'lags', which in turn is effectively the inverse of the effective bandwidth of the subband specified in bw_eff
'whole' [dictionary] Delay spectrum results corresponding to bispectrum phase in 'prelim' key of attribute cpinfo. Contains the following keys and values:
    'dspec' [dictionary] Contains the following keys and values:
        'twts' [numpy array] Weights from time-based flags that went into time-averaging. Shape=(nlst,ndays,ntriads,nchan)
        'mean' [numpy array] Delay spectrum of closure phases based on their mean across time intervals. Shape=(nspw,nlst,ndays,ntriads,nlags)
        'median' [numpy array] Delay spectrum of closure phases based on their median across time intervals. Shape=(nspw,nlst,ndays,ntriads,nlags)
'submodel' [dictionary] Delay spectrum results corresponding to bispectrum phase in 'submodel' key of attribute cpinfo. Contains the following keys and values:
    'dspec' [numpy array] Delay spectrum of closure phases. Shape=(nspw,nlst,ndays,ntriads,nlags)
'residual' [dictionary] Delay spectrum results corresponding to bispectrum phase in 'residual' key of attribute cpinfo after subtracting 'submodel' bispectrum phase from that of 'prelim'.
It contains the following keys and values:
    'dspec' [dictionary] Contains the following keys and values:
        'twts' [numpy array] Weights from time-based flags that went into time-averaging. Shape=(nlst,ndays,ntriads,nchan)
        'mean' [numpy array] Delay spectrum of closure phases based on their mean across time intervals. Shape=(nspw,nlst,ndays,ntriads,nlags)
        'median' [numpy array] Delay spectrum of closure phases based on their median across time intervals. Shape=(nspw,nlst,ndays,ntriads,nlags)
'errinfo' [dictionary] It has two keys 'dspec0' and 'dspec1', each of which is a dictionary with the following keys and values:
    'twts' [numpy array] Weights for the subsample difference. It is of shape (nlst, ndays, ntriads, nchan)
    'mean' [numpy array] Delay spectrum of the subsample difference obtained by using the mean statistic. It is of shape (nspw, nlst, ndays, ntriads, nlags)
    'median' [numpy array] Delay spectrum of the subsample difference obtained by using the median statistic. It is of shape (nspw,
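Example (illustrative sketch only, not taken from this module's documentation: the object name cpDS, the method name FT, and the input arrays vis_array and lst_hours below are placeholders, and additional subband arguments described before this excerpt, such as effective bandwidths and window centers, would normally also be supplied):

>>> smoothinfo = {'op_type': 'interp1d', 'interp_kind': 'linear'}
>>> visscaleinfo = {'vis': vis_array,        # numpy array of shape (3, nlst_vis, nchan)
...                 'lst': lst_hours,        # numpy array of shape (nlst_vis,)
...                 'smoothinfo': smoothinfo}
>>> ds = cpDS.FT(shape='bhw', fftpow=2.0, pad=1.0, datapool='prelim',
...              visscaleinfo=visscaleinfo, method='fft',
...              resample=True, apply_flags=True)
>>> ds['lags'].shape                         # delays, roughly k_parallel
>>> ds['whole']['dspec']['mean'].shape       # (nspw, nlst, ndays, ntriads, nlags)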
out synthetic modeling of static ΔCFS upon varying earthquake stress drop and regional stress using COULOMB3.3. In accord with preceding studies, the results show positive ΔCFS along the fault when the stress drop is comparable to the regional stress. Yet, positive ΔCFS would take place at the top and at the base of the fault, expanding toward the center of the fault (where the hypocenter is assumed) as the stress drop approaches the regional stress in magnitude. This might explain the separated clusters of aftershocks at different depths found in some cases, such as the M6.5 2016 Pidie Jaya aftershocks.

Topic: Earth and Planetary Sciences

75 Utilization of Double-Difference Tomography for Geothermal Exploration: 3D Velocity Structure Interpretation and Fluid Type Determination
Arifa Hijriani (a*), David P. Sahara (b), Andri D. Nugraha (b), Irvan Ramadhan (c), Ridwan P. Sidik (c)
a) Geophysical Engineering, Institut Teknologi Bandung, Ganeca 10, Bandung 40132, Jawa Barat, Indonesia *arifahijriani[at]gmail.com
b) Global Geophysics Research Group, Institut Teknologi Bandung, Ganeca 10, Bandung 40132, Jawa Barat, Indonesia
c) PT. Supreme Energy, Menara Sentraya 23rd floor, Jakarta, Indonesia

Abstract
Geothermal surface exploration entails a multi-geoscientific process aimed at defining the geometry and characteristics of the geothermal reservoir prior to drilling. Lately, microseismic event monitoring is becoming a standard procedure for inferring the structure of the potential geothermal reservoir. However, good seismic station coverage and abundant seismic events are required in order to map the subsurface structure. Taking advantage of the well-designed seismic network deployed at the ARD geothermal field prior to its first drilling, a study of microearthquake tomography for 3D reservoir structure delineation is performed in this field. A seismic network of 26 stations was set up for more than eight months from August 2011 within a 20 km radius of the center of the expected reservoir. A total of 637 local events were detected and located, which is a very high level of seismicity for a region that is not yet under geothermal development. The purpose of this study is to construct a 3D seismic velocity structure using double-difference tomography and to infer fluid properties, i.e. steam and brine, from the ratio of the P- and S-wave velocities. An advanced waveform cross-correlation technique is applied to improve the quality of the arrival time picking. Double-difference tomography is used due to its ability to reduce uncertainties of the model associated with picking and velocity structure. As a result, a clearer image of the three-dimensional P- and S-wave velocity structure and Vp/Vs ratio within the geothermal reservoir is expected. This image could provide key information for defining the strategy to further develop the geothermal field.

Topic: Earth and Planetary Sciences

76 3D SEISMIC TOMOGRAPHY TO IMAGE THE SUBSURFACE STRUCTURE OF IY GEOTHERMAL FIELD USING DOUBLE DIFFERENCE METHOD AND WAVEFORM CROSS-CORRELATION: PRELIMINARY RESULTS
Indriani Yunitasari1, Andri Dian Nugraha2, Mohammad Rachmat Sule3
1Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jl. Ganeca 10, Bandung 40132, Jawa Barat, Indonesia
2Global Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jl.
Ganeca 10, Bandung 40132, Jawa Barat, Indonesia
3Exploration and Engineering Seismology Research Group, Institut Teknologi Bandung, Jl. Ganeca 10, Bandung 40132, Jawa Barat, Indonesia

Abstract
IY Geothermal Field is located in the southern part of Bandung, West Java, Indonesia. This geothermal field is dominated by liquid and vapour conditions. In this study, we used microearthquake waveform and catalog data around the geothermal field to determine the 3-D seismic velocity through tomographic inversion. The data were recorded by 15 stations around the IY geothermal field area. During ten months of recording, 926 microearthquake events were identified. We determined the subsurface 3D seismic velocity structure by using double-difference tomography (tomoDD) in order to delineate the structure around the reservoir. To improve the distribution of the initial hypocenters from the catalog data, we relocated the hypocenters using the double-difference method and used waveform cross-correlation to refine the P- and S-wave arrival times as input for the tomographic inversion. We updated the data catalog and applied a correlation-coefficient threshold of about 0.732-0.822, depending on the updated waveforms from the cross-correlation method. Of the 15 stations, two used a threshold value above 0.8 for the P-wave and two for the S-wave, whereas the other stations used values of 0.732-0.8. The ongoing work is to determine the seismic velocity structure by applying the tomographic inversion. We also observe clustering of events near a well in the geothermal field. The result shows the relocated hypocenters together with the 3-D velocity model, used to determine geothermal zones, such as brine and steam, depending on the values of Vp, Vs, and the Vp/Vs ratio.

Topic: Earth and Planetary Sciences

77 Seismic Tomography under Mt. Sinabung Using Waveform Cross-Correlation Arrival Time Data from October 2010 - December 2011: Preliminary Results
Zakaria S. Laksmana (a*), Andri Dian Nugraha (b), Sri Hidayati (c)
a) Geophysical Engineering, Institut Teknologi Bandung, Jl. Ganeca 10, Bandung 40132, West Java, Indonesia *nivalyx[at]gmail.com
b) Global Geophysics Research Group, Institut Teknologi Bandung, Jl. Ganeca 10, Bandung 40132, West Java, Indonesia
c) Center of Volcanology and Geological Hazard Mitigation, Jl. Diponegoro 57, Bandung 40122, Indonesia

Abstract
Sinabung Volcano is an active stratovolcano located in North Sumatera, Indonesia, with frequent volcanic-related activity over the past years. Based on seismicity monitoring conducted from October 2010 to December 2011, a high frequency of microseismic events (especially volcano-tectonic events) was observed; our data catalog records a total of more than 400 volcano-tectonic events spanning this period. Throughout our study, we have so far observed that Sinabung Volcano's volcano-tectonic events were concentrated northeast of the volcano's peak, with a cluster of events observed 2 kilometers beneath sea level. In regard to this, we have also been conducting a waveform cross-correlation process to improve the data catalog's picking results, in order to relocate the events and improve their hypocentral locations so that they can be more accurately represented.
The updated data are then used as input for delay-time tomography using the SIMULPS method, and the output tomogram is then interpreted according to Sinabung Volcano's properties, so that we can better understand the evolution of Sinabung Volcano's sub-surface features over the aforementioned period and gain a better understanding of Sinabung Volcano's characteristics for future disaster mitigation purposes.

Topic: Earth and Planetary Sciences

78 3D Seismic Velocity around Source Region of Mw 6.5, 2016 Pidie Jaya Earthquake from Double Difference Tomography and Waveform Cross Correlation: Preliminary Results
1Rianty Kusuma Dewi, 1Andri Dian Nugraha, 2Rexha Verdhora Ry
1. Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesa No 10, Bandung, 40132
2. Global Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesa No 10, Bandung, 40132

Abstract
The Pidie Jaya earthquake occurred on 7 December 2016. It caused extensive damage and 102 casualties, as reported by the National Disaster Management Authority (BNPB). According to the Indonesian Agency for Meteorology, Climatology and Geophysics (BMKG), the main shock happened at 05:03:36 WIB at 5.42 north latitude and 96.24 east longitude with a magnitude of 6.5 (depth of 15 km). A week after, a research collaboration of ITB, Unsyiah, BMKG and GFZ Potsdam (Germany) installed 9 seismometers around Pidie Jaya, operating from 14 December 2016 to 16 January 2017. A total of 302 aftershock events were identified. In this study, we have been conducting (i) determination of hypocenter locations using a probabilistic grid-search method with the NLLoc program, (ii) updating of the 1D seismic velocity model using Joint Hypocenter-Velocity Determination (JHD) with the Velest program, (iii) refinement of the picked P- and S-wave arrival times using the waveform cross-correlation method, and (iv) determination of the 3D Vp, Vs, and Vp/Vs structure in the region using the double-difference tomography method with the TomoDD program. The double-difference tomographic inversion gives the clearest image at 15 km depth.
base graph and the number of links $m_{k}$ in the $k$-th subgraph is independent of $k$. This allows us to write $m_{k}=\gamma g(k)$, where $\gamma$ is a constant value which is the same for every vertex of the base graph $\mathcal{G}_0$. Then, the summation in Eq. (\ref{eq: H ramificato}) becomes: \begin{equation} \sum_{k\in\mathcal{G}_{0}}m_k\left(R_{r_f,r_z}-R_{r_i,r_z}\right)=2\gamma\left[H_{0}\left(r_{i},r_{f}\right)-m_{0}R_{r_{f},r_{i}}\right], \label{eq: futuranullaa} \end{equation} and, using this result, Eq. (\ref{eq: H ramificato}) becomes: \begin{eqnarray} H\left(i,f\right) & = & H_{I}\left(i,r_{i}\right)+2\gamma H_{0}\left(r_{i},r_{f}\right)+H_{F}\left(r_{f},f\right)+\label{eq: per pettine chiuso}\\ & & +2\left(m-m_{f}\right)R\left(f,r_{f}\right)+\left(m-2\gamma m_{0}\right)R_{r_{i},r_{f}},\nonumber \end{eqnarray} where $ H_{I}\left(a,b\right)$ and $ H_{F}\left(a,b\right)$ represent the mean time taken by a walker to first go from a node $a$ to a node $b$ (with $a,b\in\mathcal{I}$ and $a,b\in\mathcal{F}$ respectively) without ever exiting from $\mathcal{I}$ and $\mathcal{F}$ respectively; analogously, $ H_0\left(a,b\right)$ represents the mean time to first reach $b$ starting from $a$ (with $a,b\in\mathcal{G}_0$) while moving only on the base graph. In this case the mean time to go from $i$ to $f$ can be expressed as a sum of three hitting times, plus some constant terms. Another interesting case appears when the positions of the root $r_i$ of the starting vertex and the root $r_f$ of the ending one display a symmetry such that \begin{equation} H\left(r_i,r_f\right)=H\left(r_f,r_i\right); \label{Condicio} \end{equation} in this particular case Eq. (\ref{eq: H ramificato}) becomes \begin{eqnarray} H(i,f)& = & H_{i}\left(i,r_{i}\right)+H_{f}\left(r_{f},f\right)+ \nonumber\\ &+&2\left(m-m_{f}\right)R\left(f,r_{f}\right)+mR_{r_{i},r_{f}}. \label{eq: Piusemplice} \end{eqnarray} This formula allows us to calculate the hitting times for any ``symmetric structure'' (in the sense of Eq. (\ref{Condicio})), like a Sierpinski gasket fibered with linear chains (see Fig. \ref{fig: Sierp}). Let us consider a Sierpinski gasket of generation $n$ and, as fibers, linear chains with $L$ vertices. The total number of vertices in the base graph is \cite{Sierpinsky} $|V|^{(n)}_0= \frac{3 \left(3^n +1 \right)}{2}$ and the total number of links is \cite{Sierpinsky} $m^{(n)}_0= 3^{n+1}$. For instance, let us evaluate the mean time to first reach the node $f$ belonging to a tooth stemming from a corner, starting from a node $i$ belonging to a tooth stemming from another corner (see Fig. \ref{fig: Sierp}). Neri et al. \cite{Sierpinsky2} showed that $R_{r_i,r_f}=\frac{2\cdot 5^n}{3^{n+1}}$. Using Eq. (\ref{eq: Piusemplice}) it is straightforward to show that $H\left(i,f\right)$ reads \begin{equation*} H_{\textrm{Max}} = 3L^2+\left(1+\frac{L}{2}\right)\left(3^{n+1}+2\cdot 5^{n}\right)+\left(\frac{5}{3}\right)^n L . \end{equation*} \begin{figure} \includegraphics[width=\linewidth]{SierpinksiFibrato.pdf} \caption{Example of a Sierpinski gasket of generation 3 fibered with linear chains with $5$ vertices.} \label{fig: Sierp} \end{figure} \subsection{Bidimensional Combs} \label{sub:UNICUM} Bidimensional combs are branched structures where the base graph $\mathcal{G}_{0}$ is a one-dimensional graph, like a linear chain or a ring, usually called the backbone, and the fiber graphs are linear chains, usually called teeth; see Fig. \ref{fig: Pettini}.
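Before specializing to the two cases below, we remark that closed-form hitting-time expressions of this kind are easy to cross-check numerically. The following minimal Python sketch (added only for illustration; it assumes an open comb with a path backbone of $2L+1$ sites and, at every backbone site, one tooth of $\alpha L$ sites on each side, which is how we read Fig. \ref{fig: Pettini}) computes the mean first-passage time by solving the standard linear system $(I-P)h=\mathbf{1}$ with $h(f)=0$; for $L=3$, $\alpha=1$ it returns $288=8\alpha L^{3}+(4+4\alpha)L^{2}$ (up to numerical precision), in agreement with the backbone-crossing time reported below.
\begin{verbatim}
import numpy as np

def mean_hitting_time(A, i, f):
    # Mean first-passage time i -> f for the simple random walk on the
    # graph with adjacency matrix A: solve (I - P) h = 1 on V \ {f}.
    P = A / A.sum(axis=1, keepdims=True)
    keep = [v for v in range(len(A)) if v != f]
    h = np.linalg.solve(np.eye(len(keep)) - P[np.ix_(keep, keep)],
                        np.ones(len(keep)))
    return h[keep.index(i)]

def open_comb(L, alpha=1):
    # Backbone sites 0..2L (site x of the text corresponds to x + L here);
    # each backbone site carries two teeth of alpha*L sites (one per side).
    nb, tl = 2 * L + 1, alpha * L
    edges = [(x, x + 1) for x in range(nb - 1)]
    nxt = nb
    for x in range(nb):
        for _ in range(2):
            prev = x
            for _ in range(tl):
                edges.append((prev, nxt))
                prev, nxt = nxt, nxt + 1
    A = np.zeros((nxt, nxt))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

L = 3
print(mean_hitting_time(open_comb(L), i=0, f=2 * L))   # -> 288 (up to rounding)
\end{verbatim}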
Here, we consider two different cases according to the boundary conditions applied to the backbone: we call ``Bidimensional Open Combs'' those combs whose backbone is a linear chain, and ``Bidimensional Closed Combs'' those whose backbone is a ring; teeth are always taken as open (i.e. reflecting at their boundaries). As shown in Fig. \ref{fig: Pettini}, the size of the backbone is $2L+1$, and each linear chain departing from the backbone counts $\alpha L$ vertices, in such a way that $\alpha$ measures the degree of inhomogeneity along the two directions (e.g., when $\alpha=1$ the comb is square). Using Eq. (\ref{eq: H ramificato}) we are now able to calculate the value of $H\left(i,f\right)$ for these graphs. In the following, exploiting the fact that combs are embedded in $2$-dimensional lattices, the position of an arbitrary point $k$ will be denoted as $\left(x_k,y_k\right)$, where $x_k$ indicates the projection of $k$ on the backbone, and $y_k$ its position along the related tooth. Let us start with open combs, for which we apply Eq. (\ref{eq: H ramificato}); with some algebra we get \footnote{What we really find from Eq. (\ref{eq: H ramificato}) is $H\left(i,f\right)-2L\left(1+\alpha+2\alpha L\right)\left[-1+|y_{i}|+|y_{f}|-|y_{i}-y_{f}|\right]\delta_{x_{i},x_{f}}$. This difference is due to the hypothesis that $x_i\neq x_f$, namely that the starting point and the final point belong to different fiber graphs. If we use Eq. (\ref{eq: per pettine chiuso}) instead of Eq. (\ref{eq: H ramificato}), we introduce an additional error of $\left(x_{i}-x_{f}\right)\left(x_{i}+x_{f}\right)$ due to the different value of $g\left(x,0\right)$ between $x=\pm L$, where $g\left(\pm L,0\right)=3$, and $x\neq\pm L$, where $g\left(x,0\right)=4$.}: \begin{eqnarray} \nonumber H\left(i,f\right) & = & mR_{i,f}+\left(y_{f}^{2}-y_{i}^{2}\right)+\left(2\alpha L+1\right)\left(x_{f}^{2}-x_{i}^{2}\right)+\\ \label{eq: Comb 2D Open} & +& \left(|y_{f}|-|y_{i}|\right)\left(4\alpha L^{2}+2L\right), \end{eqnarray} where the resistance between two generic points $a=\left(x_a,y_a\right)$ and $b=\left(x_b,y_b\right)$ for open combs is $$R_{a,b}=\delta_{x_a,x_b}|y_a-y_b|+\left(1-\delta_{x_a,x_b}\right)\left(|x_a-x_b|+|y_a|+|y_b|\right).$$ From this equation we can extract important information, for example the mean time to cross the whole backbone and the mean time to ``climb'' a tooth. These quantities are, respectively, \[ H\left(\left\{ -L,0\right\} \rightarrow\left\{ L,0\right\} \right)=L^{3}\left(8\alpha\right)+L^{2}\left(4+4\alpha\right), \] and \[ H\left(\left\{ 0,0\right\}\rightarrow\left\{ 0,\alpha L\right\} \right)=L^{3}\left(8\alpha^{2}\right)+L^{2}\left(4\alpha+3\alpha^{2}\right). \] Using Eq. (\ref{eq: H ramificato}) we can also calculate the Hitting Time for closed combs, $H_{2}^{\circlearrowleft}\left(i,f\right)$, where we use the superscript $^\circlearrowleft$ to emphasize that this quantity refers to a closed comb \footnote{Once again, what we really find using Eq. (\ref{eq: H ramificato}) is $H_{2}^{\circlearrowleft}\left(i,f\right)-\left(1+2L\right)\left(1+2\alpha L\right)\left[|y_{i}|+|y_{f}|-|y_{i}-y_{f}|\right]\delta_{x_{i},x_{f}}$. This difference is due to the hypothesis that $x_i\neq x_f$. This time, if we use Eq. (\ref{eq: per pettine chiuso}) instead of Eq.
(\ref{eq: H ramificato}), we do not introduce any additional error.}: \begin{equation} H_{2}^{\circlearrowleft}\left(i,f\right)=mR_{if}^\circlearrowleft+\left(|y_{f}|-|y_{i}|\right)\left[4\alpha L^{2}+2L+1\right]+\left(y_{f}^{2}-y_{i}^{2}\right),\label{eq: Closed Comb 2d} \end{equation} where the resistance between two generic points $a=\left(x_a,y_a\right)$ and $b=\left(x_b,y_b\right)$ for closed combs is \begin{eqnarray*} R_{a,b}^{\circlearrowleft} & = &\delta_{x_a,x_b}|y_a-y_b|+\left(1-\delta_{x_a,x_b}\right)\times\\ & & \times\left(|x_a-x_b|+|y_a|+|y_b|-\frac{\left(x_a-x_b\right)^2}{2L+1}\right). \end{eqnarray*} From Eq. (\ref{eq: Closed Comb 2d}) we can extract the time needed to cross half of the backbone, \[ H_{2}^{\circlearrowleft}\left(\left\{ 0,0\right\} \rightarrow\left\{ L,0\right\} \right)=L^{3}\left(6\alpha\right)+L^{2}\left(3+2\alpha\right)+L, \] and the time to climb a tooth, \[ H_{2}^{\circlearrowleft}\left(\left\{ 0,0\right\} \rightarrow\left\{ 0,\alpha L\right\} \right)=L^{3}\left(8\alpha^{2}\right)+L^{2}\left(4\alpha+3\alpha^{2}\right)+L\left(2\alpha\right). \] We can notice that the leading order of the mean time needed to cross the backbone is proportional to $\alpha$ (consistent with the results in \cite{Redner1}), while the leading order of the mean time to climb a tooth is proportional to $\alpha^{2}$. \subsection{$d$-dimensional Open Combs} \begin{figure} \includegraphics[width=\linewidth]{Pettine3D.pdf} \caption{Example of a 3-dimensional comb. Vertices which belong to the first generation are green, those which belong to the second are lilac, and those which belong to the third are red.} \label{fig: TRIDI} \end{figure} We define $d$-dimensional open combs recursively (see also Fig. \ref{fig: TRIDI}): \begin{itemize} \item a $1$-dimensional open comb is a linear chain; \item a $2$-dimensional open comb is a branched graph $\mathcal{G}$ whose base graph is a linear chain and whose fiber graphs are linear chains; \item \ldots \item a $d$-dimensional open comb is a branched graph $\mathcal{G}$ whose base graph is a $(d-1)$-dimensional comb and whose fibers are linear chains. \end{itemize} As shown in Sec. \ref{sub:UNICUM}, a finite $2$-dimensional comb can be defined by fixing two parameters: the length of the backbone $L$ and the ratio $\alpha$ between the length of a tooth and the length of the backbone, in such a way that the thermodynamic limit $L \rightarrow \infty$ is well defined. Now, to fix a $d$-dimensional comb we introduce $d$ parameters $\left(L,\alpha_2,\ldots,\alpha_d \right)$, where $\alpha_i$ is the ratio between the length of the tooth in the $i$-th direction (i.e. added at the $i$-th iteration) and the length of the backbone: $\alpha_i=\frac{L_i}{L}$. We label every vertex with $d$ coordinates and write $i=\left(x_{1},\ldots,x_{d}\right)$ and $f=\left(y_{1},\ldots,y_{d}\right)$, where the first coordinate labels the vertices on the base graph and takes values from $-L$ to $L$, the second one goes from $-\alpha_{2}L$ to $\alpha_{2}L$ and labels the base of the fiber graphs, the third one goes from $-\alpha_{3}L$ to $\alpha_{3}L$, and so on. We define $H^{\left(d \right)}\left(i,f \right)$ as the Hitting Time from $i$ to $f$ in a $d$-dimensional comb, and we will use Eq. (\ref{eq: H ramificato}) to calculate its leading order. Using Tetali's equation it is straightforward to observe that $H^{\left(d \right)}\left(i,f \right) \sim \mathcal{O}\left(L^{d+1}\right)$. In fact, the first term of the right hand side in Eq.
(\ref{eq: Tetali}) is $m_{d}R_{i,f}$ and one can see that in the $d$-dimensional comb the number of links $m_{d}$ is \begin{equation} m_{d}\sim\left(L^{d}2^{d}\prod_{k=2}^{d}\alpha_{k}\right),\label{eq: M_n} \end{equation} while the maximum value of $R$ is \[ R=2L\left(1+\sum_{k=2}^{d}\alpha_{k}\right)+1, \] thus the leading order of this first term is $L^{d+1}$. Also the second term in the right hand side of Eq. (\ref{eq: Tetali}), that is $\frac{1}{2}\sum_{z\in V}g\left(z\right)\left(R_{f,z}-R_{z,i}\right)$, where $g\left(z\right)\leq2d=\mathcal{O}\left(L^{0}\right)$, $|V|=\mathcal{O}\left(L^{d}\right)$ and $R=\mathcal{O}\left(L\right)$, is of order $L^{d+1}$. Using this observation we can simplify Eq. (\ref{eq: H ramificato}); in fact $H_{i}$, $H_{f}$ and $m_{d}R\left(f,r_{f}\right)$ are $\mathcal{O}\left(L^{d}\right)$. So the leading order of Eq. (\ref{eq: H ramificato}) is given only by \begin{eqnarray} H\left(i,f\right) & \sim & \frac{1}{2}\sum_{k=0}^{L}\left\{ |k-y_{1}|-|k-x_{1}|\right\} m_{k}+\nonumber \\ & + & 2m_{d}R\left(f,r_{f}\right)+m_{d}R_{r_{i},r_{f}}. \end{eqnarray} We have just pointed out that $m_{d}\sim\left(L^{d}2^{d-1}\prod_{k=2}^{d}\alpha_{k}\right)$, where
correct shape" assert np.all(n >= 0), \ "eta returned has negative values" assert np.all(n[:,0] == tree.tree_.weighted_n_node_samples[tree.tree_.children_left == -1]),\ "eta structure doesn't match up with number of observes per leaf" # static check # tree structure: # ~upper: left, lower: right~ # num obs class 1 class 2 # |--1 10 5 5 # -0-| 34 21 13 # | |--3 9 9 0 # |-2-| 24 16 8 # | |--5 8 7 1 # |-4-| 15 7 8 # |--6 7 0 7 # eta # (1) 10 | 24 | 0 | 0 # (3) 9 | 15 | 10 | 0 # (5) 8 | 7 | 9 | 10 # (6) 7 | 8 | 9 | 10 # Gamma (class 1) # (1) 5 | 9+7 = 16| 0 | 0 # (3) 9 | 7 | 5 | 0 # (5) 7 | 0 | 9 | 5 # (6) 0 | 7 | 9 | 5 # Gamma (class 2) # (1) 5 | 1+7 = 8| 0 | 0 # (3) 0 | 8 | 5 | 0 # (5) 1 | 7 | 0 | 5 # (6) 7 | 1 | 0 | 5 class inner_fake_tree(): def __init__(self, nn, cl, cr, v): self.weighted_n_node_samples = nn self.children_left = cl self.children_right = cr self.value = v class fake_tree(): def __init__(self, nn, cl, cr, v): self.tree_ = inner_fake_tree(nn, cl, cr, v) self.__class__ = sklearn.tree.tree.DecisionTreeClassifier weighted_n_node_samples = np.array([34,10,24,9,15,8,7], dtype = np.int) children_left = np.array([2,-1,4,-1,6,-1,-1], dtype = np.int) children_right = np.array([1,-1,3,-1,5,-1,-1], dtype = np.int) value = np.array([[21, 13], [5, 5], [16, 8], [9, 0], [7, 8], [7, 1], [0, 7]], dtype = np.float).reshape((-1,1,2)) test = fake_tree(weighted_n_node_samples, children_left, children_right, value) n_leaf = 4 g_static, n_static = smooth_rf.create_Gamma_eta_tree(test) n_expected = np.array([[10,24,0,0], [9,15,10,0], [8,7,9,10], [7,8,9,10]]) g_expected = np.array([[[5,16,0,0], [9,7,5,0], [7,0,9,5], [0,7,9,5]], [[5,8,0,0], [0,8,5,0], [1,7,0,5], [7,1,0,5]]]) assert np.all(g_static == g_expected), \ "static test's Gamma failed to reproduce correct solutions" assert np.all(n_static == n_expected), \ "static test's eta failed to reproduce correct solutions" def test_create_Gamma_eta_tree_classification_impurity(): """ test for create_Gamma_eta_tree, classification tree -standard,impurity only Both static and random tests (random tests are more relative to structure than exact answers) """ # random - structure output check # data creation n = 200 min_size_leaf = 1 X = np.random.uniform(size = (n, 510), low = -1,high = 1) y = 10 * np.sin(np.pi * X[:,0]*X[:,1]) + 20 * ( X[:,2] - .5)**2 +\ 10 * X[:,3] + 5 * X[:,4] + np.random.normal(size = n) y_cat = np.array( pd.cut(y, bins = 5, labels = np.arange(5, dtype = np.int)), dtype = np.int) y = y_cat num_classes = len(Counter(y_cat).keys()) rf_class = sklearn.ensemble.RandomForestClassifier(n_estimators = 2, min_samples_leaf = min_size_leaf) random_forest = rf_class.fit(X = X, y = y.ravel()) tree = random_forest.estimators_[0] max_depth_range = np.max(smooth_rf.depth_per_node(tree)) + 1 G, n = smooth_rf.create_Gamma_eta_tree(tree, distance_style = "impurity", levels = 10) # given we don't actually know the number of levels in each tree assert G.shape[0] == num_classes and \ G.shape[1] == np.sum(tree.tree_.children_left == -1) and \ len(G.shape) == 3, \ "Gamma returned does not have the correct shape" assert n.shape == G.shape[1:3], \ "eta returned does not have the correct shape" assert np.all(n >= 0), \ "eta returned has negative values" assert np.all(n[:,0] == tree.tree_.weighted_n_node_samples[tree.tree_.children_left == -1]),\ "eta structure doesn't match up with number of observes per leaf" # static check # tree structure: # ~upper: left, lower: right~ # num obs class 1 class 2 # |--1 10 5 5 # -0-| 34 21 13 # | |--3 9 9 0 # |-2-| 24 16 8 # | 
|--5 8 7 1 # |-4-| 15 7 8 # |--6 7 0 7 # eta # (1) 10 | 24 | 0 | 0 # (3) 9 | 15 | 10 | 0 # (5) 8 | 7 | 9 | 10 # (6) 7 | 8 | 9 | 10 # Gamma (class 1) # (1) 5 | 9+7 = 16| 0 | 0 # (3) 9 | 7 | 5 | 0 # (5) 7 | 0 | 9 | 5 # (6) 0 | 7 | 9 | 5 # Gamma (class 2) # (1) 5 | 1+7 = 8| 0 | 0 # (3) 0 | 8 | 5 | 0 # (5) 1 | 7 | 0 | 5 # (6) 7 | 1 | 0 | 5 class inner_fake_tree(): def __init__(self, nn, cl, cr, v, impurity): self.weighted_n_node_samples = nn self.children_left = cl self.children_right = cr self.value = v self.impurity = impurity class fake_tree(): def __init__(self, nn, cl, cr, v, impurity): self.tree_ = inner_fake_tree(nn, cl, cr, v, impurity) self.__class__ = sklearn.tree.tree.DecisionTreeClassifier weighted_n_node_samples = np.array([34,10,24,9,15,8,7], dtype = np.int) children_left = np.array([2,-1,4,-1,6,-1,-1], dtype = np.int) children_right = np.array([1,-1,3,-1,5,-1,-1], dtype = np.int) value = np.array([[21, 13], [5, 5], [16, 8], [9, 0], [7, 8], [7, 1], [0, 7]], dtype = np.float).reshape((-1,1,2)) impurity = np.array([4,3,3,2,2,1,1]) test = fake_tree(weighted_n_node_samples, children_left, children_right, value, impurity) n_leaf = 4 g_static, n_static = smooth_rf.create_Gamma_eta_tree(test, distance_style = "impurity", levels = np.array([0,1,2,3,4])) n_expected = np.array([[10,24,0,0], [9,15,10,0], [8,7,9,10], [7,8,9,10]]) g_expected = np.array([[[5,16,0,0], [9,7,5,0], [7,0,9,5], [0,7,9,5]], [[5,8,0,0], [0,8,5,0], [1,7,0,5], [7,1,0,5]]]) assert np.all(g_static == g_expected), \ "static test's Gamma failed to reproduce correct solutions" assert np.all(n_static == n_expected), \ "static test's eta failed to reproduce correct solutions" g_static, n_static = smooth_rf.create_Gamma_eta_tree(test, distance_style = "impurity", levels = np.array([0,1,2,2.1,2.5,3,4])) n_expected = np.array([[10,24, 0, 0,0, 0 ], [9 ,15,10, 0,0, 0 ], [8 ,7 , 9, 0,0, 10], [7 ,8 , 9, 0,0, 10]]) g_expected = np.array([[[5,16,0, 0,0, 0], [9,7 ,5, 0,0, 0], [7,0 ,9, 0,0, 5], [0,7 ,9, 0,0, 5]], [[5,8 ,0, 0,0, 0], [0,8 ,5, 0,0, 0], [1,7 ,0, 0,0, 5], [7,1 ,0, 0,0, 5]]]) assert np.all(g_static == g_expected), \ "static test's Gamma failed to reproduce correct solutions "+\ "(extra levels)" assert np.all(n_static == n_expected), \ "static test's eta failed to reproduce correct solutions "+\ "(extra levels)" def test_create_Gamma_eta_forest_regression(): """ test create_Gamma_eta_forest, regression forests - standard depth only compares to what is expected to be returned from create_Gamma_eta_tree - mostly just structurally """ # parent = F n = 200 n_tree = 10 min_size_leaf = 1 X = np.random.uniform(size = (n, 510), low = -1,high = 1) y = 10 * np.sin(np.pi * X[:,0]*X[:,1]) + 20 * ( X[:,2] - .5)**2 +\ 10 * X[:,3] + 5 * X[:,4] + np.random.normal(size = n) rf_class = sklearn.ensemble.RandomForestRegressor(n_estimators = n_tree, min_samples_leaf = min_size_leaf) random_forest = rf_class.fit(X = X, y = y.ravel()) g, n, t = smooth_rf.create_Gamma_eta_forest(random_forest) assert g.shape == n.shape, \ "Gamma and eta matrices are not the correct shared size "+\ "(parents_all = False)" assert g.shape[0] == t.shape[0], \ "the tree index vector doesn't have the correct number of observations "+\ "(parents_all = False)" assert np.all( np.array(list(dict(Counter(t)).keys())) == np.arange(n_tree)),\ "tree index doesn't contain expected tree index values "+\ "(parents_all = False)" for t_idx, tree in enumerate(random_forest.estimators_): max_depth_range = np.int(np.max(smooth_rf.depth_per_node(tree)) + 1) G_tree, n_tree = 
smooth_rf.create_Gamma_eta_tree(tree) assert G_tree.shape[0] == np.sum(t == t_idx), \ "shape of single Gamma from create_Gamma_eta_tree" +\ "does not match structure from t_idx output "+\ "(parents_all = False)" assert np.all(G_tree == g[t==t_idx,:][:,:max_depth_range]), \ "doesn't match create_Gamma_eta_tree function for Gamma "+\ "(parents_all = False)" if max_depth_range != g.shape[1]: assert np.all(g[t==t_idx,][:,max_depth_range:] == 0), \ "extra dimensions, based on the global forest having larger" +\ "depth than the individual tree (num %d) in Gamma are "+\ "non-zero (parents_all = False)" %t_idx assert np.all(n_tree == n[t==t_idx,:][:,:max_depth_range]), \ "doesn't match create_Gamma_eta_tree function for eta " +\ "(parents_all = False" if max_depth_range != g.shape[1]: assert np.all(n[t==t_idx,][:,max_depth_range:] == 0), \ "extra dimensions, based on the global forest having larger" +\ "depth than the individual tree (num %d) in eta are "+\ "non-zero (parents_all = False)" %t_idx # parent = T n = 200 n_tree = 10 min_size_leaf = 1 X = np.random.uniform(size = (n, 510), low = -1,high = 1) y
and this data may also be needed by CPU cores. If \chiii{PIM processing logic is} coherent with the processor, the PIM programming model is relatively simple, as it remains similar to conventional \ch{shared memory} multithreaded programming\ch{, which makes PIM architectures easier to adopt in general-purpose systems}. Thus, allowing \chiii{PIM processing logic} to maintain such a simple and traditional shared memory programming model can facilitate the \ch{widespread} adoption of PIM. However, employing traditional fine-grained cache coherence {(e.g., a cache-block based MESI protocol~\cite{mesi1984})} for PIM forces a large number of coherence messages to traverse {the narrow processor-memory bus}, potentially undoing the benefits of high-bandwidth \chvi{and low-latency} PIM \chv{execution}. \chv{Unfortunately,} solutions for coherence proposed by prior PIM works~\cite{ahn.pei.isca15,hsieh.isca16,ahn.tesseract.isca15} either place some restrictions on the programming model (by eliminating coherence and requiring message passing based programming) or limit the performance \ch{and energy} gains achievable by a PIM architecture. We have developed a new coherence protocol, \emph{CoNDA}~\cite{boroumand2019conda,boroumand2016pim,boroumand.arxiv17}, that maintains cache coherence between PIM processing logic and CPU cores \emph{without} sending coherence requests for every memory access. Instead, as shown in Figure~\ref{fig:conda}, CoNDA enables efficient coherence by having the PIM logic: \begin{enumerate} \item \emph{speculatively} acquire coherence permissions for multiple memory operations over a given period of time (which we call \emph{optimistic execution}; \circled{1} in the figure); \item \emph{batch} the coherence requests from the multiple memory operations into a set of compressed coherence \emph{signatures} (\circled{2} and \circled{3}); \item send the signatures to the CPU to determine whether the speculation violated any coherence semantics. \end{enumerate} Whenever the CPU receives compressed signatures from the PIM core (e.g., when the PIM kernel finishes), the CPU performs \emph{coherence resolution} (\circled{4}), where it checks if any coherence conflicts occurred. If a conflict exists, any dirty cache line in the CPU that caused the conflict is flushed, and the PIM core rolls back and re-executes the code that was optimistically executed. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{figures/conda-operation.pdf} \vspace{-5mm} \caption{High-level operation of CoNDA, a new coherence mechanism for near-data accelerators, including PNM and PUM. Reproduced from~\cite{ghose2019arxiv}. Originally presented in~\cite{boroumand2019conda}.} \label{fig:conda} \end{figure} As a result {of this "lazy" checking of coherence violations}, CoNDA approaches near-ideal coherence behavior: {the performance and energy consumption of a PIM architecture with CoNDA are, respectively, within 10.4\% and 4.4\% the performance and energy consumption of} a system where coherence is performed at zero latency and energy cost. Despite the leap that CoNDA~\cite{boroumand2019conda,boroumand2016pim,boroumand.arxiv17} represents for memory coherence in computing systems with PIM support, we believe that it is still necessary to explore other solutions for memory coherence that can efficiently deal with all types of workloads and PIM offloading granularities as well as different approaches to PIM. 
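To make the flow above concrete, the following Python sketch gives a toy functional model of one optimistic-execution window of a CoNDA-style protocol. It is a greatly simplified illustration, not the actual hardware implementation: plain sets stand in for the compressed hardware signatures, integer addresses stand in for cache lines, and the CPU-side flush is only indicated by a comment.
\begin{verbatim}
def conda_window(pim_kernel, memory, cpu_dirty_lines):
    """One optimistic PIM window followed by coherence resolution.

    pim_kernel(memory, read_set, write_set) runs the offloaded code
    against `memory` (dict: address -> value) and records the
    addresses it touches; cpu_dirty_lines is the set of addresses the
    CPU wrote (and still holds dirty) during the same window.
    """
    while True:
        snapshot = dict(memory)                  # checkpoint for rollback
        reads, writes = set(), set()
        pim_kernel(memory, reads, writes)        # (1) optimistic execution
        signature = reads | writes               # (2)-(3) batched "signature"
        conflict = signature & cpu_dirty_lines   # (4) coherence resolution
        if not conflict:
            return memory                        # speculation safe: commit
        cpu_dirty_lines -= conflict              # CPU flushes offending lines
        # (a real flush also writes the CPU's latest values back to memory)
        memory.clear()
        memory.update(snapshot)                  # PIM rolls back, re-executes
\end{verbatim}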
\juanr{In this direction, the design of new interfaces featuring memory coherence support across devices and memory (e.g., CXL~\cite{van2019hoti}, OpenCAPI~\cite{openCAPI, singh2020nero, singh2021fpga, singh2021accelerating}, OMI~\cite{coughlin2021higher}) can enable faster adoption of PIM by providing} \juanrr{a communication substrate on top of which efficient coherence and programming support can be built.} \subsection{Virtual Memory Support} \label{sec:virtualmemory} When an application needs to access its data inside the main memory, the CPU core must first perform an \emph{address translation}, which converts the data's virtual address into a \emph{physical} address within main memory. If the translation {metadata} is not available in the CPU's translation lookaside buffer (TLB), the CPU must invoke the page table walker in order to perform a long-latency page table walk that involves multiple \emph{sequential} reads to the main memory and lowers the application's performance. In modern systems, the virtual memory system also provides access protection mechanisms. A naive solution to reducing the overhead of page walks is to utilize PIM engines to perform page table walks. This can be done by duplicating the content of the TLB and {moving} the page walker {to} the PIM processing logic in main memory. Unfortunately, this is either difficult or expensive for three reasons. First, coherence {has} to be maintained between the CPU's TLBs and the memory-side TLBs. This introduces extra complexity and off-chip requests. Second, duplicating the TLBs increases the storage and complexity overheads {on the memory side, which should be carefully contained}. Third, if main memory is shared across {CPUs with} different types of architectures, page table structures and the implementation of address translations can be different across {the different} architectures. Ensuring compatibility between the in-memory TLB/page walker and all possible types of {virtual memory} architecture designs can be complicated and often not even practically feasible. To address these concerns and reduce the overhead of virtual memory, we explore a tractable solution for PIM address translation as part of our in-memory pointer chasing accelerator, IMPICA~\cite{impica}. IMPICA exploits the high bandwidth available within 3D-stacked memory to traverse a chain of virtual memory pointers within DRAM, \emph{without} having to look up virtual-to-physical address translations in the CPU translation lookaside buffer (TLB) and without using the page walkers within the CPU. {IMPICA's key ideas are 1) to use a region-based page table, which is optimized for PIM acceleration, and 2) to decouple address calculation and memory access with two specialized engines. IMPICA improves the performance of pointer chasing operations in three commonly-used linked data structures (linked lists, hash tables, and B-trees) by 92\%, 29\%, and 18\%, respectively. On a real database application, DBx1000, IMPICA improves transaction throughput and response time by 16\% and 13\%, respectively. 
IMPICA also reduces overall system energy consumption (by 41\%, 23\%, and 10\% for the three commonly-used data structures, and by 6\% for DBx1000).} Beyond pointer chasing operations that are tackled by IMPICA~\cite{impica}, providing efficient mechanisms for PIM-based virtual-to-physical address translation (as well as access protection) remains a challenge for the generality of applications, especially those that access large amounts of virtual memory~\cite{mask,mosaic,mosaic-osr}. Looking forward, we recently introduced a fundamentally-new virtual memory framework, the Virtual Block Interface (VBI)~\cite{hajinazar2020virtual}, which proposes to delegate physical memory management duties completely to the memory controller hardware as well as other specialized hardware. Figure~\ref{fig:vbi} compares VBI to conventional virtual memory at a very high level. Designing VBI-based PIM units that manage memory allocation and address translation can help fundamentally overcome this important virtual memory challenge of PIM systems. We refer the reader to our VBI work~\cite{hajinazar2020virtual} for details. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{figures/vbi.pdf} \vspace{-5mm} \caption{The Virtual Block Interface versus conventional virtual memory. Reproduced from~\cite{hajinazar2020virtual}.} \label{fig:vbi} \end{figure} \subsection{Data Structures for PIM} \label{sec:datastructures} Current systems with many cores run applications with concurrent data structures to achieve high performance and scalability, with significant benefits over sequential data structures. Such concurrent data structures \sg{are often used in} heavily-optimized server systems today, where high performance is critical. To enable the adoption of PIM in such many-core systems, it is necessary to develop concurrent data structures that are specifically tailored to take advantage of PIM. \emph{Pointer chasing data structures} and \emph{contended data structures} require careful analysis and design to leverage the high bandwidth and low latency of 3D-stacked memories~\cite{liu-spaa17}. First, pointer chasing data structures, such as linked-lists and skip-lists, have a high degree of inherent parallelism and low contention, but a naive implementation in PIM cores is burdened by hard-to-predict memory access patterns. By combining and partitioning the data across 3D-stacked memory vaults, it is possible to fully exploit the inherent parallelism of these data structures. Second, contended data structures, such as FIFO queues, are a good fit for CPU caches because they expose high locality. However, they suffer from high contention when many threads access them concurrently. {Their performance on traditional CPU systems can be improved using a new PIM-based FIFO queue~\cite{liu-spaa17}. The proposed PIM-based FIFO queue uses a PIM core to perform enqueue and dequeue operations requested by CPU cores. The PIM core can pipeline requests from different CPU cores for improved performance.} {As \sg{recent work~\cite{liu-spaa17}} shows, PIM-managed concurrent data structures can outperform state-of-the-art concurrent data structures that \sg{are designed} for and executed on multiple cores. 
We believe and hope that future work will enable other types of data structures (e.g., hash tables, search trees, priority queues) to benefit from PIM-managed designs.} \subsection{Benchmarks and Simulation Infrastructures} \label{sec:simulation} To ease the adoption of PIM, it is critical that we {accurately} assess the benefits and shortcomings of PIM. Accurate assessment of PIM requires (1)~a {preferably large} set of real-world memory-intensive applications that have the potential to benefit significantly when executed near memory, (2)~a rigorous methodology to (automatically) identify PIM offloading candidates, and (3)~simulation/evaluation infrastructures that allow architects and system designers to {accurately} analyze the benefits and overheads of adding PIM processing logic to memory and executing code on this processing logic. In order
# Electric Field In Capacitor: What, How, Types, When, Why And Detailed Facts

A capacitor is a device that stores electric charge as a potential difference between its two plates, and the electric field in the capacitor is set up on the application of a voltage source. The potential difference is created by the transport of electrons from one plate of the capacitor to the other, which establishes the electric field in the capacitor. This charge separation stores electric energy in the form of potential energy of the charge and is proportional to the charge density on each plate.

## Electric Field in Capacitor Formula

Like positive and negative charges, the capacitor plates behave as a donor plate and an acceptor plate when a source is connected across them. The plate at the positive terminal of the capacitor donates electrons, and these free electrons are accepted by the plate at the negative terminal. Due to the mobility of the free charges, an electric flux is set up within the capacitor, and the total electric field in the capacitor is

E = σ/ε0

The charge density of each capacitor plate is called the surface charge density, which is the charge present on the surface of the plate per unit area and is given as σ = Q/A. Hence,

E = Q/Aε0

This equation gives the electric field produced between the two plates of the capacitor.

## Electric Field Inside a Capacitor

The capacitor has two plates with two different charge densities. The electric flux passes through both surfaces of each plate, hence the area is 2A. Consider two plates having a positive surface charge density and a negative surface charge density, separated by a distance ‘d’. Let A be the area of the plates. The electric flux lines run from the positively charged plate to the plate with a majority of the negative carriers, as depicted in the figure below.

Let P be any point in the middle of the two charged plates of the capacitor. By applying Gauss's law, the flux through each face of a Gaussian surface is φ = EA. For one charged plate of the capacitor the flux passes through both faces, so E.2A = q/ε0. We know that σ = Q/A. Using this in the above equation,

E = σ/2ε0

Hence, the resultant electric field at any point between the plates of the capacitor will add up:

E = σ/2ε0 + σ/2ε0

Inserting the value for σ, we get

E = σ/ε0 = Q/Aε0

This is the total electric field inside a capacitor due to two parallel plates.

## What is the electric field produced by the parallel plate capacitor having a surface area of 0.3 m^2 and carrying a charge of 1.8 C?

Given: q = 1.8 C, A = 0.3 m^2

We have

E = q/Aε0 = 1.8/(0.3 x 8.85 x 10^-12) = 0.68 x 10^12 V/m

The electric field produced by the parallel plate capacitor carrying a charge of 1.8 C is 0.68 x 10^12 V/m.

## Electric Field Outside a Capacitor

Now, if point P lies outside the capacitor, then the electric field at point P due to the plate having a positive surface charge density is

E1 = σ/2ε0

Whereas the electric field at point P due to the negatively charged plate of the capacitor is equal in magnitude and opposite in direction,

E2 = -σ/2ε0

Hence, the net electric field due to both the plates of the capacitor is

E = E1 + E2 = 0

At any point outside the capacitor, the electric field is always zero, because on charging the capacitor one plate acquires a positive surface charge density and the other an equal negative surface charge density, and their fields cancel outside the plates.
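For readers who want to verify the arithmetic, here is a minimal numerical check of the parallel-plate worked example above (a rough sketch; it only assumes the vacuum permittivity ε0 = 8.854 x 10^-12 F/m):

```python
# Check of the parallel-plate example: E = sigma / eps0 = Q / (A * eps0)
EPS0 = 8.854e-12  # vacuum permittivity, F/m


def parallel_plate_field(charge_c, area_m2, eps=EPS0):
    """Field between the plates of an ideal parallel-plate capacitor, in V/m."""
    sigma = charge_c / area_m2   # surface charge density, C/m^2
    return sigma / eps


print(parallel_plate_field(1.8, 0.3))  # ~6.8e11 V/m, i.e. 0.68 x 10^12 V/m
```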
## Electric Field in Capacitor With Dielectric

Now we know that in the presence of vacuum the electric field inside a capacitor is E = σ/ε0, the potential difference between the two plates is V = Ed, where d is the distance of separation of the two plates, and hence the capacitance in this case is

C = Q/V = ε0A/d

Now if we place a dielectric between the two plates of the capacitor, occupying the complete space between them, the dielectric polarizes, and the effective surface charge densities on the two sides become +σp and -σn. Thus the net surface charge density of the plates is

σp - σn

Hence, the electric field through the capacitor is

E = (σp - σn)/ε0

Thus, the potential difference becomes

V = Ed

For linear dielectrics, the net surface charge density reduces to σ/k, so

E = σ/ε0k

where k is the dielectric constant and is greater than 1, i.e. k > 1. Hence, the potential difference now becomes

V = σd/ε0k

Inserting the value for the surface charge density, σ = Q/A,

V = Qd/Aε0k

Hence, the capacitance of the capacitor is

C = Q/V = ε0kA/d

ε0k is the permittivity of the medium and is denoted as ε. Therefore the equation now becomes

C = εA/d

## What is the electric field and the potential difference of a capacitor in presence of a dielectric medium of permittivity 6 x 10^-12 C^2N^-1m^-2 of width 3 cm if the surface charge densities are 6 C/m^2 and -5.8 C/m^2?

Given: σp = 6 C/m^2, σn = 5.8 C/m^2 (magnitude of the negative plate density), ε = 6 x 10^-12 C^2N^-1m^-2, d = 3 cm = 0.03 m

The electric field of the capacitor is

E = (σp - σn)/ε = (6 - 5.8)/(6 x 10^-12) = 3.3 x 10^10 V/m

The electric field of the capacitor is found to be 3.3 x 10^10 V/m; thus the potential difference between the capacitor plates is

V = Ed = 3.3 x 10^10 x 0.03 = 0.099 x 10^10 V ≈ 0.1 x 10^10 V

The potential difference between the two capacitor plates is 0.1 x 10^10 V.

## Electric Field Capacitor in Series

When capacitors are connected in series, the potential differences across the plates add up. If we have two capacitors C1 and C2 connected in series, and the potential differences across their plates are V1 and V2 respectively, then the net potential difference becomes

V = V1 + V2

The capacitance is C = Q/V, hence V = Q/C. Using this in the above equation we get

V = Q/C1 + Q/C2

Solving this further,

1/C = 1/C1 + 1/C2

The potential difference is also equal to V = Ed, hence the electric field due to capacitors in series can be calculated as

E = V/d

If there are ‘n’ capacitors connected in series, then the electric field across the n capacitors will be

E = (Q/d)(1/C1 + 1/C2 + … + 1/Cn)

## Electric Field in Cylindrical Capacitor

A cylindrical capacitor consists of two cylindrical plates. The inner cylinder of radius ‘r’ has a positive surface charge density +σ, and the outer cylinder of radius ‘R’ has a negative surface charge density -σ. The electric flux runs from the surface of the inner cylinder to the outer cylinder, as shown in the figure above; Fig. (b) shows the cross-sectional view of the cylindrical capacitor.

Take a cylindrical Gaussian surface of radius s and height h between the two charged cylinders. The electric field inside the inner cylinder is zero, as there is no enclosed charge, and the field outside the cylinder of radius ‘R’ is also zero. The electric flux runs between the two cylinders at a distance s from the center. The electric flux through the Gaussian surface is given by

φ = E(2πsh) = Q/ε0

Therefore,

E = Q/2πε0sh

This equation gives the electric field produced by the cylindrical capacitor.

## What is the electric field at a point 0.6 cm away from the center of a cylindrical capacitor of height 2 cm, having an outer radius of 0.8 cm and an inner radius of 0.35 cm, carrying a charge of 5 C?
Given: r = 0.35 cm = 0.0035 m, R = 0.8 cm = 0.008 m, s = 0.6 cm = 0.006 m, h = 2 cm = 0.02 m, Q = 5 C

We have

E = Q/2πε0sh = 5/(2π x 8.85 x 10^-12 x 0.006 x 0.02) ≈ 7.5 x 10^14 V/m

The electric field of the capacitor at a distance of 0.6 cm from the center of the cylindrical capacitor is about 7.5 x 10^14 V/m.

## Electric Field Intensity in Capacitor

The electric field intensity outside the charged capacitor region is always zero, as the charge carriers are present only on the surfaces of the capacitor plates. In the inner region of the capacitor, the electric field is equal to the ratio of the surface charge density to the permittivity of the medium, and it is the same at all points inside the capacitor:

E = σ/ε0

where σ is the surface charge density of the charge carriers present on the plate of the capacitor and ε0 is the permittivity of the medium.

Also, the electric field can be calculated by measuring the potential difference between the two plates and the distance of separation of the plates, as

E = V/d

where V is the potential difference between the plates of the capacitor and d is the distance between the two plates.

## Electric Field in Spherical Capacitor

Like a cylindrical capacitor, the spherical capacitor also consists of two spheres carrying opposite charges on their surfaces. Consider a sphere of radius ‘R2’ having a surface charge density of +σ
# Identification of advanced spin-driven thermoelectric materials via interpretable machine learning

## Abstract

Machine learning is becoming a valuable tool for scientific discovery. Particularly attractive is the application of machine learning methods to the field of materials development, which enables innovations by discovering new and better functional materials. To apply machine learning to actual materials development, close collaboration between scientists and machine learning tools is necessary. However, such collaboration has so far been impeded by the black box nature of many machine learning algorithms. It is often difficult for scientists to interpret the data-driven models from the viewpoint of material science and physics. Here, we demonstrate the development of spin-driven thermoelectric materials with anomalous Nernst effect by using an interpretable machine learning method called factorized asymptotic Bayesian inference hierarchical mixture of experts (FAB/HMEs). Based on prior knowledge of material science and physics, we were able to extract from the interpretable machine learning some surprising correlations and new knowledge about spin-driven thermoelectric materials. Guided by this, we carried out an actual material synthesis that led to the identification of a novel spin-driven thermoelectric material. This material shows the largest thermopower to date.

## Introduction

Recent progress in materials science technologies enables the collection of large volumes of materials data in a short time1,2,3,4. Accordingly, the development of tools to process such big data sets is becoming necessary. Machine learning technologies are extremely promising, not only due to their ability to rapidly analyze data5,6,7,8, but also for their potential to discover novel knowledge not rooted in conventional theories. To apply machine learning to actual materials development, cooperation between scientists and machine learning tools is necessary. Materials scientists often try to understand the rationale behind the data-driven models in order to obtain some actionable information to guide materials development. However, such attempts have so far been impeded by the low interpretability of many machine learning methods. For example, it is difficult for a human to understand the models constructed by a deep neural network9, expressed as the connections between large numbers of perceptrons (neurons). Therefore, the notion of interpretable machine learning (explainable or transparent machine learning), which has not only high predictive ability but also high interpretability, has recently seen a resurgence10,11, especially in the field of scientific discovery. Here, we show an actual material development by using state-of-the-art interpretable machine learning called factorized asymptotic Bayesian inference hierarchical mixture of experts (FAB/HMEs)12,13. The development demonstrated the synergy between the FAB/HMEs and the materials scientists. In the field of material development, the machine learning algorithm must meet the following three requirements: “sparse modeling”; “prediction accuracy”; and “interpretability”, as shown in Fig. 1. The sizes of materials-related data sets are often quite small compared with the data sets in other scientific fields (e.g., astrophysics or particle physics), due to the time necessary to carry out the experiments and calculations/simulations.
This results in a significant data sparsity in a material space; therefore, the sparse modeling approach, which automatically selects only the important descriptors (attributes) and reduces the dimension of the search space, is extremely useful. One of the most popular sparse modeling methods is LASSO, which is a linear model with L1 regularization14. However, such linear models do not always have high prediction accuracy, because material data often includes non-linear relationships (for example, due to proximity to phase transitions). To achieve high prediction accuracy, non-linear models, such as support vector machine (SVM), deep neural network (NN), or random forest (RF), are often required14. However, such non-linear models commonly lack interpretability. Although they can tell us which descriptors are important for the machine learning model, they rarely clarify how the descriptors actually contribute to it. Extracting actionable information from such non-linear models is not easy. The interpretable machine learning FAB/HMEs constructs a piecewise sparse linear model12 that meets the three requirements of “sparse modeling”, “prediction accuracy”, and “interpretability”. Therefore, the actionable information from the data-driven model provided by the FAB/HMEs can lead us to discoveries of novel materials.

As a case study, we applied the interpretable machine learning to the development of a new thermoelectric material. Thermoelectric technologies are becoming indispensable in the quest for a sustainable future15,16. In particular, the emerging spin-driven thermoelectric (STE) materials, which employ the spin-Seebeck effect (SSE)17,18 and the anomalous Nernst effect (ANE)19,20,21, have garnered much attention as a promising path toward low-cost and versatile thermoelectric technology with easily scalable manufacturing. In contrast to conventional thermoelectric (TE) devices, STE devices consist of simple layered structures and can be manufactured with straightforward processes, such as sputtering, coating, and plating, resulting in lower fabrication costs. An added advantage of STE devices is that they can double as heat-flow sensors, owing to their lower thermal resistance22. However, STE material development is hampered by the lack of understanding of the fundamental mechanism of STE materials. Research on the STE phenomena, which studies the complicated relationship between spin, heat, and charge currents and is called spin caloritronics, is at the cutting edge of materials science and physics23. A data-driven approach utilizing machine learning can exhibit its full potential in such a rapidly developing scientific field.

Figure 2 shows the general configuration of one of the STE devices using the anomalous Nernst effect (ANE). It is composed of a magnetic layer and a single-crystal substrate. When a temperature difference ΔT and a magnetic field H are applied along the z and the x direction, respectively, a heat current is converted into an electric current by ANE due to spin–orbit interaction, and one can then detect the thermopower SSTE (one of the most important figures of merit for thermoelectric phenomena24) along the y direction. In searching for STE materials using the ANE with improved thermopower SSTE, we have used the interpretable machine learning to discover non-trivial behaviors of the material parameters governing the STE phenomena.
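To make the idea of a piecewise sparse linear model concrete, the toy sketch below (not the authors' FAB/HMEs implementation; the synthetic data, the fixed two-region gate, and the regularization strength are assumptions made purely for illustration) partitions the samples by one gating descriptor and fits an L1-regularized linear expert in each region with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))  # 14 descriptors, as for the material parameters in the text
# Piecewise ground truth: a different sparse linear rule on each side of X[:, 0] = 0
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -1.5 * X[:, 2]) + 0.1 * rng.normal(size=200)

experts = {}
for name, mask in {"X0 > 0": X[:, 0] > 0, "X0 <= 0": X[:, 0] <= 0}.items():
    expert = Lasso(alpha=0.05).fit(X[mask], y[mask])   # sparse linear expert for this region
    experts[name] = expert
    kept = np.flatnonzero(expert.coef_)                # descriptors this expert retained
    print(name, "-> non-zero coefficients at descriptor indices", kept)
```

In FAB/HMEs the gating structure itself is learned from the data rather than fixed by hand; the point of the sketch is only that each expert remains a short, human-readable linear formula over a few descriptors.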
We have successfully leveraged this machine-learning-informed knowledge to discover a high-performance STE material, whose thermopower is greater than that of the best known STE material to date25.

## Results

### Material data

Figure 3a–c shows the material data of STE devices using an M100−xPtx binary alloy, where M = Fe, Co, and Ni. The thermopower SSTE is experimental data obtained under different experimental conditions (different substrates) C ≡ {Si, AlN, GGG}. A temperature difference ΔT was applied between the top and bottom of the STE tips shown in Fig. 2 by sandwiching the tips between copper heat baths at 300 K and 300 + ΔT K. A magnetic field H was applied along the x direction. Under these conditions, the thermopower SSTE can be detected along the y direction. The details about the experimental conditions are in the Supplementary Methods. The material parameters X ≡ {X1, X2, X3, …, X14}, whose simple descriptions are shown in Fig. 3d, were obtained by density functional theory (DFT) calculation based on composition data experimentally obtained from X-ray fluorescence (XRF) measurement. However, it is difficult to simulate disordered (random) phases by using common DFT methods such as the projector-augmented-wave (PAW) method. For example, to simulate the Fe50.1Pt49.9 binary alloy, we would have to build a very large unit cell, the calculation of which is not feasible. Therefore, a Green's-function-based ab initio method, the Korringa-Kohn-Rostoker coherent-potential approximation (KKR-CPA) method26, was employed to calculate the disordered M100−xPtx binary alloys. The KKR-CPA, where the CPA deals with random (disordered) material systems and allows us to simulate band structures of multicomponent materials with a single unit cell, is known for its good agreement with experimental results, especially in disordered alloy systems27,28,29. The details about the data, experiments, DFT (KKR-CPA) calculations, data preprocessing, and the reason we use these material parameters X are given in the Supplementary Methods.

### Machine learning modeling by the interpretable machine learning

We used the interpretable machine learning to construct the following data-driven model

$$S_{{\mathrm{STE}}} = f\left( {{\mathbf{X}},{\mathbf{I}},{\mathbf{S}},{\mathbf{C}}} \right),$$ (1)

where X are the material parameters (with X ≡ {X1, X2, X3, …, X14}), their interaction terms I ≡ {X1X2, X1X3, …, X13X14},
measuring the risk/reward level of the OD&D dungeon wandering monster tables (conclusion: as written, in total they're murderously lethal; even Gygax in AD&D massively ramped down the danger level). It recently occurred to me to ask a similar question about the OD&D wilderness encounter tables. A somewhat theoretical difference is that while the dungeon tables have "levels" which theoretically relate to the power level of the monsters there, and the suggested level of PCs adventuring there, the wilderness tables don't come with that same packaging. Instead (obviously) they come distinguished by "terrain types". We might assume that plains are designed to be safer than woods, and woods less dangerous than mountains, etc., but are they really?

What I did was go through all the entries in those tables and compute average Equivalent Hit Dice (EHD) for each type of encounter, using EHDs estimated algorithmically as shown in the OED Monster Database (code on GitHub). For example: working bottom-up, here's the sub-table for "Typical Men" (i.e., the default for men in most terrain types):

OD&D Wilderness Subtable: Typical Men

The "grand average" of all the encounter averages in the rightmost column is 122, but that masks the bimodal structure of the table. These encounters split neatly into two halves: six are with mass groups of men in the range of hundreds (with a host of leader-types, including a Superhero or Lord for any group 100+; I estimated the EHDs for this by adding 25%), while the other six are with small parties of 2-12 men and a single NPC (like a Superhero or Lord). The average total EHD for the first category is in the 200's, while for the second category it's in the 20's. Clearly there's a big difference between meeting one Lord and his 200 soldiers, versus another Lord and his 10 soldiers.

This was done for all the different sub-tables, and "grand averages" computed for each, resulting in the following:

OD&D Wilderness Subtables: Average EHD

Somewhat similarly, there's a big bifurcation in the danger levels of some of these subtables. For the Men and Giants (i.e., humanoids) tables, average EHDs are in the range of 100+ -- specifically for those tables which can produce bands of men, goblins, etc., grouped in the hundreds (30-300 men/orcs, or 40-400 for kobolds/goblins, etc.). For the other tables, average EHDs are only in the range of 20-50 or so.

Finally, we can turn to the top-level table, which serves as a function from terrain type to the different subtables, and see on average how dangerous each terrain type is on a per-encounter basis. We get this:

OD&D Wilderness Encounters: Terrain Table

What I've done there is compute the average encounter danger across all results for a given terrain type, and then divide by 8 (an assumed large PC party size?) to come up with a rough "suggested PC level" for adventuring in that terrain. Some of those assumptions can be easily debated, but at least it gives us a normalized basis by which to compare different terrain types. The result is that on average, there isn't that much difference between the various terrain types. Rounded to the nearest integer, the Clear type suggests maybe 9th-level PCs, while Woods, River, Swamp, and Mountains are only one pip up from that, at 10th level.
The City is 11th (because technically it generates more bands of 100s of bandits and brigands from the "Typical Men" table, however unreasonable that may seem), and the Desert table is 12th level, somewhat more dangerous (again because the "Desert Men" table skews more towards 100s of nomads and dervishes). So this series of averages is a somewhat rougher analysis than I've done for dungeons (which have been given a complete simulation in software at the level of individual fighters adventuring and gaining experience in separate encounters). The overall distribution of encounters is not entirely clear, although it's trivial to guess that the tables with fewer Men and Giants encounters (River and Swamp) will have less variation than the other tables. Here are some other factors abstracted out by this rough analysis: • No modifiers are made for parties with special equipment (horses, ships, underwater, etc.) • No distinction is made for parties that may be parleyed with and turn out to be friendly (likely dependent on alignment). • Dragons and lycanthropes do not have family/pack structure simulated (which mandates presence of some immature figures, but also makes adults fight more fiercely). Another thing that the numbers above overlook is that while the average encounter is roughly equivalent across different terrain types, the rate of those encounters is not. E.g.: Compared to Clear terrain, in Woods the party takes twice as long to cover a distance and has double chance for encounter each day; and in Mountains both time and encounter chances are tripled, etc. That is, for every 1 expected encounter for a given distance in the Clear, in Woods you'll expect 4 encounters, and in Mountains 9 encounters, for the same distance traveled. Ultimately that's where the real difference in danger levels comes from in this system. (On the other hand, with only one encounter per day, casters can unload their entire firepower capacity on each one, giving some buffer against that added danger.) Finally, this project suggests a significant limitation to the overall attempt at using our EHD values in sum to balance against total PC levels. Here we've come up with a rough suggestion that OD&D wilderness encounters are, on average, a fair fight for a party of eight 9th- or 10th-level PCs. However, we can look back to our experiences in Outdoor Spoliation games using this system, which we've run with fairly large parties of around the 8th level; at least four times we've documented battles with groups of men and goblins in the size of 200+, and not had a single PC fatality in those encounters. (By the numbers a group of 200 bandits should be ~250 EHD, so for a eight-man party we would have suggested they be 250/8 ~ 30th level? That's clearly not right.) This points to a likely breakdown in simply summing EHDs, especially for very large groups of low-level monsters, versus PCs with high-level magic (not currently simulated in our program), very low armor classes for fighters, etc. It may be interesting to reflect on the exact magic used by players in those mass battles in Outdoor Spoliation sessions One, Two, and Three. Full spreadsheet available here for the tables and calculations shown above. Edit: Consider Arneson's rule in First Fantasy Campaign that (as I read it) wilderness encounter numbers are really for full lairs only, and encounters outside will only be 10-60% of those numbers (average 35%). 
If we take the charts above and multiply everything by 0.35 for expected outsiders, then the equated PC level (parties of 8) becomes 3 or 4 in each terrain. Which is kind of interesting, because reportedly at the start of Arneson's games everyone got Heroes from Chainmail -- fight as 4 men, D&D 4th level -- or else Wizards (I presume low-level, likely 4th-level equivalent?). ## Sunday, July 28, 2019 ### Sunday Survey: Wizard Armor A while back on the Facebook 1E AD&D group, a discussion occurred that had me quite surprised by the direction it was going. Intrigued, I asked the following poll question: This was surprising to me, because given the context, the top result ("elven chain only") is clearly counter to the 1E AD&D rules text. Of course, when we say multiclass fighter/magic-users in 1E, we're just talking about elves and half-elves (the only races allowed for that multiclass). Under Elves on PHB p. 16, it says that they can "operate freely with the benefits of armor, weapons, and magical items available to the classes the character is operating in", with
\section{Introduction}
\setcounter{section}{1}\setcounter{equation}{0}
Lie conformal (super)algebras, originally introduced by Kac in \cite{K1,K2}, encode an axiomatic description for the singular part of the operator product expansion of chiral fields in two-dimensional conformal field theory. They are very closely related to vertex algebras (cf. \cite{B,R}) in the same way as Lie algebras correspond to their universal enveloping algebras. On the other hand, the theory of Lie conformal (super)algebras gives us powerful tools for the study of infinite-dimensional Lie (super)algebras and associative algebras satisfying the locality property described in \cite{K}. Conformal (super)algebras have drawn much attention in branches of physics and mathematics since their introduction. The structure theory, representation theory and cohomology theory of finite (i.e., finitely generated as $\C[\pa]$-modules) Lie conformal algebras have been well developed (cf. \cite{BK, CK,CKW,DK}), finite simple Lie conformal superalgebras were classified in \cite{FK}, and their representation theories were developed in \cite{BKL1,BKL2,KO}.

The object investigated in this paper is a Lie conformal superalgebra closely related to the loop super-Virasoro algebra $\mathfrak{sl}$, whose structures were studied in \cite{DHS}. It is defined as an infinite-dimensional Lie superalgebra with a basis $\{L_{\al,i}, G_{\mu,j}\mid \al,i,j\in\Z, \mu\in \frac12+\Z\}$ satisfying the following commutation relations:
\begin{eqnarray}\label{p2.1}
[L_{\al,i},L_{\beta,j}]=(\al-\beta)L_{\al+\beta,i+j},\ [L_{\al,i},G_{\mu,j}]=(\frac{\al}{2}-\mu)G_{\al+\mu,i+j},\ [G_{\mu,i},G_{\nu,j}]=2L_{\mu+\nu,i+j},
\end{eqnarray}
the even and odd parts of which are $\mathfrak{sl}^{\bar{0}}={\rm span}\{L_{\al,i}\mid\al,i\in\Z\}$ and $\mathfrak{sl}^{\bar{1}}={\rm span}\{G_{\mu,j}\mid j\in\Z,\mu\in \frac12+\Z\}$, respectively. Clearly, $\mathfrak{sl}^{\bar{0}}$ is just the loop Virasoro algebra (cf. \cite{GLZ}) and $\mathfrak{sl}^{\bar{1}}$ is its module. Hence, the loop super-Virasoro algebra can be seen as a super extension of the loop Virasoro algebra. Motivated by the idea from \cite{K2}, we associate a Lie conformal superalgebra with the loop super-Virasoro algebra. It is called the \textit{loop super-Virasoro conformal superalgebra}, denoted by $\mathfrak{cls}$, which is a $\C[\pa]$-module $\mathfrak{cls}=(\bigoplus_{i\in\Z}\C[\pa]L_i)\oplus(\bigoplus_{i\in\Z}\C[\pa]G_i)$ with a $\C[\partial]$-basis $\{L_i,G_i\mid i\in\Z\}$ satisfying the following $\lambda$-brackets:
\begin{eqnarray}
&&[L_{i\ \lambda} L_j]=(\partial+2\lambda) L_{i+j},\ [L_{i\ \lambda} G_j]=(\partial+\frac32\lambda) G_{i+j},\label{d1.2}\\
&&[G_{i\ \lambda} L_j]=(\frac12\partial+\frac32\lambda) G_{i+j}, \ [G_{i\ \lambda} G_j]=2L_{i+j}, \quad\forall i,j\in\Z. \label{d1.3}
\end{eqnarray}
Note that this is an infinite simple Lie conformal superalgebra, containing the loop Virasoro Lie conformal algebra $\mathfrak{clv}=\bigoplus_{i\in\Z}\C[\pa]L_i$ (see \cite{WCY}) as its subalgebra. As pointed out previously, the theory of finite simple Lie conformal (super)algebras has been well developed, but so far there is no systematic theory for the infinite case. So it is interesting and necessary to develop the theory for infinite simple Lie conformal superalgebras. This is one of our motivations for studying the loop super-Virasoro conformal superalgebra. We shall study the superderivation algebra of $\mathfrak{cls}$ and free ($\Z$-graded) $\mathfrak{cls}$-modules of rank $\leq2$.
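Note that the first bracket in (\ref{d1.3}) already follows from (\ref{d1.2}) together with the conformal skew-symmetry axiom $[a_\lambda b]=-(-1)^{|a||b|}[b_{-\lambda-\partial} a]$ recalled in (\ref{m2.2}) below: since $G_i$ is odd and $L_j$ is even, we have
\begin{eqnarray*}
[G_{i\ \lambda} L_j]=-[L_{j\ -\lambda-\partial} G_i]=-\Big(\partial+\frac32(-\lambda-\partial)\Big)G_{i+j}=\Big(\frac12\partial+\frac32\lambda\Big)G_{i+j}.
\end{eqnarray*}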
One interesting aspect is that free ($\Z$-graded) $\mathfrak{cls}$-modules of rank 1 are trivial, which is totally different from the loop Virasoro Lie conformal algebra case (all $V_{a,b}$ and $V_{A,b}$ are its nontrivial conformal modules of rank one); the other interesting aspect is that the even or odd part of a $\Z$-graded free $\mathfrak{cls}$-module of rank two has the form $V_{A,b}$ if and only if the other part is $A_{\frac12,b}$. We remark that an important strategy frequently used in the present paper is to pass from modules over $\mathfrak{cls}$ to modules over $\mathfrak{clv}$.

This paper is organized as follows. In Sect. 2, we collect some facts and notions related to Lie conformal superalgebras. In Sect. 3, we determine conformal superderivations of $\mathfrak{cls}$. Sect. 4 is devoted to the classification of all nontrivial free $\mathfrak{cls}$-modules of rank less than or equal to two. We also determine the irreducibility of these modules and therefore classify all inequivalent irreducible free $\mathfrak{cls}$-modules of rank two. Free $\Z$-graded $\mathfrak{cls}$-modules of rank less than or equal to two are classified in Sect. 5. Moreover, all irreducible submodules of free $\Z$-graded $\mathfrak{cls}$-modules of rank two are completely determined.

Throughout this paper, all vector spaces are assumed to be over the complex field $\C$ and all linear maps are $\C$-linear. The main results of this paper are summarized in Theorems \ref{main3}, \ref{freeofranktwo}, \ref{main4-2} and \ref{main5}.

\section{Preliminaries}
\setcounter{section}{2}\setcounter{equation}{0}
In this section, we recall some notions related to Lie conformal superalgebras and conformal modules from \cite{DK,K1,K2}.

We say that a vector space $U$ is $\Z_2$-graded if $U=U^{\bar0}\oplus U^{\bar1}$; an element $x\in U^{\bar i}$ is called $\Z_2$-homogeneous, and we write $|x|=\bar i$. For any two $\Z_2$-graded vector spaces $U$ and $V$, a linear map $f: U\rightarrow V$ is called homogeneous of degree $\bar i\in\Z_2$ if $f(U^{\bar j})\subseteq V^{\overline{i+j}}$ for all $\bar j\in\Z_2$.

\begin{defi} A Lie conformal superalgebra is a $\Z_2$-graded $\C[\partial]$-module $\mathcal{A}$ endowed with a linear map $\mathcal{A}\otimes\mathcal{A}\rightarrow \C[\lambda]\otimes \mathcal{A}, a\otimes b\mapsto [a_\lambda b]$, called the $\lambda$-bracket, satisfying the following axioms $(a, b, c\in \mathcal{A}):$
\begin{eqnarray}
&&[(\partial a)_\lambda b]=-\lambda[a_\lambda b],\quad [a_\lambda b]=-(-1)^{|a||b|}[b_{-\lambda-\partial} a],\label{m2.2}\\
&&[a_\lambda [b_\mu c]]=[[a_\lambda b]_{\lambda+\mu}c]+(-1)^{|a||b|}[b_\mu [a_\lambda c]].
\end{eqnarray}
\end{defi}
It follows from the axioms in (\ref{m2.2}) that
\begin{eqnarray*}
[(f(\partial)a)_\lambda b]=f(-\lambda)[a_\lambda b]\ \mbox{and}\ [a_\lambda (f(\partial)b)]=f(\partial+\lambda)[a_\lambda b],\quad \forall f(\pa)\in\C[\pa].
\end{eqnarray*}
\begin{defi} A conformal module over a Lie conformal superalgebra $\mathcal{A}$, or an $\mathcal A$-module, is a $\Z_2$-graded $\C[\partial]$-module $V$ endowed with a $\lambda$-action $\mathcal{A}\otimes V\rightarrow \C[\lambda]\otimes V, a\otimes v\mapsto a_\lambda v$, satisfying the following axioms $(a, b \in \mathcal{A}, v\in V):$
\begin{eqnarray*}&&(\partial a)_\lambda v=-\lambda a_\lambda v,\quad a_\lambda (\partial v)=(\partial +\lambda)a_\lambda v,\\
&&[a_\lambda b]_{\lambda+\mu}v=a_\lambda (b_\mu v)-(-1)^{|a||b|}b_\mu (a_\lambda v).
\end{eqnarray*}
\end{defi}
The rank of an $\mathcal A$-module $V$ is defined to be the rank of $V$ as a $\C[\partial]$-module.

\begin{defi}\label{intermod}Let $\mathcal A$ be a Lie conformal superalgebra.
\begin{itemize}\parskip-3pt\item[{\rm (1)}] $\mathcal{A}$ is called {\it $\Z$-graded} if $\mathcal{A}=\oplus_{i\in \Z}{\mathcal{A}}^i$, each ${\mathcal{A}}^i$ is a $\C[\partial]$-submodule and $[{\mathcal{A}}^i\,{}_\lambda\, {\mathcal{A}}^j]\subseteq \mathcal{A}^{i+j}[\lambda]$ for any $i,j\in\Z$.
\item[{\rm (2)}] A conformal module $V$ over $\mathcal A$ is {\it $\Z$-graded} if $V=\oplus_{i\in\Z}V_i$, each $V_i$ is a $\C[\partial]$-submodule and $(\mathcal A^i)_\lambda V_j\subseteq V_{i+j}[\lambda]$ for any $i,j\in\Z$. If each $V_i$ is a free $\C[\partial]$-module of rank $n$, then $V$ is called a free $\Z$-graded $\mathcal A$-module of rank $n$.
\end{itemize}
\end{defi}
Note that the loop super-Virasoro conformal superalgebra $\mathfrak{cls}=(\mathfrak{cls})^{\bar0}\oplus(\mathfrak{cls})^{\bar1}$ is $\Z_2$-graded with $(\mathfrak{cls})^{\bar 0}=\oplus_{i\in\Z}\C[\partial]L_i$ and $(\mathfrak{cls})^{\bar1}=\oplus_{i\in\Z}\C[\partial]G_i$ such that $[{(\mathfrak{cls})^{\alpha}} \ _\lambda (\mathfrak{cls})^{\beta}]\subseteq (\mathfrak{cls})^{\alpha+\beta}[\lambda]$ for any $\alpha,\beta\in\Z_2$; on the other hand, $\mathfrak{cls}=\oplus_{i\in\Z}(\mathfrak{cls})_i$ is also $\Z$-graded with $(\mathfrak{cls})_i=\C[\partial]L_i\oplus\C[\partial]G_i$ for each $i$ in the sense of Definition \ref{intermod}.

\section{Conformal superderivations}
\setcounter{theo}{0}
A homogeneous linear map $D_\lambda:\mathcal{A}\rightarrow \mathcal{A}[\lambda]$ of degree $\bar i\in\Z_2$ is called a \textit{homogeneous conformal superderivation of degree} $\bar{i}$ if the following conditions hold:
\begin{eqnarray*}
D_\lambda(\partial a)=(\partial+\lambda)D_\lambda a,\ D_\lambda[a_\mu b]=[(D_\lambda a)_{\lambda+\mu} b]+(-1)^{\bar i|a|}[a_\mu (D_\lambda b)],\quad \forall a,b\in \mathcal A.
\end{eqnarray*}
We write $D$ instead of $D_\lambda$ for simplicity. Denote the set of all conformal superderivations of degree $\alpha\in\Z_2$ by ${\rm CDer}^\alpha(\mathcal{A})$. Then we call ${\rm CDer(\mathcal{A})}={\rm CDer^{\bar0}(\mathcal{A})}\oplus {\rm CDer^{\bar1}(\mathcal{A})}$ the conformal superderivation algebra of $\mathcal A$ and each element of ${\rm CDer(\mathcal A)}$ a superderivation of $\mathcal A$. For any $a\in \mathcal{A}$, one can easily see that the linear map $({\rm ad}_a)_\lambda: \mathcal{A}\rightarrow \mathcal{A}[\lambda]$ given by $({\rm ad}_a)_\lambda b=[a_\lambda b]$ for $b\in \mathcal{A}$ is a conformal superderivation, which is called an \textit{inner conformal superderivation} of $\mathcal A$. Denote the set of all inner conformal superderivations of $\mathcal A$ by ${\rm CInn(\mathcal A)}$.

Now we are ready to give the main result of this section, which establishes the equality between the two sets ${\rm CDer}(\mathfrak{cls})$ and ${\rm CInn}(\mathfrak{cls})$.

\begin{theo}\label{main3} Every conformal superderivation of $\mathfrak{cls}$ is inner, i.e., {\rm CDer}$(\mathfrak{cls})={\rm CInn}(\mathfrak{cls})$.
\end{theo}
\begin{proof} Take any $D\in {\rm CDer}(\mathfrak{cls})$. For this fixed superderivation $D$ and any $c\in\Z$, define $D^c(x_j)=\pi_{c+j}D(x_j)$ for any $j\in\Z$ and $x_j\in(\mathfrak{cls})_j$, where $\pi_c$ is the natural projection from $\C[\lambda]\otimes \mathfrak{cls}$ to $\C[\lambda]\otimes(\mathfrak{cls})_c$.
Then it is easy to check that $D^c$ is a conformal superderivation of $\mathfrak{cls}$. We assert that each $D^c$ is inner. For this, we only need to consider the case that $D^c$ is $\Z_2$-homogeneous.
\begin{case} $D^c\in {\rm CDer}^{\bar{0}}(\mathfrak{cls})$.
\end{case}
In this case, assume that $$D_\lambda^c(L_i)=f_i(\partial,\lambda)L_{i+c}\quad {\rm and}\quad D_\lambda^c(G_i)=g_i(\partial,\lambda)G_{i+c}$$ for some $f_i(\partial, \lambda), g_i(\partial, \lambda)\in\C[\partial,\lambda]$. Applying $D^c_{\lambda}$ to $[L_{0\ \mu} L_i]=(\partial+2\mu) L_{i}$ and comparing the coefficients of $L_{i+c}$ gives
\begin{equation*}
(\partial+\lambda+2\mu)f_i(\partial,\lambda)=(\partial +2\lambda+2\mu)f_0(-\lambda-\mu,\lambda)+(\partial+2\mu)f_i(\partial+\mu,\lambda).
\end{equation*}
Setting $\mu=0$ in the formula above, we get
\begin{equation*}
f_i(\partial,\lambda)=(\partial+2\lambda)\frac{f_0(-\lambda,\lambda)}\lambda.
\end{equation*}
Similarly, it follows from $[L_{0\ \mu} G_j]=(\partial+\frac32\mu) G_{j}$ that
\begin{equation}\label{m4.1}
(\partial+\lambda+\frac32\mu)g_{j}(\partial,\lambda)=(\partial+\frac{3(\lambda+\mu)}2)f_0(-\lambda-\mu,\lambda) +(\partial+\frac{3\mu}2)g_j(\partial+\mu,\lambda).
\end{equation}
Setting $\mu=0$ in (\ref{m4.1}), one has
\begin{equation*}
g_j(\partial,\lambda)=(\partial+\frac{3\lambda}2)\frac{f_0(-\lambda,\lambda)}{\lambda}.
\end{equation*}
Thus, $D^c_\lambda={\rm ad}_{\frac{f_0(\partial,-\partial)}{-\partial}L_c}$.
\begin{case} $D^c\in {\rm CDer}^{\bar{1}}(\mathfrak{cls})$.
\end{case}
Assume that $D^c_\lambda(L_i)=g_i(\partial,\lambda)G_{i+c}$, $D^c_\lambda(G_i)=f_i(\partial,\lambda)L_{i+c}$. It follows from applying $D^c_{\lambda}$ to $[L_{0\ \mu} L_i]=(\partial+2\mu) L_{i}$ that
\begin{eqnarray*}
(\partial+\lambda+2\mu)g_i(\partial,\lambda) =g_0(-\lambda-\mu,\lambda)(\frac12\partial+\frac32(\lambda+\mu))+g_i(\partial+\mu,\lambda)(\partial+\frac{3\mu}2),
\end{eqnarray*}
from which by setting $\mu=0$ one has
\begin{equation*}
g_i(\partial,\lambda)=(\frac12\partial+\frac32\lambda)\frac{g_0(-\lambda,\lambda)}{\lambda}.
\end{equation*}
Using $[{L_0}\ _\mu G_i]=(\partial+\frac32\mu)G_i$, one has
\begin{eqnarray*}
(\partial+\lambda+\frac{3\mu}2)f_i(\partial,\lambda) =2g_0(-\lambda-\mu,\lambda)+f_i(\partial+\mu,\lambda)(\partial+2\mu),
\end{eqnarray*}
from which setting $\mu=0$ gives
\begin{equation*}
f_i(\partial,\lambda)=\frac{2g_0(-\lambda,\lambda)}{\lambda}.
\end{equation*}
Whence one can see that
\begin{equation*}
D^c_\lambda={\rm ad}_{\frac{g_0(\partial,-\partial)}{-\partial}G_c}.
\end{equation*}
So in either case, $D^c={\rm ad}_{x_c}$ for some $x_c\in (\mathfrak{cls})_c$ is inner, which proves the assertion.

Note from the definition of $D^c$ that $D=\sum_{c\in\Z}D^c$. In particular, $$D(L_0)=\sum_{c\in\Z}{\rm ad}_{x_c}(L_0)=\sum_{c\in\Z, x_c\neq 0}{\rm ad}_{x_c}(L_0),$$ which must be a finite sum by the fact that
# 1BiocCheck BiocCheck encapsulates Bioconductor package guidelines and best practices, analyzing packages and reporting three categories of issues: • ERROR. This means the package is missing something critical and it cannot be accepted into Bioconductor until the issue is fixed. (BiocCheck will continue past an ERROR, thus it is possible to have more than one, but it will exit with an error code if run from the OS command line.) • WARNING. These issues almost always need to be addressed before the package is added to Bioconductor. In the weeks leading up to a Bioconductor release we will ask package authors to fix these issues. • NOTE: Not necessarily something bad, just something we wanted to point out. package authors don’t need to take action on these, but they can. # 2 Using BiocCheck Most commonly you will use BiocCheck from your operating system command line, as R CMD BiocCheck package Where package is either a directory containing an R package, or a source tarball (.tar.gz file). BiocCheck can also be run interactively: library(BiocCheck) BiocCheck("packageDirOrTarball") R CMD BiocCheck takes options which can be seen by running R CMD BiocCheck --help ## Usage: R CMD BiocCheck [options] package ## ## ## Options: ## --no-check-vignettes ## disable vignette checks ## ## --new-package ## enable checks specific to new packages ## ## --no-check-bioc-views ## disable biocViews-specific checks (for non-BioC packages) ## ## --build-output-file=BUILD-OUTPUT-FILE ## file containing R CMD build output, for additional analysis ## ## -h, --help ## Show this help message and exit Note that the --new-package option is turned on in the package builder attached to the Bioconductor package tracker, since this is almost always used to build new packages that have been submitted. # 3 When should BiocCheck be run Run BiocCheck after running R CMD check. Note that BiocCheck is not a replacement for R CMD check; it is complementary. It should be run after R CMD check completes successfully. # 4 Installing BiocCheck BiocCheck should be installed as follows: source("https://bioconductor.org/biocLite.R") biocLite("BiocCheck") library(BiocCheck) The package loading process attempts to install a script called BiocCheck (BiocCheck.bat on Windows) into the bin directory of your R installation. If it fails to do that (most likely due to insufficient permissions), it will tell you, saying something like: Failed to copy the "script/BiocCheck" script to /Library/Frameworks/R.framework/Resources/bin. If you want to be able to run 'R CMD BiocCheck' you'll need to copy it yourself to a directory on your PATH, making sure it is executable. See the BiocCheck vignette for more information. You can fix the problem by following these instructions (noting that R may live in a different directory on your system than what is shown above). If you don’t have permission to copy this file to the bin directory of your R installation, you can, as noted, copy it to any directory that’s in your PATH. For assistance modifying your PATH, see this link (Windows) or this one (Mac/Unix). If you manually copy this file to a directory in your PATH that is not your R bin directory, you’ll continue to see the above message when (re-)installing BiocCheck but you can safely ignore it. # 5 Interpreting BiocCheck output Actual BiocCheck output is shown below in bold. ## 5.1 Dependency Checks Checking if other packages can import this one… • Checks to make sure that there will be no import problems if another package imports your package (ERROR). 
Checking to see if we understand object initialization… Reports if it can’t figure out how objects were initialized (NOTE).

## 5.2 Vignette Checks

Can be disabled with --no-check-vignettes.

Checking vignette directory… Only run if your package is a software package (as determined by your biocViews), or if the package type cannot be determined.

• Checks that the vignettes directory exists (ERROR).
• Checks that the vignettes directory only contains vignette sources (.Rmd, .Rnw, .Rrst, .Rhtml, .Rtex) (ERROR).
• Checks whether, while checking a directory (not a tarball), vignette sources exist in inst/doc (ERROR).
• Checks whether the number of eval=FALSE chunks is more than 50% of the total (ERROR).

Checking whether vignette is built with ‘R CMD build’… Only run when --build-output-file is specified. Analyzes the output of R CMD build to see if vignettes are built. It simply looks for a line that starts: * creating vignettes ... If this line is not present, it means R has not detected that a vignette needs to be built (ERROR). If you have vignette sources yet still get this message, there could be several causes: • Missing or invalid VignetteBuilder line in the DESCRIPTION file. • Missing or invalid VignetteEngine line in the vignette source. See knitr’s package vignette page, or the Non-Sweave vignettes section of “Writing R Extensions” for more information.

## 5.3 Version Checks

• Checking new package version number… Checks that the version number is valid for a new Bioconductor package. This is only done if the --new-package option is supplied (ERROR).
• Checking version number validity… Checks that the version is valid, that its format is correct, and that the version number is appropriate for this version of Bioconductor (ERROR).
• Checking R Version dependency… If you specify an R version in the Depends: field of your DESCRIPTION file, BiocCheck checks to make sure that the R version specified matches the version currently used by Bioconductor. This prevents the package from being used in earlier versions of R, which is not recommended and is a frequent cause of user confusion (WARNING).
• Checking for version number mismatch… Checks that the package version specified in your package tarball (if you are checking a tarball) matches the value of the Version: field in your DESCRIPTION file. If it doesn’t, it usually means you did not build the tarball with R CMD build (ERROR).

For more information on package versions, see the Version Numbering HOWTO.

## 5.4 biocViews Checks

These can be disabled with the --no-check-bioc-views option, which might be useful when checking non-Bioconductor packages (since biocViews is a concept unique to Bioconductor).

Checking biocViews…

• Checking that biocViews are present… Checks that a biocViews field is present in the DESCRIPTION file (ERROR).
• Checking for non-trivial biocViews… Checks that biocViews are more specific than the top-level terms Software, AnnotationData, or ExperimentData (ERROR).
• Checking biocViews validity… Checks for valid views and displays invalid ones. Note that biocViews are case-sensitive (WARNING).
• Checking that biocViews come from the same category… Checks that all views come from the same parent (one of Software, AnnotationData, ExperimentData) (WARNING).
• Checking for recommended biocViews… Uses the recommendBiocViews() function from biocViews to automatically suggest some biocViews for your package.

More information about biocViews is available in the Using biocViews HOWTO.
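For example, the DESCRIPTION file of a hypothetical RNA-seq software package might declare

biocViews: GeneExpression, RNASeq, Sequencing

All three terms are existing Software biocViews, so a field like this satisfies the validity, non-triviality, and same-category checks described above; you can confirm any terms you pick against the current vocabulary, for instance with recommendBiocViews().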
## 5.5 Build System Compatibility Checks The Bioconductor Build System (BBS) is our nightly build system and it has certain requirements. Packages which don’t meet these requirements can be silently skipped by BBS, so it’s important to make sure that every package meets the requirements. Checking build system compatibility… • Checking for blank lines in DESCRIPTION… Checks to make sure there are no blank lines in the DESCRIPTION file (ERROR). • Checking for whitespace in DESCRIPTION field names… Checks to make sure there is no whitespace in DESCRIPTION file field names (ERROR). • Checking that Package field matches dir/tarball name… Checks to make sure that Package field of DESCRIPTION file matches directory or tarball name (ERROR). • Checking for Version field… Checks to make sure a Version field is present in the DESCRIPTION file (ERROR). • Checking for valid maintainer… Checks to make sure the DESCRIPTION file has either a Maintainer field, or a valid Authors@R field which resolves to a valid Maintainer (ERROR). A valid Authors@R field consists of: * A valid R object of class person. * Only one person with the cre (creator) role. * That person must have a syntactically valid email address. ## 5.6 Unit Test Checks Checking unit tests… We strongly recommend unit tests, though we do not at present require them. For more on what unit tests are, why they are helpful, and how to implement them, read our Unit Testing HOWTO. At present we just check to see whether unit tests are present, and if not, urge you to add them (NOTE). ## 5.7 Native Routine Registration Checks Checking native routine registration… Calls to native (C or Fortran) code
expansion available. • Redesign the case so it's easier to clean out the CPU fan. • Move the headphone audio jack to the rear. • Switch to a full HDMI port instead of mini HDMI. • Offer an all-black case again, instead of two-tone. • Get Intel and Google together to make an official Chrome OS installer for NUCs. I'd say Intel needs to keep on keeping on.

I'm not sure how this isn't powerful enough to replace a low-powered desktop. I've been using one of the 3rd generation i3 models as an HTPC since they were released. Besides watching stuff, I've on many occasions used it to play 4 player split screen racing games (Blur, Sonic & Sega All-Stars, Bang Bang Racing) without a hitch. I use Microsoft's wireless Xbox controller for Windows, which utilizes only 1 USB port per 4 controllers, along with a Logitech TK820 for wireless mouse and keyboard on the couch. For what this system is, its gaming prowess is extremely impressive to me. As a barebones kit, I also found it very well made and easy to open up and install pieces in. This is a great product. It's very small and also light enough to easily transport around. Easy to hook up at a friend's place and start playing. I comfortably carry it in my laptop bag with my Xbox controllers when I want to transport it. With portable gaming consoles like the Nvidia Shield out there, you can't really call this thing bulky =P

1) Fanless. I mean seriously. I know there are Akasa cases, but they aren't so easy to get, and moving it out of the NUC case is not something intended for the average user. 2) Optional form factor to support a full 3.5 inch hard drive? Maybe even a SoHo NAS friendly version with 2 SATA ports and 2 HDDs? 3) eSATA port? 4) Full size HDMI, or at least a cheap adapter from mini-HDMI included? 5) Better QA? A lot of complaints (by high Intel standards) about malfunctioning devices. 6) Price. After one buys RAM, storage, wireless and a Windows license, you may question whether getting a notebook, which has a screen, keyboard, and battery, wouldn't be a better option moneywise, if size is not so critical.

And meanwhile I use the i3 version running OpenELEC XBMC for my home theater. Everything is perfect in the setup, beating the Raspberry Pi in terms of response and speed. The only missing feature is CEC....

I have 3 NUCs. Two i3's running XBMC, which I'm very happy with, and one i5 with a 1 TB SSD running Windows Media Center. The i5 one does an "ok" job of being a Steam streaming front end, but could use a little more oomph. I think having options for powerful external video cards would be useful.

Which i5 do you have? Although I don't personally have the non-Haswell one, I have heard that the difference in graphics/video performance is night and day. It's why I waited for the i5-4250U and went with it. Also agree that discrete video cards would be great to support, but I doubt you're going to see it. When I asked Gigabyte and Intel about it, the biggest issue mentioned was cooling in that form factor. Wonder if that has changed with Devil's Canyon, as they've been overclocked to 4.6GHz fanless.

I have the Core i5-4250U. What I found is that the dynamic allocation of VRAM leaves something to be desired. If you have 6+ GB installed, I would allocate the full 1 GB to video in the BIOS. That should fix any video issues you were having.

I went with a BRIX Pro i5. The main reason was the quad core CPU. It's great for a mini lab box: running a few Hyper-V VMs with 2 EVO SSDs (one mSATA). I upgraded the wifi to AC. In the next version of these boxes I would want them to support 32GB of RAM. How's the cooling?
#1 concern I hear is that when the fan runs at full speed it's noticeable. The Haswell NUC (i5-4250U) wasn't.

I purchased one of the i3's when they initially came out to be used as a replacement home theater machine. It has been the best purchase I have ever made. The machine works flawlessly without any issues and is quiet as a mouse. Intel skimped on a few things though. NO second USB header was ever installed even though the port is there. NO CEC support on the HDMI, big drawback. NO internal hard drive. Big drawback! And what is up with no power cord! You have got to be kidding me!

As Intel has stated before, the kits are designed to be sold globally. No power cord means they can ship anywhere without any regional concerns. (SKUs with the 1 at the end should have it though.) Mine came with a power cord from Amazon.

I tried to get a review unit for the website I work with (Popzara) a while ago, mostly out of long-term curiosity, but I never believed that the performance/specs matched the consumer value for money when they were introduced with Ivy Bridge processors. It's a bit of a shame, because the NUC works for people adamant about HTPC/XBMC duty or workstation/Hackintosh use, or at least that's the impression I got from a brief hands-on at CES earlier this year. I wish I had more than a couple hours' time playing with the NUC, because with the right combination of ports installed (ideally DisplayPort 1.2/Thunderbolt/USB 3.0) it would definitely be more of a standout. In fact, most tech people I know aren't even aware that Intel makes something like the NUC.

They could build two projectors into it. Suppose the first of them projects the screen, and the second (on the other side) projects the keyboard. A touchpad could also be placed on the top panel.

I have purchased three of these as XBMC/OpenELEC machines. They work great. I also have a Raspberry Pi XBMC/OpenELEC machine and it also works great. The big difference between the two is performance. As the article states, I think the NUC is a niche market. It would probably make a good replacement desktop machine, but by the time you add the memory, drive, wireless card, etc., you can probably get an equivalent or even cheaper machine from Dell, HP, etc. The big thing for me is the size. Since it's small and very quiet, it makes a great machine for the media room/family room, etc.

If this can run Linux + Java + Mono, I would hands down buy this for a small build server for my open source projects.

I posted this in the NUC forum but haven't received any bites so far :) NUC DC3217BY HTPC with 12TB Parity Storage Space - Please bring back Thunderbolt!! Configuration: NUC DC3217BY with 16GB RAM, Windows 8.1 with iTunes (yes, I have had iTunes since the beginning), Thunderbolt daisy chain >> LaCie 5big with 5x3TB drives >> LaCie Little Big Disk with 2x32GB Intel X25E SSDs >> Apple Thunderbolt to Gigabit Ethernet Adaptor. We have done a lot of interesting things with NUCs over the years at my work, but this recent addition to my home was a fun exercise using leftover "bits" from around the office. We never found a use for the LaCie 5big because Macs have given up on RAID 5. The 5x3TB Hitachi drives were left over after we upgraded our NAS to 4TB drives. The Little Big Disk was my travelling editing drive. The two old (and once VERY expensive) Intel X25E 32GB SLC SSDs have been gathering dust for what seems like ages. For those of you who have never used Microsoft Storage Spaces, now is probably a good time to start.
With the release of Windows 8.1 (and Server 2012 r2) Microsoft introduced the concept of using SSD’s as write-back cache which has improved the write performance of their Parity Spaces considerably. All I had to do
import dash import dash_core_components as dcc import dash_html_components as html import numpy as np import pandas as pd import plotly.express as px from datetime import datetime as dt import plotly.graph_objects as go from dateutil.relativedelta import * from database import fetch_all_crime_as_df # Definitions of constants. This projects uses extra CSS stylesheet at `./assets/style.css` COLORS = ['rgb(67,67,67)', 'rgb(115,115,115)', 'rgb(49,130,189)', 'rgb(189,189,189)', 'rgb(240,240,240)'] external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css', '/assets/style.css'] # Define the dash app first app = dash.Dash(__name__, external_stylesheets=external_stylesheets) # Define component functions def page_header(): """ Returns the page header as a dash `html.Div` """ return html.Div(id='header', children=[ html.Div([html.H3('Visualization with datashader and Plotly')], className="ten columns"), html.A([html.Img(id='logo', src=app.get_asset_url('github.png'), style={'height': '35px', 'paddingTop': '7%'}), html.Span('MLEers', style={'fontSize': '2rem', 'height': '35px', 'bottom': 0, 'paddingLeft': '4px', 'color': '#a3a7b0', 'textDecoration': 'none'})], className="two columns row", href='https://github.com/LACrimeMap/LACrimeMap') ], className="row") def description(): """ Returns overall project description in markdown """ return html.Div(children=[dcc.Markdown(''' # Crime rates in Los Angeles [About Page](https://docs.google.com/document/d/1zEYKkCu6WQKGqgfMlu1XxCjlVlqwyZc5B_qrF5GeYf8/edit?usp=sharing) ## Team: Weihao Zhou, Kaiwen Yang, Xu Han, Laura McCallion. Predicting criminal activity is a fundamental challenge to police across the country. Attempting to adjust policy to crime rates haphazardly can lead to innumerable issues, including over-policing of disadvantaged neighborhoods, failure to protect citizens, or a loss of trust between citizens and the police force. When using algorithmic methods to asses crime rates, clear and well-understood data is critical to avoiding the pitfalls that, when they occur in an institution as significant as criminal justice, can cause significant harms. To this end, **LA Crime Map is an exploratory tool that can be used to visualize trends in LA Crime data.** The data can be explored using the quantity of crime, the type of crime, specific areas, and geographic data. ## Data Source LA Crime Rate analysis uses data from [Los Angeles Open Data](https://data.lacity.org/). The [data source](https://data.lacity.org/A-Safe-City/Arrest-Data-from-2010-to-Present/yru6-6re4) **updates weekly**. ''', className='eleven columns', style={'paddingLeft': '5%'}), html.Div(children=[ html.Img(id ='angel' , src="https://i.pinimg.com/originals/e3/a8/cb/e3a8cbd6c88c3134209b95e999860ce8.jpg", className='five columns' # style={'height': '100px', 'width': '70px', 'paddingTop': '7%'} ), html.Img(id = 'cat', src=app.get_asset_url('p1.png'), className='six columns'), ], className='row', style={'textAlign': 'center', 'paddingLeft': '5%'}) ], className="row") def what_if_description(): """ Returns description of top five crime incidences in different timeframe - the interactive component """ return html.Div(children=[ dcc.Markdown(''' ## Explore crime rates trend in the last two years Does a particular month/season/year see more crimes than others? Use this tool to explore how the nunber of top five crimes changes from month to month. The total count of crime for each month will be displayed. Day of the month is not considered. 
The return result will include the start month and end month. ''', className='eleven columns', style={'paddingLeft': '5%'}) ], className="row") def what_if_tool(): """ Returns the What-If tool as a dash `html.Div`. The view is a 8:3 division between demand-supply plot and rescale sliders. """ return html.Div(children=[ html.Div(children=[dcc.Graph(id='what-if-figure')], className="row"), html.Div(children=[ html.H5("Choose a time frame"), html.Div(children=[ dcc.DatePickerRange(id='my-date-picker-range', min_date_allowed=dt(2018, 1, 1), max_date_allowed=dt(2019, 12, 13), initial_visible_month=dt(2019, 10, 1), start_date = dt(2018,12,1), end_date=dt(2019, 8, 1)) ], style={'width':'40%'}), ], className='row', style={'marginLeft': 5}), #'marginTop': '5%' ], className='row eleven columns') def crime_map_description(): """ Returns the description of crime map. """ return html.Div(children=[ dcc.Markdown(''' ## Crime map This map visualizes crime rates in each district. User can choose crime type (violent crimes or non-violent crimes) and time range. Also, user can enlarge the map to see locations of crimes. Crimes in different districts have different colors. The darker the shade, the more crimes in that region. ''', className='eleven columns', style={'paddingLeft': '5%', 'marginTop': '1%'}) ], className="row") def crime_map_tool(): """ Returns the What-If tool as a dash `html.Div`. The view is a 8:3 division between demand-supply plot and rescale sliders. """ return html.Div(children=[ html.Div(children=[dcc.Graph(id='what-if-crime')], className='ten columns'), html.Div(children=[ html.H5("Choose a time frame"), html.Div(children=[ dcc.DatePickerRange(id='crime-date-picker-range', min_date_allowed=dt(2018, 1, 1), max_date_allowed=dt(2019, 12, 13), initial_visible_month=dt(2019, 10, 1), start_date = dt(2019,11,1), end_date=dt(2019, 11, 30)) ], style={'width':'40%'}), html.Div(children=[ dcc.Dropdown( id='crime-dropdown', options=[ {'label': 'Non-Violent Crimes', 'value': 'non_violent'}, {'label': 'Violent Crimes', 'value': 'violent'}], value='non_violent') ,html.Div(id='dd-output-container')], style={'width':'100%'}), ], className='three columns', style={'marginLeft': 5, 'marginTop': '5%'}), ], className='row eleven columns') def development_summary(): """ Returns the text of architecture summary of the project. """ return html.Div(children=[ dcc.Markdown(''' ## Development Process This project uses MongoDB as the database. All data acquired are stored in raw form to the database (with de-duplication). An abstract layer is built in `database.py` so all queries can be done via function call. For a more complicated app, the layer will also be responsible for schema consistency. A `plot.ly` & `dash` app is serving this web page through. Actions on responsive components on the page is redirected to `app.py` which will then update certain components on the page. ''', className='row eleven columns', style={'paddingLeft': '5%','marginTop': '5%'}), dcc.Markdown(''' ''') ], className='row') def data_acquisition_summary(): """ Returns the text of data acquisition technique of the project. 
""" return html.Div(children=[ dcc.Markdown(''' ## Data Acquisition Steps: * Use API to query history records from January 1st, 2018 and load it into MongoDB (load_data.py) * data_acquire.py will check every 15 seconds and call the function upsert_crime from database.py * Before the app server is up, fetch_all_crime_as_df will be called and the results will be cached to reduce access latency * app server will run alongside with data_acquire.py to capture real-time updates, also providing an interface for the user to explore crime rate trend Links: * [ETL_EDA notebook](https://github.com/LACrimeMap/LACrimeMap/blob/master/ETL_EDA.ipynb) * [ETL_EDA pdf](https://drive.google.com/open?id=1-OkSnCJjlHQU7cF4zV9CqeO8U-dKfXDv) ''', className='row eleven columns', style={'paddingLeft': '5%','marginTop': '5%'}), dcc.Markdown(''' ''') ], className='row') def enhancement_summary(): """ Returns the text of time series model. """ return html.Div(children=[ dcc.Markdown(''' ## Enhancement #### Prediction Develop time series models to predict the number of crimes in a particular district or whole city. Web-users could change inputs (a particular district or whole city) to visualize the prediction result. In our notebook where we used the whole city as an example, the baseline MSE is 7017, while our model achieves 4644, which is better than the baseline model. * The [enhancement notebook](https://github.com/LACrimeMap/LACrimeMap/blob/master/Enhancement.ipynb) * The [enhancement pdf](https://drive.google.com/open?id=1V77GMJvCvnekd5IfGeu3yt6cAzufEJSM) #### GCP Deployment Website: http://34.67.248.169:1050/ ''', className='row eleven columns', style={'paddingLeft': '5%','marginTop': '5%'}), dcc.Markdown(''' ''') ], className='row') def next_step(): """ Returns the text of possible next improvement. """ return html.Div(children=[ dcc.Markdown(''' ## Next steps * Possibly create index for faster query and decrease the website loading latency * Add exception handling and consider situations when interactive plots may break * Add another interactive dashboard for crime numbers prediction, allows the users to choose area (1-21) or whole city for prediction ''', className='row eleven columns', style={'paddingLeft': '5%','marginTop': '5%'}), dcc.Markdown(''' ''') ], className='row') def reference(): """ Returns the text of possible next improvement. 
""" return html.Div(children=[ dcc.Markdown(''' ## Reference * [Kaggle time series forecast](https://www.kaggle.com/ambarish/eda-lacrimes-maps-timeseriesforecasts-xgboost) * [Kaggle global spatial autocorrelation](https://www.kaggle.com/ghannay/spatial-autocorrelation-of-la-crime) ''', className='row eleven columns', style={'paddingLeft': '5%','marginTop': '5%'}), dcc.Markdown(''' ''') ], className='row') # Sequentially add page components to the app's layout def dynamic_layout(): return html.Div([ page_header(), html.Hr(), description(), #dcc.Graph(id='stacked-trend-graph', figure=static_stacked_trend_graph(stack=False)), what_if_description(), what_if_tool(), crime_map_description(), crime_map_tool(), development_summary(), data_acquisition_summary(), enhancement_summary(), next_step(), reference() ], className='row', id='content') # set layout to a function which updates upon reloading app.layout = dynamic_layout @app.callback( dash.dependencies.Output('what-if-figure', 'figure'), [dash.dependencies.Input('my-date-picker-range', 'start_date'), dash.dependencies.Input('my-date-picker-range', 'end_date')]) def what_if_handler(startdate, enddate): """Changes the display graph of crime rates""" df = fetch_all_crime_as_df(allow_cached=True) if df is None: return go.Figure() c = df.groupby(['grp_description','month']).count() crime = ['Miscellaneous Other Violations', 'Narcotic Drug Laws', 'Aggravated Assault', 'Driving Under Influence', 'Other Assaults'] start = pd.Timestamp(startdate) end = pd.Timestamp(enddate) start = pd.Timestamp(dt(start.year, start.month, 1)) end = pd.Timestamp(dt(end.year, end.month, 1)) month_range_num = round(((end - start).days)/30) test_axis = [start + relativedelta(months=+i) for i in range(month_range_num + 1)] title = 'Crime counts of top five categories' fig = go.Figure() for i, s in enumerate(crime): count_array = c.loc[s]['rpt_id'] #print(count_array) count = [count_array[x] for x in test_axis] fig.add_trace(go.Scatter(x=test_axis, y=count, mode='lines', name=s, line={'width': 2, 'color': COLORS[i]}, stackgroup=False)) fig.update_layout(template='plotly_dark', title=title, plot_bgcolor='#23272c', paper_bgcolor='#23272c', yaxis_title='Number of crimes', xaxis_title='Month') return fig @app.callback( dash.dependencies.Output('what-if-crime', 'figure'), [dash.dependencies.Input('crime-date-picker-range', 'start_date'), dash.dependencies.Input('crime-date-picker-range', 'end_date'), dash.dependencies.Input('crime-dropdown', 'value'),]) def crime_handler(startdate, enddate, crimetype): """Changes the display graph of crime rates"""
\section{Preliminaries}\label{s.pre} Consider a 4-manifold $M$ and a principal bundle $P$ over $M$, with structure group $G$. We assume that $G$ is compact and semi-simple. Without loss of generality we will assume that $G$ is a Lie subgroup of $U(\bar{N})$, $\bar{N} \in \mathbb{N}$ and $P \rightarrow M$ is a trivial bundle. We will identify the Lie algebra $\mathfrak{g}$ of $G$ with a Lie subalgebra of the Lie algebra $\mathfrak{u}(\bar{N})$ of $U(\bar{N})$ throughout this article. Suppose we write $\Tr \equiv \Tr_{{\rm Mat}(\bar{N}, \mathbb{C})}$. Then we can define a positive, non-degenerate bilinear form by \beq \langle A, B \rangle = -\Tr_{{\rm Mat}(\bar{N}, \mathbb{C})}[AB] \label{e.i.2} \eeq for $A,B \in \mathfrak{g}$. Let $\{E^\alpha\}_{\alpha=1}^N$ be an orthonormal basis in $\mathfrak{g}$, which will be fixed throughout this article. The vector space of all smooth $\mathfrak{g}$-valued 1-forms on the manifold $M$ will be denoted by $\mathcal{A}_{M, \mathfrak{g}}$. Denote the group of all smooth $G$-valued mappings on $M$ by $\mathcal{G}$, called the gauge group. The gauge group induces a gauge transformation on $\mathcal{A}_{M, \mathfrak{g}}$, $\mathcal{A}_{M, \mathfrak{g}} \times \mathcal{G} \rightarrow \mathcal{A}_{M, \mathfrak{g}}$ given by \beq A \cdot \Omega := A^{\Omega} = \Omega^{-1}d\Omega + \Omega^{-1}A\Omega \nonumber \eeq for $A \in \mathcal{A}_{M, \mathfrak{g}}$, $\Omega \in \mathcal{G}$. The orbit of an element $A \in \mathcal{A}_{M, \mathfrak{g}}$ under this operation will be denoted by $[A]$ and the set of all orbits by $\mathcal{A}/\mathcal{G}$. Let $\Lambda^q(T^\ast M)$ be the $q$-th exterior power of the cotangent bundle over $M$. Fix a Riemannian metric $g$ on $M$ and this in turn defines an inner product $\langle \cdot, \cdot \rangle_q$ on $\Lambda^q(T^\ast M)$, for which we can define a volume form $d\omega$ on $M$. This allows us to define a Hodge star operator $\ast$ acting on $k$-forms, $\ast: \Lambda^k(T^\ast M) \rightarrow \Lambda^{4-k}(T^\ast M)$ such that for $u, v \in \Lambda^k(T^\ast M)$, we have \beq u \wedge \ast v = \langle u, v \rangle_q\ d\omega. \label{e.x.7} \eeq An inner product on the set of smooth sections $\Gamma(\Lambda^k(T^\ast M))$ is then defined as \beq \langle u, v \rangle = \int_M \langle u, v \rangle_q\ d\omega. \label{e.x.7a} \eeq Given $u \otimes E \in \Lambda^q(T^\ast M) \otimes \mathfrak{g}$, we write \beq |u \otimes E|^2 = -\Tr\left[ E \cdot E \right]\ u \wedge \ast u = -\Tr\left[ E \cdot E \right]\langle u, u \rangle_q\ d\omega. \nonumber \eeq Hence for $A \in \mathcal{A}_{M, \mathfrak{g}}$, the Yang-Mills action is given by \beq S_{YM}(A) = \int_{M} \left|dA + A \wedge A \right|^2. \label{e.c.1} \eeq Note that this action is invariant under gauge transformations. Let $C$ be a simple closed curve in the manifold $M$. The holonomy operator of $A$, computed along the curve $C$, is given by \beq \mathcal{T} \exp \left[ \int_{C} A\right], \nonumber \eeq whereby $\mathcal{T}$ is the time ordering operator. See Definition \ref{d.t.3} for the definition of $\mathcal{T}$. It is of interest to make sense of the following path integral, \beq \frac{1}{Z}\Tr\int_{A \in \mathcal{A}_{M, \mathfrak{g}} /\mathcal{G}} \mathcal{T} \exp \left[ \int_{C} A\right] e^{-\frac{1}{2}S_{YM}(A)}\ DA, \label{e.ym.1} \eeq whereby $DA$ is some Lebesgue type of measure on the space of $\mathfrak{g}$-valued 1-forms, modulo gauge transformations and \beq Z = \int_{A \in \mathcal{A}_{M, \mathfrak{g}} /\mathcal{G}}e^{-\frac{1}{2}S_{YM}(A)}\ DA. 
\nonumber \eeq

\begin{notation}
Let $\Lambda^p(\bR^n)$ be the $p$-th exterior power of the vector space $\bR^n$, $n=3,4$. We also denote the smooth sections of a bundle $P$ by $\Gamma(P)$. In this article, when $n=3$, $p=1$; and when $n=4$, $p=2$.

From now on, we only consider $M = \bR^4$ and take the principal bundle $P$ over $\bR^4$ to be the trivial bundle. On $\bR^4$, fix the coordinate axes and choose global coordinates $\{x^0, x^1, x^2, x^3\}$, and let $\{e_a\}_{a=0}^3$ be the standard orthonormal basis in $\bR^4$. We will also choose the standard Riemannian metric on $\mathbb{R}^4$. Now let $T^\ast\bR^4 \rightarrow \bR^4$ denote the trivial cotangent bundle over $\bR^4$, i.e. $T^\ast \bR^4 \cong \bR^4 \times \Lambda^1(\bR^4)$, and let $\Lambda^1(\bR^3)$ denote the subspace in $\Lambda^1(\bR^4)$ spanned by $\{dx^1, dx^2, dx^3\}$. There is an obvious inner product defined on $\Lambda^1(\bR^3)$, i.e. $\langle dx^i, dx^j \rangle = 0$ if $i \neq j$ and 1 otherwise, which it inherits from the standard metric on $T^\ast \bR^4$.
\end{notation}

Using axial gauge fixing, every $A \in \mathcal{A}_{\bR^4, \mathfrak{g}} /\mathcal{G}$ can be gauge transformed into a $\mathfrak{g}$-valued 1-form of the form $A = \sum_\alpha \sum_{j=1}^3a_{j, \alpha} \otimes dx^j \otimes E^\alpha$, subject to the conditions
\beq a_{1,\alpha}(0, x^1, 0, 0) = 0,\ a_{2, \alpha}(0, x^1, x^2, 0)=0,\ a_{3,\alpha}(0, x^1, x^2, x^3) = 0. \nonumber \eeq
Hence it suffices to consider the Yang-Mills integral in Expression (\ref{e.ym.1}) to be over the space of $\mathfrak{g}$-valued 1-forms of the form $A = \sum_\alpha \sum_{j=1}^3a_{j, \alpha} \otimes dx^j \otimes E^\alpha$, whereby $a_{j,\alpha}: \bR^4 \rightarrow \bR$ is smooth. Its curvature is then given by
\begin{align}
dA + A \wedge A =& \sum_\alpha\sum_{1\leq i<j \leq 3}a_{i;j, \alpha} \otimes dx^i \wedge dx^j \otimes E^\alpha + \sum_{\alpha,\beta}\sum_{1 \leq i<j\leq 3}a_{i,\alpha}a_{j,\beta} \otimes dx^i \wedge dx^j \otimes [E^\alpha, E^\beta] \nonumber \\
&+ \sum_\alpha\sum_{j=1}^3 a_{0:j, \alpha} \otimes dx^0 \wedge dx^j \otimes E^\alpha, \nonumber
\end{align}
for $a_{i;j,\alpha} := (-1)^{ij}[\partial_i a_{j, \alpha} - \partial_j a_{i, \alpha}]$, $a_{0:j, \alpha} := \partial_0 a_{j, \alpha}$.

\begin{notation}\label{n.n.4}
Note that $\Lambda^2(T^\ast \bR^4) \cong \bR^4 \times \Lambda^2(\bR^4)$. Using the global coordinates $\{x^0, x^1, x^2, x^3\}$, we fix an orthonormal basis $\{dx^1 \wedge dx^2, dx^3 \wedge dx^1, dx^2 \wedge dx^3, dx^0 \wedge dx^1, dx^0 \wedge dx^2, dx^0 \wedge dx^3\}$ in $\Lambda^2 (\bR^4)$. Using the standard metric on $\bR^4$, the corresponding volume form is given by $d\omega = dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3$. From the Hodge star operator and the above volume form, we can define an inner product on the set of smooth sections $\Gamma(\Lambda^2( T^\ast \bR^4))$ as in Equation (\ref{e.x.7a}). For $i,j,k = 1,2,3$, we will define $\epsilon_{ijk} = (-1)^{|\sigma(ijk)|}$ if all are distinct, where $\sigma(ijk)$ is a permutation in $S_3$ and $|\sigma(ijk)|$ is the number of transpositions; and $\epsilon_{ijk} = 0$ otherwise.
\end{notation}

\begin{defn}
Define
\beq c_{\gamma}^{\alpha \beta} = -\Tr\left[E^\gamma [E^\alpha, E^\beta] \right].
\nonumber \eeq \end{defn} Then, \begin{align} dA &+ A \wedge A \nonumber \\ =& \sum_\gamma\bigg[ \sum_{1\leq i<j \leq 3}a_{i;j, \gamma}\otimes dx^i \wedge dx^j +\sum_{\alpha<\beta}\sum_{1\leq i<j\leq 3}a_{i,\alpha}a_{j,\beta} c_\gamma^{\alpha \beta} \otimes dx^i \wedge dx^j \nonumber \\ &\hspace{8cm}+ \sum_{j=1}^3 a_{0:j, \gamma}\otimes dx^0 \wedge dx^j \bigg]\otimes E^\gamma. \nonumber \end{align} Thus, \begin{align} \int_{\bR^4}|dA + A \wedge A|^2 =& \sum_{i<j}\int_{\bR^4}\bigg[ \sum_{\alpha}a_{i;j, \alpha}^2 + \sum_\gamma\sum_{\alpha< \beta, \hat{\alpha}< \hat{\beta}}a_{i,\alpha}a_{j,\beta}a_{i,\hat{\alpha}}a_{j,\hat{\beta}}c_{\gamma}^{\alpha\beta}c_\gamma^{\hat{\alpha}\hat{\beta}} \nonumber \\ &+2\sum_{\alpha< \beta, \gamma}a_{i;j,\gamma}a_{i,\alpha}a_{j,\beta}c_\gamma^{\alpha\beta} \bigg]\ d\omega+ \sum_j\int_{\bR^4}\sum_{\alpha} a_{0:j,\alpha}^2\ d\omega. \label{e.ym.2} \end{align} It is conventional wisdom to interpret $\exp\left[ -\frac{1}{2}\sum_{i<j}\int_{\bR^4}d\omega\ \sum_{\alpha}\left( a_{i;j, \alpha}^2 + a_{0:j,\alpha}^2 \right) \right]DA$ as a Gaussian measure. \subsection{Abelian Case}\label{ss.ac} Consider first a 2-dimensional Euclidean space $\bR^2$, with Abelian group $G = U(1)$, and $\mathfrak{u}(1) \cong \bR \otimes \sqrt{-1}$. Let $S$ be a surface with boundary $C = \partial S$. Write $A = \sum_{i=1}^2 A_i \otimes dx^i \in \Gamma(\Lambda^1(T^\ast\bR^2))$, $A_i: \bR^2 \rightarrow \bR$. Using Stokes' Theorem, \begin{align} \int_C \sum_{i=1}^2 A_i \otimes dx^i =& \int_{\partial S} \sum_{i=1}^2 A_i dx^i = \int_S dA = \int_{\bR^2} dA \cdot1_S = \left\langle dA, 1_S \right\rangle. \nonumber \end{align} Here, $\langle \cdot, \cdot \rangle$ is the $L^2$ inner product on the space of Lebesgue integrable functions on $\bR^2$. Thus, the Yang-Mills path integral becomes \beq \frac{1}{\int_A e^{-|dA|^2/2}DA}\int_{A} e^{\sqrt{-1}\langle dA, 1_S \rangle}e^{-\frac{1}{2}|dA|^2}DA. \nonumber \eeq Now, make a heuristic change of variables, $A \mapsto dA$, hence we have \begin{align*} \frac{1}{\det{d^{-1}}\int_A e^{-|dA|^2/2}D[dA]}&\det{d^{-1}}\int_{A} e^{\sqrt{-1}\langle dA, 1_S \rangle}e^{-\frac{1}{2}|dA|^2}D[dA] \\ &= \frac{1}{\int_A e^{-|dA|^2/2}D[dA]}\int_{A} e^{\sqrt{-1}\langle dA, 1_S \rangle}e^{-\frac{1}{2}|dA|^2}D[dA]. \end{align*} This is a Gaussian integral of the form given in Lemma \ref{l.w.1}, hence we can define the path integral as $\exp\left[ -|1_S|^2/2\right] = \exp\left[ -|S|/2\right]$, whereby $|S|$ is the area of the surface $S$. Now, we move up to $\bR^4$, still using $G = U(1)$. The above argument still applies, except that now $1_S$ has norm 0. This will yield 1 for any surface $S$, which means we have to redefine our path integral to obtain non-trivial results. Let $\chi_x$ be the evaluation map, i.e. given a (smooth) function $f: \bR^4 \rightarrow \bR$, we will write $f(x) = \langle f, \chi_x \rangle$. To the physicists, $\chi_x$ is just the Dirac Delta function. Furthermore, given a 2-form written in the form $F \equiv \sum_{0\leq i<j \leq 3} f_{ij} dx^i \wedge dx^j$, with $f_{ij}$ being smooth functions on $\bR^4$; when we write $\langle F, \chi_x \otimes dx^a \wedge dx^b
# JAC Class 9 Science Important Questions Chapter 4 Structure of the Atom ## JAC Board Class 9th Science Important Questions Chapter 4 Structure of the Atom Multiple Choice Questions Question 1. Almost the entire mass of an atom is concentrated in the? (a) proton (b) electron (c) nucleus (d) neutrons (c) nucleus Question 2. Positive charge is carried by: (a) X – rays (b) cathode rays (c) Y – rays (d) anode rays (d) anode rays Question 3. Which one is not true for isotopes? (a) Similar mass number (b) Similar chemical properties (c) Similar atomic number (d) Similar electronic configuration (a) Similar mass number. Question 4. Electron was discovered by (b) Thomson (c) Goldstein (d) Bohr (b) Thomson. Question 5. The size of an atom is decided by (a) mass of the atom (b) number of protons (c) number of protons and neutrons (d) number of electrons (d) number of electrons. Question 6. Calcium has 20 electrons. These occupy K, L, M and N shells. Which shell or shells are incomplete? (a) L, M, N shells (b) M, N shells (c) N shell (d) K, M, L, N shells (c) N shell Question 7. The K, L and M shells of an atom are full. Its atomic number is: (a) 18 (b) 20 (c) 10 (d) 12 (a) 18. Question 8. Cathode rays are deflected towards? (a) positive electrode (b) negative electrode (c) both electrodes (d) none of the electrode (a) positive electrode. Question 9. The absolute charge of an electron is? (a) -1.6 × 10-19C (b) +1.6 × 10-19C (c) 0.16 × 10-19 C (d) 16 × 10-19 C (a) -1.6 × 10-19C. Question 10. The proton is heavier than an electron by? (a) 1850 times (b) 1840 times (c) 1000 times (d) 100 times (b) 1840 times. Question 11. Carbon – 12 atom has: (a) 6 electrons, 6 protons, 6 neutrons (b) 6 electrons, 12 protons, 6 neutrons (c) 12 electrons, 6 protons, 6 neutrons (d) 18 electrons, 6 protons and 6 neutrons (a) 6 electrons, 6 protons, 6 neutrons. Question 12. Chadwick got the Nobel prize for the discovery of? (a) protons (b) neutrons (c) electrons (d) none of these (b) neutrons. Question 13. The volume of the nucleus of an atom when compared to the extranuclear part is? (a) bigger (b) smaller (c) same size (d) unpredictable (b) smaller. Question 14. The isobars among the following are (a) $${ }_{20}^{40} \mathrm{Ca}$$, $${ }_{17}^{34} \mathrm{Cl}$$ (b) $${ }_{18}^{40} \mathrm{Ar},{ }_{19}^{40} \mathrm{Ca}$$ (c) $${ }_{8}^{16} \mathrm{O},{ }_{8}^{18} \mathrm{O}$$ (d) $${ }_{7}^{19} \mathrm{X},{ }_{7}^{13} \mathrm{Y}$$ (b) $${ }_{18}^{40} \mathrm{Ar},{ }_{19}^{40} \mathrm{Ca}$$. Question 15. Outermost shell of an atom cannot accommodate more electrons than (a) 2 (b) 8 (c) 18 (d) 16 (b) 8. Analysing & Evaluating Questions Question 16. The number of electrons in an element X is 15 and the number of neutrons is 16. Which of the following is correct representation of the element? (a) $${ }_{15}^{31} X$$ (b) $${ }_{31}^{16} X$$ (c) $${ }_{16}^{15} X$$ (d) $${ }_{15}^{17} X$$ (a) $${ }_{15}^{31} X$$. Question 17. The ion of an element has 3 positive charges. Mass number of the atom is 27 and the number of neutrons is 14. What is the number of electrons in the ion? (a) 13 (b) 10 (c) 14 (d) 16 (b) 10. Question 18. Which of the following in figures given below do not represent Bohr’s model of an atom correctly? (a) I and II (b) II and III (c) II and IV (d) I and IV (c) II and IV. 
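Before the assertion-reason items below, here is a minimal Python sketch (added for illustration; not part of the original notes) that automates the arithmetic behind questions such as 6, 7, 16 and 17 above: neutrons = mass number − atomic number, electrons of an ion = atomic number − charge, and the simple K/L/M/N shell filling (2, 8, 8, …) that is valid for the first 20 elements covered at this level.

def describe_atom(symbol, atomic_number, mass_number, charge=0):
    protons = atomic_number
    neutrons = mass_number - atomic_number
    electrons = atomic_number - charge          # e.g. a 3+ ion has 3 fewer electrons
    shells, remaining, n = [], electrons, 1
    while remaining > 0:
        capacity = 2 if n == 1 else 8           # school-level rule, valid up to Z = 20
        shells.append(min(capacity, remaining))
        remaining -= shells[-1]
        n += 1
    return {'symbol': symbol, 'protons': protons, 'neutrons': neutrons,
            'electrons': electrons, 'shells': shells}

print(describe_atom('X', 15, 31))               # 15 p, 16 n, shells [2, 8, 5] (Question 16)
print(describe_atom('Al', 13, 27, charge=3))    # 14 n, 10 electrons in the ion (Question 17)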
Assertion Reason Questions Directions: In the following questions, the Assertions and the Reasons have been put forward Read the statements carefully and choose the correct alternative from the following: (A) Both the assertion and the reason are correct and the reason is the correct explanation of the assertion. (B) The assertion and the reason are correct but the reason is not the correct explanation of the assertion. (C) The assertion is true but the reason is false. (D) Both the statements are false. 1. Assertion : Nucleus of an atom is positively charged. Reason : Neutrons lie in the nucleus of an atom. (B) The assertion and the reason are correct but the reason is not the correct explanation of the assertion. 2. Assertion : Electrons revolving in the orbit never fall in the nucleus. Reason : Opposite charges of nucleus and electrons maintain centripetal force. (C) The assertion is true but the reason is false. 3. Assertion : Valency of aluminum is 3. Reason : Valency is the number of electrons present in the outermost shell. (C) The assertion is true but the reason is false. 4. Assertion : Mass of an atom lies in its nucleus. Reason : Nucleus of an atom is composed of neutrons and protons. (A) Both the assertion and the reason are correct and the reason is the correct explanation of the assertion. 5. Assertion : Hydrogen and deuterium have similar chemical properties. Reason : Hydrogen and deuterium have the same electronic configuration. (A) Both the assertion and the reason are correct and the reason is the correct explanation of the assertion. Question 1. What is an electron? What are its relative mass and charge? An electron is that sub – atomic particle which is negatively charged and has a mass of about 1/1840 times that of an atom of hydrogen. Question 2. What are nucleons? The sub – atomic particles (protons and neutrons) present in the nucleus of an atom are known as nucleons. Question 3. Why Bohr’s orbits are called stationary states? According to Bohr’s theory, electrons revolve around the nucleus and they have a fixed amount of energy. Thus, they are called stationary states. Question 4. What is an orbit? Orbit is the path around the nucleus in which the electron revolves. Question 5. What is meant by the electronic configuration of elements? The systematic arrangement of electrons in different orbits or shells of an atom of an element is known as the electronic configuration of that element. Question 6. Why did Rutherford select a gold foil in his α – ray scattering experiment? It is because gold has high malleability and it can be hammered into thin sheets. Question 7. Will Cl – 35 and Cl – 37 have different valencies? No. It is because these are isotopes of chlorine that have same atomic number but different mass numbers. Question 8. Calculate the number of neutrons present in the nucleus of an element X which is represented as X 15. $${ }^{31} \mathrm{X}_{15}$$indicate that no. of protons = 15 and mass number = 31 Mass number = No. of protons + No. of neutrons = 31 Number of neutrons = 31 – number of protons = 31 – 15 = 16 Question 9. Why do Helium, Neon and Argon have zero valency? Helium, Neon and Argon have 2, 8 and 8 electrons respectively in their outermost shell, so there is no need for them to gain or lose electrons. Hence, they have zero valency. Question 10. Name two elements each with same number of protons and neutrons? Carbon (protons = neutrons = 6) and oxygen (protons = neutrons = 8) Question 11. Can the value of Z be same for two different atoms? 
(Z = atomic number) No, two different atoms cannot have same atomic number. Question 12. Name the scientist who discovered protons and neutrons in an atom. Protons were discovered by E. Goldstein in 1866 and neutrons were discovered by J. Chadwick in 1932. Question 13. What is an octet? Why do the atoms want to complete their octet? When the outermost shell of an atom is completely filled, it is said to be an octet. Atoms want to complete their octet because they want to become stable. Question 14. Find the valency of $${ }_{7}^{14} \mathrm{~N}$$ and $${ }_{17}^{35} \mathrm{Cl}$$ The atomic number of nitrogen = 7, number of protons = 7, number of electrons = 7 Electronic configuration = K L M = 2, 5, ⇒ Valency = 3 It will either gain three electrons or share 3 electrons to complete its octet. The atomic number of chlorine = 17, number of protons = 17, number of electrons = 17 Electronic configuration = K L M = 2, 8, 7 = Valency = 1 It will gain 1 electron to complete its octet. Question 15. Give the difference between three sub-atomic particles. Three sub – atomic particles are electrons, proton and neutron: Particle Electron Proton Neutron Discovered by J.J Thomson E. Goldstein J. Chadwick Charge -1 +1 0 Symbol e P n Mass 1/1840u 1u 1u Analysing & Evaluating Questions Question 16. In a sample of ethyl ethanoate (CH3COOC2H5), the two oxygen atoms have the same number of electrons but different number of neutrons. What can be the possible reason for it? The reason for it is that the two oxygen atoms are isotopes. Question 17. In
# N Enhancement Logic mosfets #### André Ferrato Joined Apr 5, 2015 215 Thanks everyone, i'll think will go for this IRLZ44N, 10 pieces for 3.86 dollars, really cheap. #### André Ferrato Joined Apr 5, 2015 215 Yes, of course, these 10 pieces i sent the link is free shipping. Do you think the quality varies a lot, because i do not intend to use these mosfets in things that cannot fail(vital and stuff). #### #12 Joined Nov 30, 2010 18,217 Do you think the quality varies a lot, eBay is a major source of counterfeit products, especially from China. That's why we try to steer you to reputable vendors. #### André Ferrato Joined Apr 5, 2015 215 I see, but the only way for me to buy good stuff is to live in São Paulo, i could say that 80% of all good electronical shops are located there. In my city( It's like the second best city to live in Brazil and it's so hard to find electronical components), to find a mosfet, uuuh, too hard. I managed to find some IRFZ46N for another project, but it was a miracle. IC's ? I came into a shop, asked for a flip flop ic, the guy said that he didn't worked with that... so i asked to take a look at his ic's, he had A LOT of CD4013, a Dual D Rising edge Flip Flop, a very versatile ic. From now on i try to help the guy saying what each IC do. #### GopherT Joined Nov 23, 2012 8,012 I see, but the only way for me to buy good stuff is to live in São Paulo, i could say that 80% of all good electronical shops are located there. In my city( It's like the second best city to live in Brazil and it's so hard to find electronical components), to find a mosfet, uuuh, too hard. I managed to find some IRFZ46N for another project, but it was a miracle. IC's ? I came into a shop, asked for a flip flop ic, the guy said that he didn't worked with that... so i asked to take a look at his ic's, he had A LOT of CD4013, a Dual D Rising edge Flip Flop, a very versatile ic. From now on i try to help the guy saying what each IC do. Digikey offers service to Brazil http://www.digikey.com/us/en/international/global.html #12 #### André Ferrato Joined Apr 5, 2015 215 Yes i don't said these companies doesn't offer services here. The problem is the shipping taxes, this only works for companies who ship a great number of components. For a hobbyist, a nightmare. #### atferrari Joined Jan 6, 2004 4,197 I see, but the only way for me to buy good stuff is to live in São Paulo, i could say that 80% of all good electronical shops are located there. In my city( It's like the second best city to live in Brazil and it's so hard to find electronical components), to find a mosfet, uuuh, too hard. I managed to find some IRFZ46N for another project, but it was a miracle. IC's ? I came into a shop, asked for a flip flop ic, the guy said that he didn't worked with that... so i asked to take a look at his ic's, he had A LOT of CD4013, a Dual D Rising edge Flip Flop, a very versatile ic. From now on i try to help the guy saying what each IC do. Oi André, Calling Santos (the most frequent port of call in my 17 years at sea) I used to go and buy things in the so many shops along Rua Santa Ifigenia maybe not too far from where you live. There was also an alternative, Rua Republica do Líbano, no Río. That was almost 25 years ago. Is not possible nowadays? Engraçado, a primeira palavra em Português que ouvi na minha vida foi, "fechado", na Rua Cámara de Santos. #### atferrari Joined Jan 6, 2004 4,197 Yes i don't said these companies doesn't offer services here. 
The problem is the shipping taxes, this only works for companies who ship a great number of components. For a hobbyist, a nightmare. Even with our recent changes on those matters, the Farnell's local representative still has a minimum of 50 USD for every single item, no matter what! Go figure. I had to forget buying from them some 3 years ago. #### André Ferrato Joined Apr 5, 2015 215 Ahahaha Yeees santa Ifigênia, the paradise i would say.. there is like SO MANY shops of electronics devices, components. But unfortunately it is in são Paulo, 500 Km from were i live. You would be surprised of the taxes you pay to go there, i live in são José do rio preto, to reach são Paulo there is 13 stops that you need to pay to keep going, this costs around 140 reais, maybe 35 dollars and the fuel to go.. 180 reais, 45 dollars. Things are so expensive here.. The times i went, my stepdad took me, and because he had business there. But yes there are things that i didnt even know it existed. Santos is 70km from são Paulo, after you reach SP, its his neighboor. Also the shipping tax from são Paulo to são José do rio preto is around 18 reais, 4.5 dollars, maybe in dollars is not a big deal, but 18 reais is a big quantity. With 18 reais i could buy 2 protoboards from eBay, with 18 reais here i could buy half a protoboard ahahha #### André Ferrato Joined Apr 5, 2015 215 And the sellers here think they are so smart. Everyone think they can fool people to get more money, but this is just a reflex of the governments we have, and the governments are the reflexes of what we are. Among us there are veeeeery nice people but also there are a lot of mean people, that you think they're nice and they arent. I dont know how to describe, but to live here, you need to grow smart really soon. An example of a seller: An ultrasonic Arduíno sensor is like 4 reais in eBay and its a very good working sensor. Just because is oh an ultrasonic sensor.. The vendor thinks he can just take the price to 50 reais from a 4 reais device. Just because is a new tech( Around here of course). Another example: Its been like 8 months that im looking for a shop that sells an LDO, there may be in santa Ifigênia, but thrse guys DONT havê a website or a catalog so you can look for and if you call them and ask for an LDO, he is going to say WHAT? Or i could possibly name every LDO and he would check his system and would probably have one. Sometimes brazil is so primitive. #### dl324 Joined Mar 30, 2015 12,447 Would a sender from the USA, our you, have to pay taxes or other fees on small packages? #### André Ferrato Joined Apr 5, 2015 215 I believe that if someone from the USA sends me a really small package, let's say... less than 15 dollars or 20 dollars and physically small, it would go through the customs( i believe this is the word for that package control institute) without taxes. Because i buy small packages everytime from china, ebay and they go nicely tax free. I just don't know the about the taxes to send in the USA, because i don't know the politics around there. #### #12 Joined Nov 30, 2010 18,217 @atferrari I was expecting you to arrive in this Thread. American postal operation is that we pay for the travel service, drop the package in the box, and forget about it. #### André Ferrato Joined Apr 5, 2015 215 Yes, but how much would pay to send a package to são Paulo? #### dl324 Joined Mar 30, 2015 12,447 Yes, but how much would pay to send a package to são Paulo? Up to
these questions tend to show that you have not studied the fast-growing hierarchy : in this notation, Conway's $n\rightarrow n \rightarrow\dots\rightarrow n$ (with $n$ arrows) is (much) smaller than $f_{\omega^2+1}(n)$, and any "recursive" construction in the line of BEAF grows slower than $f_{\omega^\omega}$, while TREE grows much faster than $f_{\epsilon_0}$, or even $f_{\Gamma_0}$... </p> http://mathoverflow.net/questions/107043/is-monskys-theorem-depending-on-axiom-of-choice Is Monsky's theorem depending on axiom of choice ? Feldmann Denis 2012-09-12T20:55:41Z 2012-09-12T21:21:28Z <p>The extension of the 2-adic valuation to the reals used in the usual proof uses clearly AC. But is this really necessary ? After all, given a equidissection in $n$ triangles, it is finite, so it should be possible to construct a valuation for only the algebraic numbers, and the coordinates of the summits (with a finite number of "choices"), and then follow the proof to show that $n$ must be even. Or am I badly mistaken ? </p> http://mathoverflow.net/questions/106448/topologies-on-the-field-of-rationals Topologies on the field of rationals Feldmann Denis 2012-09-05T16:44:35Z 2012-09-06T14:06:03Z <p>Ostrowski's theorem give the answer for valuations, but is there a complete classification of (at least separated) topologies on Q (compatible with the field operations, obviously)?</p> http://mathoverflow.net/questions/105758/homeomorphism-of-the-rationals/105763#105763 Answer by Feldmann Denis for Homeomorphism of the rationals Feldmann Denis 2012-08-28T21:23:01Z 2012-08-28T21:23:01Z <p>Not a really complete answer, but look at the question mark fonction (see <a href="http://en.wikipedia.org/wiki/Minkowski%27s_question_mark_function" rel="nofollow">http://en.wikipedia.org/wiki/Minkowski%27s_question_mark_function</a>) to a counter-example satisfying 1,2,3 and 4 : $?(\sqrt2)=7/5$, for instance.</p> http://mathoverflow.net/questions/105715/functions-with-null-derivative Functions with null derivative Feldmann Denis 2012-08-28T13:26:37Z 2012-08-28T13:56:45Z <p>I am not here referring to <a href="http://en.wikipedia.org/wiki/Cantor_function" rel="nofollow">the devil staircase</a>, but to <a href="http://en.wikipedia.org/wiki/Minkowski%27s_question_mark_function" rel="nofollow">the question mark function</a>. This is a strictly increasing function from $\mathbb{Q}$ to $\mathbb{Q}$, with derivative always $0$. I have two questions: </p> <ol> <li><p>Is this a surprising result (in the same way that continuous functions nowhere differentiable were thought surprising when discovered)? </p></li> <li><p>What are the properties preventing this (obviously, we need a topological field to speak of derivative; are there exemples of connected topological fields with non constant functions having everywhere a null derivative?)</p></li> </ol> http://mathoverflow.net/questions/105555/integrating-1-xsin1-x/105563#105563 Answer by Feldmann Denis for integrating 1/(x*sin(1/x)) Feldmann Denis 2012-08-26T18:09:41Z 2012-08-26T18:09:41Z <p>Try Liouville theorem ; as the obvious $u=1/x$ lend to integration of $\frac 1{u\sin u}$, it seems impossible indeed in elementary terms ; actually, Maple isn't even able to integrate it using special functions</p> http://mathoverflow.net/questions/105222/hales-work-on-kepler-conjecture Hales work on Kepler conjecture Feldmann Denis 2012-08-22T10:28:24Z 2012-08-22T14:33:31Z <p>It seems that the Fulkerson prize has been attributed to Thomas Hales for this work. 
What is the present status of the conjecture, then?</p> http://mathoverflow.net/questions/103761/fast-growing-hierarchy-and-turing-machines Fast-growing hierarchy and Turing machines Feldmann Denis 2012-08-02T05:50:58Z 2012-08-13T07:10:41Z <p>Is it possible to get an estimate of the size of a Turing machine computing $f_\alpha(n)$, for a given $\alpha$ (I am especialy interested in moderately large $\alpha$ like the ordinal of Fefferman-Schütte, or the small Veblen ordinal)? The idea is to get an idea of the size of the BusyBeaver function $BB(n)$ for moderate values of $n$, as the litterature usually only mention the exact known values (for $n\le 6$) and the fact that such values will probably never be known for $n=10$, say.</p> http://mathoverflow.net/questions/95677/mandelbrot-set-and-analytic-functions-such-that-fazfz2c Mandelbrot set and analytic functions such that $f(az)=f(z)^2+c$ Feldmann Denis 2012-05-01T16:29:57Z 2012-08-03T20:06:50Z <p>It is well known that the function $f(z)=2\cos(\sqrt {-z})$ (or more accurately the entire function $f(z)=2\sum_{n=0}^\infty \frac{z^n}{(2n)!}$) satisfies such a functional equation, i.e. $f(4z)= f(z)^2-2$ ; it is not hard to show that this is the unique solution with $c=-2$ and $f'(0)=-1$. Patient calculations (see for instance <a href="http://denisfeldmann.fr/PDF/mandel.pdf" rel="nofollow"> this note in french</a>), or the classical results of Tan Lei, show that this implies that the small cardioids on the left main antenna of the Mandelbrot set (on the real axis near $-2$) are in position $-2+3\pi^2/2^{2n+1}+o(4^{-n})$. It would be easy to generalise this result to other Misiurewicz points, if similar functions were known for other values of $c$, as the small copies of the $M$-set lie similarly near the zeros of these functions (after appropriate rescaling) ; it is not hard to show their existence and unicity, but have they been studied, and what is known on their zeros?</p> http://mathoverflow.net/questions/101996/cantor-theorem-on-orders Cantor theorem on orders Feldmann Denis 2012-07-11T21:53:24Z 2012-07-18T06:44:36Z <p>It is "a well-known theorem of Cantor", said Sierpinski (circa 1920), that every countable total order can be imbedded in the rationals, and he proceeds to demonstrate that, assuming the continuum hypothesis, it is possible to construct a similar "universal order" of cardinal $\aleph_1$. I have two questions : 1) Where did Cantor prove his theorem 2) Can CH be weakened in the previous result ? For reference, Sierpinski construct an ordering on binary sequences indexed by countable ordinals (up to some ordinal $&lt;\omega_1$), having the desired property than any countable Dedekind cut is separated by some element ; this is actually the same order than the one on surreal numbers born before day $\omega_1$ (as easily shown by the Gondor construction) </p> http://mathoverflow.net/questions/101996/cantor-theorem-on-orders/101997#101997 Answer by Feldmann Denis for Cantor theorem on orders Feldmann Denis 2012-07-11T22:28:18Z 2012-07-11T22:51:43Z <p>Sorry, almost all pertinent answers (even with the exact reference of the Sierpinski article) were given in the article <a href="http://mathoverflow.net/questions/57597/universal-order-type" rel="nofollow">http://mathoverflow.net/questions/57597/universal-order-type</a> ; so the only question I have left is "where did Cantor state it ?" 
</p> http://mathoverflow.net/questions/99421/computational-complexity-of-calculating-the-nth-root-of-a-real-number/99430#99430 Answer by Feldmann Denis for Computational complexity of calculating the nth root of a real number Feldmann Denis 2012-06-13T11:05:23Z 2012-06-13T11:05:23Z <p>The Newton-Raphson algorithm uses, for computation of $A^{1/p}$, the sequence $u_0=A$, $u_{n+1}=u_n-\frac {u_n^p-A}{pu_n^{p-1}}$, whose speed of convergence , always quadratic, is essentially independent of $p$ (and $A$). So, mostly, it asks for $\ln p$ multiplications and 1 division at each step.</p> http://mathoverflow.net/questions/99171/is-the-sum-sinn-bounded/99172#99172 Answer by Feldmann Denis for Is the sum sin(n) bounded? Feldmann Denis 2012-06-09T11:24:34Z 2012-06-09T11:24:34Z <p>In this particular case, use $s_n(\theta)=\sum_{k=0}^n\sin(k\theta)=\Im(\sum_{k=0}^n\exp(ki\theta))=\dots=\frac{\sin(n\theta/2)\sin((n+1)\theta/2)}{\sin (\theta/2)}$</p> http://mathoverflow.net/questions/98501/faa-di-brunos-formula-for-inverse-functions Faa di Bruno's formula for inverse functions ? Feldmann Denis 2012-05-31T15:59:19Z 2012-06-02T17:17:24Z <p>It is easy to get a expression for the nth-derivative of an inverse fuction ; starting from $(f^{-1})'=\frac{1}{f'\circ f^{-1}}$, one gets things like $(f^{-1})^{(n)}=\frac{\sum a_k\prod (f^{(n_j)}\circ f^{-1})^j}{(f'\circ f^{-1})^{2n-1}}$, with reasonably easy constraints on the $n_j$. But what are the values of the $a_k$? I believe I read somewhere this was an application of umbral calculus, but I dont see how, and inverting Faa di Bruno's formula on the identity $f\circ f^{-1}=id$ dont seem to get anywhere.</p> http://mathoverflow.net/questions/98501/faa-di-brunos-formula-for-inverse-functions/98664#98664 Answer by Feldmann Denis for Faa di Bruno's formula for inverse functions ? Feldmann Denis 2012-06-02T15:47:49Z 2012-06-02T15:47:49Z <p>To precise my question, I was asking for the exact values of the $a_k$. Thanks to Tom Copeland, I could find the sequence A176740 of OEIS, giving a complete answer (with useful links) to this problem.</p> http://mathoverflow.net/questions/20765/the-problem-of-finding-the-first-digit-in-grahams-number/96295#96295 Answer by Feldmann Denis for The problem of finding the first digit in Graham's number Feldmann Denis 2012-05-08T05:21:43Z 2012-05-08T05:21:43Z <p>Similarly, it is 1 if the base is of the shape $3^n$, with $n$ a power of 3 (less than $\ln G$) ; other cases are easy for small powers of 3. On the other hand, Graham's number is much too large for the question to be really interesting ; the determination of the first digit in base 10 of $A(9,9)$ (where $A$ is the Ackermann function) should already be inaccessible. </p> http://mathoverflow.net/questions/126789/does-the-prime-number-theorem-prove-that-the-primes-cannot-be-exactly-identified Comment by Feldmann Denis Feldmann Denis 2013-04-07T16:56:44Z 2013-04-07T16:56:44Z Have you stop beating your wife ? I would appreciate at least a yes or no http://mathoverflow.net/questions/119913/what-is-the-difference-between-a-function-and-a-morphism Comment by Feldmann Denis Feldmann Denis 2013-01-26T05:59:15Z 2013-01-26T05:59:15Z Don't you really mean &quot;whose domain is $A$ and range $B$&quot; ? (then, of course, you have to think of the objects of your category as sets). But of course, the answer is negative, as different morphisms can correspond to the same function. 
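The Newton-Raphson answer above gives the iteration $u_{n+1}=u_n-\frac{u_n^p-A}{pu_n^{p-1}}$ for computing $A^{1/p}$. The following short Python sketch (added here for illustration; not part of the original posts) shows that iteration in practice, with its quadratic convergence once the iterate is close to the root.

def nth_root(A, p, tol=1e-15, max_iter=200):
    """Approximate A**(1/p) for A > 0 and integer p >= 1 by Newton-Raphson."""
    u = A if A >= 1 else 1.0                    # any positive starting point above the root works
    for _ in range(max_iter):
        u_next = u - (u**p - A) / (p * u**(p - 1))
        if abs(u_next - u) <= tol * abs(u_next):
            return u_next
        u = u_next
    return u

print(nth_root(2.0, 10))          # ~1.0717734625362931
print(nth_root(2.0, 10) ** 10)    # ~2.0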
http://mathoverflow.net/questions/119493/toroidality-testing Comment by Feldmann Denis Feldmann Denis 2013-01-21T21:14:17Z 2013-01-21T21:14:17Z Well, you can check if they dont have as minor one of the 16000 already known obstructions (for not too large graphs, this is less absurd
rectangle (2.3,2.3); \draw (0.5,0) circle (0.3mm); \node[scale=1.3] at (0.61,0.03) {$a$}; \draw (0.324,0.504) circle (0.3mm); \node[scale=1.3] at (0.27,0.63) {$b$}; \draw (2,0) circle (0.3mm); \node[scale=1.3] at (2.07,0.15) {$a^*$}; \draw (0.900,1.402) circle (0.3mm); \node[scale=1.3] at (0.97,1.55) {$b^*$}; \draw (0.943,-0.330) circle (0.3mm); \node[scale=1.3] at (1.15,-0.4) {$a_*$}; \draw (0.401,0.916) circle (0.3mm); \node[scale=1.3] at (0.25,1.07) {$b_*$}; \draw (0,0) circle (0.3mm); \node[scale=1.3] at (0,0.13) {$0$}; \draw (0,0) circle (1cm); \draw[densely dotted] (0.750,2.142) circle (2.480cm); \draw[densely dotted] (3.105,-1.360) circle (3.534cm); \draw[densely dotted] (-0.750,0.328) circle (1.292cm); \draw[densely dotted] (-0.181,-0.517) circle (1.140cm); \draw[densely dotted] (0.181,0.517) circle (1.140cm); \draw[densely dotted] (0.750,-0.328) circle (1.292cm); \draw[densely dotted] (-0.750,1.115) circle (1.675cm); \draw[densely dotted] (0.750,-1.115) circle (1.675cm); \draw[densely dotted] (-0.750,-2.142) circle (2.480cm); \draw[densely dotted] (-3.105,1.360) circle (3.534cm); \draw (-0.213,-0.143) circle (0.3mm); \node[scale=1.3] at (-0.27,0) {$k_c$}; \draw (0.541,0.364) circle (0.3mm); \node[scale=1.3] at (0.41,0.39) {$s_c$}; \draw (-0.541,-0.364) circle (0.3mm); \node[scale=1.3] at (-0.45,-0.17) {$t_c$}; \draw (0.829,0.557) circle (0.3mm); \node[scale=1.3] at (1,0.45) {$u_c$}; \draw (0.213,0.143) circle (0.3mm); \node[scale=1.3] at (0.15,0.26) {$v_c$}; \draw (-5.219,-3.509) -- (5.219,3.509); \end{tikzpicture} \caption{The intersection points of Theorem \ref{thm_5pointschordal} for $a=0.5$ and $b=0.6e^i$. The circle marked with the black solid outline is the unit circle $S^1$. All other circles and arcs are stereographic projections of great circles.} \label{fig_gcp} \end{figure} \bigskip \begin{theorem}\label{thm_5pointschordal} If $a,b\in\mathbb{B}^2$ are non-collinear with the origin, then the points $k_c,s_c,t_c,u_c,v_c$ in \eqref{equ_kstuv_c} are on a line passing through the origin. \end{theorem} \begin{proof} Since the points $k_c,s_c,t_c,u_c,v_c$ are all given as stereographic projections of intersections of two great circles, we can use the formula \eqref{equ_gcis} and its function $F$. For $t_c$, let $$ T_c:=F(z;b_{\ast},a^{\ast},b^{\ast},a_{\ast}). $$ By using the symbolic computation system Risa/Asir, we have \begin{align*} numerator(T_c)=&2|a-b|^3|1-a\overline{b}| (1-|b|^2)(1-|a|^2)(a\overline{b}-\overline{a}b) \big(|a-b|+|1-a\overline{b}|\big) \\ &\Big(\big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 +2|1-a\overline{b}|(|a-b|-|1-a\overline{b}|)z \\ & -\big(a(1-|b|^2)+b(1-|a|^2)\big)\Big), \\ denominator(T_c)=& |a|^2|b|^2 \big|(\overline{a}b-1)|a-b|+b(\overline{b} -\overline{a})|1-a\overline{b}|\big|^2\\ & \times \big|(\overline{a}b-1)|a-b| +\overline{a}(a-b)|1-a\overline{b}|\big|^2. \end{align*} Because of the assumption that the two points $a,b$ are not collinear with the origin, the inequality $a\overline{b}-\overline{a}b\neq0$ holds. Therefore, $t_c$ can be obtained by solving the following equation for $z$: \begin{equation}\label{eq:tc} \big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 +2|1-a\overline{b}|(|a-b|-|1-a\overline{b}|)z -\big(a(1-|b|^2)+b(1-|a|^2)\big)=0. \end{equation} \bigskip For $u_c$, we have \begin{align*} &F(z;b,a^{\ast},a,b^{\ast}) \\ & \quad =\frac{-2(a\overline{b}-\overline{a}b) \Big(\big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 -\big(a(1-|b|^2)+b(1-|a|^2)\big)\Big)} {|ab|^2}. 
\end{align*}
Hence, $u_c$ can be obtained by solving the equation
\begin{equation}\label{eq:uc} \big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 -\big(a(1-|b|^2)+b(1-|a|^2)\big)=0. \end{equation}

\bigskip

For $s_c$, set
$$ S_c:=F(z;b,a_{\ast},a,b_{\ast}). $$
By eliminating the non-zero factors from $numerator(S_c)=0$ under the assumptions of the theorem, we find that $s_c$ can be obtained as the solution of the following equation
\begin{equation}\label{eq:sc} \big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 +2|1-a\overline{b}|(|1-a\overline{b}|-|a-b|)z -\big(a(1-|b|^2)+b(1-|a|^2)\big)=0. \end{equation}

For $v_c$, set
$$ V_c:=F(z;b,b_{\ast},a,a_{\ast}), $$
and again, by eliminating the non-zero factors from $numerator(V_c)=0$, we have the following equation
\begin{equation}\label{eq:vc} \begin{aligned} &\big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 +2\frac{(1-|a|^2)(1-|b|^2)|1-a\overline{b}|}{|1-a\overline{b}|-|a-b|}z\\ &-\big(a(1-|b|^2)+b(1-|a|^2)\big)=0. \end{aligned} \end{equation}
(Note that, if $|1-a\overline{b}|=|a-b|$, then either $|a|=1$ or $|b|=1$ holds by \eqref{myahl}.)

For $k_c$, set
$$ K_c:=F(z;b^{\ast},b_{\ast},a^{\ast},a_{\ast}). $$
By eliminating the non-zero factors from $numerator(K_c)=0$, we have the following equation
\begin{equation}\label{eq:kc} \begin{aligned} &\big(\overline{a}(1-|b|^2)+\overline{b}(1-|a|^2)\big)z^2 +2\frac{(1-|a|^2)(1-|b|^2)|1-a\overline{b}|}{|a-b|-|1-a\overline{b}|}z\\ &-\big(a(1-|b|^2)+b(1-|a|^2)\big)=0. \end{aligned} \end{equation}

Let $H=a(1-|b|^2)+b(1-|a|^2)$ as in Table \ref{t1}. Then the equations \eqref{eq:tc}, \eqref{eq:uc}, \eqref{eq:sc}, \eqref{eq:vc}, and \eqref{eq:kc} can all be written in the form
\begin{equation}\label{eq:QR} \overline{H}z^2+2Rz-H=0,\quad(R\in\mathbb{R}), \end{equation}
for some real $R$, whose exact value can be seen from Table \ref{tgc}. Note that the solution $z$ of equation \eqref{eq:QR} has the form
$$ \frac{-R+\sqrt{R^2+\vert H\vert^2}}{\overline{H}} =\frac{-R+\sqrt{R^2+\vert H\vert^2}}{\vert H\vert^2}H \qquad \mbox{or}\qquad \frac{-R-\sqrt{R^2+\vert H\vert^2}}{\vert H\vert^2}H. $$
In both cases, the solution has the form $z=MH$ for some real value $M$. Therefore, ${\rm Arg}(z)={\rm Arg}(H)$ for $M>0$ and ${\rm Arg}(z)={\rm Arg}(H)+\pi$ for $M<0$. Hence, it follows that the five points $t_c,u_c,s_c,v_c,k_c$ are on the line passing through $H$ and the origin.
\end{proof} \begin{table}[ht] \centering \begin{tabular}{|l|l|l|} \hline \multirow{2}{*}{Point} & \multirow{2}{*}{Definition} & \multirow{2}{*}{$R$}\\ &&\\ \hline \multirow{2}{*}{$k_c$} & \multirow{2}{*}{${\rm GCIS}[a_{\ast},a^{\ast},b_{\ast},b^{\ast}]$} & \multirow{3}{*}{$\dfrac{(1-|a|^2)(1-|b|^2)|1-a\overline{b}|}{|a-b|-|1-a\overline{b}|}$}\\ &&\\ &&\\ \hline \multirow{2}{*}{$s_c$} & \multirow{2}{*}{${\rm GCIS}[a,b_{\ast},b,a_{\ast}]$} & \multirow{2}{*}{$|1-a\overline{b}|(|1-a\overline{b}|-|a-b|)$}\\ &&\\ \hline \multirow{2}{*}{$t_c$} & \multirow{2}{*}{${\rm GCIS}[a_{\ast},b^{\ast},b_{\ast},a^{\ast}]$} & \multirow{2}{*}{$|1-a\overline{b}|(|a-b|-|1-a\overline{b}|)$}\\ &&\\ \hline \multirow{2}{*}{$u_c$} & \multirow{2}{*}{${\rm GCIS}[a,b^{\ast},b,a^{\ast}]$} & \multirow{2}{*}{0}\\ &&\\ \hline \multirow{2}{*}{$v_c$} & \multirow{2}{*}{${\rm GCIS}[a,a_{\ast},b,b_{\ast}]$} & \multirow{3}{*}{$\dfrac{(1-|a|^2)(1-|b|^2)|1-a\overline{b}|}{|1-a\overline{b}|-|a-b|}$}\\ &&\\ &&\\ \hline \end{tabular} \caption{The real numbers $R$ with which the formulas of the intersection points $k_c,s_c,t_c,u_c,v_c$ from \eqref{equ_kstuv_c} can be written as the roots of the equality $\overline{H}z^2+2Rz-H=0$, where $H=a(1-|b|^2)+b(1-|a|^2)$.} \label{tgc} \end{table} \begin{remark} The intersection points $k_c,s_c,t_c,u_c,v_c$ of the stereographic projections of great circles in Theorem \ref{thm_5pointschordal} are also collinear with the corresponding intersection points outside the unit circle. The proof for this claim follows the fact that these intersection points in the complement of the unit disk also fulfill the condition $F(z;a,b,c,d)=0$ in \eqref{equ_gcis}. Their collinearity can be seen from Figure \ref{fig_gcp}. \end{remark} \begin{corollary}\label{cor_11points} If $a,b\in\mathbb{B}^2$ are non-collinear with the origin, $m$ is the hyperbolic midpoint of $a$ and $b$, the five points $k,s,t,u,v$ are as in \eqref{equ_kstuv} for these choices of $a$ and $b$, and the other five $k_c,s_c,t_c,u_c,v_c$ are as in \eqref{equ_kstuv_c}, then the 11 points $k,m,s,t,u,v,k_c,s_c,t_c,u_c,v_c$ and the origin are collinear. \end{corollary} \begin{proof} As can be seen from Theorem \ref{thm_formulas} or Table \ref{t1}, the arguments of points $k,m,s,t,u,v$ are all equal to either ${\rm Arg}(H)$ or ${\rm Arg}(H)+\pi$ with $H=a(1-|b|^2)+b(1-|a|^2)$, and they therefore coincide with the arguments of points $k_c,s_c,t_c,u_c,v_c$ in Theorem \ref{thm_5pointschordal}. 
\end{proof} \bigskip \begin{figure} \centering \begin{tikzpicture}[scale=3] \draw (0.5,0) circle (0.3mm); \node[scale=1.3] at (0.61,0.03) {$a$}; \draw (0.324,0.504) circle (0.3mm); \node[scale=1.3] at (0.23,0.53) {$b$}; \draw (2,0) circle (0.3mm); \node[scale=1.3] at (2.17,0) {$a^*$}; \draw (0.900,1.402) circle (0.3mm); \node[scale=1.3] at (0.93,1.55) {$b^*$}; \draw (0.943,-0.330) circle (0.3mm); \node[scale=1.3] at (1.07,-0.27) {$a_*$}; \draw (0.401,0.916) circle (0.3mm); \node[scale=1.3] at (0.3,1.07) {$b_*$}; \draw (0,0) circle (0.3mm); \node[scale=1.3] at (0,0.13) {$0$}; \draw (0,0) circle (1cm); \draw[dashed] (1.25,0.544) circle (0.926cm); \draw (0.5,0) -- (0.401,0.916); \draw (2,0) -- (0.324,0.504); \draw (0.449,0.467) circle (0.3mm); \node[scale=1.3] at (0.41,0.33) {$p$}; \draw (0.5,0) -- (0.900,1.402); \draw (2,0) -- (0.401,0.916); \draw (0.710,0.738) circle (0.3mm); \node[scale=1.3] at (0.67,0.85) {$q$}; \draw (0.5,0) arc (345.2-360:27.0:1.292); \draw (2,0) arc (41.7:104.7:1.675); \draw (0.524,0.544) circle (0.3mm); \node[scale=1.3] at (0.53,0.69) {$p_c$}; \draw (0.5,0) arc (318.2-360:9.8:1.675); \draw (2,0) arc (14.7:105.6:1.292); \draw (0.917,0.953) circle (0.3mm); \node[scale=1.3] at (1,0.83) {$q_c$}; \draw (1.9,1.973) -- (-1,-1.038); \end{tikzpicture} \caption{The intersection points of Theorem \ref{thm_pqc} for $a=0.5$ and $b=0.6e^i$. The circle with solid line is the unit circle and the dashed circle is the orthogonal circle to the unit circle passing through $a,b$. All the circle arcs are arcs from the stereographic projections of great circles passing through the end points of the arc in question.} \label{fig_pqc} \end{figure} \begin{theorem}\label{thm_pqc} Choose such $a,b\in\mathbb{B}^2\backslash\{0\}$ non-collinear with the origin and let $a^*,b^*,a_*,b_*$ be as in in \eqref{eq:ab-ast} and \eqref{eq:ab-end}. Fix $p={\rm LIS}[a,b_*,a^*,b]$ and $q={\rm LIS}[a,b^*,a^*,b_*]$. Similarly, let $p_c={\rm GCIS}[a,b_*,a^*,b]$, and $q_c={\rm GCIS}[a,b^*,a^*,b_*]$. The intersection points $p,q,p_c,q_c$ and the origin are collinear. \end{theorem} \begin{proof} For $q_c$, calculating $F(z;a,b^{\ast},a^{\ast},b_{\ast})=0$, eliminating the non-desired factors, $ q_c $ is obtained by the solution of the following equation, \begin{equation}\label{eq:qc} \begin{aligned} &Qz^2+Rz-\overline{Q}=0,\quad \text{where}\\ &Q=\overline{b}(1-|a|^2)^2+\overline{a}|a-b|(|1-\overline{a}b|-|a-b|),\\ &R=(1-|a|^2)|1-\overline{a}b|(|a-b|-|1-\overline{a}b|). \end{aligned} \end{equation} Note that $R\in\mathbb{R}$ above. \bigskip Similarly for $p_c$, the point $p_c$ is given by the equation $Qz^2+Rz+\overline{Q}=0$, where $Q$ and $R$ are as in \eqref{eq:qc}, and the arguments of the solutions $q_c$ and $p_c$ are therefore ${\rm Arg}(\overline{Q})$ or ${\rm Arg}(\overline{Q})+\pi$. \bigskip For points $p$ and $q$, the following is also obtained from calculation of ${\rm LIS}[a,b^{\ast},a^{\ast},b_{\ast}]$ and ${\rm LIS}[a,b_{\ast},a^{\ast},b]$, respectively. \begin{align*} q&=\frac{b(1-|a|^2)^2+a|a-b|(|1-\overline{a}b|-|a-b|)} {|b|^2(1-|a|^2)^2+|a-b|(|1-\overline{a}b|-|a-b|)}, \\ p&=\frac{b(1-|a|^2)^2+a|a-b|(|1-\overline{a}b|-|a-b|)} {(1-|a|^2)^2+|a|^2|a-b|(|1-\overline{a}b|-|a-b|)}. \end{align*} Since the denominators of $p$ and $q$ both have positive real values, clearly, $$ {\rm Arg}(q)={\rm Arg}(p)={\rm Arg}\big(b(1-|a|^2)^2+a|a-b|(|1-\overline{a}b|-|a-b|)\big)={\rm Arg}(\overline{Q}) $$ hold. 
Hence, the four points $ q_c,p_c,q,p $ are on the line passing through the origin and $\overline{Q}=b(1-|a|^2)^2+a|a-b|(|1-\overline{a}b|-|a-b|)$. \end{proof} \section{Hyperbolic and chordal midpoints} In this section, we first introduce a few results related to the hyperbolic midpoint. In Theorems \ref{thm_hmpc} and \ref{thm_hmphinv}, we show new ways to construct the hyperbolic midpoint of two points in the unit disk. This geometric construction reveals the simple geometry not apparent from the explicit formula in Theorem \ref{myhmidp} for the hyperbolic midpoint. Similar constructions for the hyperbolic midpoint can be found in the book \cite{a16} by Ahara. At the end of this section, Theorem \ref{thm_cmp} offers an explicit formula for the chordal midpoint of two points in the unit disk. \begin{lemma}\label{lem_abk} If $V\subset\mathbb{B}^2$ is a lens-shaped domain symmetric with respect to the real axis bounded by two circular arcs with endpoints $-1$ and $1$, and $a,b\in\partial V$ so that ${\rm Im}\,b<0<{\rm Im}\,a$, then the hyperbolic midpoint of $a$ and $b$ with respect to the domain $\mathbb{B}^2$ is on the real axis. \end{lemma} \begin{proof} Let $k$ be the intersection point of the hyperbolic line $J^*[a,b]$ and the real axis. Trivially, if $k=0$, then $J^*[a,b]$ is a diameter of the unit disk $\mathbb{B}^2$ and, by the symmetry of the domain $V$, $a=-b$ and the hyperbolic midpoint is the origin. Let us next assume that $k\neq0$. Now, the line $J^*[a,b]$ is the arc of the circle that contains $a,b$ and is orthogonal to the unit
SLOCCount User's Guide
August 1, 2004
Version 2.26

# Introduction

SLOCCount (pronounced "sloc-count") is a suite of programs for counting physical source lines of code (SLOC) in potentially large software systems. Thus, SLOCCount is a "software metrics tool" or "software measurement tool". SLOCCount was developed by David A. Wheeler, originally to count SLOC in a GNU/Linux distribution, but it can be used for counting the SLOC of arbitrary software systems.

SLOCCount is known to work on Linux systems, and has been tested on Red Hat Linux versions 6.2, 7, and 7.1. SLOCCount should run on many other Unix-like systems (if Perl is installed); in particular, I would expect a *BSD system to work well. Windows users can run sloccount by first installing Cygwin. SLOCCount is much slower on Windows/Cygwin, and it's not as easy to install or use on Windows, but it works. Of course, feel free to upgrade to an open source Unix-like system (such as Linux or *BSD) instead :-).

SLOCCount can count physical SLOC for a large number of languages. Listed alphabetically, they are Ada, Assembly (for many machines and assemblers), awk (including gawk and nawk), Bourne shell (and relatives such as bash, ksh, zsh, and pdksh), C, C++, C# (also called C-sharp or cs), C shell (including tcsh), COBOL, Expect, Fortran (including Fortran 90), Haskell, Java, lex (including flex), LISP (including Scheme), makefiles (though they aren't usually shown in final reports), Modula3, Objective-C, Pascal, Perl, PHP, Python, Ruby, sed, SQL (normally not shown), TCL, and Yacc. It can gracefully handle awkward situations in many languages; for example, it can determine the syntax used in different assembly language files and adjust appropriately, it knows about Python's use of string constants as comments, and it can handle various Perl oddities (e.g., perlpods, here documents, and Perl's __END__ marker). It even has a "generic" SLOC counter that you may be able to use to count the SLOC of other languages (depending on the language's syntax).

SLOCCount can also take a large list of files and automatically categorize them using a number of different heuristics. The heuristics automatically determine if a file is a source code file or not, and if so, which language it's written in. For example, it knows that ".pc" is usually a C source file for an Oracle preprocessor, but it can detect many circumstances where it's actually a file about a "PC" (personal computer). For another example, it knows that ".m" is the standard extension for Objective-C, but it will check the file contents to see if it really is Objective-C. It will even examine file headers to attempt to accurately determine the file's true type. As a result, you can analyze large systems completely automatically.

Finally, SLOCCount has some report-generating tools to collect the data generated, and then present it in several different formats, sorted in different ways. The report-generating tool can also generate simple tab-separated files so data can be passed on to other analysis tools (such as spreadsheets and database systems). SLOCCount will try to quickly estimate development time and effort given only the lines of code it computes, using the original Basic COCOMO model. This estimate can be improved if you can give more information about the project. See the discussion below about COCOMO, including intermediate COCOMO, if you want to improve the estimates by giving additional information about the project.
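To give a flavor of the kind of heuristic described above, here is a toy sketch in Python (illustrative only; guess_type is a hypothetical helper, not SLOCCount's actual detection code, which is far more thorough):

    import os

    def guess_type(path):
        """Toy file-type heuristic in the spirit described above (hypothetical)."""
        ext = os.path.splitext(path)[1].lower()
        try:
            head = open(path, errors="replace").read(4096)
        except OSError:
            return "unknown"
        if ext == ".m":
            # ".m" is usually Objective-C, but only if the contents look like it.
            return "objc" if ("#import" in head or "@interface" in head) else "not-source"
        if ext == ".pc":
            # ".pc" is usually Oracle Pro*C; otherwise it may just be a file about "PC"s.
            return "c" if "EXEC SQL" in head else "not-source"
        return "unknown"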
SLOCCount is open source software/free software (OSS/FS), released under the GNU General Public License (GPL), version 2; see the license below. The master web site for SLOCCount is http://www.dwheeler.com/sloccount. You can learn a lot about SLOCCount by reading the paper that caused its creation, available at http://www.dwheeler.com/sloc. Feel free to see my master web site at http://www.dwheeler.com, which has other material such as the Secure Programming for Linux and Unix HOWTO, my list of OSS/FS references, and my paper Why OSS/FS? Look at the Numbers! Please send improvements by email to dwheeler, at, dwheeler.com (DO NOT SEND SPAM - please remove the commas, remove the spaces, and change the word "at" into the at symbol).

The following sections first give a "quick start" (discussing how to use SLOCCount once it's installed), then discuss basic SLOCCount concepts, how to install it, how to set your PATH, how to install source code on RPM-based systems if you wish, and more information on how to use the "sloccount" front-end. This is followed by material for advanced users: how to use SLOCCount tools individually (for when you want more control than the "sloccount" tool gives you), designer's notes, the definition of SLOC, and miscellaneous notes. The last sections state the license used (GPL) and give hints on how to submit changes to SLOCCount (if you decide to make changes to the program).

# Quick Start

Once you've installed SLOCCount (discussed below), you can measure an arbitrary program by typing everything after the dollar sign into a terminal session:

$ sloccount topmost-source-code-directory

The directory listed and all its descendants will be examined. You'll see output while it calculates, culminating with physical SLOC totals and estimates of development time, schedule, and cost. If the directory contains a set of directories, each of which is a different project developed independently, use the "--multiproject" option so the effort estimations can correctly take this into account.

You can redisplay the data in different ways by using the "--cached" option, which skips the calculation stage and re-prints previously computed information. You can use other options to control what's displayed: "--filecount" shows counts of files instead of SLOC, and "--details" shows the detailed information about every source code file. So, to display all the details of every file once you've previously calculated the results, just type:

sloccount --cached --details

You'll notice that the default output ends with a request. If you use this data (e.g., in a report), please credit that data as being "generated using 'SLOCCount' by David A. Wheeler." I make no money from this program, so at least please give me some credit.

SLOCCount tries to ignore all automatically generated files, but its heuristics to detect this are necessarily imperfect (after all, even humans sometimes have trouble determining if a file was automatically generated). If possible, try to clean out automatically generated files from the source directories -- in many situations "make clean" does this.

There's more to SLOCCount than this, but first we'll need to explain some basic concepts, then we'll discuss other options and advanced uses of SLOCCount.

# Basic Concepts

SLOCCount counts physical SLOC, also called "non-blank, non-comment lines".
More formally, physical SLOC is defined as follows: "a physical source line of code (SLOC) is a line ending in a newline or end-of-file marker, and which contains at least one non-whitespace non-comment character." Comment delimiters (characters other than newlines starting and ending a comment) are considered comment characters. Data lines only including whitespace (e.g., lines with only tabs and spaces in multiline strings) are not included.

In SLOCCount, there are 3 different directories:

1. The "source code directory", a directory containing the source code being measured (possibly in recursive subdirectories). The directories immediately contained in the source code directory will normally be counted separately, so it helps if your system is designed so that this top set of directories roughly represents the system's major components. If it doesn't, there are various tricks you can use to group source code into components, but it's more work. You don't need write access to the source code directory, but you do need read access to all files, and read and search (execute) access to all subdirectories.

2. The "bin directory", the directory containing the SLOCCount executables. By default, installing the program creates a subdirectory named "sloccount-VERSION" which is the bin directory. The bin directory must be part of your PATH.

3. The "data directory", which stores
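Returning to the physical-SLOC definition above, here is a toy illustration of that counting rule in Python (for a language with single-line "#" comments only; this is not SLOCCount's own counter, which also handles block comments, string literals, and many other cases):

    def physical_sloc(path, comment_prefix="#"):
        """Count non-blank, non-comment lines in one file (toy illustration)."""
        count = 0
        with open(path, errors="replace") as f:
            for line in f:
                stripped = line.strip()
                # A line counts if it contains at least one non-whitespace
                # character and is not purely a comment.
                if stripped and not stripped.startswith(comment_prefix):
                    count += 1
        return count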
that you need to worry unduly about picking every spitball off the wall. I’ve long since given up doing that and mostly just ignore unsubstantial and uninformed comments. Steve Bloom – bender’s contributions are extremely well appreciated and I request that you be on good behavior. You can vent at me if you need to vent. 128. Barclay E MacDonald Posted Aug 27, 2006 at 6:05 PM | Permalink #126 Bender you should not be the least concerned that you are being discredited. You are not! One only has to review what you have presented in this thread and Willis and Ian Castles responses and elaborations in the CMIP Control Runs thread to be absolutely pleased and grateful for what you are teaching the rest of us. 129. bender Posted Aug 27, 2006 at 6:12 PM | Permalink Apologies, then. And thanks. Being removed from the discussion (I don’t have time right now to follow every post) makes me fearful he’s getting away with pulling a fast one. I’ll try to be more patient about those spitballs. 130. Judith Curry Posted Aug 27, 2006 at 6:34 PM | Permalink Re #123: The person that has been conducting the analyses of the tropical cyclone forecasts by the European coupled seasonal prediction models (ECMWF, UKMO, METOFRANCE) is Frederic Vitart at ECMWF [email protected]. No journal publications on this yet, but if you search the web you can see the talks that he is presenting. Peter Webster came across this research accidentally while visiting ECMWF. It does not seem that these forecasts are made public yet? I will be at ECMWF the week of sept 4, i will definitely try to find out more about these forecasts (and particularly their availability and any documentation). Re # 127 this is a really interesting blog, relatively free of b.s. and full of real content. i will definitely be checking in on a regular basis (and will also try to digest more of the statistical content of some of the posts after i get back from ECMWF) 131. Steve McIntyre Posted Aug 27, 2006 at 7:37 PM | Permalink #130. Judy, thanks for visiting and look forward to future visits. I was in the audience for your presentation at the House Government Reform Committee (I’d had my turn at the House Energy & Commerce Committee the day before.) 132. David Smith Posted Aug 27, 2006 at 7:47 PM | Permalink Judith, this is an excellent website and your posts are adding to that excellence. I (a casual reader with nothing profound to offer to the dialogue) look forward to reading what you have to say. If I may, let me ask a question i asked from several days ago. It seems to me that the models show a significant warming of the mid tropical troposphere, which would make it more stable, not less. That would tend to shrink the amount of moisture-rich air in the tropics, I would think, and thus act to suppress tropical syslone formation. Is there anything I can read to help me understand more about what the GCMs say about changes in the mean tropical atmosphere? Thanks. 133. Steve Bloom Posted Aug 27, 2006 at 8:25 PM | Permalink Re #126: Steve M., I think I need to respond to this, but will do so one time only. If one reads through the entire thread, substantive contributions from me can be seen in 53, 58, 68, 71, 82, 89, and 99. These were interspersed with various nasty and mainly content-free attacks on me, the hockey stick, proxies or “global warmers” generally in 54, 55, 59, 61, 65, 72 and 74. (BTW, these drew no remonstrance from you, which I would point out has the effect of encouraging more of the same. But it’s your blog.) 
Of those, I only responded in kind to 54 and 55 (in 56); the rest I ignored. On the whole, I think I exhibited considerable forbearance. Then, in 101 bender wrote: “So gratifying to see the reversal in Bloom’s point of view, based on an objective analysis of the data and a sober consideration of statistical and methodological uncertainties. Maybe he is a gem after all. (You guys are all doing a great job with these data. Keep it up.)” Was it fair for me to see some attempt at insult in that? I think it was. So I responded in 103: “Apparently for some people denial is just a river in Egypt. See, bender, I’m not qualified to critique your tree-ring stuff, but I have to seriously question it given the way you approached this hurricane discussion.” The reference was to the fact that Bender had been told (by me) that the data did not support such an exercise and precisely why it didn’t. He did it anyway, and then you (Steve M.) not only featured his work in a thread (this one) but wound up your own comment on it by asking: “If Curry is unaware of these issues, what does that say? If she is aware of these issues and ignored them, what does that say?” IOW, you were inferring incompetence or dishonesty on Judy’s part. Where was: “If bender is completely wrong and has made this up from whole cloth, what does that say?” What, indeed. Tone aside, my problem with Bender isn’t that he did it but that he knew it was an invalid exercise and sought to proclaim it as something else. I think both of you owe Judy an apology. Finally, I would like to point out that this entire discussion started with Bender baiting me (twice) into a debate. He has exhibited a snarky, superior tone throughout and having received just a little bit of it back has now resorted to threatening to take his ball and go home. That seems to me a less than mature response. 134. Willis Eschenbach Posted Aug 27, 2006 at 9:03 PM | Permalink Steve (Bloom), I’ve been trying to warn you that if you continued with your unsubstantiated attacks and your lack of citations, you were going to get your vote cancelled. Let me give you an example. Several times you have attacked me for something I wrote in Coolwire 13. You have refused to say what it was. I invited you to tell me what was wrong, saying “Heck, if you think something is wrong with my Coolwire 13 article, cite us chapter and verse.” You did nothing … except that later on, you repeated your claim, this time saying I’ve caught you twice engaging in elaborate analyses (Arctic sea ice in Warwick Hughes’ Coolwire 13 and just now hurricanes on this blog) that turned out to be entirely baseless for reasons that should have been (and I think were) obvious to you at the start. I responded: I have asked you before what the problem in Coolwire 13 was, and received no reply (citation available upon request). Nor have you pointed out anything that makes my hurricane analysis “entirely baseless”, anyone can re-read the relevant pages and see that. I also said: Steve, truly, I do wish you’d put up or shut up regarding evidence for your claims. My sense is that you’re an intelligent person. But from what I read here and on other blogs, I’m not the only one who has noticed that while you are very quick to make unpleasant accusations, you are extremely slow to back them up. And again, from what I read here and on other blogs, this has resulted in your contributions being largely ignored. 
Unless having your vote cancelled in that fashion is what you intend, you should seriously consider finding some facts to support your claims. Your response now is to claim that you are being treated unfairly … dude, you can’t say I haven’t tried to warn you about the likely consequences of your actions. For you now to complain that
a large number of alternative drug candidates are created by drug-specified generative models. These candidates can form an effective, target-specific library for further screening for better or cheaper drug alternatives. Therefore, in this work, we develop a generative network complex (GNC) based on multi-property optimization via gradient descent in the latent space to automatically generate new drug-like molecules. One workflow of our GNC consists of the following three stages:
\begin{enumerate}
\item The SMILES string of a seed molecule is encoded into a vector in the latent space by a pre-trained encoder.
\item Starting with the latent vector of the seed molecule, a DNN-based drug-like molecule generator modifies the vector via gradient descent, creating new latent vectors that satisfy multiple property restraints, including chemical properties and similarity scores to the desired objectives.
\item A pre-trained decoder decodes these new latent vectors into the SMILES strings of newly generated molecules.
\end{enumerate}
The rest of the paper is organized as follows. Section \ref{sec:methods} introduces our new GNC framework formed by the seq2seq AE and the drug-like molecule generator. Section \ref{sec:experiments} discusses its reliability test on the BACE1 target and, more importantly, presents the performance of our GNC on a variety of existing drug targets. Insight into the roles of the multiple property restraints is offered in Section \ref{sec:discussions}. The conclusion is given in Section \ref{sec:conclusion}.
\section{Methods}\label{sec:methods}
\subsection{The sequence-to-sequence autoencoder (seq2seq AE)}
The seq2seq model is an autoencoder architecture originating from natural language processing \cite{sutskever2014sequence}. It has been demonstrated as a breakthrough in language translation. The basic strategy of the seq2seq AE is to map an input sequence to a fixed-size vector in the latent space using a gated recurrent unit (GRU) \cite{cho2014learning} or a long short-term memory (LSTM) network \cite{hochreiter1997long}, and then map the vector to a target sequence with another GRU or LSTM network. Thus the latent vector is an intermediate representation containing the ``meaning'' of the input sequence. In our case, input and output sequences are both SMILES strings -- a one-dimensional ``language'' of chemical structures \cite{weininger1988smiles}. The seq2seq AE is trained to achieve a high reconstruction ratio between inputs and outputs, so that the latent vectors contain faithful information about the chemical structures (see the upper part of Figure \ref{fig:gen-seq2seq}). Here we utilize a pre-trained seq2seq model from a recent work \cite{winter2019learning}.
\subsection{The drug-like molecule generator based on multi-property optimization}
\begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{gen-seq2seq.png} \caption{A schematic illustration of our generative network complex 2.} \label{fig:gen-seq2seq} \end{figure}
In our new GNC, we carefully design a drug-like molecule generator so that generated molecules not only satisfy the desired properties but also share common pharmacophores with reference compounds. Starting from a seed molecule, one generative workflow of the GNC is depicted in Figure \ref{fig:gen-seq2seq} and described below.
\begin{enumerate}
\item Randomly pick a low-binding-affinity molecule from a target-specific training set as the seed; the SMILES string of the seed molecule is then encoded by a pre-trained encoder (in our case a GRU encoder) into a latent vector.
\item The latent vector of the seed is fed into our DNN molecule generator. In every epoch, the generator comes up with a new vector $X \in \mathbb{R}^n$, and the deep learning network is instructed to evaluate $X$ via the following loss function
\begin{equation}\label{eq:dnn_loss} {\cal L}(X)=\frac{1}{n}\sum_{i=1}^n k_i| \hat{y}_i(X) - y_{i0}|, \end{equation}
where $k_i$ is the $i$th predefined weight serving the purpose of emphasizing or deemphasizing different property restraints, and $\hat{y}_i(X)$ is the $i$th property value predicted by a pre-trained predictor ${\cal M}_i$. Additionally, $y_{i0}$ is the objective value of the $i$th property. The restrained properties can be binding affinity (BA), the similarity score (Sim) to a reference molecule, or others such as the partition coefficient (Log P), Lipinski's rule of five \cite{lipinski1997experimental}, etc. Typical guidelines for $y_{i0}$ are as follows: for the BA restraint, one often sets $y_{\Delta G} < -9.6$ kcal/mol; for the Log P restraint, it is common to set $y_{\rm LogP} < 5$; etc.
\item Gradient descent is used to minimize the loss function in Eq. (\ref{eq:dnn_loss}) until the maximum number of epochs is reached.
\item The generated latent vectors satisfying the desired restraints are decoded into SMILES strings through a pre-trained decoder, as shown in Figure \ref{fig:gen-seq2seq}.
\end{enumerate}
To create a variety of novel drug-like molecules originating from leads or existing drugs (reference molecules), one can adopt different seed molecules as well as different objective values to achieve the desired properties and similarity scores. The ultimate purpose of our molecule generator is to keep modifying the latent vector until it satisfies the multiple drug-like property restraints. Figure \ref{fig:gen-mechanism} illustrates the general mechanism of our generator. Notably, the weights of the pre-trained predictors inside our model stay intact throughout training; backpropagation only updates the generator. The loss function converges when all conditions are met, and the resulting latent vectors are then decoded into SMILES strings.
\begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{mechanism-gen2.png} \caption{The illustration of the latent-space molecule generator.} \label{fig:gen-mechanism} \end{figure}
\subsection{The parameters of the molecule generator} \label{sec:param-gen}
In our model, the dimension of the latent space is 512, so the input and output dimensions of the DNN molecule generator are also 512. The DNN generator has two hidden layers, with 1024 neurons in each layer. The activation function is tanh, the learning rate is 0.1, and the momentum is also 0.1. In this work, we are interested in binding affinity and similarity score restraints. The regularization coefficients of these two restraints ($k_{\rm \Delta G}$ and $k_{\rm Sim}$) are set to 1 and 10, respectively. The similarity-score restraint is determined via the Tanimoto coefficient between a generated latent vector and the latent vector of a reference molecule. The binding-affinity restraint relies on pre-trained binding affinity predictors. A pre-trained binding affinity predictor (LV-BP) takes latent vectors as its inputs and returns predicted binding affinities.
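To make this setup concrete, the following is a simplified, illustrative sketch (our own pseudocode, not the released implementation): it optimizes the latent vector directly with a frozen LV-BP and a latent-space Tanimoto similarity, whereas the actual generator described above is a trainable DNN; \texttt{ba\_predictor} and \texttt{z\_ref} are hypothetical placeholders.
\begin{verbatim}
# Simplified sketch of gradient descent in the 512-d latent space against the
# loss of Eq. (eq:dnn_loss); assumes PyTorch, a frozen latent-vector binding-
# affinity predictor `ba_predictor`, and a reference latent vector `z_ref`.
import torch

def optimize_latent(z_seed, ba_predictor, z_ref, y_dG=-9.6, y_sim=1.0,
                    k_dG=1.0, k_sim=10.0, epochs=2000, lr=0.1, momentum=0.1):
    z = z_seed.clone().detach().requires_grad_(True)    # latent vector to modify
    opt = torch.optim.SGD([z], lr=lr, momentum=momentum)
    for p in ba_predictor.parameters():                  # predictor stays frozen
        p.requires_grad_(False)
    for _ in range(epochs):
        opt.zero_grad()
        ba = ba_predictor(z)                             # predicted binding affinity
        sim = torch.dot(z, z_ref) / (z.norm()**2 + z_ref.norm()**2
                                     - torch.dot(z, z_ref))  # latent-space Tanimoto
        loss = k_dG * (ba - y_dG).abs() + k_sim * (sim - y_sim).abs()
        loss.backward()
        opt.step()
    return z.detach()   # decode to a SMILES string with the pre-trained decoder
\end{verbatim}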
Therefore, the input dimension of the predictor is 512 and the output dimension is 1. The DNN predictor has three hidden layers with 1024, 1536, and 1024 neurons, respectively. The ReLU activation function is applied. The learning rate is 0.001, the number of training epochs is 4000, and the batch size is 4. The predictor network is trained on target-specific datasets carefully selected from public databases such as ChEMBL \cite{gaulton2011chembl}. The generator and predictor are both programmed in the framework of PyTorch (version 1.0.0) \cite{paszke2017pytorch}.

In the current work, for each generation task, we randomly pick 50 low-binding-affinity molecules from the preselected dataset as seeds. For each seed, the generative network is run for a total of 2000 epochs, which takes less than 10 minutes on a supercomputer node equipped with one Nvidia K80 GPU card. In practice, to quickly fill up the potential chemical search space, one can use more seeds and run more epochs for each seed.

\subsection{Binding affinity reevaluation by the 2D-fingerprint predictor} \label{sec:2DFP-BP}
Besides generating new molecules, the LV-BP in our GNC also predicts their binding affinities. However, no experimental values are available to validate these predicted affinities. Therefore, we cross-validate them using alternative binding affinity predictors. In the present work, we construct machine learning predictors based on 2D fingerprints (2DFP-BPs) to reevaluate the affinities of generated compounds. The 2D fingerprints computed from their SMILES strings are the inputs to these 2DFP-BPs. If the predictions from the LV-BP and 2DFP-BPs are consistent, we regard the predictions as reliable. According to our previous tests \cite{gao20202d}, the consensus of ECFP4 \cite{rogers2010extended}, Estate1 \cite{hall1995electrotopological}, and Estate2 \cite{hall1995electrotopological} fingerprints performs best on binding-affinity prediction tasks. Therefore, this work also makes use of this consensus. We employ the RDKit software (version 2018.09.3) \cite{landrum2006rdkit} to generate 2D fingerprints from SMILES strings. Since the training sets in our current cases are relatively small, we apply a gradient boosting decision tree (GBDT) \cite{schapire2003boosting} model due to its accuracy and speed when handling small datasets. This GBDT predictor is constructed using the gradient boosting regressor module in scikit-learn (version 0.20.1) \cite{pedregosa2011scikit} and the following parameters: n\_estimators=10000, max\_depth=7, min\_samples\_split=3, learning\_rate=0.01, subsample=0.3, and max\_features=sqrt. The criteria used in our reevaluation are the root mean
import numpy as np import time, yaml import itertools, glob import traceback from hera_cal import redcal from collections import OrderedDict as odict from pyuvdata import UVData, utils as uvutils from datetime import datetime import copy from scipy.interpolate import interp1d import uvtools as uvt import argparse from .conversions import Cosmo_Conversions def cov(d1, w1, d2=None, w2=None, conj_1=False, conj_2=True): """ Computes an empirical covariance matrix from data vectors. If d1 is of size (M,N), then the output is M x M. In other words, the second axis is the axis that is averaged over in forming the covariance (e.g. a time axis). If d2 is provided and d1 != d2, then this computes the cross-variance, i.e. <d1 d2^dagger> - <d1> <d2>^dagger The fact that the second copy is complex conjugated is the default behaviour, which can be altered by the conj_1 and the conj_2 kwargs. If conj_1 = False and conj_2 = False, then <d1 d2^t> is computed, whereas if conj_1 = True and conj_2 = True, then <d1^* d2^t*> is computed. (Minus the mean terms). Parameters_ ---------- d1 : array_like Data vector of size (M,N), where N is the length of the "averaging axis" w1 : integer Weights for averaging d1 d2 : array_like, optional Data vector of size (M,N), where N is the length of the "averaging axis" Default: None w2 : integer, optional Weights for averaging d1. Default: None conj_1 : boolean, optional Whether to conjugate d1 or not. Default: False conj_2 : boolean, optional Whether to conjugate d2 or not. Default: True Returns ------- cov : array_like Covariance (or cross-variance) matrix of size (M,M) """ if d2 is None: d2,w2 = d1,w1 if not np.isreal(w1).all(): raise TypeError("Weight matrices must be real") if not np.isreal(w2).all(): raise TypeError("Weight matrices must be real") if np.less(w1, 0.).any() or np.less(w2, 0.).any(): raise ValueError("Weight matrices must be positive") d1sum,d1wgt = (w1*d1).sum(axis=1), w1.sum(axis=1) d2sum,d2wgt = (w2*d2).sum(axis=1), w2.sum(axis=1) x1 = d1sum / np.where(d1wgt > 0, d1wgt, 1) x2 = d2sum / np.where(d2wgt > 0, d2wgt, 1) x1.shape = (-1,1); x2.shape = (-1,1) z1 = w1*d1 z2 = w2*d2 if conj_1: z1 = z1.conj() x1 = x1.conj() if conj_2: z2 = z2.conj() x2 = x2.conj() C = np.dot(z1, z2.T) W = np.dot(w1, w2.T) C /= np.where(W > 0, W, 1) C -= np.outer(x1, x2) return C def variance_from_auto_correlations(uvd, bl, spw_range, time_index): """ Predict noise variance on a baseline from autocorrelation amplitudes on antennas. Pick a baseline $b=(alpha,beta)$ where $alpha$ and $beta$ are antennas, The way to estimate the covariance matrix $C$ from auto-visibility is: $C_{ii}(b, LST) = | V(b_alpha, LST, nu_i) V(b_beta, LST, nu_i) | / {B Delta_t}, where $b_alpha = (alpha,alpha)$ and $b_beta = (beta,beta)$. With LST binned over days, we have $C_{ii}(b, LST) = |V(b_alpha,nu_i,t) V(b_beta, nu_i,t)| / {N_{samples} B Delta_t}$. 
Parameters ---------- uvd : UVData bl : tuple baseline (pol) key, in the format of (ant1, ant2, pol) spw_range : tuple Length-2 tuple of the spectral window time_index : int Returns ------- var : ndarray, (spw_Nfreqs,) """ assert isinstance(bl, tuple) and len(bl)==3, "bl must be fed as Length-3 tuple" assert isinstance(spw_range, tuple) and len(spw_range)==2, "spw_range must be fed as Length-2 tuple" dt = np.median(uvd.integration_time) # Delta_t df = uvd.channel_width # B bl1 = (bl[0],bl[0], bl[2]) # baseline b_alpha bl2 = (bl[1], bl[1], bl[2]) # baseline b_beta spw = slice(spw_range[0], spw_range[1]) x_bl1 = uvd.get_data(bl1)[time_index, spw] x_bl2 = uvd.get_data(bl2)[time_index, spw] nsample_bl = uvd.get_nsamples(bl)[time_index, spw] nsample_bl = np.where(nsample_bl>0, nsample_bl, np.median(uvd.nsample_array[:,:,spw,:])) # some impainted data have zero nsample while is not flagged, and they will be assigned the median nsample within the spectral window. var = np.abs(x_bl1*x_bl2.conj()) / dt / df / nsample_bl return var def construct_blpairs(bls, exclude_auto_bls=False, exclude_cross_bls=False, exclude_permutations=False, group=False, Nblps_per_group=1): """ Construct a list of baseline-pairs from a baseline-group. This function can be used to easily convert a single list of baselines into the input needed by PSpecData.pspec(bls1, bls2, ...). Parameters ---------- bls : list of tuple List of baseline tuples, Ex. [(1, 2), (2, 3), (3, 4)]. Baseline integers are not supported, and must first be converted to tuples using UVData.baseline_to_antnums(). exclude_auto_bls: bool, optional If True, exclude all baselines crossed with themselves from the final blpairs list. Default: False. exclude_cross_bls : bool, optional If True, exclude all bls crossed with a different baseline. Note if this and exclude_auto_bls are True then no blpairs will exist. exclude_permutations : bool, optional If True, exclude permutations and only form combinations of the bls list. For example, if bls = [1, 2, 3] (note this isn't the proper form of bls, but makes the example clearer) and exclude_permutations = False, then blpairs = [11, 12, 13, 21, 22, 23,, 31, 32, 33]. If however exclude_permutations = True, then blpairs = [11, 12, 13, 22, 23, 33]. Furthermore, if exclude_auto_bls = True then 11, 22, and 33 would also be excluded. Default: False. group : bool, optional If True, group each consecutive Nblps_per_group blpairs into sub-lists. Default: False. Nblps_per_group : int, optional Number of baseline-pairs to put into each sub-group if group = True. Default: 1. Returns (bls1, bls2, blpairs) ------- bls1, bls2 : list of tuples List of baseline tuples from the zeroth/first index of the blpair. blpairs : list of tuple List of blpair tuples. """ # assert form assert isinstance(bls, (list, np.ndarray)) and isinstance(bls[0], tuple), \ "bls must be fed as list or ndarray of baseline antnum tuples. Use " \ "UVData.baseline_to_antnums() to convert baseline integers to tuples." 
assert (not exclude_auto_bls) or (not exclude_cross_bls), "Can't exclude both auto and cross blpairs" # form blpairs w/o explicitly forming auto blpairs # however, if there are repeated bl in bls, there will be auto bls in blpairs if exclude_permutations: blpairs = list(itertools.combinations(bls, 2)) else: blpairs = list(itertools.permutations(bls, 2)) # explicitly add in auto baseline pairs blpairs.extend(list(zip(bls, bls))) # iterate through and eliminate all autos if desired if exclude_auto_bls: new_blpairs = [] for blp in blpairs: if blp[0] != blp[1]: new_blpairs.append(blp) blpairs = new_blpairs # same for cross if exclude_cross_bls: new_blpairs = [] for blp in blpairs: if blp[0] == blp[1]: new_blpairs.append(blp) blpairs = new_blpairs # create bls1 and bls2 list bls1 = [blp[0] for blp in blpairs] bls2 = [blp[1] for blp in blpairs] # group baseline pairs if desired if group: Nblps = len(blpairs) Ngrps = int(np.ceil(float(Nblps) / Nblps_per_group)) new_blps = [] new_bls1 = [] new_bls2 = [] for i in range(Ngrps): new_blps.append(blpairs[i*Nblps_per_group:(i+1)*Nblps_per_group]) new_bls1.append(bls1[i*Nblps_per_group:(i+1)*Nblps_per_group]) new_bls2.append(bls2[i*Nblps_per_group:(i+1)*Nblps_per_group]) bls1 = new_bls1 bls2 = new_bls2 blpairs = new_blps return bls1, bls2, blpairs def calc_blpair_reds(uvd1, uvd2, bl_tol=1.0, filter_blpairs=True, xant_flag_thresh=0.95, exclude_auto_bls=False, exclude_cross_bls=False, exclude_permutations=True, Nblps_per_group=None, bl_len_range=(0, 1e10), bl_deg_range=(0, 180), xants=None, include_autocorrs=False, include_crosscorrs=True, extra_info=False): """ Use hera_cal.redcal to get matching, redundant baseline-pair groups from uvd1 and uvd2 within the specified baseline tolerance, not including flagged ants. Parameters ---------- uvd1, uvd2 : UVData UVData instances with visibility data for the first/second visibilities in the cross-spectra that will be formed. bl_tol : float, optional Baseline-vector redundancy tolerance in meters filter_blpairs : bool, optional if True, calculate xants (based on data flags) and filter-out baseline pairs based on actual baselines in the data. xant_flag_thresh : float, optional Fraction of 2D visibility (per-waterfall) needed to be flagged to consider the entire visibility flagged. xants : list, optional Additional lilst of xants to hand flag, regardless of flags in the data. exclude_auto_bls: boolean, optional If True, exclude all bls crossed with itself from the blpairs list exclude_cross_bls : boolean, optional If True, exclude all bls crossed with a different baseline. Note if this and exclude_auto_bls are True then no blpairs will exist. exclude_permutations : boolean, optional If True, exclude permutations and only form combinations of the bls list. For example, if bls = [1, 2, 3] (note this isn't the proper form of bls, but makes this example clearer) and exclude_permutations = False, then blpairs = [11, 12, 13, 21, 22, 23, 31, 32, 33]. If however exclude_permutations = True, then blpairs = [11, 12, 13, 22, 23,
_protected_subplots( prediction, truth, protected, axes, metric=roc_auc_score, metric_name="ROC AUC", ) fpr, tpr, roc_auc = get_fpr_tpr_roc_pred(prediction, truth, labels) for key in labels: if "no_" in key and len(labels) == 2: continue labels_to_areas[key] = roc_auc[labels[key]] color = _hash_string_to_color(key) label_text = ( f"{key} area: {roc_auc[labels[key]]:.3f} n={true_sums[labels[key]]:.0f}" ) axes[-1, -1].plot( fpr[labels[key]], tpr[labels[key]], color=color, lw=lw, label=label_text, ) logging.info( f"ROC Label {label_text} Truth shape {truth.shape}, true sums {true_sums}", ) axes[-1, -1].set_title(f"ROC {title} n={truth.shape[0]:.0f}\n") axes[-1, -1].legend(loc="lower right") figure_path = os.path.join(prefix, "per_class_roc_" + title + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path, bbox_inches="tight") logging.info( f"Saved ROC curve at: {figure_path} with {len(protected)} protected TensorMaps.", ) return labels_to_areas def plot_roc(prediction, truth, labels, title, prefix="./figures/"): lw = 2 labels_to_areas = {} true_sums = np.sum(truth, axis=0) plt.figure(figsize=(SUBPLOT_SIZE, SUBPLOT_SIZE)) fpr, tpr, roc_auc = get_fpr_tpr_roc_pred(prediction, truth, labels) for key in labels: if "no_" in key and len(labels) == 2: continue color = _hash_string_to_color(key) labels_to_areas[key] = roc_auc[labels[key]] label_text = f"{key} area:{roc_auc[labels[key]]:.3f} n={true_sums[labels[key]]:.0f}" plt.plot( fpr[labels[key]], tpr[labels[key]], color=color, lw=lw, label=label_text, ) logging.info(f"ROC Label {label_text}") plt.xlim([0.0, 1.0]) plt.ylim([-0.02, 1.03]) plt.ylabel(RECALL_LABEL) plt.xlabel(FALLOUT_LABEL) plt.legend(loc="lower right") plt.plot([0, 1], [0, 1], "k:", lw=0.5) plt.title(f"ROC {title} n={np.sum(true_sums):.0f}\n") figure_path = os.path.join(prefix, "per_class_roc_" + title + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) logging.info(f"Saved ROC curve at: {figure_path}") return labels_to_areas def plot_rocs(predictions, truth, labels, title, prefix="./figures/"): lw = 2 true_sums = np.sum(truth, axis=0) plt.figure(figsize=(SUBPLOT_SIZE, SUBPLOT_SIZE)) for p in predictions: fpr, tpr, roc_auc = get_fpr_tpr_roc_pred(predictions[p], truth, labels) for key in labels: if "no_" in key and len(labels) == 2: continue color = _hash_string_to_color(p + key) label_text = f"{p}_{key} area:{roc_auc[labels[key]]:.3f} n={true_sums[labels[key]]:.0f}" plt.plot( fpr[labels[key]], tpr[labels[key]], color=color, lw=lw, label=label_text, ) logging.info(f"ROC Label {label_text}") plt.xlim([0.0, 1.0]) plt.ylim([-0.02, 1.03]) plt.ylabel(RECALL_LABEL) plt.xlabel(FALLOUT_LABEL) plt.legend(loc="lower right") plt.plot([0, 1], [0, 1], "k:", lw=0.5) plt.title(f"ROC {title} n={np.sum(true_sums):.0f}\n") figure_path = os.path.join(prefix, "per_class_roc_" + title + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) logging.info(f"Saved ROC curve at: {figure_path}") def _figure_and_subplot_axes_from_total(total_plots: int): cols = max(2, int(math.ceil(math.sqrt(total_plots)))) rows = max(2, int(math.ceil(total_plots / cols))) return plt.subplots(rows, cols, figsize=(cols * SUBPLOT_SIZE, rows * SUBPLOT_SIZE)) def subplot_rocs( rocs: List[Tuple[np.ndarray, np.ndarray, Dict[str, int]]], prefix: str = "./figures/", ): """Log and tabulate AUCs given as 
nested dictionaries in the format '{model: {label: auc}}'""" lw = 2 fig, axes = _figure_and_subplot_axes_from_total(len(rocs)) for roc_data, ax in zip(rocs, axes.ravel()): predicted, truth, labels = roc_data true_sums = np.sum(truth, axis=0) fpr, tpr, roc_auc = get_fpr_tpr_roc_pred(predicted, truth, labels) for key in labels: if "no_" in key and len(labels) == 2: continue color = _hash_string_to_color(key) label_text = ( f"{key} area: {roc_auc[labels[key]]:.3f} n={true_sums[labels[key]]:.0f}" ) ax.plot( fpr[labels[key]], tpr[labels[key]], color=color, lw=lw, label=label_text, ) logging.info(f"ROC Label {label_text}") ax.set_xlim([0.0, 1.0]) ax.set_ylim([-0.02, 1.03]) ax.set_ylabel(RECALL_LABEL) ax.set_xlabel(FALLOUT_LABEL) ax.legend(loc="lower right") ax.plot([0, 1], [0, 1], "k:", lw=0.5) ax.set_title(f"ROC n={np.sum(true_sums):.0f}") figure_path = prefix + "rocs_together" + IMAGE_EXT if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) def subplot_comparison_rocs( rocs: List[Tuple[Dict[str, np.ndarray], np.ndarray, Dict[str, int]]], prefix: str = "./figures/", ): """Log and tabulate AUCs given as nested dictionaries in the format '{model: {label: auc}}'""" lw = 3 fig, axes = _figure_and_subplot_axes_from_total(len(rocs)) for roc_data, ax in zip(rocs, axes.ravel()): predictions, truth, labels = roc_data true_sums = np.sum(truth, axis=0) for p in predictions: fpr, tpr, roc_auc = get_fpr_tpr_roc_pred(predictions[p], truth, labels) for key in labels: if "no_" in key and len(labels) == 2: continue color = _hash_string_to_color(p + key) label_text = f"{p}_{key} area:{roc_auc[labels[key]]:.3f} n={true_sums[labels[key]]:.0f}" ax.plot( fpr[labels[key]], tpr[labels[key]], color=color, lw=lw, label=label_text, ) logging.info(f"ROC Label {label_text}") ax.set_xlim([0.0, 1.0]) ax.set_ylim([-0.02, 1.03]) ax.set_ylabel(RECALL_LABEL) ax.set_xlabel(FALLOUT_LABEL) ax.legend(loc="lower right") ax.plot([0, 1], [0, 1], "k:", lw=0.5) ax.set_title(f"ROC n={np.sum(true_sums):.0f}\n") figure_path = os.path.join(prefix, "rocs_compared_together" + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) def plot_precision_recall_per_class( prediction, truth, labels, title, prefix="./figures/", ): # Compute Precision-Recall and plot curve lw = 2.0 labels_to_areas = {} true_sums = np.sum(truth, axis=0) plt.figure(figsize=(SUBPLOT_SIZE, SUBPLOT_SIZE)) for k in labels: c = _hash_string_to_color(k) precision, recall, _ = precision_recall_curve( truth[:, labels[k]], prediction[:, labels[k]], ) average_precision = average_precision_score( truth[:, labels[k]], prediction[:, labels[k]], ) label_text = ( f"{k} mean precision:{average_precision:.3f} n={true_sums[labels[k]]:.0f}" ) plt.plot(recall, precision, lw=lw, color=c, label=label_text) logging.info(f"prAUC Label {label_text}") labels_to_areas[k] = average_precision plt.xlim([0.0, 1.00]) plt.ylim([-0.02, 1.03]) plt.xlabel(RECALL_LABEL) plt.ylabel(PRECISION_LABEL) plt.legend(loc="lower left") plt.title(f"{title} n={np.sum(true_sums):.0f}") figure_path = os.path.join(prefix, "precision_recall_" + title + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) logging.info(f"Saved Precision Recall curve at: {figure_path}") return labels_to_areas def plot_precision_recalls(predictions, truth, labels, title, prefix="./figures/"): # Compute Precision-Recall and 
plot curve for each model lw = 2.0 true_sums = np.sum(truth, axis=0) plt.figure(figsize=(SUBPLOT_SIZE, SUBPLOT_SIZE)) for p in predictions: for k in labels: c = _hash_string_to_color(p + k) precision, recall, _ = precision_recall_curve( truth[:, labels[k]], predictions[p][:, labels[k]], ) average_precision = average_precision_score( truth[:, labels[k]], predictions[p][:, labels[k]], ) label_text = f"{p}_{k} mean precision:{average_precision:.3f} n={true_sums[labels[k]]:.0f}" plt.plot(recall, precision, lw=lw, color=c, label=label_text) logging.info(f"prAUC Label {label_text}") plt.xlim([0.0, 1.00]) plt.ylim([-0.02, 1.03]) plt.xlabel(RECALL_LABEL) plt.ylabel(PRECISION_LABEL) plt.legend(loc="lower left") plt.title(f"{title} n={np.sum(true_sums):.0f}") figure_path = os.path.join(prefix, "precision_recall_" + title + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) logging.info("Saved Precision Recall curve at: {}".format(figure_path)) def get_fpr_tpr_roc_pred(y_pred, test_truth, labels): # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() for k in labels: cur_idx = labels[k] aser = roc_curve(test_truth[:, cur_idx], y_pred[:, cur_idx]) fpr[labels[k]], tpr[labels[k]], _ = aser roc_auc[labels[k]] = auc(fpr[labels[k]], tpr[labels[k]]) return fpr, tpr, roc_auc def plot_waves(predicted_waves, true_waves, title, plot_path, rows=6, cols=6): f, axes = plt.subplots(rows, cols, sharex=True, figsize=(36, 36)) for i, ax in zip(range(true_waves.shape[0]), axes.ravel()): ax.plot(true_waves[i, :, 0], color="blue", label="Actual Wave") if predicted_waves is not None: ax.plot(predicted_waves[i, :, 0], color="green", label="Predicted") ax.set_xlabel("time") plt.legend(loc="lower left") figure_path = os.path.join(plot_path, title + IMAGE_EXT) if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) logging.info("Saved waves at: {}".format(figure_path)) def plot_tsne( x_embed, categorical_labels, continuous_labels, gene_labels, label_dict, figure_path, alpha, ): x_embed = np.array(x_embed) if len(x_embed.shape) > 2: x_embed = np.reshape(x_embed, (x_embed.shape[0], np.prod(x_embed.shape[1:]))) n_components = 2 rows = max(2, len(label_dict)) perplexities = [25, 75] (fig, subplots) = plt.subplots( rows, len(perplexities), figsize=(len(perplexities) * SUBPLOT_SIZE * 2, rows * SUBPLOT_SIZE * 2), ) p2y = {} for i, p in enumerate(perplexities): tsne = manifold.TSNE( n_components=n_components, init="pca", random_state=123, perplexity=p, learning_rate=20, n_iter_without_progress=500, ) p2y[p] = tsne.fit_transform(x_embed) j = -1 for tm in label_dict: j += 1 if j == rows: break categorical_subsets = {} categorical_counts = Counter() if tm in categorical_labels + gene_labels: for c in tm.channel_map: categorical_subsets[tm.channel_map[c]] = ( label_dict[tm] == tm.channel_map[c] ) categorical_counts[tm.channel_map[c]] = np.sum( categorical_subsets[tm.channel_map[c]], ) elif tm in continuous_labels: colors = label_dict[tm] for i, p in enumerate(perplexities): ax = subplots[j, i] ax.set_title(f"{tm.name} | t-SNE perplexity:{p}") if tm in categorical_labels + gene_labels: color_labels = [] for c in tm.channel_map: channel_index = tm.channel_map[c] color = _hash_string_to_color(c) color_labels.append( f"{c} n={categorical_counts[tm.channel_map[c]]}", ) ax.scatter( p2y[p][categorical_subsets[channel_index], 0], 
p2y[p][categorical_subsets[channel_index], 1], c=color, alpha=alpha, ) ax.legend(color_labels, loc="lower left") elif tm in continuous_labels: points = ax.scatter( p2y[p][:, 0], p2y[p][:, 1], c=colors, alpha=alpha, cmap="jet", ) if i == len(perplexities) - 1: fig.colorbar(points, ax=ax) ax.xaxis.set_major_formatter(NullFormatter()) ax.yaxis.set_major_formatter(NullFormatter()) ax.axis("tight") figure_path += "tsne_plot" + IMAGE_EXT if not os.path.exists(os.path.dirname(figure_path)): os.makedirs(os.path.dirname(figure_path)) plt.savefig(figure_path) logging.info(f"Saved T-SNE plot at: {figure_path}") def plot_find_learning_rate( learning_rates: List[float], losses: List[float], smoothed_losses: List[float], picked_learning_rate: Optional[float], figure_path: str, ): plt.figure(figsize=(2 * SUBPLOT_SIZE, SUBPLOT_SIZE)) plt.title("Learning rate finder") cutoff = smoothed_losses[0] plt.ylim(min(smoothed_losses), cutoff * 1.05) plt.axhline( cutoff, linestyle="--", color="k", label=f"Deltas ignored above {cutoff:.2f}", ) learning_rates = np.log(learning_rates) / np.log(10) plt.plot(learning_rates, losses, label="Loss", c="r") plt.plot(learning_rates, smoothed_losses, label="Smoothed loss", c="b") if picked_learning_rate is not None: plt.axvline( np.log(picked_learning_rate) / np.log(10), label=f"Learning rate found {picked_learning_rate:.2E}", color="g", linestyle="--", ) plt.xlabel("Log_10 learning rate") plt.legend() plt.savefig(os.path.join(figure_path, f"find_learning_rate{IMAGE_EXT}")) def plot_saliency_maps( data: np.ndarray, gradients: np.ndarray, paths: List, prefix: str, ): """Plot saliency maps of a batch of input tensors. Saliency maps for each input tensor in the batch will be saved at the file path indicated by prefix. Also creates a mean saliency map across the batch 2D tensors are assumed to be ECGs and 3D tensors are plotted with each slice as an RGB image. The red channel indicates negative gradients, and the green channel positive ones. :param data: A batch of input tensors :param gradients: A corresponding batch of gradients for those inputs, must be the same shape as data :param paths: A List of paths corresponding to each input tensor :param prefix: file path prefix where saliency maps will be saved """ if data.shape[-1] == 1: data = data[..., 0] gradients = gradients[..., 0] mean_saliency = np.zeros(data.shape[1:4] + (3,)) for batch_i, path in enumerate(paths): sample_id = os.path.basename(path).replace(TENSOR_EXT, "") if len(data.shape) == 3: ecgs = {f"{sample_id}_raw": data[batch_i], "gradients": gradients[batch_i]} _plot_ecgs(ecgs, f"{prefix}_{sample_id}_saliency_{batch_i}{IMAGE_EXT}") elif len(data.shape) == 4: cols
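# ---------------------------------------------------------------------------
# Usage sketch (illustrative only, not part of this module): the per-class
# ROC/PR bookkeeping used by plot_roc, plot_precision_recall_per_class and
# get_fpr_tpr_roc_pred above expects one-hot truth of shape
# (n_samples, n_classes) and a `labels` dict mapping class names to column
# indices. The toy arrays and class names below are made up for illustration.
import numpy as np
from sklearn.metrics import auc, average_precision_score, roc_curve

toy_labels = {"class_a": 0, "class_b": 1}
toy_truth = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
toy_prediction = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])

for name, idx in toy_labels.items():
    fpr, tpr, _ = roc_curve(toy_truth[:, idx], toy_prediction[:, idx])
    area = auc(fpr, tpr)
    ap = average_precision_score(toy_truth[:, idx], toy_prediction[:, idx])
    print(f"{name} area: {area:.3f} AP: {ap:.3f} n={toy_truth[:, idx].sum():.0f}")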
assert hparams.quant_type == QuantType.fake_quant, ( 'we only support fake_quant style of aqt for ConvAqt.') kernel = QuantOps.create_weights_fake_quant( kernel, weight_params=QuantOps.WeightParams( prec=hparams.weight_prec, half_shift=hparams.weight_half_shift, axis=kernel_reduction_axis, expected_scale_shape=expected_scale_shape), quant_type=hparams.quant_type, quantize_weights=self.quant_context.quantize_weights) # Convolution dimension_numbers = flax.linen.linear._conv_dimension_numbers( inputs.shape) # pylint: disable=protected-access metadata_context = contextlib.suppress() # Use metadata context to annotate op metadata with quantization info act_prec = None if hparams.quant_act is None else hparams.quant_act.prec if flags.FLAGS.metadata_enabled: metadata_context = compute_cost_utils.ConvMetadataMonkeyPatch( weight_prec=hparams.weight_prec, act_prec=act_prec) with metadata_context: y = lax.conv_general_dilated( inputs, kernel, strides, self.padding, lhs_dilation=self.input_dilation, rhs_dilation=self.kernel_dilation, dimension_numbers=dimension_numbers, feature_group_count=self.feature_group_count, precision=jax_precision) # TODO(shivaniagrawal): create quantized conv general dilated. # bias if self.use_bias: bias = self.param('bias', self.bias_init, (self.features,)) bias = jnp.asarray(bias, self.dtype) # The inputs can have an arbitrary number of spatial dims, so we broadcast # the bias to match: (batch_size, spatial_dim,... features) # TODO(shivaniagrawal): Consider making ConvAqt rank static (e.g. 2D) # or maybe add error checking (e.g. expect inputs to have rank N, but this # may already be checked by lax.conv_general_dialated). bias = utils.broadcast_rank(bias, inputs) y = y + bias return y # From flax.linen.Embed default_embed_init = nn.initializers.variance_scaling( 1.0, 'fan_in', 'normal', out_axis=0) # This is based on flax.linen.Embed # (https://github.com/google/flax/blob/65061e6128f6695eed441acf2bfffc3b1badd318/flax/nn/linear.py#L360) class EmbedAqt(nn.Module): """Quantized Embedding Module. A parameterized function from integers [0, n) to d-dimensional vectors. Attributes: num_embeddings: number of embeddings. features: Number of feature dimensions for each embedding. hparams: hyperparameters dtype: dtype to use for embedding. paxis_name: axis_name to which a user `pmaps` the parent module (model), refer to jax.pmap() for more documentation. This arg is used for get_bounds acts quantization (QuantOps.create_input_fake_quant) train: Whether model is training. update_bounds: Bool whether to update activation bounds. embedding_init: embedding initializer """ @dataclass class HParams: # pylint: disable=missing-docstring # Target integer precision of weights in bits. # If None, no quantization will be applied. weight_prec: Union[None, int, QuantOps.FloatQuant] # half_shift flag for weights weight_half_shift: bool # QuantOps hyperparameter to quantize inputs for logits. If None, no # activation quantization will be applied. quant_act: Optional[QuantOps.ActHParams] # Quantization strategy, one of `fake_quant` or `aqt`. 
quant_type: QuantType num_embeddings: int features: int hparams: HParams dtype: Any paxis_name: Optional[str] train: bool quant_context: quant_config.QuantContext embedding_init: InitializerType = default_embed_init def setup(self): self.embedding = self.param( 'embedding', self.embedding_init, # pylint: disable=missing-from-attributes (self.num_embeddings, self.features)) hparams = self.hparams if hparams.quant_act is not None and isinstance(hparams.quant_act.bounds, get_bounds.GetBounds.Hyper): self.get_bounds_logits = get_bounds.GetBounds( # pylint: disable=missing-from-attributes hyper=self.hparams.quant_act.bounds) self.quantized_dot = quantization.QuantizedDot( # pylint: disable=missing-from-attributes act_hparams=hparams.quant_act, quant_type=hparams.quant_type, dot_precision=None, prefer_int8_to_int32_dot=self.quant_context.prefer_int8_to_int32_dot, weight_params=QuantOps.WeightParams( prec=hparams.weight_prec, axis=(0,), expected_scale_shape=(1, self.embedding.shape[0]), half_shift=hparams.weight_half_shift)) def __call__( self, inputs, ): """Embeds the inputs along the last dimension. Args: inputs: input data, all dimensions are considered batch dimensions. Returns: Output which is embedded input data. The output shape follows the input, with an additional `features` dimension appended. """ batch_size, sequence_length = inputs.shape if inputs.dtype not in [jnp.int32, jnp.int64, jnp.uint32, jnp.uint64]: raise ValueError('Input type must be an integer or unsigned integer.') embedding = self.embedding embedding = jnp.asarray(embedding, self.dtype) hparams = self.hparams # Initialize state for stats and bounds, this would be required for logits # in the following method attend. if hparams.quant_act is not None and isinstance(hparams.quant_act.bounds, get_bounds.GetBounds.Hyper): self.get_bounds_logits( inputs, bounds_params=get_bounds.GetBounds.Params( update_stats=False, update_bounds=False, paxis_name=None), ) weight_prec = hparams.weight_prec weight_half_shift = hparams.weight_half_shift if weight_prec is not None: quantized_type = hparams.quant_type.to_jax_type() # In contrast to all other scale factor calculations in this module, we # compute per-row instead of per-column (ie, per-output-channel) scale # factors here. This is because the embedding matrix might be shared with # the output (logit) layer of the transformer, in which case the # *transpose* of the embedding matrix will be used as the weight matrix in # a mamtul. The per-row scale factors used here would thus correspond to # using per-column (because of the transpose) scale factors used by the # weight matrix in the logits layer, which is what we need for AQT. embedding_quant_ops = QuantOps.create_weights_ops( embedding, weight_params=QuantOps.WeightParams( prec=weight_prec, axis=(1,), half_shift=weight_half_shift)) embedding_quant_ops.assert_scale_shape_is(shape=(self.num_embeddings, 1)) quantized_embedding = embedding_quant_ops.to_quantized( embedding, dtype=quantized_type) quantized_embedded_inputs = quantized_embedding[inputs] # Since the embedding matrix 'quantized_embedding' is gathered based on # 'inputs' to produce the embedding tensor, we apply the same gathering to # the per-row scale factors of the embedding matrix so the scale factors # will broadcast appropriately in the subsequent call to 'to_quantized'. # TODO(malmaud): As part of quantization.py refactor, change # 'get_scale_for_aqt' to cleanly support this and hence avoid the need to # directly access a protected member of QuantOps. 
scale = embedding_quant_ops._scale[inputs] # pylint: disable=protected-access shape_utils.assert_shapes_equal(scale.shape, (batch_size, sequence_length, 1)) shape_utils.assert_shapes_equal( quantized_embedded_inputs.shape, (batch_size, sequence_length, self.features)) embedded_inputs = (quantized_embedded_inputs / scale).astype(self.dtype) else: embedded_inputs = embedding[inputs] shape_utils.assert_shapes_equal( embedded_inputs.shape, (batch_size, sequence_length, self.features)) return embedded_inputs def attend(self, query, padding_mask, **unused_kwargs): """Attend over the embedding using a query array. Args: query: array with last dimension equal the feature depth `features` of the embedding. padding_mask: boolean mask indicating which elements of 'query' are padding. Used for calculating activation statistics for the dynamic bounds quantization algorithm. **unused_kwargs: unused arguments passed from the apply method. Returns: An array with final dim `num_embeddings` corresponding to the batched inner-product of the array of query vectors against each embedding. Commonly used for weight-sharing between embeddings and logit transform in NLP models. """ del unused_kwargs batch_size = query.shape[0] if padding_mask is not None: shape_utils.assert_shapes_equal(padding_mask.shape, (batch_size, 1)) embedding = self.embedding embedding = jnp.asarray(embedding, self.dtype) # TODO(malmaud): Remove the 'mask' field from this struct so we can # make this struct a hyperparameter of the EncoderAqt class. get_bounds_params = get_bounds.GetBounds.Params( update_bounds=self.quant_context.update_bounds, update_stats=self.train, paxis_name=self.paxis_name, mask=padding_mask, module_name='logits') out = self.quantized_dot( act=query, w=jnp.transpose(embedding), get_bounds_params=get_bounds_params) return out # Forked from Flax LayerNorm module. class LayerNormAqt(nn.Module): """Layer normalization (https://arxiv.org/abs/1607.06450). Operates on the last axis of the input data. Adds quantization support to the Flax LayerNorm layer. Attributes: epsilon: A small float added to variance to avoid dividing by zero. dtype: the dtype of the computation (default: float32). Can also be eg bfloat16. Note this is the real Jax type that intermediate activations will be stored in and is separate from the quantized type specified in 'hparams' which we are simulating via downcasting operations. bias: If True, bias (beta) is added. scale: If True, multiply by scale (gamma). When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer. bias_init: Initializer for bias, by default, zero. scale_init: Initializer for scale, by default, one. hparams: An instance of LayerNormAqt.HParams with quantization-related hyperparameters. """ @dataclass class QuantHParams: # TODO(malmaud): Generalize this to other quantization formats. prec: QuantOps.FloatQuant.FloatPrec reduction_prec: Optional[QuantOps.FloatQuant.FloatPrec] @dataclass class HParams: # We have to refer to the type name with quotes here since we can't directly # refer to types being defined in the same class. quant_hparams: Optional['LayerNormAqt.QuantHParams'] hparams: HParams dtype: Any quant_context: quant_config.QuantContext epsilon: float = 1e-6 use_bias: bool = True use_scale: bool = True bias_init: InitializerType = nn.initializers.zeros scale_init: InitializerType = nn.initializers.ones @nn.compact def __call__(self, x): """Applies layer normalization on the input. 
It normalizes the activations of the layer for each given example in a batch independently, rather than across a batch like Batch Normalization. i.e. applies a transformation that maintains the mean activation within each example close to 0 and the activation standard deviation close to 1. Args: x: the inputs Returns: Normalized inputs (the same shape as inputs). """ if self.hparams.quant_hparams is None: return nn.LayerNorm( epsilon=self.epsilon, dtype=self.dtype, use_bias=self.use_bias, use_scale=self.use_scale, bias_init=self.bias_init, scale_init=self.scale_init)( x) # To match the behavior of the upstream layernorm before the quantization # start step, we use float32 for intermediate computations. dtype = jnp.float32 x = x.astype(dtype) hparams = self.hparams num_features = x.shape[-1] if self.use_scale: scale_param = self.param('scale', self.scale_init, (num_features,)) if self.use_bias: bias_param = self.param('bias', self.bias_init, (num_features,)) def quantized_layernorm(x): prec = hparams.quant_hparams.prec fp_quant = QuantOps.FloatQuant(is_scaled=False, fp_spec=prec) quant_ops = QuantOps.create_symmetric_fp(fp_quant=fp_quant, bounds=None) def to_quantized(x): return quant_ops.to_quantized(x, dtype=dtype) # If epsilon is too small to represent in the quantized format, we set it # to the minimal representative non-zero value to avoid the possibility of # dividing by zero. fp_bounds = quantization.fp_cast.get_bounds(prec.exp_min, prec.exp_max, prec.sig_bits) epsilon = max(self.epsilon, fp_bounds.flush_to_zero_bound) quantized_epsilon = to_quantized(jnp.array(epsilon, dtype=dtype)) # If
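# ---------------------------------------------------------------------------
# Schematic sketch (NOT the aqt API): the per-row embedding quantization flow
# described in EmbedAqt.__call__ above, written in plain jax.numpy. Each
# embedding row gets its own symmetric scale (axis=(1,) reduction, scale shape
# (num_embeddings, 1)); rows and scales are gathered by token id, and the
# gathered per-row scales broadcast over the feature axis when dequantizing.
# The function name, toy precision and example inputs are illustrative only.
import jax.numpy as jnp


def _sketch_fake_quant_embedding_lookup(embedding, token_ids, prec=8):
  max_abs = jnp.max(jnp.abs(embedding), axis=1, keepdims=True)  # (num_emb, 1)
  scale = (2**(prec - 1) - 1) / jnp.maximum(max_abs, 1e-12)     # per-row scale
  quantized = jnp.round(embedding * scale)                      # integer grid
  gathered = quantized[token_ids]                               # (b, s, features)
  gathered_scale = scale[token_ids]                             # (b, s, 1)
  return gathered / gathered_scale                              # dequantize

# Example: _sketch_fake_quant_embedding_lookup(jnp.ones((10, 4)),
#                                              jnp.array([[0, 3]]))
# returns an array of shape (1, 2, 4).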
Let $p(\theta)= a+b\mathbbm{e}^{\mathbbm{i}\theta},$ and $q(\theta)=c+d\mathbbm{e}^{\mathbbm{i}\theta}.$ If there are more than $4p$ distinct $\theta\in[0,2\pi]$ such that \begin{equation*} |p(\theta)|^p-|q(\theta)|^p=C, \end{equation*} then $ab=cd$ and $a^2+b^2=c^2+d^2.$ \end{lem} \begin{lem}\label{lem: zerosetmeasurezero2} Let $p\geq1 $ be an integer and let $a,b,c,d,C \in\mathbb{R}, \gamma>0, \kappa\neq 0,\pm 1.$ Then the set of $\theta$ such that \begin{equation}\label{measurezero} \left|a+b\mathbbm{e}^{\mathbbm{i}\theta}+c\mathbbm{e}^{\mathbbm{i}(\gamma+1)\theta}\right|^p-\left|\kappa a+\frac{1}{\kappa}b\mathbbm{e}^{\mathbbm{i}\theta}+\kappa c\mathbbm{e}^{\mathbbm{i}(\gamma+1)\theta}\right|^p=C \end{equation} has measure zero. \end{lem} \section{The proof of Theorem \ref{thm: main support}} Before proving Theorem \ref{thm: main support} we will first prove a preliminary result which shows, even without the assumption that $x(t)$ is collision free, that $f_\xi(s)$ is a peicewise linear function whose set of knots is contained in $\mathcal{D}(x)$. This result is based on the observation that we may write \begin{equation*} f_\xi(s)=\sum_{i<j} \alpha_{i,j}(s)\beta_{i,j}(\xi). \end{equation*} where for each $i<j$, \begin{equation}\label{yform} \beta_{i,j}(\xi) \coloneqq \sum_{\ell=i}^j a_\ell\mathbbm{e}^{\mathbbm{i}\xi \Delta_{i,\ell}} \end{equation} is a function that only depends on $\xi$ and $\alpha_{i,j}(s)$ is piecewise linear function of $s$ whose singularities are contained in $\mathcal{D}(x).$ Specifically, we prove the following theorem. We emphasize that this result does not assume that $x(t)$ is collision free, which is why for $d\in\mathcal{D}(x)$ there might be multiple $i,j$ such that $\Delta_{i,j}=d$. \begin{thm} \label{thm: piecewiselinear} Let $p\geq 1$ be an integer, and assume $w(t)=1_{[0,1]}(t).$ For $i\leq j,$ $\beta_{i,j}(\xi)$ be as in \eqref{yform}. Then, for every fixed $\xi,$ the function $f_\xi(s) = \|g_{s,\xi}\ast x\|_p^p$ is piecewise linear, and $\partial_s^2 f_\xi(s)$ is a grid-free sparse signal whose support is contained in $\mathcal{D}(x).$ Specifically, \begin{equation}\label{eqn: 2nd deriv} \partial_s^2 f_\xi(s) = \sum_{d\in \mathcal{D}(x)} \left(\sum_{\Delta_{i,j}=d} c_{i,j}(\xi)\right) \delta_{d}, \end{equation} where \begin{equation}\label{fdoubleprimespecial} c_{i,i+1} (\xi) = |\beta_{i,i+1}(\xi)|^p-|\beta_{i+1,i+1}(\xi)|^p-|\beta_{i,i}(\xi)|^p \end{equation} and for $j\geq i+2$ \begin{equation}\label{fdoubleprimedistinct} c_{i,j}(\xi)= |\beta_{i,j}(\xi)|^p+|\beta_{i+1,j-1}(\xi)|^p-|\beta_{i+1,j}(\xi)|^p-|\beta_{i,j-1}(\xi)|^p. \end{equation} \end{thm} \begin{proof} We first note that \begin{align*} |(g_{s,\xi}\star x)(t)| & =\left|\sum_{i=1}^k a_ig_{s,\xi}(t-v_i)\right|\\ & =\left|\sum_{i=1}^k a_i\mathbbm{e}^{\mathbbm{i}\xi(t-v_i)}1_{[v_i,v_i+s]}(t)\right|\\ & =\left|\sum_{i=1}^k a_i\mathbbm{e}^{-\mathbbm{i}\xi v_i}1_{[v_i,v_i+s]}(t)\right|. \end{align*} For $I\subseteq \{1,\ldots, k\},$ let $R_I(s)$ be the set of $t$ for which $a_i\mathbbm{e}^{-\mathbbm{i}\xi v_i}1_{[v_i,v_i+s]}(t)$ is nonzero if and only if $i\in I$, i.e., \begin{equation*} R_I(s) = \{t: t\in [v_i,v_i+s]\; \forall i\in I, t\notin [v_i,v_i+s]\; \forall i\notin I\}. \end{equation*} Then, since $w(t)=1_{[0,1]}(t)$ it is clear that for $t\in R_I,$ \begin{equation*} |(g_{s,\xi}\star x)(t)| = \left|\sum_{i\in I} a_i \mathbbm{e}^{-\mathbbm{i}\xi v_i}\right| \eqqcolon y_I(\xi). 
\end{equation*} Therefore, \begin{equation}\label{fform} f_\xi(s) =\|(g_{s,\xi}\star x)(t)\|_p^p= \sum_{I\subseteq \{1,\ldots k\}} |y_I(\xi)|^p|R_I(s)|, \end{equation} where $|R_I(s)|$ denotes the Lebesgue measure of $R_I(s)$. We will show that for all $I\subseteq\{1,\ldots,k\}$, $|R_I(s)|$ is piecewise linear function whose knots are contained in $\mathcal{D}(x).$ First, we note that $R_I(s)=\emptyset$ unless $I$ has the form $\{i,i+1,\ldots, j-1, j\}$ for some $i\leq j.$ Therefore, \begin{equation}\label{eqn: modulation times set size} f_s(\xi) = \sum_{i=1}^k\sum_{j=i}^k |\beta_{i,j}(\xi)|^p|R_{i,j}(s)|, \end{equation} where $R_{i,j}\coloneqq R_{\{i,\ldots,j\}}.$ and, as in \eqref{yform}, $\beta_{i,j}(\xi)$ is given by \begin{equation*} |\beta_{i,j}(\xi)| = \left|\sum_{\ell=i}^j a_\ell\mathbbm{e}^{\mathbbm{i}\xi \Delta_{i,\ell}}\right| = \left|\sum_{\ell=i}^j a_\ell\mathbbm{e}^{\mathbbm{i}\xi v_\ell}\right|. \end{equation*} Now, turning our attention to $R_{i,j}(s),$ we observe by definition that a point $t$ is in $R_{i,j}(s)$ if and only if it satisfies the following three conditions: \begin{align*} v_\ell\leq &t \leq v_\ell+s \quad \text{for all } i\leq \ell \leq j,\\ &t>v_{i-1}+s, \text{ and }\\ &t<v_{j+1}. \end{align*} Therefore, letting $\lor(a,b)$ and $\wedge(a,b)$ denote $\min\{a,b\}$ and $\max\{a,b\}$, we see \begin{align} \label{R} R_{i,j}(s) &= [v_j,v_i+s]\cap [v_{i-1}+s,v_{j+1}] \\ &= [v_j \lor (v_{i-1}+s), (v_i+s) \wedge v_{j+1}], \end{align} and therefore \begin{equation*} |R_{i,j}(s)| = ((v_i+s) \wedge v_{j+1})-(v_j \lor (v_{i-1}+s)) \end{equation*} if the above quantity is positive and zero otherwise. It follows from $(\ref{R})$ that $|R_{i,j}(s)|$ is a piecewise linear function, and that $\partial_s^2|R_{i,j}(s)|$ is given by \begin{equation}\label{Rdirac} \partial_s^2 |R_{i,j{(S)}}| = \delta_{\Delta_{i,j}}(s) + \delta_{\Delta_{i-1,j+1}}(s) - \delta_{\Delta_{i-1,j}}(s) - \delta_{\Delta_{i,j+1}}(s). \end{equation} We note that in order for this equation to be valid for all $1\leq i<j\leq k,$ we identify $v_0$ and $v_{k+1}$ with $-\infty$ and $\infty,$ and therefore, $\delta_{{\Delta_{0,j}}}$ $\delta_{{\Delta_{i-1,k+1}}}$ are interpreted as being the zero function since the domain of $f$ is $(0,\infty).$ Likewise $\delta_{{\Delta_{i,i}}}=\delta_0$ is interpreted as the zero function in the above equation. Combining (\ref{Rdirac}) with (\ref{eqn: modulation times set size}) implies that $\partial_s^2 f_\xi(s)$ is a sparse signal with support contained in $\mathcal{D}(x),$ and for $d\in\mathcal{D}(x),$ \begin{equation*} \partial_s^2 f_\xi(d)=\sum_{\Delta_{i,j}=d}c_{i,j}(\xi) \end{equation*} as desired. \end{proof} Before we prove Theorem \ref{thm: main support}, we note the following example which shows that, in general, the support of $\partial_s^2 f_\xi(s)$ may be a proper subset of $\mathcal{D}(x).$ \begin{example} \label{ex: collision example} If $p=2$ and \begin{equation*} x(t) = \delta_{1}(t)+\delta_{2}(t)+\delta_{3}(t)-\delta_{4}(t), \end{equation*} then $2\in\mathcal{D}(x),$ but \begin{equation*}\partial_s^2 f_\xi(2)=0. \end{equation*} \end{example} \begin{proof}For this choice of $x,$ there are two pairs $(i,j)$ such that $\Delta_{i,j}=2,$ namely $(1,3)$ and $(2,4)$. Therefore, by Theorem \ref{thm: piecewiselinear}, \begin{align*} \partial_s^2 f_\xi(2) &= \left(|y_{1,3}(\xi)|^2 + |y_{2,2}(\xi)|^2 - |y_{1,2}(\xi)|^2 - |y_{2,3}(\xi)|^2\right) \\&\:\:\:+ \left(|y_{2,4}(\xi)|^2 + |y_{3,3}(\xi)|^2 - |y_{2,3}(\xi)|^2 - |y_{3,4}(\xi)|^2\right). 
\end{align*} Inserting $(a_1,a_2,a_3,a_4)=(1,1,1,-1),$ $\Delta_{i,i+1}=1,$ and $\Delta_{i,i+2}=2$ into (\ref{yform}) implies that \begin{align*} \partial_s^2 f_\xi(2) &= \left(|1+\mathbbm{e}^{\mathbbm{i}\xi}+\mathbbm{e}^{2i\xi}|^2 + 1 - |1+\mathbbm{e}^{\mathbbm{i}\xi}|^2 - |1+\mathbbm{e}^{\mathbbm{i}\xi}|^2\right) \\&\:\:\:+ \left(|1+\mathbbm{e}^{\mathbbm{i}\xi}-\mathbbm{e}^{2i\xi}|^2 + 1 - |1+\mathbbm{e}^{\mathbbm{i}\xi}|^2 - |1-\mathbbm{e}^{\mathbbm{i}\xi}|^2\right)\\ &=|1+\mathbbm{e}^{\mathbbm{i}\xi}+\mathbbm{e}^{2i\xi}|^2+|1+\mathbbm{e}^{\mathbbm{i}\xi}-\mathbbm{e}^{2i\xi}|^2 \\&\:\:\:+2 - 3|1+\mathbbm{e}^{\mathbbm{i}\xi}|^2-|1-\mathbbm{e}^{\mathbbm{i}\xi}|^2\\ &=0. \end{align*} The last inequality follows from repeatedly applying the the trigonometric identities $\sin^2(\theta)+\cos^2(\theta)=1$ and $\cos(\theta)=\cos(2\theta)\cos(\theta)+\sin(2\theta)\sin(\theta).$ \end{proof} We shall now prove Theorem \ref{thm: main support}. \begin{proof}[The Proof of Theorem \ref{thm: main support}] By assumption, $x(t)$ is collision free. Therefore, for all $d\in\mathcal{D}(x)$, there is a unique $i,j$ such that $\Delta_{i,j}=d$, and so, by \eqref{eqn: 2nd deriv}, it suffices to show that $c_{i,j}(\xi)\neq 0$ for all $i<j$ and for almost every $\xi\in\mathbb{R},$ where as in \eqref{fdoubleprimespecial} and for $j\geq i+$ \eqref{fdoubleprimedistinct} \begin{equation*} c_{i,i+1}(\xi) = |\beta_{i,i+1}(\xi)|^p - |\beta_{i+1,i+1}(\xi)|^p - |\beta_{i,i}(\xi)|^p, \end{equation*} and for $j\geq i+2$, \begin{equation*} c_{i,j}(\xi) = |\beta_{i,j}(\xi)|^p + |\beta_{i+1,j-1}(\xi)|^p - |\beta_{i+1,j}(\xi)|^p - |\beta_{i,j-1}(\xi)|^p, \end{equation*} where \begin{equation*} \beta_{i,j}(\xi) = \left|\sum_{k=i}^ja_k \mathbbm{e}^{-\mathbbm{i}\xi\Delta_{i,k}}\right|. \end{equation*} Observe that $\beta_{i,j}(\xi)$ are generalized exponential Laurent polynomials of the form introduced in Section \ref{sec: exppoly}, and in particular, $\beta_{i,j}\in\mathcal{E}_0(\Delta_{i,j}).$ Therefore, when $j\geq i+2,$ it follows from Lemma \ref{lem: zerosetmeasurezero} that $c_{i,j}(\xi)$ vanishes on a set of measure zero since if $c_{i,j}(\xi)=0$ we have \begin{equation*} |\beta_{i,j}(\xi)|^p + |\beta_{i+1,j-1}(\xi)|^p = |\beta_{i+1,j}(\xi)|^p + |\beta_{i,j-1}(\xi)|^p. \end{equation*} In the case where $j=i+1,$ we see that \begin{equation*} c_{i,i+1}(\xi) = |a_i+a_{i+1}\mathbbm{e}^{-\mathbbm{i}\xi\Delta_{i,i+1}}|^p-|a_i|^p-|a_{i+1}|^p, \end{equation*} For any $\xi$ such that $c_{i,i+1}(\xi)=0,$ we see that $\xi\Delta_{i,i+1}$ is a solution to \begin{equation*} \left|a_i + a_{i+1}\mathbbm{e}^{\mathbbm{i}\theta}\right|^2-\left(|a_i|^p+|a_{i+1}|^p\right)^{2/p}=0.\end{equation*} Thus, $c_{i,i+1}(\xi)$ vanishes on a set of measure zero since the left-hand side of the above equation is a trigonometric polynomial. \end{proof} \section{The Proof of Theorems \ref{thm: heights}} \label{sec: heights proof} \begin{proof}Let $\xi_1,\xi_2,\ldots,\xi_L$ be i.i.d. standard normal random variables. Since $x$ is collision free, with probability one, each of the $\xi_\ell\Delta_{i,i+1}(x)$ are distinct modulo $2\pi$, i.e. \begin{equation}\label{distinctfrequencies} \xi_\ell\Delta_{i,i+1}(x) \not \equiv \xi_{\ell'}\Delta_{i',i'+1}(x)\mod 2\pi \end{equation} for all $1\leq i,i'\leq k-1$ and $1\leq \ell,\ell'\leq L,$ except when $(i,\ell)= (i',\ell').$ For the rest of the proof we will assume this is the case. 
Let \begin{equation*} \widetilde{x}(t)=\sum_{j=1}^k \widetilde{a}_j \delta_{\widetilde{v}_j}(t) \end{equation*} be a signal $\mathcal{D}(\widetilde{x})=\mathcal{D}(x),$ $\|\vec{a}\|_p=\|\vec{\widetilde{a}}\|_p$, and $\partial_s^2f_{\xi_\ell}[x](d)=\partial_s^2f_{\xi_\ell}[\widetilde{x}](d)$ for all $d\in\mathcal{D}(x)$ and for all $1\leq \ell \leq L-1.$ Note that $\widetilde{x}(t)$ depends on $\xi_1,\ldots, \xi_{L-1},$ but is independent of $\xi_L.$ By assumption that $x(t)$ and $\widetilde{x}(t)$ are collision free (and also, as discussed in the Section \ref{sec: intro}, we assume that we are not in the special case where $k=6$ and the $\{v_j\}_{j=1}^k$ belong to a special parametrized family). Therefore, the fact that $\mathcal{D}(x)=\mathcal{D}(\widetilde{x})$ implies that the support sets of $x$ and $\widetilde{x}$ are equivalent up to translation and reflection, so we may assume without loss of generality that $\Delta_{i,j}(x)=\Delta_{i,j}(\widetilde{x})\eqqcolon\Delta_{i,j}$ for all $1\leq i\leq j\leq k.$ We will show that $\vec{\widetilde{a}}$ must be given by \begin{equation}\label{eqn: alternation} \widetilde{a}_i = \begin{cases} \frac{1}{c}a_i & \text{if } i \text{ is odd} \\ ca_i & \text{if } i \text{ is even} \end{cases}, \end{equation} where $c=\pm 1$ or \begin{equation}\label{eqn: alternate c} |c|^p= \frac{\sum_{i=1}^{\lfloor \frac{k+1}{2}\rfloor}|a_{2i-1}|^p}{\sum_{i=1}^{\lfloor \frac{k}{2}\rfloor}|a_{2i}|^p}. \end{equation} Then, we will show that, if $c$ satisfies \eqref{eqn: alternate c}, but $c\neq \pm 1,$ then with probability one $$\partial_s^2f_{\xi_L}[x](\Delta_{1,3})\neq \partial_s^2f_{\xi_L}[\widetilde{x}](\Delta_{1,3}).$$ Since $\widetilde{x}(t)$ \big{(}and therefore $\vec{\widetilde{a}}$\big{)} was chosen to depend on $\xi_1,\ldots,\xi_{L-1},$ but not $\xi_L,$ these two facts together will imply that, with probability one, if $\widetilde{x}(t)$ is any signal such that $\mathcal{D}(\widetilde{x})=\mathcal{D}(x)$ and $\partial_s^2f_{\xi_\ell}[x](d)=\partial_s^2f_{\xi_\ell}[\widetilde{x}](d)$ for all $d\in\mathcal{D}$ and all $1\leq \ell \leq L,$ then $\vec{\widetilde{a}}=\pm \vec{a}$ and therefore $\widetilde{x}(t)$ is equivalent to $\pm x(t)$ up to reflection and translation. We first will show that \eqref{eqn: alternation} holds in the case where $p$ is odd. Setting $\partial_s^2f_{\xi_\ell}[x](\Delta_{i,i+1}) =\partial_s^2f_{\xi_\ell}[\widetilde{x}](\Delta_{i,i+1})$ and using \eqref{fdoubleprimespecial} implies that for all $1\leq \ell\leq L-1$ and all $1\leq i \leq k-1$ we have \begin{align} &|a_i+a_{i+1}\mathbbm{e}^{\mathbbm{i}\xi_\ell\Delta_{i,i+1}}|^p-|a_{i+1}|^p-|a_{i}|^p\nonumber\\ =&|\widetilde{a}_i+\widetilde{a}_{i+1}\mathbbm{e}^{\mathbbm{i}\xi_\ell\Delta_{i,i+1}}|^p-|\widetilde{a}_{i+1}|^p-|\widetilde{a}_{i}|^p.\label{eqn: first deltas same} \end{align} Therefore, $\xi_1\Delta_{i,i+1},\ldots,\xi_{L-1}\Delta_{i,i+1}$ constitute $L-1$ solutions, which are distinct modulo $2\pi,$ to the equation \begin{equation*} |a_i + a_{i+1}\mathbbm{e}^{\mathbbm{i}\theta}|^p-|\widetilde{a}_i+\widetilde{a}_{i+1}^{\mathbbm{i}\theta}|^p=|\widetilde{a}_i|^p+|\widetilde{a}_{i+1}|^p-|a_i|^p-|a_{i+1}|^p. 
\end{equation*}
Since $L-1> 4p,$ Lemma \ref{lem: 4psolutions} implies that
\begin{equation}\label{prodsame}
a_ia_{i+1}=\widetilde{a}_i\widetilde{a}_{i+1} \quad\text{and}\quad a_i^2+a_{i+1}^2=\widetilde{a}_i^2+\widetilde{a}_{i+1}^2
\end{equation}
for all $1\leq i \leq k-1.$ It follows from (\ref{prodsame}) that \eqref{eqn: alternation} holds with $c\coloneqq a_1/\widetilde{a}_1.$

Now consider the case where $p=2m$ is even. Similarly to \eqref{eqn: first deltas same}, the assumption that $\partial_s^2f_{\xi_\ell}[x](\Delta_{i,i+1})= \partial_s^2f_{\xi_\ell}[\widetilde{x}](\Delta_{i,i+1})$ implies that for all $1\leq \ell\leq L-1,$ $1\leq i \leq k-1,$
\begin{align*}
&|a_{i}+a_{i+1}\mathbbm{e}^{\mathbbm{i}\xi_\ell\Delta_{i,i+1}}|^{2m}-|a_i|^{2m}-|a_{i+1}|^{2m}\\
=& |\widetilde{a}_{i}+\widetilde{a}_{i+1}\mathbbm{e}^{\mathbbm{i}\xi_\ell\Delta_{i,i+1}}|^{2m}-|\widetilde{a}_i|^{2m}-|\widetilde{a}_{i+1}|^{2m}.
\end{align*}
Therefore, for all $1\leq i \leq k-1,$ $\xi_1\Delta_{i,i+1},\ldots,\xi_{L-1}\Delta_{i,i+1}$ are $L-1$ zeros of
\begin{align*}
h_i(\theta)&\coloneqq|a_{i}+a_{i+1}\mathbbm{e}^{\mathbbm{i}\theta}|^{2m}-|\widetilde{a}_{i}+\widetilde{a}_{i+1}\mathbbm{e}^{\mathbbm{i}\theta}|^{2m}\\
&\quad\quad+|\widetilde{a}_i|^{2m}+|\widetilde{a}_{i+1}|^{2m}-|a_i|^{2m}-|a_{i+1}|^{2m}
\end{align*}
which are distinct modulo $2\pi.$ Using the fact that
\begin{equation*}
|a_{i}+a_{i+1}\mathbbm{e}^{\mathbbm{i}\theta}|^{2}=a_i^2+a_{i+1}^2+2a_ia_{i+1}\cos(\theta)
\end{equation*}
one may verify that $h_i(\theta)$ is a trigonometric polynomial of degree at
# Inverse transform sampling

Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, Smirnov transform, universality of the uniform, or the golden rule[1]) is a basic method for pseudo-random number sampling, i.e. for generating sample numbers at random from any probability distribution given its cumulative distribution function.

Inverse transformation sampling takes uniform samples of a number ${\displaystyle u}$ between 0 and 1, interpreted as a probability, and then returns the largest number ${\displaystyle x}$ from the domain of the distribution ${\displaystyle P(X)}$ such that ${\displaystyle P(-\infty <X\leq x)\leq u}$. For example, imagine that ${\displaystyle P(X)}$ is the standard normal distribution with mean zero and standard deviation one. The table below shows samples taken from the uniform distribution and their representation on the standard normal distribution.

Transformation from uniform sample to normal:

| ${\displaystyle u}$ | ${\displaystyle F^{-1}(u)}$ |
|---|---|
| .5 | 0 |
| .975 | 1.95996 |
| .995 | 2.5758 |
| .999999 | 4.75342 |
| 1-2^{-52} | 8.12589 |

*Inverse transform sampling for normal distribution.* We are randomly choosing a proportion of the area under the curve and returning the number in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to choose a number in the far end of tails because there is very little area in them which would require choosing a number very close to zero or one.

Computationally, this method involves computing the quantile function of the distribution — in other words, computing the cumulative distribution function (CDF) of the distribution (which maps a number in the domain to a probability between 0 and 1) and then inverting that function. This is the source of the term "inverse" or "inversion" in most of the names for this method. Note that for a discrete distribution, computing the CDF is not in general too difficult: we simply add up the individual probabilities for the various points of the distribution. For a continuous distribution, however, we need to integrate the probability density function (PDF) of the distribution, which is impossible to do analytically for most distributions (including the normal distribution). As a result, this method may be computationally inefficient for many distributions and other methods are preferred; however, it is a useful method for building more generally applicable samplers such as those based on rejection sampling.

For the normal distribution, the lack of an analytical expression for the corresponding quantile function means that other methods (e.g. the Box–Muller transform) may be preferred computationally. It is often the case that, even for simple distributions, the inverse transform sampling method can be improved on:[2] see, for example, the ziggurat algorithm and rejection sampling. On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R.[3]

## Definition

The probability integral transform states that if ${\displaystyle X}$ is a continuous random variable with cumulative distribution function ${\displaystyle F_{X}}$, then the random variable ${\displaystyle Y=F_{X}(X)}$ has a uniform distribution on [0, 1].
The inverse probability integral transform is just the inverse of this: specifically, if ${\displaystyle Y}$ has a uniform distribution on [0, 1] and if ${\displaystyle X}$ has a cumulative distribution ${\displaystyle F_{X}}$, then the random variable ${\displaystyle F_{X}^{-1}(Y)}$ has the same distribution as ${\displaystyle X}$ . Graph of the inversion technique from ${\displaystyle x}$ to ${\displaystyle F(x)}$. On the bottom right we see the regular function and in the top left its inversion. ## Intuitions From ${\displaystyle U\sim \mathrm {Unif} [0,1]}$, we want to generate ${\displaystyle X}$ with CDF ${\displaystyle F_{X}(x).}$We assume ${\displaystyle F_{X}(x)}$to be a strictly increasing function, which provides good intuitions but is not true in general. We want to see if we can find some strictly monotone transformation ${\displaystyle T:[0,1]\mapsto \mathbb {R} }$, such that ${\displaystyle T(U){\overset {d}{=}}X}$. Still, the condition of strictly monotone may not be true for general situations. We will have ${\displaystyle F_{X}(x)=\Pr(X\leq x)=\Pr(T(U)\leq x)=\Pr(U\leq T^{-1}(x))=T^{-1}(x),{\text{ for }}x\in \mathbb {R} .}$ So we got ${\displaystyle F_{X}}$ to be the inverse function of ${\displaystyle T}$, or, equivalently ${\displaystyle T(u)=F_{X}^{-1}(u),u\in [0,1].}$ Therefore, we can generate ${\displaystyle X}$ from ${\displaystyle F_{X}^{-1}(U).}$ ## The method The problem that the inverse transform sampling method solves is as follows: • Let ${\displaystyle X}$ be a random variable whose distribution can be described by the cumulative distribution function ${\displaystyle F_{X}}$. • We want to generate values of ${\displaystyle X}$ which are distributed according to this distribution. The inverse transform sampling method works as follows: 1. Generate a random number ${\displaystyle u}$ from the standard uniform distribution in the interval ${\displaystyle [0,1]}$, e.g. from ${\displaystyle U\sim \mathrm {Unif} [0,1].}$ 2. Find the inverse of the desired CDF, e.g. ${\displaystyle F_{X}^{-1}(x)}$. 3. Compute ${\displaystyle X=F_{X}^{-1}(u)}$. The computed random variable ${\displaystyle X}$ has distribution ${\displaystyle F_{X}(x)}$. Expressed differently, given a continuous uniform variable ${\displaystyle U}$ in ${\displaystyle [0,1]}$ and an invertible cumulative distribution function ${\displaystyle F_{X}}$, the random variable ${\displaystyle X=F_{X}^{-1}(U)}$ has distribution ${\displaystyle F_{X}}$ (or, ${\displaystyle X}$ is distributed ${\displaystyle F_{X}}$). A treatment of such inverse functions as objects satisfying differential equations can be given.[4] Some such differential equations admit explicit power series solutions, despite their non-linearity.[citation needed] ## Examples • As an example, suppose we have a random variable ${\displaystyle U\sim \mathrm {Unif} (0,1)}$ and a cumulative distribution function {\displaystyle {\begin{aligned}F(x)=1-\exp(-{\sqrt {x}})\end{aligned}}} In order to perform an inversion we want to solve for ${\displaystyle F(F^{-1}(u))=u}$ {\displaystyle {\begin{aligned}F(F^{-1}(u))&=u\\1-\exp \left(-{\sqrt {F^{-1}(u)}}\right)&=u\\F^{-1}(u)&=(-\log(1-u))^{2}\\&=(\log(1-u))^{2}\end{aligned}}} From here we would perform steps one, two and three. • As another example, we use the exponential distribution with ${\displaystyle F_{X}(x)=1-e^{-\lambda x}}$ for x ≥ 0 (and 0 otherwise). 
By solving y=F(x) we obtain the inverse function ${\displaystyle x=F^{-1}(y)=-{\frac {1}{\lambda }}\ln(1-y).}$ It means that if we draw some ${\displaystyle y_{0}}$from a ${\displaystyle U\sim Unif(0,1)}$and compute ${\displaystyle x_{0}=F_{X}^{-1}(y_{0})=-{\frac {1}{\lambda }}\ln(1-y_{0}),}$This ${\displaystyle x_{0}}$has exponential distribution. The idea is illustrated in the following graph: Random numbers yi are generated from a uniform distribution between 0 and 1, i.e. Y ~ U(0, 1). They are sketched as colored points on the y-axis. Each of the points is mapped according to x=F−1(y), which is shown with gray arrows for two example points. In this example, we have used an exponential distribution. Hence, for x ≥ 0, the probability density is ${\displaystyle \varrho _{X}(x)=\lambda e^{-\lambda \,x}}$ and the cumulated distribution function is ${\displaystyle F(x)=1-e^{-\lambda \,x}}$. Therefore, ${\displaystyle x=F^{-1}(y)=-{\frac {\ln(1-y)}{\lambda }}}$. We can see that using this method, many points end up close to 0 and only few points end up having high x-values - just as it is expected for an exponential distribution. Note that the distribution does not change if we start with 1-y instead of y. For computational purposes, it therefore suffices to generate random numbers y in [0, 1] and then simply calculate ${\displaystyle x=F^{-1}(y)=-{\frac {1}{\lambda }}\ln(y).}$ ## Proof of correctness Let F be a continuous cumulative distribution function, and let F−1 be its inverse function (using the infimum because CDFs are weakly monotonic and right-continuous):[5] ${\displaystyle F^{-1}(u)=\inf \;\{x\mid F(x)\geq u\}\qquad (0 Claim: If U is a uniform random variable on (0, 1) then ${\displaystyle F^{-1}(U)}$ has F as its CDF. Proof: {\displaystyle {\begin{aligned}&\Pr(F^{-1}(U)\leq x)\\&{}=\Pr(U\leq F(x))\quad &({\text{applying }}F,{\text{ to both sides}})\\&{}=F(x)\quad &({\text{because }}\Pr(U\leq y)=y,{\text{ when U is uniform on}}(0,1))\\\end{aligned}}} ## Truncated distribution Inverse transform sampling can be simply extended to cases of truncated distributions on the interval ${\displaystyle (a,b]}$ without the cost of rejection sampling: the same algorithm can be followed, but instead of generating a random number ${\displaystyle u}$ uniformly distributed between 0 and 1, generate ${\displaystyle u}$ uniformly distributed between ${\displaystyle F(a)}$ and ${\displaystyle F(b)}$, and then again take ${\displaystyle F^{-1}(u)}$. ## Reduction of the number of inversions In order to obtain a large number of samples, one needs to perform the same number of inversions of the distribution. One possible way to reduce the number of inversions while obtaining a large number of samples is the application of the so-called Stochastic Collocation Monte Carlo sampler (SCMC sampler) within
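A minimal, self-contained sketch of the method applied to the exponential example above (the sample size and rate parameter are arbitrary illustration choices):

```python
# Inverse transform sampling for the exponential distribution:
# draw u ~ Unif(0, 1), then return x = F^{-1}(u) = -ln(1 - u) / lam.
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                         # rate parameter (illustrative value)
u = rng.uniform(size=100_000)     # step 1: uniform samples on [0, 1)
x = -np.log(1.0 - u) / lam        # steps 2-3: apply the inverse CDF
# As noted above, -np.log(u) / lam gives the same distribution.

print(x.mean(), 1.0 / lam)        # empirical mean vs. theoretical mean 1/lam
```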
usage metrics computed from millions of users, but undisclosed in this work for privacy reasons). Deezer is jointly interested in 1) predicting new connections in the graph corresponding to new albums pairs that users would enjoy listening to together, which is achieved by performing the \textit{link prediction} task; and 2) learning groups of similar albums, with the aim of providing usage-based recommendations (i.e., if users listen to several albums from a community, other unlistened albums from this same community could be recommended to them), which is achieved by performing the \textit{community detection} task. In such an industrial application, learning high-quality album representations that would jointly enable effective link prediction and community detection would therefore be desirable. For evaluation, node communities will be compared to a ground-truth clustering of albums in 20 groups defined by their main \textit{music genre}, allowing us to assess the musical homogeneity of the node~communities~proposed~by~each~model. \subsubsection{Tasks} \label{s412} For each of these seven graphs, we assess the performance of our models on two downstream tasks. \begin{itemize} \item \textbf{Task 1:} We first consider a pure \textit{community detection} task, consisting in the extraction of a partition of the node set $\mathcal{V}$ which ideally agrees with the ground-truth communities of each graph. Communities will be retrieved by running the $k$-means algorithm (with $k$-means++ initialization \cite{arthur2017kmeans}) in the final embedding space of each model to cluster the vectors $z_i$ (with $k$ matching the known number of communities from Table~\ref{tab:datasetstats}); except for some baseline methods that explicitly incorporate another strategy to partition nodes (see Section \ref{s413}). We compare the obtained partitions to the ground-truth using the popular \textit{Adjusted Mutual Information (AMI)} and \textit{Adjusted Rand Index (ARI)} scores\footnote{\label{footnote:sklearn} Scores are computed via scikit-learn \cite{pedregosa2011scikit}, using formulas provided here: \href{https://scikit-learn.org/stable/modules/classes.html?highlight=metrics\#module-sklearn.metrics}{https://scikit-learn.org/stable/modules/classes.html?highlight=metrics\#module-sklearn.metrics} (accessed October 13, 2021).} for clustering~evaluation. \item \textbf{Task 2:} We also consider a \textit{joint link prediction and community detection} task. In such a setting, we learn all node embedding spaces from \textit{incomplete} versions of the seven graphs, where 15\% of edges were randomly masked. We create a validation and a test set from these masked edges (from 5\% and 10\% of edges, respectively, as in Kipf and Welling~\cite{kipf2016-2}) and the same number of randomly picked unconnected node pairs acting as ``non-edge'' negative pairs. Then, using decoder predictions $\hat{A}_{ ij}$ computed from vectors $z_i$ and $z_j,$ we evaluate each model's ability to distinguish edges from non-edges, i.e., \textit{link prediction}, from the embedding space, using the \textit{Area Under the ROC Curve (AUC)} and \textit{Average Precision (AP)} scores\footref{footnote:sklearn} on the test sets. Jointly, we also evaluate the community detection performance obtained from such incomplete graphs, using the same methodology and AMI/ARI scores as in Task~1. \end{itemize} In the case of Task~2, we expect AMI and ARI scores to slightly decrease w.r.t. 
Task~1, as models will only observe \textit{incomplete} versions of the graphs when learning embedding spaces. Task~2 will further assess whether empirically improving community detection inevitably leads to deteriorating the original good performances of GAE and VGAE models on link prediction. As our proposed Modularity-Inspired GAE and VGAE are designed for \textit{joint link prediction and community detection}, we expect them to 1) reach comparable (or, ideally, identical) AUC/AP link prediction scores w.r.t. standard GAE and VGAE, while 2) reaching better community~detection~scores. \subsubsection{Details on Models} \label{s413} For the aforementioned evaluation tasks and graphs, we will compare the performances of our proposed \textit{Modularity-Aware GAE and VGAE} models to standard GAE and VGAE and to several other baselines. All results reported below will verify $d = 16,$ i.e., all node embedding models will learn embedding vectors $z_i$ of dimension 16. We also tested models with $d \in \{32, 64\}$ by including them in our grid search space and reached similar conclusions to the $d = 16$ setting (we report and further discuss the impact of $d$ in Section~\ref{s42}. Note, the dimension $d$ is a selectable parameter in our public implementation, permitting direct model training on any node embedding dimension). \paragraph{\textbf{Modularity-Aware GAE and VGAE}} We trained two versions of our Modularity-Aware GAE and VGAE: one with the \textit{linear encoder} described in Section~\ref{s322}, and one with the \textit{2-layer GCN encoder} ($\text{GCN}^{(2)}$). The latter encoder includes a 32-dimensional hidden layer. We recall that link prediction is performed from inner product decoding $\hat{A}_{ij} = \sigma(z^T_i z_j)$, and that community detection is performed via a $k$-means on the final vectors $z_i$ learned by each model. During training, we used the Adam optimizer~\cite{kingma2014adam}, without dropout (but we tested models with dropout values in $\{0,0.1,0.2\}$ in our grid search optimization). All hyperparameters were carefully tuned following the procedure described in Section~\ref{s332}. For each graph, we tested learning rates from the grid $\{0.001,0.005,0.01,0.05,0.1,0.2\}$, number of training iterations in $\{100, 200, 300, ..., 800\}$, with $\lambda \in \{0, 0.01,0.05, 0.1, 0.2, 0.3, ..., 1.0\}$, $\beta \in \{0, 0.01,0.05, 0.1, 0.25, 0.5, 1.0, 1.5, 2.0\}$, $\gamma \in \{0.1, 0.2, 0.5, 1.0, 2, 5, 10\}$ and $s}%\mathbf{o \in \{1, 2, 5, 10\}$. The best hyperparameters for each graph are reported in Table~\ref{tab:hyperparameterstable}. We adopted the same optimal hyperparameters for GAE \textit{and} VGAE variants (a result which is consistent with the literature \cite{kipf2016-2}). Lastly, as exact loss computation was computationally unaffordable for our two largest graphs, SBM and Deezer-Album, their corresponding models were trained by using the FastGAE method \cite{salha2021fastgae}, approximating losses by reconstructing degree-based sampled subgraphs of $n =$ 10 000 nodes (a different one at each training iteration). 
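To make this evaluation pipeline concrete, the following minimal sketch (with purely illustrative toy data and variable names, not our actual implementation) shows how the above scores are obtained from an embedding matrix: $k$-means with $k$-means++ initialization on the vectors $z_i$ for AMI/ARI, and inner-product decoder scores $\hat{A}_{ij} = \sigma(z_i^\top z_j)$ on held-out edge/non-edge pairs for AUC/AP.
\begin{verbatim}
import numpy as np
from scipy.special import expit  # logistic sigmoid
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_mutual_info_score, adjusted_rand_score,
                             average_precision_score, roc_auc_score)

rng = np.random.default_rng(0)
n, d, k = 200, 16, 4                       # toy sizes; d = 16 as in our setting
y = rng.integers(0, k, size=n)             # toy ground-truth communities
Z = rng.normal(size=(n, d)) + 3.0 * np.eye(k)[y].dot(rng.normal(size=(k, d)))

# Community detection: k-means (k-means++ init) on the z_i, scored with AMI/ARI.
communities = KMeans(n_clusters=k, init="k-means++", n_init=10,
                     random_state=0).fit_predict(Z)
print(adjusted_mutual_info_score(y, communities),
      adjusted_rand_score(y, communities))

# Link prediction: decoder scores sigmoid(z_i^T z_j) on test node pairs,
# scored with AUC/AP (toy "edges" here are same-community pairs).
pairs = rng.integers(0, n, size=(500, 2))
is_edge = (y[pairs[:, 0]] == y[pairs[:, 1]]).astype(int)
scores = expit(np.sum(Z[pairs[:, 0]] * Z[pairs[:, 1]], axis=1))
print(roc_auc_score(is_edge, scores), average_precision_score(is_edge, scores))
\end{verbatim}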
\begin{table}[t] \centering \caption{Complete list of optimal hyperparameters of Modularity-Aware GAE and VGAE models} \vspace{0.2cm} \label{tab:hyperparameterstable} \resizebox{\textwidth}{!}{ \begin{tabular}{c|cccccccc} \toprule \textbf{Dataset} & \textbf{Learning} & \textbf{Number of} & \textbf{Dropout} & \textbf{Use of FastGAE \cite{salha2021fastgae}} & $\lambda$ & $\beta$ & $\gamma$ & $s}%\mathbf{o$ \\ & \textbf{rate} & \textbf{iterations} & \textbf{rate} & \textbf{(if yes: subgraphs size)} & & & \\ \midrule \midrule \textbf{Blogs} & 0.01 & 200 & 0.0 & No & 0.5 & 0.75 & 2 & 10\\ \textbf{Cora (featureless)} & 0.01 & 500 & 0.0 & No & 0.25 & 1.0 & 0.25 & 1\\ \textbf{Cora (with features)} & 0.01 & 300 & 0.0 & No & 0.001 & 0.01 & 1 & 1\\ \textbf{Citeseer (featureless)} & 0.01 & 500 & 0.0 & No & 0.75 & 0.5 & 0.5 & 2\\ \textbf{Citeseer (with features)} & 0.01 & 500 & 0.0 & No & 0.75 & 0.5 & 0.5 & 2\\ \textbf{Pubmed (featureless)} & 0.01 & 500 & 0.0 & No & 0.1 & 0.5 & 0.1 & 5\\ \textbf{Pubmed (with features)} & 0.01 & 700 & 0.0 & No & 0.1 & 0.5 & 10 & 2\\ \textbf{Cora-Large} & 0.01 & 500 & 0.0 & No & 0.001 & 0.1 & 0.1 & 10 \\ \textbf{SBM} & 0.01 & 300 & 0.0 & Yes (10 000) & 0.5 & 0.1 & 2 & 10 \\ \textbf{Deezer-Album} & 0.005 & 600 & 0.0 & Yes (10 000) & 0.25 & 0.25 & 1 & 5\\ \bottomrule \end{tabular}} \end{table} We used Tensorflow~\cite{abadi2016tensorflow}, training our models (as well as GAE/VGAE baselines described below) on an NVIDIA GTX 1080 GPU, and running other operations on a double Intel Xeon Gold 6134 CPU\footnote{On our machines, running times of the Modularity-Aware GAE and VGAE models were comparable to running times of their standard GAE and VGAE counterparts. For example, training each variant of VGAE on the Pubmed graph for 500 training iterations and with $s}%\mathbf{o = 5$ approximately takes 25 minutes on a single GPU (without the FastGAE method, which significantly speeds up training \cite{salha2021fastgae}). This is consistent with our claims on the comparable complexity of Modularity-Aware and standard models.}. Along with this paper, we will publicly release our source code on GitHub for reproducibility and to encourage future usage of our method\footnote{ \href{https://github.com/GuillaumeSalhaGalvan/modularity_aware_gae}{https://github.com/GuillaumeSalhaGalvan/modularity\_aware\_gae}} \paragraph{\textbf{Standard GAE and VGAE}} We compare the above Modularity-Aware GAE and VGAE to two variants of the standard GAE and VGAE: one with 2-layer GCN encoders with 32-dimensional hidden layer (which is equal to the seminal GAE and VGAE from Kipf and Welling~\cite{kipf2016-2}) and one with a linear encoder (which equals the linear GAE
angle of the intrinsic polarization vector, with the western structure almost perpendicular to it. It may very well be that the circumstellar material around HD 87643 repeats the same geometry on both large scales (the reflection nebula) and small spatial scales (seen in the H$\alpha$\ polarization). \section{On the nature of and distance to HD 87643 } As the distance to HD 87643 and, by implication, the star's evolutionary status, is not yet a settled question, a reassessment of its value is appropriate. We will show below that, based on the magnitude of the interstellar extinction and kinematics, HD 87643 is more likely to be an evolved object than a young star. \subsection{Magnitude of interstellar extinction} The extinction to HD 87643 can be determined in several ways: \begin{enumerate} \item{For a spectral type of B2V, the total {\it E(B--V)}\ found by de Freitas Pacheco {et al.}\ (1985) is 0.63 mag. This value should not be greatly dependent on the assumed spectral sub-type within the early-B range. However, given the problems that inevitably arise in fitting the spectral energy distributions of significantly reddened stars, this issue may benefit from a fresh examination} \item{ The strength of the Na D absorption components (total EW 0.75 and 0.55 $\rm \AA$\ for the D2 and D1 lines respectively) would indicate a relatively large interstellar {\it E(B--V)}. One should however be cautious in interpreting the total EW of Na D absorption. In a recent study of the strength of the Na D1 absorption as a function of reddening by Munari \& Zwitter (1997), the EW shows a linear dependence on reddening at small EWs, while the dependence flattens for larger EW, in a sense, this is a simple curve-of-growth effect, where saturated lines need higher column densities than the linear part would imply. Taking into account the multiple components in the Na~D lines we arrive at a low {\it E(B--V)}\ of 0.2 $\pm$ 0.15. The caveat in this approach may be that circumstellar emission components would artificially give the impression of multiple components. However, taking the data as they are, we arrive at a low value for the {\it E(B--V)} . } \item{The few Diffuse Interstellar Bands (DIBs) that are visible in our spectrum suggest a larger {\it E(B--V)}. The EW of the $\lambda$5780 DIB is 0.3 $\rm \AA$, which corresponds to {\it E(B--V)}\ = 0.51, based on the calibration EW- {\it E(B--V)}\ provided by Jenniskens \& D\'esert (1994). Other DIBs in the spectrum result in the same values for {\it E(B--V)}: EW($\lambda$5795+5797) = 0.18 $\rm \AA$\ ( $\rightarrow$ {\it E(B--V)}\ = 0.72) and $\lambda$ 6284 (combined 6281 and 6284; 1.05 $\rm \AA$\ $\rightarrow$ {\it E(B--V)}\ = 0.57) ). Since DIBs are weaker in the spectra of stars of which it is known that much of the line-of-sight extinction originates in their circumstellar shells, it is generally accepted that much of the extinction traced by DIBs is interstellar (see the discussion by Oudmaijer, Busfield \& Drew 1997, and references therein). Assuming that this applies to HD 87643 as well, the interstellar extinction towards HD 87643 traced by the DIBs has a value {\it E(B--V)}\ $\sim$ 0.6, or an $A_{V}$\ $\sim$ 1.9, for a ratio of total-to-selective extinction, R$_V$ , of 3.1. This agrees well with the total {\it E(B--V)}\ as de Freitas Pacheco {et al.}\ found based on the reddening of the spectral energy distribution, but does not agree with the lower {\it E(B--V)}\ based on the Na D1 line. 
} \end{enumerate} From the above, we find that the determination of the interstellar extinction to HD 87643 is ambiguous, the DIBs would imply a {\it E(B--V)}\ of order 0.6, while the Na D components may imply a lower {\it E(B--V)}. This may be compared with the interstellar extinction towards nearby stars. The polarization catalogue compiled by Matthewson {et al.}\ (1978), discussed earlier, also contains values for the extinction and photometric distances of more than 7500 objects. For the stars within 200$^{\prime}$\ from HD 87643, we find from Fig.~\ref{mm} that \av\ of order 1.9 suggests a distance modulus of order 10 -- 14 (1 -- 6 kpc), while for lower values of \av\ the distance modulus can be anything between 0 and 15 magnitudes. \begin{figure} \mbox{\epsfxsize=0.48\textwidth\epsfbox[20 160 580 550]{ oud9.ps }} \caption{ $A_{V}$\ as function of distance modulus for all stars in the Matthewson {et al.}\ (1978) catalogue of polarized stars within 200\hbox{$^\prime$}\ from HD~87643. \label{mm} } \end{figure} \subsection{Kinematic information} Let us now consider the location of the object in the Galaxy. The object projects onto the sky close to the tangent edge of the Carina spiral arm in the outer Galaxy. At this particular longitude, the distance to the near side of the Carina arm is estimated to be of order 1.5 -- 3 kpc, while its far side extends to more than 10 kpc from the Sun (Cohen {et al.}\ 1985, Grabelsky {et al.}\ 1987). The CO and H {\sc i} maps of Grabelsky {et al.}\ (1987) indicate that the intervening local material, the near side and far side of the Carina arm are relatively well separated in velocities. The local material has LSR velocities roughly between --9 to 7 km s$^{-1}$, while the near side of the Carina arm, overtaking the Sun, is moving at more negative velocities (at least to $\approx$ --35 km s$^{-1}$\ at the longitude of HD 87643), and the far side, trailing the Sun, is located at more positive velocities (to $\approx$ + 40 km s$^{-1}$). Assuming that the Na D absorption is interstellar, we can use the kinematics of the Na D absorption components to estimate the location of the star in this sightline. The Na D1 and D2 lines from the echelle spectrum are shown on top of each other in Fig.~\ref{nad}. Three distinct absorption components can be identified at --35 km s$^{-1}$, --9 km s$^{-1}$\ and +10 km s$^{-1}$\ in the LSR frame. The vertical lines indicate the entire velocity coverage, which ranges from --50 to +20 km s$^{-1}$. The range of velocities spanned by the Na D absorption suggests that HD 87643 lies in the nearer half of the Carina arm. If the interstellar extinction, $A_{V}$ , is greater than 1, a broadly consistent picture of HD 87643 at a distance of $\mathrel{\copy\simgreatbox}$ 2 kpc emerges. For an average {\it V} magnitude of 9, the intrinsic \mbox{$m_{_V}$}\ lies in the range --2.9 to --6.9 for a distance range 1 -- 6 kpc. The bolometric correction for an early B-type star is of order --2.5 (Strai\v{z}ys \& Kuriliene 1981), so the bolometric magnitude of HD~87643 would range between $\sim$ --5.4 to --9.4 (corresponding to luminosities between 10~000 and 400~000 L$_{\odot}$ ). The higher luminosities are in the neighbourhood of those of B-type supergiants, but the lower end can be occupied by both main sequence objects and lower luminosity B[e] stars in the Magellanic Clouds (Gummersbach, Zickgraf \& Wolf 1995). There is even a slight possibility that HD 87643 is a low mass post-AGB star. 
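For definiteness, the luminosity bracket quoted above follows from the standard distance-modulus relation (a simple consistency check, assuming $A_{V} \simeq 1.9$ and $V \simeq 9$):
\begin{equation*}
M_V = V - 5\log_{10}\left(\frac{d}{10\,\mathrm{pc}}\right) - A_V,
\end{equation*}
so that $d = 1$ kpc gives $M_V \simeq 9 - 10 - 1.9 = -2.9$, while $d = 6$ kpc gives $M_V \simeq 9 - 13.9 - 1.9 \simeq -6.8$, i.e. the bright end of the quoted range; adding a bolometric correction of order $-2.5$ then yields the bolometric magnitudes discussed above.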
\begin{figure} \mbox{\epsfxsize=0.48\textwidth\epsfbox[20 160 580 550]{ oud10.ps }} \caption{ Na D1 \& D2 lines in the line of sight towards HD~87643. The systemic velocity of the star is also indicated. \label{nad} } \end{figure} \subsection{HD 87643 as an evolved B[e] star} The balance of probabilities on the question of the distance to HD 87643 is in favour of its being a (somewhat) evolved luminous object, rather than a very young star (an opinion previously expressed by e.g. McGregor {et al.}\ 1988). In this respect this star may prove to be related to the so-called B[e] supergiants in the Magellanic Clouds (see Zickgraf {et al.}\ 1985, 1986, 1996 and references therein). It is thus far not clear whether these objects can be linked to the Luminous Blue Variable phase of evolution, or how they relate to any other phase (e.g. Schulte-Ladbeck 1998; Zickgraf 1998). The B[e] stars exhibit hybrid spectra, with mostly very broad wind absorption in the UV resonance lines, no photospheric absorption lines, very broad hydrogen recombination emission lines with absorption components at small velocities, and a host of narrow permitted and forbidden lines of singly ionized iron. HD~87643 fits in with this description. The generally accepted model for these B[e] objects was devised in 1985 by Zickgraf {et al.}\ (see also Zickgraf {et al.}\ 1986). It describes these objects as having a two-component wind. Fast outflows are thought to originate in a line-driven polar wind, while the narrower lines are formed in a relatively slowly rotating, slowly expanding circumstellar disk. The evidence for disk-like structures around these objects was strengthened by the polarimetric observations of
years and indicate that perturbative corrections can have a relevant impact on the annihilation rate \cite{Herrmann:2007ku,Baro:2007em,Harz:2012fz,Harz:2014tma,Harz:2014gaa,Harz:2016dql}. For stop coannihilation these corrections were found to change the relic density by $\approx 20 \%$~\cite{Harz:2014tma}. As expected, the impact is even larger if Sommerfeld corrections are also included~\cite{Harz:2014gaa}. This is certainly larger than the observational uncertainty on the relic density and should be included in detailed studies of the MSSM. In the scenario at hand this correction can be absorbed via a shift of $\Delta m$ which does not have an appreciable effect on the qualitative features of the phenomenology. It has recently been emphasized that bound state formation may have an impact on dark matter freeze-out~\cite{Wise:2014jva,vonHarling:2014kha,Ellis:2015vaa,Kim:2016kxt,Liew:2016hqo,Mitridate:2017izz,Keung:2017kot}. For squarks the non-Abelian structure of QCD leads to a partial accidental cancellation in the matrix element for bound state formation~\cite{Mitridate:2017izz} which somewhat diminishes its importance. Once all electroweak annihilation channels are included in the freeze-out calculation, the impact of bound states was found to be at the $10\%$ level for all viable neutralino masses~\cite{Keung:2017kot}. We do not consider their effects here, but note that while the formation of bound states would allow for somewhat heavier dark matter consistent with the relic density constraint, the qualitative picture would remain the same. \begin{figure}[tb] \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{mchi_deltam.pdf}~~&~~ \includegraphics[width=0.5\textwidth]{mchi_deltam_Higgs.pdf}\\ (a) ~~&~~ (b)\\ \end{tabular} \caption{ \label{f:deltam} {The mass difference $\Delta m$ required to obtain the correct relic density as a function of the LSP mass $m_{\chi}$. {\it Left:} All allowed masses, only constrained by the stability of the vacuum. {\it Right:} Requiring a Higgs mass between 122 and 128 GeV. The blue (green) band indicates the range of mass splittings for a $\tilde{t}_1$ ($\tilde{b}_L$) NLSP due to the different values of the stop mixing angle. The solid black lines indicate the naive expectation if only QCD processes contribute to the relic density calculation. The dot-dashed black line marks the mass splitting consistent with only a right-handed light stop contributing to co-annihilations.}} \end{figure} In Fig.~\ref{f:deltam} we show the mass difference $\Delta m$ between $\chi$ and the NLSP for which co-annihilations provide a viable mechanism for reproducing the dark matter density. We show separately the regions where the left-handed sbottom is the NLSP and where the lightest stop is the NLSP. To give guidance on the importance of various processes in the early universe we show two additional curves. The first is the $\Delta m$ curve~(solid black line) if only the process $(\tilde{f} \tilde{f}^* \to gg)$ were active. The true $\Delta m$ is always larger, showing the importance of other annihilation channels. Another curve, labeled $\tilde{t}_R$~(dot-dashed black line), shows the mass splitting if the NLSP were a single pure right-handed stop. In this case, there are no $X_{t}$-enhanced couplings to the Higgs boson, but the pure right-handed stop has the largest coupling to the bino~[cf. Eqs.~(\ref{ht1t1})-(\ref{bLx})].
In this case, the processes $\chi \chi \to t \bar{t}$, $\chi \tilde{t}_R \to t g/h$, $\tilde{t}_R \tilde{t}_R^* \to t\bar{t}/gg/hh$ are relevant \cite{Ibarra:2015nca}. The precise value of the mass splitting depends strongly on the mixing in the stop sector. This mixing controls both interactions with the Higgs boson and the bino, see~Eqs.~(\ref{ht1t1})-(\ref{t2xx}). The variation in these couplings due to different mixings explains the width of the band of consistent $\Delta m$ at each given value of the dark matter mass. In Fig.~\ref{f:deltam}, the allowed mass splitting for stop NLSPs broadens for higher dark matter masses. Near $m_{\chi} \sim 500$ GeV, it populates a band from $\sim 35- 60 $ GeV, but in the multi-TeV range it can range from near degeneracy to mass splittings approaching 100 GeV. The largest mass splittings require contributions from channels such as $(\tilde{t}_1 \tilde{t}_1^* \to hh)$ to be large -- realized by increasing $X_t$~[cf. Eq.~(\ref{ht1t1})]. The cut-off in the maximal values of $\Delta m$ seen in the left panel results from the need for an $X_t$~$(A_t)$ so large as to make the vacuum unstable. The smallest $\Delta m$ obtained are somewhat smaller than the values consistent with a purely right handed stop NLSP. This is because perturbing away from the pure $\tilde{t}_R$ case, an important effect is that the $\tilde{t}_L$ admixture has a much smaller hypercharge and hence the coupling to $\chi$ is reduced, see Eq.~(\ref{t1xx}). This suppresses diagrams that rely on this coupling, necessitating a smaller $\Delta m$ to maintain the thermal relic abundance. In contrast, sbottom NLSPs never have $\Delta m$ exceeding 80 GeV, irrespective of the vacuum stability constraint. This is because for the sbottom to be the NLSP, $X_t$ cannot be too large (lest the $\tilde{t}_1$ mass be driven down by level repulsion). In the sbottom NLSP case, the lightest stop will always be nearby, and for some parameter choices the heavier stop is also allowed to be degenerate. In fact the largest mass splitting obtained for the sbottom NLSP is precisely where both the stops are nearly degenerate with the sbottom, and dominant annihilation channels may include $\tilde{t}_{1,2}$ and $\tilde{b}_1$; for example $(\tilde{f} \tilde{f}^* \to gg)$ or $(\tilde{t}_2/\tilde{t}_1 \tilde{b}_1 \to Wg)$. As the heavier stop decouples from the cosmology, the required mass splitting decreases and the smallest $\Delta m$ is obtained when only $\tilde{t}_1$ and $\tilde{b}_1$ contribute to co-annihilations. The right panel of Fig.~\ref{f:deltam} shows the region that is carved out demanding consistency with the Higgs mass. Note that in particular the width of the sbottom NLSP region is significantly reduced. This can be understood by looking at the region in Fig.~\ref{fig:Higgs} where the sbottom is the NLSP: Only values of $s_t$ close to 0 and very large $m_{\tilde{t}_2}$ can accommodate both a sbottom NLSP and a consistent Higgs mass. This results in a well-defined cosmology for this scenario, and hence a well-defined $\Delta m$: values of the mass splitting are obtained close to the ones denoted by the bottom of the $\tilde{b}_1$ NLSP band in the left panel of Fig.~\ref{f:deltam}~($m_{\tilde{b}_L}\sim m_{\tilde{t}_1} \ll m_{\tilde{t}_2}$). Regarding the stop NLSP region, some of the largest $\Delta m$ are eliminated as they require an $X_{t}$ so large that a large enough Higgs boson mass may not be achieved (see the right panel of Fig.~\ref{fig:Higgs}, particularly for $s_t > 0$). 
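To make the sensitivity to $\Delta m$ explicit, recall that the NLSP contribution to the effective annihilation cross section at freeze-out is Boltzmann suppressed. The Python sketch below is purely illustrative and is not the numerical relic-density calculation behind Fig.~\ref{f:deltam}: it evaluates the standard Griest-Seckel weights $w_i \propto g_i (1+\Delta_i)^{3/2} e^{-x_f \Delta_i}$ for an assumed freeze-out value $x_f \simeq 25$, an illustrative LSP mass of 1 TeV and an assumed counting of internal degrees of freedom for the stop NLSP.

```python
# Schematic sketch (assumption-laden, not the full relic-density calculation):
# Griest-Seckel weights w_i ∝ g_i (1+Δ_i)^{3/2} exp(-x Δ_i), with Δ_i = (m_i - m_χ)/m_χ,
# showing how quickly the NLSP contribution to the effective cross section
# fades as the mass splitting grows.
import math

def weight(g, delta, x):
    return g * (1.0 + delta) ** 1.5 * math.exp(-x * delta)

m_chi = 1000.0   # GeV, illustrative LSP mass (assumption)
x_f   = 25.0     # m_chi / T at freeze-out (typical assumed value)
g_chi = 2        # Majorana bino
g_nlsp = 6       # stop + anti-stop, 3 colours each (assumed counting)

for dm in (20.0, 40.0, 60.0, 80.0, 100.0):
    delta = dm / m_chi
    w_chi = weight(g_chi, 0.0, x_f)
    w_nlsp = weight(g_nlsp, delta, x_f)
    frac = w_nlsp / (w_chi + w_nlsp)   # NLSP share of the Boltzmann-weighted density
    print(f"Δm = {dm:5.1f} GeV -> NLSP share of the effective density: {frac:.2f}")
```

The rapid fall-off of the NLSP weight with growing $\Delta m$ illustrates why mass splittings of a few tens of GeV are the relevant range in Fig.~\ref{f:deltam}.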
While the masses of the stop NLSPs shown in Fig.~\ref{f:deltam} are in broad agreement with the latest LHC limits, it should be noted that the very lightest masses for a $\tilde{b}_1$ NLSP (approximately $m_{\chi} \lesssim 625$ GeV) are excluded by LHC searches, see Sec.~\ref{s.LHC}. The mass splittings shown in Fig.~\ref{f:deltam} represent important future targets for the LHC experiments. \section{Direct Detection}\label{s.DD} Having determined the regions of parameter space in which thermal freeze-out can account for the observed relic density, we now turn to direct searches for dark matter. \begin{figure}[tb] \begin{tabular}{ccc} \includegraphics[width=0.25\textwidth]{higgs_triangle-2.pdf}~~ & ~~\includegraphics[width=0.25\textwidth]{higgs_triangle.pdf}~~ & ~~ \includegraphics[width=0.27\textwidth]{gluon.pdf} \\ (a) & (b) & ~~~(c) \end{tabular} \caption{ \label{fig:DDdiagrams} { Triangle $\chi\chi h$~[(a) and (b)] and box $gg\chi\chi$~(c) diagrams contributing to the dark matter nucleon coupling.}} \end{figure} The bino dark matter candidate considered here has a vanishing direct detection cross section at tree level. A process that can generate a direct detection cross section arises from the loop-induced coupling of the dark matter with the Higgs boson. Such an effective $\chi\chi h$ coupling is generated by triangle diagrams with tops and stops in the loop, see Figs.~\ref{fig:DDdiagrams}a and \ref{fig:DDdiagrams}b. This effective coupling has been calculated in Ref.~\cite{Djouadi:2001kba} and re-derived by us using the low energy Higgs theorem~\cite{Kniehl:1995tn}.\footnote{Loops with the $\tilde{t}$ will also contribute to the Higgs coupling to gluons. This effect is always subleading and we do not include it in the following qualitative discussion. Our numerical calculations take the effect into account. For a discussion of the full SUSY-QCD corrections to direct detection see for instance \cite{Klasen:2016qyz}} In addition to the Higgs triangle-diagrams there are also box-diagrams with $\tilde{t}_{1,2}/t$ and $\tilde{b}_1/b$ in the loop which induce an effective coupling between the bino and the gluon content in the nucleus~(see Fig.~\ref{fig:DDdiagrams}c as an example diagram). The full loop result for this
evaluate the robustness of applicability of Lagrangian and Eulerian metrics. In this paper, we propose the use of two Lagrangian metrics computed using surface horizontal velocities to predict regions of high sub-surface \R{downward displacement of fluid} in the ocean. The regions where these metrics are strongest are considered target zones for downward displacement, and the efficacy of these target zones in locating large vertical displacements is evaluated using skill metrics that quantify how likely they are to predict regions of \R{large displacements} and the fraction of such regions identified by the target zones. The correspondence between the surface target zones and sub-surface \R{\dz\ fields computed for particles} initialized at different depths below the surface is then studied to evaluate the range of applicability of the target zones. A practical concern when performing a Lagrangian analysis is the time interval of applicability of results. Hence, we investigate the effect of varying the time interval of Lagrangian analysis, and the ability of the chosen Lagrangian metrics to predict \R{large displacements} occurring beyond that interval. Finally, since model forecasts do not capture the true state of the ocean~\citep{mcwilliams2007irreducible}, the ability of target zones identified across an ensemble of forecasts to locate \R{regions of strong downward displacement} in a separate realization is evaluated with a data-assimilative operational model of the western Mediterranean. \section{Methods} \subsection{Vertical Displacement, Surface Metrics and Target Zones.} Conduits of vertical transport can be located by identifying regions with fluid trajectories that show large vertical displacements. Given a velocity field $\mathbf{u}=(u,v,w)$ specified at $\mathbf{x}=(x,y,z)$, fluid trajectories are solutions to \begin{equation} \label{eq:trajectories} \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = \mathbf{u}(\mathbf{x},t). \end{equation} For each of the three models, particles are seeded uniformly on a horizontal plane at $z(t=t_0)$ below the surface and then advected by solving equation~(\ref{eq:trajectories}) using a fourth order Runge-Kutta scheme. \R{Velocities for particle advection are obtained from model fields using a cubic Lagrange polynomial for temporal interpolation, followed by a bilinear interpolation in space.} The \R{downward vertical displacement} (\dz) undergone by particles seeded at depth $z(t=t_0)$ is then computed as \begin{equation} \delta z = z(t_0) - z(t_0+T_d), \end{equation} where the displacement time $T_d$ is the time over which advection is performed. In our study, a $\SI{12}{\hour}$ interval has been used for the two larger-scale models, whereas a smaller $T_d = \SI{6}{\hour}$ is used for the turbulence resolving model so that the non-dimensional value $T_d/(2\pi/f)$ remains approximately 0.5 across the three models. Next, we compute Eulerian and Lagrangian metrics that can potentially identify regions of high \dz\ using only surface information. The most intuitive measure of vertical motions, given a velocity field, is the vertical velocity $w$. Because $w<0$ corresponds to downwelling, the negative vertical velocity ($-w$) at time $t_0$ is the first metric. It is important to note that although $w$ highlights regions of strongest vertical motion at an instant, the Eulerian field need not be representative of vertical displacement that occurs over a finite time interval. 
Another drawback of this metric is that vertical velocities are often difficult to measure in the ocean and model accurately. The horizontal velocities, on the other hand, are measurable, and incompressibility dictates that convergence on the ocean surface corresponds to downwelling. Therefore, the negative horizontal divergence of surface velocity ($-\nabla_H \cdot \mathbf{u}$), positive for downwelling, is the second Eulerian metric. Physically, surface convergence is enhanced at submesoscale density fronts in the ocean, which are known to be sites of intense vertical motions~\citep{mahadevan06}. Hence, the magnitude of the horizontal gradient of surface density ($|\nabla_H \rho|$), which can be measured observationally~\citep{pascual17,rudnick07,ullman14}, is the final Eulerian metric. All of these Eulerian metrics are computed only at the initial instant, $t_0$. The inability to obtain accurate measurements of $w$ prompts the usage of the other Eulerian metrics for practical purposes. In this study, however, the Eulerian metric $w$ is available from the models and is used as a barometer to compare the other metrics with. In all three models considered, $w$ is computed on the horizontal layer just below the ocean surface. This can impact our results, especially in the process study model, as will be discussed later. To better account for the Lagrangian nature of vertical displacement, two Lagrangian metrics are computed on the ocean surface: FTLE and dilation rate. Solving equation~(\ref{eq:trajectories}) for $x$ and $y$ at $z=0$, using only the horizontal velocity and setting $w=0$, allows us to define the two-dimensional flow map, $\mathrm{F}_{t_0}^{t_0+T_L}(\mathbf{x}_0):=\mathbf{x}(t_0+T_L;\mathbf{x}_0,t_0)$, that maps particles seeded at $\mathbf{x}_0 = (x_0,y_0,0)$ at time $t_0$ to their final positions at $t_0+T_L$. Here, $T_L$ is the length of the time interval over which the Lagrangian metrics are calculated, and this may or may not be the same as the displacement time, $T_d$, depending on the time over which accurate surface velocity fields are available. The flow map gradient, $\nabla \mathrm{F}_{t_0}^{t_0+T_L}(\mathbf{x}_0)$, is then used to compute the right Cauchy-Green strain tensor $\mathrm{C}=[\nabla \mathrm{F}_{t_0}^{t_0+T_L}(\mathbf{x}_0)]^T[\nabla \mathrm{F}_{t_0}^{t_0+T_L}(\mathbf{x}_0)]$. Given the two eigenvalues of $\mathrm{C}$, $0<\lambda_1<\lambda_2$, the FTLE ($\sigma$) at time $t_0$ is \begin{equation} \sigma = \frac{\log{\lambda_2}}{2T_L}, \end{equation} and the dilation rate ($\Delta$) at $t_0$ is \begin{equation} \Delta = \frac{\log{\lambda_1\lambda_2}}{2T_L}. \end{equation} The dilation rate is the Lagrangian time-average of the Eulerian divergence ($\nabla_H \cdot \mathbf{u}$) along the trajectory~\citep{huntley15}. It is worth noting that the Lagrangian metrics computed along particle trajectories differ from metrics obtained by averaging at a fixed location in space. Another potential metric that could be considered is the instantaneous Lyapunov exponent (iLE)~\citep{nolan20}, which is the Eulerian infinitesimal time limit of FTLE. This metric has been employed for the detection of objective Eulerian coherent structures~\citep{serra16}, but its ability to locate regions of strong \dz\ is similar to the other Eulerian metrics (Figure~S1, Supporting Information). 
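For concreteness, the sketch below shows how the displacement diagnostic and the two Lagrangian surface metrics defined above can be computed. It is an illustrative implementation, not the code used in this study: the callable `velocity(x, t)` is assumed to wrap the model-field interpolation (cubic Lagrange in time, bilinear in space), and the arrays `X` and `Y` are assumed to hold final particle positions from a regular seeding grid with spacings `dx` and `dy`, with boundary effects ignored.

```python
# Illustrative sketch (not the study's code) of the delta-z diagnostic and of the
# FTLE / dilation-rate calculation from a gridded surface flow map.
import numpy as np

def rk4_step(x, t, dt, velocity):
    """One fourth-order Runge-Kutta step of dx/dt = u(x, t)."""
    k1 = np.asarray(velocity(x, t))
    k2 = np.asarray(velocity(x + 0.5 * dt * k1, t + 0.5 * dt))
    k3 = np.asarray(velocity(x + 0.5 * dt * k2, t + 0.5 * dt))
    k4 = np.asarray(velocity(x + dt * k3, t + dt))
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def downward_displacement(x0, t0, T_d, dt, velocity):
    """delta_z = z(t0) - z(t0 + T_d) for a particle seeded at x0 = (x, y, z)."""
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(int(round(T_d / dt))):
        x = rk4_step(x, t, dt, velocity)
        t += dt
    return x0[2] - x[2]            # positive for net downward motion

def ftle_and_dilation(X, Y, dx, dy, T_L):
    """FTLE and dilation rate from a gridded flow map (axis 0 = x, axis 1 = y)."""
    dXdx, dXdy = np.gradient(X, dx, dy)   # flow-map gradient by central differences
    dYdx, dYdy = np.gradient(Y, dx, dy)
    sigma = np.zeros_like(X)
    dilation = np.zeros_like(X)
    for i in np.ndindex(X.shape):
        F = np.array([[dXdx[i], dXdy[i]], [dYdx[i], dYdy[i]]])
        C = F.T @ F                        # right Cauchy-Green strain tensor
        lam1, lam2 = np.linalg.eigvalsh(C) # eigenvalues in ascending order
        sigma[i] = np.log(lam2) / (2.0 * T_L)             # FTLE
        dilation[i] = np.log(lam1 * lam2) / (2.0 * T_L)   # dilation rate
    return sigma, dilation
```

For the convergence-seeking calculation described next, the flow map would be generated backward in time and the resulting fields mapped back to their positions at $t_0$; a target zone is then simply the set of grid points above a high percentile of the chosen metric (the 99th percentile is used below).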
When the Lagrangian metrics are computed forward in time from $t_0$ to $t_0+T_L$, the two-dimensional FTLE and dilation rate fields obtained at the initial time $t_0$ physically represent the amount of maximal directional expansion and area expansion rates over the time interval, respectively. Since we are interested in surface convergence instead of expansion, these quantities need to be computed starting at the end of the time interval $t_0+T_L$ and moving backwards in time to the beginning of the interval $t_0$~\citep{haller15}. A backward-in-time calculation, however, yields Lagrangian fields at $t_0+T_L$, which do not correspond to the \dz\ field at $t_0$. To make the direct comparison possible, we map the Lagrangian metric fields to their position at $t_0$. \RR{Additionally, result independence from interpolation and advection parameters was ensured for all numerically extracted particle trajectories, and hence the results are comparable to those from online calculations performed during model runs~\cite{van2018lagrangian}. } The five metrics discussed above are chosen such that the regions with the largest values of these metrics should correspond to regions of greatest downwelling. Hence, the target zone (TZ) for each metric is defined as the region with values in the \R{99th percentile of the respective Eulerian or Lagrangian metric computed on the ocean surface. As a result, each target zone covers only 1\% of the entire domain.} The choice of 1\% is based on our study varying the threshold (presented in Figure~S2, Supporting Information), such that the TZs identify only the strongest \dz\ regions, while being large enough to yield regions that are not trivially small. The choice of the target threshold need not be universal; however, in the cases studied here, $1\%$ was found to be ideal. The TZs can also be used to predict regions of strongest \dz\ when computed using model forecasts, as discussed later. To quantify the ability of each metric's TZ to identify regions of strong \R{downward vertical displacement}, we investigate the extent of overlap between the TZ derived from the metric on the ocean surface and the regions of strongest \dz\ below the surface. \subsection{Numerical Models Analysed.} \label{subsec:methods_2} Three different ocean models are analysed to evaluate the robustness of the target zones capacity to identify sites of \R{strongest downward vertical displacement}. \R{Given the connection between submesoscale features and subduction, we consider three models that each produce submesoscale features by accounting for different flow physics at spatial scales spanning two orders of magnitude.} Figure~\ref{fig:1} presents the surface density field at $t_0$, the time at which we start our analysis. The horizontal gradient in
\section{Introduction} \label{S1} Aluminium alloys are widely used in industry for their various advantages: they are lightweight metallic alloys with a high corrosion resistance and good mechanical properties \cite{Mondolfo1979,Develay1992}. Since the early work of Guinier \cite{Guinier1938} and Preston \cite{Preston1938}, it has been demonstrated that the increase of hardness in age-hardened Al-Cu alloys is due to the formation of nanoscaled zones, first detected by small-angle X-ray scattering (SAXS) and known as Guinier-Preston (GP) zones \cite{Sigli2018}. When such an alloy is quenched and then aged at room temperature (\textit{i.e.}{} naturally aged), the formation of these GP zones is activated by the excess vacancies retained after quenching, which are slowly annihilated near residual dislocations and grain boundaries. When an Al-Cu alloy is heat treated, the precipitation sequence is \cite{Starink2004,Son2005,Rodriguez2018}: Super Saturated Solid Solution $\rightarrow$ Cu clustering $\rightarrow$ GP zones $\rightarrow$ $\theta''$ $\rightarrow$ $\theta'$ $\rightarrow$ $\theta$. Since GP zones form quickly at ambient temperature, the copper clustering step is usually neglected when describing the precipitation sequence of this alloy. GP zones in Al-Cu alloys are circular nanoscaled disks parallel to \{100\} planes, confined to a single layer \cite{Silcock1954,Bourgeois2011}. The precipitation in Al-Cu alloys has been extensively studied in the literature through hardness measurements, transmission electron microscopy (TEM) and differential scanning calorimetry (DSC), and strength models have been developed to correlate the increase in hardness with the precipitation state \cite{Myhr2001,Rodriguez2018}. Atomistic calculations have been used to predict the evolution of precipitation during ageing \cite{Hutchinson2010,Sigli2018,Deschamps2021}, as well as to determine the formation energies of particles, their geometries and their interactions with dislocations \cite{Zander2008,Singh2010,Staab2018,Miyoshi2019}. These parameters can then be implemented in classical nucleation and growth theories and/or cluster dynamics \cite{Clouet2009c,Stegmuller2019}. Publications about naturally aged alloys and the role of excess vacancies are numerous, but less information is reported on GP zone nucleation and growth during the first stage of natural ageing. Besides, aluminium alloys are among the materials with the highest potential to contain hydrogen fuel and to facilitate its transport, and they are already used in fuel-cell-based electric vehicles \cite{Horikawa2019}. More generally, since hydrogen is becoming a key component of the energy transition worldwide, it can either be used as an energy carrier or directly as a fuel in vehicles, including automobiles and planes. However, due to its small size and high mobility, hydrogen influences the mechanical properties of metals and alloys, leading to premature failures of engineering structures. This phenomenon is called hydrogen embrittlement (HE), and several models have been proposed in the literature to describe the underlying physical mechanisms (see for details \cite{Lynch2012,Kirchheim2014,Robertson2015}).
In all models, HE involves the reduction, in the presence of hydrogen, of the energy of one process, which activates a particular mechanism (\textit{e.g.}{} grain boundary segregation \cite{Hondros1996,Oudriss2012}, Cottrell atmospheres around dislocations \cite{Gu2018,Hachet2020b}, shielding effects promoting slip band localisation \cite{Beachem1972,Birnbaum1994,Delafosse2001}, enhancement of vacancy formation \cite{McLellan1997,Harada2005,Fukai2006}, and so on). These models can accurately describe HE in pure metals, but can fail to describe HE in more complex materials (\textit{e.g.}{} in alloys) where mechanical properties depend on the distribution of strengthening precipitates and their interactions with defects, in particular dislocations. Aluminium alloys are not immune to hydrogen ingress either: when hydrogen is introduced as a solute, it diffuses easily and segregates to crystalline defects \cite{Scully2012,Zhao2022}. \textit{Ab initio}{} calculations have shown that hydrogen atoms strongly interact with vacancies in aluminium \cite{Wolverton2004,Nazarov2014,Connetable2018}. Previous studies focusing on hydrogen/vacancy interactions in metals have shown that hydrogen decreases the formation energy of vacancy clusters containing hydrogen \cite{Fukai2006,Nazarov2014} and the vacancy migration energy \cite{Du2020}. Hydrogen may also delay the clustering of solutes and the coarsening of precipitates in some Al alloys \cite{Zhao2022,Hachet2022b}. Therefore, it is critical to understand how these interactions impact the kinetics and thermodynamics of precipitates in aluminium alloys (which may evolve even at room temperature) in order to predict microstructural evolutions and, eventually, to reduce the damaging effect of hydrogen. In this study, the influence of hydrogen on the early stage of GP zone formation during natural ageing of an Al-5Cu alloy is investigated. The following section is focused on experimental data that highlight the influence of hydrogen on GP zone nucleation and growth during the first stage of natural ageing. Then, \textit{ab initio}{} calculations are presented, first to demonstrate the impact of copper on the interaction between hydrogen and vacancies. The second part of the \textit{ab initio}{} calculations is focused on the effect of hydrogen on the diffusion of vacancies and copper in FCC aluminium. \section{Experimental evidence of the hydrogen influence on the GP zone formation and hardening kinetics} \label{S2} \subsection{Experimental details} \label{S21} The investigated material is provided by Goodfellow$^{\circledR}$ with the following composition (wt.\%): 5.3\%Cu-0.7\%Fe-0.4\%Si-0.3\%Pb, Al balance (standard AA2011, called Al-5Cu hereafter). Disc-shaped samples (with a diameter of 6\,mm and a thickness of 1\,mm) are solutionised at 810\,K for 1\,h, water quenched, and then naturally aged at room temperature either in air or for 5\,h in a 0.1\,M NaOH solution. Before immersion in the solution, samples are quickly (a few minutes) mechanically ground using SiC foil paper with a particle size of 8\,µm to remove the oxide layer grown during the solution heat treatment. The aqueous NaOH solution is aggressive towards aluminium and its oxide; it prevents the formation of a passive layer and leads to H incorporation in the alloy \cite{Birnbaum1997,Scully2012}. After 5\,h in NaOH, the samples are further aged in air at room temperature and the hardness evolution is compared to that of the alloy directly aged in air.
The hardness increase resulting from GP zone nucleation and growth is first measured by micro-hardness measurements, using a Future tech FM7 device at room temperature. The micro-hardness values presented in this study are the average of at least 6 indents obtained with a micro Vickers diamond indenter using a load of 500\,g and a dwell time of 10\,s. High-angle annular dark-field scanning TEM (HAADF-STEM) images are recorded with collection angles ranging from 50 to 180\,mrad using a JEOL ARM 200 microscope, operated at 200\,kV. Thin foil specimens are prepared with a twin-jet electro-polisher (TENUPOL 5 from Struers$^{\circledR}$) using a mixture of $30\%\mathrm{HNO_3}-70\%\mathrm{CH_3OH}$ (\%vol) at 243\,K. Final thinning is carried out by low-energy ion milling conducted with a GATAN$^{\circledR}$ Precision Ion Polishing System. \subsection{Hardness kinetic variations of the naturally aged Al-5Cu alloy due to hydrogen} \label{S22} After water quenching, the hardness of the alloy is 91$\pm$1\,HV, and it increases progressively to reach a maximum of 117$\pm$2\,HV after $\sim50\,h$ (see fig. \ref{FigHVAgeTime_AlCu}). When the alloy is aged for 5\,h in the NaOH solution to introduce hydrogen, the micro-hardness is significantly lower than that of the alloy directly aged in air: after 5\,h, the hardness of the alloy naturally aged in air is 105$\pm$2\,HV while it is only 95$\pm$2\,HV when the NaOH treatment is carried out. However, after several additional hours (50 to 200 hours) at room temperature in air, the micro-hardness further increases and catches up with the hardness of the material without hydrogen. This suggests that hydrogen atoms quickly desorb from the alloy and do not significantly affect the final microstructure (fig. \ref{FigHVAgeTime_AlCu}). To confirm these measurements, HAADF-STEM observations are carried out. \begin{figure}[bth!] \centering \includegraphics[width=0.99\linewidth]{FigHVAgeTime_AlCu.eps} \caption{Vickers micro-hardness evolution as a function of the natural ageing time when the alloy is directly aged (DA) in air and after 5\,h in NaOH.} \label{FigHVAgeTime_AlCu} \end{figure} Naturally aged materials are observed by HAADF-STEM along the (001) zone axis, in which GP zones are clearly visible. They are observed after being naturally aged for 1 and 9 days. When the alloy is directly aged for 1 day in air, small GP zones appear, as shown in fig. \ref{FigSTEMNA}.a, and they become significantly larger after 9 days (fig. \ref{FigSTEMNA}.c). When it is aged for 5\,h in NaOH, GP zones are not visible after 1 day (fig \ref{FigSTEMNA}.b), but become visible after 9 days (fig \ref{FigSTEMNA}.d), with an average diameter similar to that observed in the alloy directly aged for 1 day in air (fig \ref{FigSTEMNA}.a). These observations are consistent with the delayed hardening (fig. \ref{FigHVAgeTime_AlCu}). However, the hardnesses of the alloy aged under both conditions are relatively similar after 9 days (fig.
above we used Eqs.\,(\ref{eq:mobility_final}) and (\ref{eq:quantummobility_final}) for a single layer with the smallest $f$.} \fi In the second part of the paper, we calculate $\mu_B$ and $\mu_{q,B}$ taking into account EES. We show that at $f\geq f_c$ EES eliminates scattering by all impurities located at distances larger than $0.5d_w$ from the midplane of the 2DEG's quantum well. We assume BI concentrations $N_1$ and $N_2$ in the Al$_x$Ga$_{1-x}$As barriers and GaAs quantum well, respectively, and arrive at the linear equations \begin{align}\label{eq:mobility_linearnumbers} \mu_B^{-1}=A_1N_1+A_2N_2\,,\\ \mu_{q,B}^{-1}=B_1N_1+B_2N_2\,,\label{eq:quantummobility_linearnumbers} \end{align} with coefficients $A_1 \ll A_2$ and $B_1 \sim B_2$ which we calculate. As a result, Eqs.\,(\ref{eq:mobility_linearnumbers}) and (\ref{eq:quantummobility_linearnumbers}) allow one to estimate $N_1$ and $N_2$ from measured $\mu_B$ and $\mu_{q,B}$. We find\cite{note:pars} $N_1 \simeq 2\times 10^{14}$ cm$^{-3}$, $N_2\simeq2\times10^{13}$ cm$^{-3}$. These estimates suggest that $\mu_{q,B}$ is dominated by BI in the Al$_x$Ga$_{1-x}$As barriers, due to their larger concentration, and therefore should benefit from the purification of the Al source.\cite{reichl:2014,LorenAl} The plan of this paper is as follows. In Sec.\,\ref{sec:variation} we study the quantum mechanics of an isolated compact dipole atom in the doping layer. We compute the binding energy of an electron in a compact dipole atom and show that its localization length in the plane of the layer is small enough to proceed classically. In Sec.\,\ref{sec:remote} we study the screening of fluctuations of the donor concentration $\nd(\rho)$ by EES and compute $\mu_R$ and $\mu_{q,R}$ [Eqs.\,(\ref{eq:mobility_final}) and (\ref{eq:quantummobility_final})]. In Sec.\,\ref{sec:} we compute $\mu_B$ and $\mu_{q,B}$ [Eqs.\,(\ref{eq:mobility_linearnumbers}) and (\ref{eq:quantummobility_linearnumbers})], taking into account EES of the BI and derive simple analytical formulas for $A_1$, $A_2$, $B_1$, and $B_2$. In Sec.\,\ref{sec:disorder} we examine the possible suppression of EES by spreading of the $\delta$-layer of Si donors in the GaAs well and by roughness of the AlAs/GaAs and AlAs/Al$_x$Ga$_{1-x}$As interfaces. In Sec.\,\ref{sec:5/2QHE} we comment on the possible relation between the RI potential and the measured gap of the fractional quantum Hall effect at filling factor 5/2. We conclude in Sec.\,\ref{sec:summary} with a summary of our results and possible avenues for further improvement of GaAs/Al$_x$Ga$_{1-x}$As heterostructures. \section{Localization of electrons in the doping layers}\label{sec:variation} A remarkable feature of the SPSL-doping scheme is that the excess electrons in the AlAs layers are able to reduce the random potential of donors in the GaAs layer but their parallel-to-2DEG conductance is negligible. As stated above, the main reasons for this are the proximity of the electrons to the donors and the large effective electron mass in AlAs. In this section we justify this claim, showing that excess electrons, while residing in AlAs, are strongly bound to donors in GaAs. \begin{figure}[h!] \includegraphics[width=0.8\linewidth]{electron_boundstate2} \caption{(Color online) Schematic image of the electron wave function cloud (blue) in the AlAs layer of thickness $t$. This cloud is bound to a Si donor ($+$) in the GaAs layer a distance $s$ away from the midpoint of the AlAs layer. $\xi$ is the electron localization length in the $x-y$ plane. 
}\label{fig:boundstate} \end{figure} An illustration of an electron bound state is shown in Fig.\,\ref{fig:boundstate}. Each excess electron resides in the middle of the AlAs layer of thickness $t$ and is bound to a donor in GaAs at a distance $s$ away with the localization length $\xi$ in the $x-y$ plane. The $z$-axis is perpendicular to the AlAs/GaAs interface and the origin is centered above the donor at the midpoint of the AlAs layer. The AlAs/GaAs and AlAs/Al$_x$Ga$_{1-x}$As interfaces are treated as infinite barriers so that the electrons are completely confined to the AlAs layer. This means that there are two competing energy scales: the separation $\Delta$ between the first and the second subbands of the AlAs layer and the Coulomb binding energy $E_b$. Below we show that $E_b \ll \Delta$ for reference sample parameters $t=2$ nm and $s=2.5$ nm. This allows us to think that electrons are bound in the plane at $z=0$ by an effective 2D potential \begin{equation} V(\rho,s)=-\frac{2}{t}\int\limits_{-t/2}^{t/2}dz\cos^2\left(\frac{\pi z}{t}\right)\frac{e^2}{\bar{\kappa}\sqrt{\rho^2+(s+z)^2}}\,, \end{equation} obtained by averaging the Coulomb attraction of the donor over the ground state wave function $\phi(z)=(2/t)^{1/2}\cos(\pi z/t)$. Here $\rho = \sqrt{x^2+y^2}$ and $\bar{\kappa}$ is the effective dielectric constant. Because the dielectric constants of GaAs ($\kappa\simeq13$) and AlAs ($\kappa_{\mbox{\scriptsize{A}}}\simeq10$) are relatively close, we use $\bar{\kappa}= (\kappa+\kappa_{\mbox{\scriptsize{A}}})/2\simeq11.5$. (Here and below we do not discriminate between the dielectric constants of GaAs and Al$_{x}$Ga$_{1-x}$As for the relevant $x\simeq0.24$.) The corresponding Schr\"{o}dinger equation is then given by \begin{equation} -\frac{\hbar^2}{2m_{xy}^\star }\nabla^2\psi(\rho)+V(\rho,s)\psi(\rho)=-E_b\psi(\rho)\,, \end{equation} where $m_{xy}^\star $ is the electron's effective mass in the $x-y$ plane. To find $E_b$ we use a variational approach with the trial wave function \begin{equation}\label{eq:wavefunction} \psi(\rho)=\exp\left(-\frac{\sqrt{\rho^2+s^2}}{b}\right)\,, \end{equation} where $b$ is the variational parameter which minimizes $E_b$. \begin{figure} \includegraphics[width=\linewidth]{Graph5} \caption{(Color online) The binding energy $E_b$ in $Ry$ (thick line) and the variational parameter $b$ in units of the in-plane effective Bohr radius $a_{xy}$ (thin line) as a function of the distance $s$ to the binding donor in units of $a_{xy}$ as obtained from the variational calculation for $t=2$ nm. } \label{fig:variation} \end{figure} The results of the variational calculation $E_b/Ry$ and $b/a_{xy}$ as a function of $s/a_{xy}$ are given in Fig.\,\ref{fig:variation}, where $a_{xy}=\bar{\kappa}\hbar^2/(m_{xy}^\star e^2)$ is the in-plane effective Bohr radius and $Ry=\hbar^2/(2m_{xy}^\star a_{xy}^2)$. Using $m_{xy}^\star = 0.22 m_e$ for AlAs, we find $a_{xy} \simeq 2.6$ nm and $Ry \simeq 23$ meV near the $X$-point minima.\cite{Vurgaftman} With $s=2.5$ nm, we then estimate the electron binding energy $E_b \approx 21$ meV. Above we have assumed that $E_b \ll \Delta$, allowing us to average the potential over the fast motion along $z$-direction and treat an electron as two-dimensional. To justify this assumption we estimate the inter-subband separation $\Delta=3\hbar^2\pi^2/(2m_z^\star t^2)$. The electronic spectrum near the $X$-point minima in AlAs is anisotropic and $ m^\star_z=0.95 m_e$.\cite{Vurgaftman} We find that indeed $\Delta \simeq 0.26$ eV $\gg E_b$. 
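The energy and length scales used in this estimate follow from standard effective-mass expressions, and the short Python snippet below provides a back-of-the-envelope check (it is not the variational calculation itself). The inputs are the material parameters quoted above; the outputs reproduce the quoted numbers to within about 15 per cent, the differences reflecting rounding of $\bar{\kappa}$ and of the AlAs effective masses.

```python
# Back-of-the-envelope check of the quoted scales (not the variational calculation).
import math
import scipy.constants as c

kappa_bar = 11.5
m_xy = 0.22 * c.m_e          # in-plane effective mass near the AlAs X point
m_z  = 0.95 * c.m_e          # out-of-plane effective mass
t = 2e-9                     # AlAs layer thickness (m)
E_b = 21e-3 * c.e            # binding energy from the variational result (J)

# in-plane effective Bohr radius and Rydberg
a_xy = 4 * math.pi * c.epsilon_0 * kappa_bar * c.hbar**2 / (m_xy * c.e**2)
Ry = c.hbar**2 / (2 * m_xy * a_xy**2)

# separation between the first and second subbands of an infinite well of width t
Delta = 3 * c.hbar**2 * math.pi**2 / (2 * m_z * t**2)

# in-plane localization length (expression given just below in the text)
xi = c.hbar / math.sqrt(2 * m_xy * E_b)

print(f"a_xy  ~ {a_xy * 1e9:.1f} nm")       # ~2.8 nm (quoted: 2.6 nm)
print(f"Ry    ~ {Ry / c.e * 1e3:.0f} meV")  # ~23 meV
print(f"Delta ~ {Delta / c.e:.2f} eV")      # ~0.3 eV (quoted: 0.26 eV)
print(f"xi    ~ {xi * 1e9:.1f} nm")         # ~2.9 nm (quoted: 2.7 nm)
```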
The localization length $\xi$ of the electron in an isolated dipole atom in the $x-y$ plane is given by \begin{equation}\label{eq:localization_length} \xi=\frac{\hbar}{\sqrt{2m_{xy}^\star E_b}}\,, \end{equation} which yields $\xi \simeq 2.7$ nm. For $\nd=1\times 10^{12}$ cm$^{-2}$, we find $\nd \xi^2 \simeq 0.07$ which should be compared to the critical value of $(\nd\xi^2)_c$ below which electrons are localized and transport is activated. We can estimate $(\nd\xi^2)_c$ using the data for a Si MOSFET doped by sodium at the SiO$_2$ side of the interface.\cite{fowler:1980} These sodium atoms donate electrons which reside on the silicon side of the interface. Such MOSFET is therefore similar to the SPSL-doping layer in which a sodium ion in SiO$_2$ assumes the role of Si in GaAs. The activation energy $E_1$ of the electron conductivity along the interface $E_1(\nd)$ as a function of the surface concentration of sodium $\nd$ was investigated in Ref.\,[\onlinecite{fowler:1980}]. At small $\nd$, $E_1 \approx 24$ meV, in agreement with theoretical predictions for the binding energy of an isolated donor. With increasing $\nd$, $E_1 (\nd) $ decreases and extrapolation to large $\nd$ shows that it vanishes at $\nd \approx 1.7\pm0.5\times 10^{12}$ cm$^{-2}$. Using Eq.\,(\ref{eq:localization_length}) with the binding energy of an isolated donor (24 meV) and the in-plane effective electron mass ($0.19$ $ m_e$) we find the localization length of the electron bound to an isolated sodium ion $\xi \approx 2.9$ nm and conclude that in Si MOSFET localization sets in at $(\nd\xi^2)_c \approx 0.14\pm0.04$. Since our estimate of $\nd \xi^2 \approx 0.07$ for SPSL-doping layer is smaller than this value, excess electrons should be localized and their hopping conductivity at low $T$ should be much smaller than $e^2/h$ (and activated).\cite{note:2} Measurements of the conductivity of SPSL-doping layers have shown that it is indeed activated.\cite{DoroEng,dorozhkin:2018} \section{Scattering by remote donors}\label{sec:remote} Since excess electrons in the AlAs layers and donors in the GaAs layer form compact dipole atoms, scattering from these dipoles can be ignored. However, localized electrons can still choose among host donors, minimizing the total energy of the system. As a result, the ionized donors are screened by the $f\nd$ electrons, so that the correlator of the random potential energy $\la U(\boldsymbol{\rho})U(0)\ra$ is reduced ($\boldsymbol{\rho} =(x,y)$ is a vector in the $x-y$ plane). The Fourier image of the potential correlation function and the Fourier image of the correlator $D(\rho)\la \nd(0)[1-f(0)]\nd(\boldsymbol{\rho})[1-f(\boldsymbol{\rho})]\ra$ of ionized donor concentration fluctuations can be related as \begin{equation}\label{eq:potential_correlator} \la|U(q)|^2\ra=\left(\frac{2\pi e^2}{\kappa
ever-fresh forbearance and impartiality? (12) • Rhymes and rhymers pass away, poems distill'd from poems pass away, The swarms of reflectors and the polite pass, and leave ashes… (13) • The proof of a poet shall be sternly deferr'd till his country absorbs him as affectionately as he has absorb'd it. (13) • In the need of songs, philosophy, an appropriate native grand-opera, shipcraft, any craft, He or she is greatest who contributes the greatest original practical example. (13) • Give me to sing the songs of the great Idea, take all the rest, I have loved the earth, sun, animals, I have despised riches, I have given alms to every one that ask'd, stood up for the stupid and crazy, devoted my income and labor to others, Hated tyrants, argued not concerning God, had patience and indulgence toward the people, taken off my hat to nothing known or unknown, Gone freely with powerful uneducated persons and with the young, and with the mothers of families, Read these leaves to myself in the open air, tried them by trees, stars, rivers, Dismiss'd whatever insulted my own soul or defiled my body, Claim'd nothing to myself which I have not carefully claim'd for others on the same terms, Sped to the camps, and comrades found and accepted from every State, (Upon this breast has many a dying soldier lean'd to breathe his last, This arm, this hand, this voice, have nourish'd, rais'd, restored, To life recalling many a prostrate form;) I am willing to wait to be understood by the growth of the taste of myself, Rejecting none, permitting all. (14) I am for those that have never been master'd, For men and women whose tempers have never been master'd, for those whom laws, theories, conventions, can never master. • Underneath all, individuals, I swear nothing is good to me now that ignores individuals, The American compact is altogether with individuals, The only government is that which makes minute of individuals, The whole theory of the universe is directed unerringly to one single individual — namely to You. (15) • Underneath all is the Expression of love for men and women, (I swear I have seen enough of mean and impotent modes of expressing love for men and women, After this day I take my own modes of expressing love for men and women.) (16) • I am for those that have never been master'd, For men and women whose tempers have never been master'd, For those whom laws, theories, conventions, can never master. • I am for those who walk abreast with the whole earth, Who inaugurate one to inaugurate all. w, Whether that which appears so is so, or is it all flashes and specks? Men and women crowding fast in the streets, if they are not flashes and specks wha ## AUTUMN RIVULETS The horizon's edge, the flying sea-crow, the fragrance of salt marsh and shore mud, these became part of that child who went forth every day, and who now goes, and will always go forth every day. ### There Was a Child Went Forth (1855; 1871) • There was a child went forth every day, And the first object he look'd upon, that object he became , And that object became part of him for the day or a certain part of the day, Or for many years or stretching cycles of years. • Affection that will not be gainsay'd, the sense of what is real, the thought if after all it should prove unreal, The doubts of day-time and the doubts of night-time, the curious whether and hot are they? 
• The hurrying tumbling waves, quick-broken crests, slapping, The strata of color'd clouds, the long bar of maroon-tint away solitary by itself, the spread of purity it lies motionless in, The horizon's edge, the flying sea-crow, the fragrance of salt marsh and shore mud, These became part of that child who went forth every day, and who now goes, and will always go forth every day. ### To a Foil'd European Revolutionaire (1856;1881) Liberty is to be subserv'd whatever occurs... is nothing that is quell'd by one or two failures, or any number of failures... • Courage yet, my brother or my sister! Keep on — Liberty is to be subserv'd whatever occurs; That is nothing that is quell'd by one or two failures, or any number of failures, Or by the indifference or ingratitude of the people, or by any unfaithfulness… When there are no more memories of heroes and martyrs, and when all life and all the souls of men and women are discharged from any part of the earth, then only shall liberty or the idea of liberty be discharged from that part of the earth, and the infidel come into full possession. • What we believe in waits latent forever through all the continents, Invites no one, promises nothing, sits in calmness and light, is positive and composed, knows no discouragement, Waiting patiently, waiting its time. • (Not songs of loyalty alone are these, But songs of insurrection also, For I am the sworn poet of every dauntless rebel the world over, And he going with me leaves peace and routine behind him, And stakes his life to be lost at any moment.) • The battle rages with many a loud alarm and frequent advance and retreat, The infidel triumphs, or supposes he triumphs, The prison, scaffold, garrote, handcuffs, iron necklace and leadballs do their work, The named and unnamed heroes pass to other spheres, The great speakers and writers are exiled, they lie sick in distant lands, The cause is asleep, the strongest throats are choked with their own blood, The young men droop their eyelashes toward the ground when they meet; But for all this Liberty has not gone out of the place, nor the infidel enter'd into full possession. • When liberty goes out of a place it is not the first to go, nor the second or third to go, It waits for all the rest to go, it is the last. • When there are no more memories of heroes and martyrs, And when all life and all the souls of men and women are discharged from any part of the earth, Then only shall liberty or the idea of liberty be discharged from that part of the earth, And the infidel come into full possession. My spirit to yours dear brother, Do not mind because many sounding your name do not understand you, I do not sound your name, but I understand you... 
### To Him That Was Crucified (1860; 1881) • My spirit to yours dear brother, Do not mind because many sounding your name do not understand you, I do not sound your name, but I understand you, I specify you with joy O my comrade to salute you, and to salute those who are with you, before and since, and those to come also, That we all labor together transmitting the same charge and succession, We few equals indifferent of lands, indifferent of times, We, enclosers of all continents, all castes, allowers of all theologies, Compassionaters, perceivers, rapport of men, We walk silent among disputes and assertions, but reject not the disputers nor any thing that is asserted, We hear the bawling and din, we are reach'd at by divisions, jealousies, recriminations on every side, They close peremptorily upon us to surround us, my comrade, Yet we walk unheld, free, the whole earth over, journeying up and down till we make our ineffaceable mark upon time and the diverse eras, Till we saturate time and eras, that the men and women of races, ages to come, may prove brethren and lovers as we are. ### To a Common Prostitute (1860) Why, who makes much of a miracle? As to me I know of nothing else but miracles... • Be composed — be at ease with me — I am Walt Whitman, liberal and lusty as Nature, Not till the sun excludes you do I exclude you, Not till the waters refuse to glisten for you and the leaves to rustle for you, do my words refuse to glisten and rustle for you. • My girl I appoint with you an appointment, and I charge you that you
Fellahîn, is, on the contrary, uniformly wavy, because more or less oval in section. The religions, again, of the Red Man, we are told by Carl Schultz-Sellack, Oscar Loew, and other good observers, are “essentially astrological, based on star, sun, and moon worship,” with which was often associated an intricate method of measuring time built on a series of twenty constellations" (Zeitschr. für Ethnologie, 1879, p. 209). “The sun,” says Loew, “is the god of most Indian tribes. ‘He diffuses warmth and nourishment for us and our animals; why shall we not worship him?’ observed to me on one occasion Masayamtiba, a Moqui Indian (New Mexico)” (ib. p. 265). This Masayamtiba was a better philosopher than those ethnologists who seek for the origin of such a simple cult in the remote corners of the globe, rather than in the beneficial influence of the heavenly bodies which shine alike for all mankind. The four great gods of the Mayas, the “props of the heavens,” answered to the four great Mexican gods of the four quarters of the compass, all being associated with the four elements of wind, water, fire, and earth. But to what does either system answer in the polytheistic creeds of the Hindus, Assyrians, Babylonians, or other nations of antiquity? There is something similar in the Neo-Buddhistic teachings; but Buddhism, even of the oldest type, is much too recent to explain anything in the religious worlds of Mexico or Yucatan. The hare is associated in America, in Asia, and even amongst the Bushmen of South Africa with the moon. But this association was obviously suggested independently by the spots which, especially in the first quarter of the moon seem to present the outlines of a hare on its form. Waitz (Anthropology, p. 255) well observes that a common belief in a universal flood, or in the periodical destruction of the world, whether by fire, water, storms, or earthquakes, and analogous or parallel lines of thought—taken individually—afford no proof whatever in favour of affinity, and even resemblances in several points possess only a secondary importance; for they may partly, under like conditions, arise spontaneously among peoples who have always lived in a state of separation, or may have partly resulted from periods of short intercourse between two different peoples. In any case, these slight coincidences are of little account when weighed against the argument based on diversity of speech. The tremendous force of this argument, as applied to the American aborigines, is scarcely realized by anthropologists such as Waitz or Virchow, who have not cultivated philological studies, and it is significant that, in the already quoted paper by Virchow on the “Anthropology of America,” the linguistic element is not even referred to. On the other hand, it has been greatly depreciated and even brought into contempt by the vagaries of certain etymologists, who discover affinities where there is nothing but the vaguest verbal resemblance.[1] Science has demonstrated beyond all cavil that, while differing widely among themselves, the American languages not only betray no affinity to any other tongues, but belong to an absolutely distinct order of speech. They are neither isolating or monosyllabic like the Indo-Chinese group, agglutinating like the Ural-Altaic, Bantu, or Dravidian, nor inflexional like the Semitic and Aryan. 
They come nearest in structure to the Basque, which is the only incorporating language of the Old World, but differ from it essentially inasmuch as their capacity of incorporating words in the sentence is not restricted to the verb and a few pronominal elements, but extends in principle to all the parts of speech. This faculty, which, with one or two doubtful exceptions, seems to be characteristic of every American idiom from Behring Strait to Cape Horn, has received the name of polysynthesis, literally “a much putting together.”[2] Hence, in a comprehensive classification of articulate speech according to its inner mechanism, a special place must be reserved for the American group; and, if we assume as the most probable theory that all speech has slowly evolved from a few simple beginnings, passing successively from the state of crude roots to the isolating condition, and so onwards to the agglutinating and other orders, then in such a scheme the American will stand apart in some such position as under:— Here it is not intended to imply that American derives from Malayan or Dravidian, but only from some now extinct agglutinating forms of speech of which Malayan or Dravidian may be taken as still surviving typical instances. The disappearance in America of all such assumed forms, unless the Otomi of Mexico is to be accepted as a solitary lingering specimen, argues both a very great antiquity and an independent evolution of the American languages. And as the course this evolution has taken differs entirely from that pursued by the idioms of the Old World, it follows that the first peopling of America, if from the Old World, must be thrown back to a time when all speech itself was in its infancy, to a time when slow diffusion might be conceived as equally probable from an eastern or a western starting-point. It is this feature of polysynthesis that gives the American race its first and greatest claim to be regarded as truly autochthonous, in the same sense that we regard the Mongolian and Caucasian races as truly autochthonous in Asia. There is a general consensus amongst anthropologists that on the western continent we are in presence of two distinct original types, the brachycephalous and dolichocephalous. But these are no longer confined to separate geographical areas, as Retzius supposed. The very general practice of artificially flattening or otherwise deforming the skull has naturally caused less value to be attached to the craniological test in America than elsewhere. The practice has been traced back even to prehistoric times, and a clay figure recently found associated with the remains of a child by Reiss and Stübel in a grave in Ancon puts in a clear light the method adopted by the ancient Peruvians (The Necropolis of Ancon, Berlin and London, 1881, plate 90). Still, careful investigations have placed it beyond doubt that the normal skull both in North and in South America is now mesaticephalous, or of a type intermediate between the two extremes, a fact supposed to imply a general intermingling of the two primeval stocks. On the other hand, Virchow (loc. cit., passim) shows perfectly normal ancient and recent crania from both sides of Greenland, from El Carmen on the Rio Negro, Patagonia, from the Botocudo tribe, East Brazil, from a tumulus of Santa Fé de Bogota, and even a Peruvian mummy exhumed at Pancatambo, all of which are distinctly, in some cases extremely, dolichocephalic. 
In the same way he produces brachycephalic skulls from the Brazilian shell-mounds of Santos and Santa Catharina, from the barrows of the Ohio valley mound-builders, from the Carib and Araucanian tribes, and from the Pampas of La Plata, the last mentioned of an extreme type, in close proximity to the extreme dolichocephalous specimens from Patagonia. Were it safe to argue from the analogy of Britain, where the dolichocephalic builders of the long barrows seem to have preceded and afterwards become intermingled with the brachycephalic builders of the round barrows (Dr Thurnam), the western continent might be supposed to have been successively occupied first by a long-headed and then by a round-headed race, which kept aloof in a few places, while more generally becoming fused in a normally mesaticephalic type. But we have in America no guide to the relative priority of the two forms of head, nor are there now any long-headed races on the eastern Asiatic seaboard whose ancestors might be taken as the precursors of the corresponding element
# silicon carbide intermolecular forces kazakhstan

- (PDF) Biomaterials Science: An Introduction to Materials in Medicine, R. B. More and others, 1996.
- ACC Chemistry (silicon carbide): Intramolecular covalent bonds are quite strong, but generally not as strong as ionic bonds. Intermolecular forces and intramolecular forces are identical for these compounds. The IMF is an electrostatic coulombic force that is present …
- Preparation of Portland cement with high compressive strength: We found that micron-size green silicon carbide was adsorbed to the surface of calcium hydroxide (the hydration product of cement) by intermolecular forces. The hydration of cement in the later stage and the hardened structure of the cement were improved, thus increasing the strength of the cement.
- What is the percent by mass of silicon in silicon carbide? Answer: 70%. (Atomic mass of silicon = 28 g/mol; atomic mass of carbon = 12 g/mol; total mass of silicon carbide = 28 + 12 = 40 g/mol; 28/40 = 70%.)
- The current understanding on the diamond machining of silicon carbide (20/5/2014): Silicon carbide (SiC) is an extremely hard and brittle non-oxide ceramic material. Due to its semiconducting properties, and due to it being highly oxidation and wear resistant (chemical + mechanical + thermal), use of SiC in semiconductor electronics has been found advantageous in many areas compared to the current silicon-based very-large-scale integration (VLSI) technology.
- Types of Crystals | Chemistry [Master] – Lumen Learning: Silicon carbide is an extremely rare mineral, and in nature it is mostly found in a certain type of meteorite. Molecular crystals are molecules held together by van der Waals forces …
- Quiz 12: Intermolecular Forces: Liquids and Solids, Q 43: A crystal and its melt readily conduct electricity. The crystal also has a luster and is easily deformed. Thus, it is: (A) a covalent network crystal, (B) an ionic crystal, (C) a metallic crystal, (D) a molecular crystal, (E) a covalent crystal.
- For the most varied of reasons, in the past increasingly more organic pigments have been used (e.g. for heavy-metal-free, more brilliant shades), and this trend has led to the development of a new group of additives: polymeric wetting and dispersing additives.
- Chemistry A Bonding – KING'S SCIENCE PAGE: Diamond (a form of pure carbon), carborundum (silicon carbide) and quartz (silicon dioxide) are examples of macromolecules. What is a network solid? What type of bonding exists in network solids? What are some properties of network solids?
- Intermolecular forces review questions: List three types of intermolecular forces, and give an example of a substance having each type of force. Write an empirical definition (based on observable properties) for an ionic compound. Summarize the theoretical structure …
- Molecular crystal | crystallography | Britannica: The structures of molecular solids, which are solids composed of individual molecules, have also been touched on in the section on intermolecular forces. These molecules are held to one another by hydrogen bonds (if they can form them), dispersion forces, and other dipolar forces …
- SOLID STATE – GO VIDYALAYA: While the intermolecular forces of attraction tend to keep the particles closer, the thermal energy tends to keep the particles apart from each other by making them move faster. When the net resultant of these two opposing forces, i.e. intermolecular forces and thermal energy, makes the particles cling together and forces them to occupy fixed positions, matter exists in the solid state.
- FORCES BETWEEN PARTICLES – MR. PORTER'S CHEMISTRY PAGES: Because these intermolecular forces are generally weak, molecular compounds are typically gases or liquids at room temperature. Larger molecules will have stronger intermolecular forces, some even strong enough to make the substance solid.
- Silicon Carbide Current Scenario, Investment Feasibility (market report): tables for silicon carbide flexible AC transmission systems (FACTS), high-voltage direct current (HVDC), and power supply, by region, in USD million (2019–2024).
- Probing Intermolecular Forces and Potentials: Casimir forces from conductive silicon carbide surfaces, Physical Review B 2014, 89, DOI: 10.1103/PhysRevB.89.195440; Mario S. Rodrigues, Luca Costa, Joël Chevrier, Fabio Comin, System analysis of force feedback microscopy, Journal of Applied Physics.
- Lanthanum carbide – WikiMili: Lanthanum carbide has also shown superconductive properties when converted into a layered lanthanum carbide halide La2C2X2 (X = Br, I). Investigations using high-resolution neutron powder diffraction measurements from room temperature down to 1.5 K showed that it is superconducting at about 7.03 K for X = Br and at about 1.7 K for X = I, respectively.
- What type of bonding does tungsten carbide form: Carbides are compounds that are made of carbon and a less electronegative element. They can be classified by the type of chemical bonding involved as follows: (i) salt-like, (ii) …
- The Solid State: Amorphous silicon is used as a photovoltaic material for conversion of sunlight into electricity. Examples: Cu, Ag, Fe, S, etc. (crystalline); glass, rubber, plastic, etc. (amorphous). Classification of crystalline solids: crystalline solids are classified on the basis of the nature of intermolecular forces …
- Compounds & Bonding Notes (KHS, Jun 99), Unit 1 Section 2 (Higher): Silicon carbide (carborundum) and silicon dioxide (silica) have the typical properties of covalent network substances. Properties and uses: very high melting points — a large number of very strong …
- Covalent Network Solid | Liquids and Solids: A covalent crystal contains a three-dimensional network of covalent bonds, as illustrated by the structures of diamond, silicon dioxide, silicon carbide, and graphite. Graphite is an exceptional example, composed of planar sheets of covalent crystals that are held together in layers by noncovalent forces.
- Characteristic properties of Silicone Rubber Compounds: comparison of high-temperature operating life (chloroprene rubber vs. silicone rubber) and low-temperature properties of various rubbers (JIS K 6261, Section 5); [plot of elongation at break (%) versus time (days) for silicone rubber at 150 °C].
- Explain why silicon carbide-reinforced alumina is …: Silicon carbide, SiC, is one of the hardest materials known. Surpassed in hardness only by diamond, it is sometimes known commercially as carborundum. Silicon carbide is used primarily as an abrasive for sandpaper and is manufactured by heating common sand (silicon dioxide, SiO2) with carbon in a furnace.
- Silicon – Wikipedia: Silicon is a chemical element with the symbol Si and atomic number 14. It is a hard, brittle crystalline solid with a blue-grey metallic lustre, and is a tetravalent metalloid and semiconductor. It is a member of group 14 in the periodic table: carbon is above it; germanium, tin, and lead are below it.
- Effect of Silicon Carbide (SiC) Nanoparticles on the Spectroscopic …: Silicon carbide nanostructures have specific properties useful for applications in microelectronics and optoelectronics [12][13][14]. SiC has been selected for such applications due to properties such as its high hardness and its use in semiconductor processing equipment.
- Why are intermolecular forces stronger than … (20/3/2010): Take CO2 as an example; it is covalent molecular. The forces between the C and the O2 are very strong (intramolecular); however, the forces between one CO2 and another CO2 are weak (intermolecular). Take SiC (silicon carbide) as an example; it is covalent network …
- Why does SiC have a higher boiling point than other ionic … (3/1/2008): I have CsI and LiF, but SiC has an almost 4 times higher boiling point. Why, in terms of bonding and intermolecular forces? In silicon carbide, every atom of carbon and silicon is bonded with four strong covalent bonds to the neighboring atoms, so to get it to convert …
- Optimization of crucible and heating model for large … (1/3/2020): In addition, the convection-diffusion equation and the growth rate equation are introduced as follows. The carbon molar concentration, denoted by C, is computed using the convection-diffusion equation (1): ∂C/∂t − D∇²C …
& \ +Speaker Normalization & 3.16$\pm$0.12 & 3.15$\pm$0.12 & 3.37$\pm$0.10 \\ \cline{2-6} & P2 & \ \ +Weight Regularization & 3.27$\pm$0.12 & 3.21$\pm$0.13 & 3.43$\pm$0.07 \\ \hline \multicolumn{2}{c|}{P3 (Proposed)} & \ \ \ +Prosody Module & 3.43$\pm$0.12 & \textbf{3.76$\pm$0.17} & \textbf{3.56$\pm$0.15} \\ \bottomrule \end{tabular} \end{table*}

\subsection{Training and Conversion Procedure}
As shown in Fig.~\ref{fig:model_framework}, our approach is composed of three training phases and a conversion phase. Different phases are marked with numbers in the figure.

\textbf{Training phase 1.} As we mentioned in Section~\ref{sec:sn}, we utilize an any-to-one VC method to implement our content module. The content module is trained on a large amount of speech data and then fine-tuned on a specific speaker to ensure the performance of speaker normalization. In our experiments, the content module is optimized with a mel reconstruction loss.

\textbf{Training phase 2.} The whole model is trained with a large amount of data. Note that the speaker module is trained with a randomly selected mel spectrum of the same speaker to make sure that the extracted speaker embedding is only related to the speaker information. The prosody and conversion modules are optimized with the reconstruction loss $L_{recons}$. The speaker module is optimized with the cross-entropy loss $L_{ce}$. The loss function of this stage can be described as follows:
\begin{equation}
Loss_{stage2}=L_{recons}+ L_{ce}
\end{equation}

\textbf{Training phase 3.} This stage is for adaptation. In this stage, we use only one utterance from the target speaker to fine-tune the model. Previous works~\cite{Adaspeech,GCtts} confirm that a large number of parameters makes the model prone to over-fitting. As shown in Fig.~\ref{fig:model_framework}, we only optimize a part of the whole model. To further prevent performance degradation caused by over-fitting, we adopt weight regularization~\cite{weightRegLi2020} in this phase. The loss used in this phase is described as follows:
\begin{equation}
Loss_{stage3}=L_{recons}+\gamma L_{wReg}
\end{equation}

\textbf{Conversion phase.} The content and prosody representations are extracted from the source speech. Note that we use a linear transformation to obtain the converted F0. The speaker representation is extracted from the target speech. The conversion module takes the content, prosody, and speaker representations to reconstruct the converted speech. Through the proposed disentanglement and training procedure, the converted speech is expected to have the same prosody as the source speech, while maintaining the content and the target speaker's identity.

\section{EXPERIMENTS AND RESULTS}
\subsection{Dataset and Experimental Setup}
In our experiments, 102 speakers from VCTK~\cite{VCTK} are used to train the conversion model. For one-shot testing, \textit{p340} and \textit{p363} from VCTK, as well as \textit{slt} and \textit{bdl} from CMU-ARCTIC~\cite{CMU-Arctic}, are used as the target speakers. The duration of the target speech ranges from 3 to 4 seconds. For the source audio, we use CMU-ARCTIC (rms, clb) and the ESD dataset~\cite{ESD}. All speech utterances are downsampled to 16 kHz. We use an 80-dim mel spectrum computed with a 50 ms frame length and a 12.5 ms frame shift. The ASR system is a TDNN-F model trained on a 1k-hour standard English corpus. We use the 256-dim bottleneck features as the linguistic representation, which are extracted from the last fully-connected layer before the softmax. A modified LPCNet~\cite{LPCnet} is adopted to reconstruct the waveform from the mel spectrum.
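As a concrete illustration of the stage-2 and stage-3 objectives above, the following is a minimal PyTorch-style sketch; it is not the implementation used in this work, and the L1 form of the reconstruction term, the squared-difference form of $L_{wReg}$, and all function names are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn.functional as F

def stage2_loss(pred_mel, target_mel, spk_logits, spk_label):
    # L_recons: mel reconstruction loss (L1 assumed here)
    l_recons = F.l1_loss(pred_mel, target_mel)
    # L_ce: cross-entropy loss for the speaker classifier
    l_ce = F.cross_entropy(spk_logits, spk_label)
    return l_recons + l_ce

def stage3_loss(pred_mel, target_mel, model, init_params, gamma=1.0):
    # L_recons on the single adaptation utterance
    l_recons = F.l1_loss(pred_mel, target_mel)
    # L_wReg: penalize drift of the fine-tuned parameters from their
    # pre-adaptation values (one common form of weight regularization)
    l_wreg = sum(((p - p0) ** 2).sum()
                 for p, p0 in zip(model.parameters(), init_params))
    return l_recons + gamma * l_wreg
\end{verbatim}
In training phase 3, only the subset of parameters marked in Fig.~\ref{fig:model_framework} would actually receive gradients; the sketch omits that masking for brevity.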
The vocoder is trained with the speech data of the 102 speakers from VCTK. To validate our proposed method, we implement comparison and ablation systems. For the comparison systems, we select three SOTA one-shot VC methods, including AGAINVC~\cite{AgainVC}, GSE~\cite{GSE} and VQMIVC~\cite{VQMIVC}. For a fair comparison, we finetune GSE with the target speaker's utterance, which is referred to as GSE-finetune. For the ablation analysis, we implement four systems: BL, P1, P2, and P3 (proposed). Among them, BL is composed of the speaker and conversion modules and converts BN features to the mel spectrum. P1 adds speaker normalization on top of BL, and P2 adopts weight regularization on top of P1. P3 is our final proposed system combining all the contributions.

For our proposed method, the content module adopts the same model configuration as Tian \textit{et al.}~\cite{VCC2020_Tian2020}. The content module follows a typical encoder-decoder architecture, using the CBHG as the encoder and an auto-regressive module consisting of a prenet, a decoder RNN and a postnet as the decoder. For the conversion module, the configuration of the CBHG is the same as that of the content module. The prenet consists of 2 fully connected layers with 80 and 256 hidden units, respectively. The postnet contains 4 1D-convolution layers with 3*3 kernel size and 256 filters, and a fully connected layer with 80 hidden units. The architecture and hyper-parameters of the reference encoder follow the original configuration~\cite{Rerferceencoder}. The speaker classifier consists of 3 fully connected layers.

In training stage 2, the conversion model is trained for 120 epochs using a batch size of 16. We use the Adam optimizer to optimize our model with learning rate decay, which starts from 0.001 and decays every 20 epochs with a decay rate of 0.7. In training stage 3, we train the conversion model for 2000 steps using one utterance, and $\gamma$ is set to 1. The learning rate starts from 0.001 and decays every 200 steps with a decay rate of 0.5.

\begin{figure*}[h] \centering \includegraphics[width=1\linewidth]{figure/overfit_v3.pdf} \caption{Spectrograms for a source audio utterance (left) and its corresponding converted utterances by different systems (right). The formants from the proposed system (esp. P3) are more similar to the source, indicating good style transfer. By contrast, the formants from BL are too flat.} \label{fig:mel} \vspace{-6pt} \end{figure*}

\subsection{Subjective evaluation}
We conduct the following listening tests: three mean opinion score (MOS) tests to assess speech quality, style similarity, and speaker similarity, respectively. Style similarity measures the similarity in style between the converted audio and the source audio. We select three sentences each from rms, clb, and ESD as source audio. These nine sentences are converted to four target speakers (p340, p363, slt, bdl), so a total of 36 sentences are used for the listening tests. We highly recommend that readers listen to our samples\footnote{Samples can be found in \href{https://kerwinchao.github.io/Oneshotvc.github.io/}{\url{https://kerwinchao.github.io/Oneshotvc.github.io/}}}.

\textbf{Comparison analysis.} We compare the proposed method with SOTA one-shot VC methods. The results of the MOS tests are shown in the comparison part of Table~\ref{tab:mos}. It is observed that GSE achieves the best result in terms of speech quality, while our proposed system P3 gets significantly higher MOS scores in terms of style and speaker similarity than the comparison systems.
Previous one-shot methods lack prosody modeling ability, and the target speaker's timbre is unknown to the model, which leads to low speaker similarity and unstable performance. Comparing GSE with GSE-finetune, we can see that making GSE learn from the target speaker's utterance can significantly improve the speaker similarity.

\textbf{Ablation analysis.} As shown in the ablation part of Table~\ref{tab:mos}, we evaluate the ablation systems. The BL system obtains poor results in all three MOS tests, which indicates that the BL system is prone to over-fitting. By using our proposed method, the over-fitting phenomenon is alleviated, and all MOS scores are improved. Adding the prosody module lets the speaker module focus on extracting timbre from only one utterance, so that both style and speaker similarity are improved.

\textbf{Varying duration.} We further evaluate the performance of the proposed system under different durations of the target utterance (1, 3, 6, 9, and 15 seconds) from CMU-ARCTIC speakers. Note that only CMU-ARCTIC speakers are used for this subjective test, as listeners need to assess a large set of audio samples. As shown in Fig.~\ref{fig:duration}, the result is in general affected by the duration of the target speech. For the extreme cases, e.g. 1--3 s, the model still shows quality degradation caused by over-fitting even when our proposed method is used. From 3 to 6 s, benefiting from the proposed approach, all three MOS scores clearly increase, with speech quality and style similarity undergoing a quick boost. After 6 seconds, none of the three curves improves significantly, and they begin to stabilize from 9 seconds on.

\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{figure/duration-v1.pdf} \caption{MOS performance of the proposed system vs. the duration of the training utterance.} \label{fig:duration} \end{figure}

\subsection{Objective evaluation}
\textbf{Over-fitting on spectrograms}. We visualize the spectrogram of the converted speech for further investigation. The spectrogram of a testing sample is shown in Figure~\ref{fig:mel}. We can see that BL has very flat formants on the spectrogram, and noise appears in the silent part. Comparing these spectrograms, we observe that the over-fitting phenomenon reduces the frequency of the formant and the fundamental
ridge axis to the far end of the model domain. Finally, we let the model evolve for another 8000 years ($\sim$370 time steps) with a uniform cell size of 140~m. This allows us to export the final state of the model to data files and use them to create high-resolution initial conditions for the model runs presented in the following. The data files are freely available at \url{https://github.com/tjhei/paper-aspect-melt-paper-2-data} together with the input files and allow it to reproduce our results. \subsubsection{Influence of problem size} \label{sec:problem-size} To show that iteration numbers of the linear solver do not vary substantially with the size of the problem we are solving, we used the data files created from the final state of the 2-D mid-ocean ridge model described above to compute instantaneous flow models with different resolutions. Our results (see Table~\ref{table:2D-iteration-counts}) show that the number of GMRES iterations is insensitive to the problem size, and the number of Schur complement iterations that are done per GMRES iteration only increases slightly with problem size. This result highlights the usefulness of our new method for large-scale magma/mantle dynamics models. \begin{table} \centering \begin{tabular}{|c|rr|} \multicolumn{3}{c}{\textbf{Problem size: Number of linear solver iterations}} \\ \hline \#cells & GMRES iterations & average S block iterations \\ \hline $6,144$ & 213 & 157\\ $24,576$ & 176 & 199\\ $98,304$ & 118 & 229\\ $393,216$ & 118 & 261\\ $1,572,864$ & 116 & 308\\ $6,291,456$ & 119 & 343\\ \hline \end{tabular} \caption{Iteration counts for a linear solver tolerance of $10^{-14}$.} \label{table:2D-iteration-counts} \end{table} \subsubsection{Influence of material properties} \label{sec:material-properties} \citet{sander_two_field} and \citet{sanderthreefield} have identified the ratio of compaction to shear viscosity as a key control on the rate of convergence of the iterative solver for the linear system we solve. Because the compaction viscosity is inversely proportional to the porosity, this ratio increases with decreasing porosity and becomes infinity in the limit of $\phi \rightarrow 0$ (which is the mathematically degenerate case) at the boundaries between regions with and without melt. As this boundary is present in most models of magma/mantle dynamics, and has the potential to slow down convergence of the linear solver substantially, we investigate the dependence of the convergence rate on the compaction-to-shear-viscosity ratio. In our new formulation, we address the part of the problem that relates to the interface between the solid and the partially molten region by rescaling the equation that contains the compaction viscosity, and introducing a threshold for the onset of two-phase flow. Hence, in the following we will test the sensitivity of the iteration count to both the global compaction-to-shear-viscosity ratio and the choice of the melt transport threshold. For this purpose, we use the same setup as described above in section~\ref{sec:problem-size} to compute instantaneous flow models. When the compaction-to-shear-viscosity ratio $\xi/\eta$ is varied globally (Table~\ref{table:parameter-iteration-counts}), we see that there is a weak dependence of the GMRES iteration count on the compaction-to-shear-viscosity ratio, similarly to the results of \citet{sanderthreefield}. In addition, the S block iteration count increases with $\xi/\eta$. 
This is expected, as our formulation only addresses the increase of $\xi$ as the porosity $\phi \rightarrow 0$. However, this sensitivity to $\xi/\eta$ might not be problematic for realistic applications, as this ratio is expected to be on the order of 1--100 \citep{hewitt2008partial, takei2009viscous1, simpson2010multiscale, Katz2010, schmeling2012effective, alisic2014compaction}. Note that the values $\xi/\eta$ given in Table~\ref{table:parameter-iteration-counts} correspond to the ratio of the shear and compaction viscosity for a porosity $\phi=$~0.015. The actual ratio in the model varies by two orders of magnitude upwards from this reference value due to the different dependencies on porosity, which means that the ratio increases both for very low and very high porosities. \begin{table} \centering \begin{tabular}{|c|rr|} \multicolumn{3}{c}{\textbf{Compaction-to-shear-viscosity ratio: Number of linear solver iterations}} \\ \hline $\xi/\eta \, (\phi=1.5\%)$ & GMRES iterations & average S block iterations \\ \hline $2 \cdot 10^1$ & 74 & 116\\ $2 \cdot 10^2$ & 124 & 147\\ $2 \cdot 10^3$ & 124 & 248\\ $2 \cdot 10^4$ & 125 & 345\\ $2 \cdot 10^5$ & 175 & 403\\ $2 \cdot 10^6$ & 182 & 434\\ $2 \cdot 10^7$ & 183 & 435\\ \hline \end{tabular} \caption{Iteration counts for a linear solver tolerance of $10^{-14}$, and 887939 Stokes degrees of freedom (98304 mesh cells).} \label{table:parameter-iteration-counts} \end{table} In addition, we also test the sensitivity of the solver convergence rate to the increase in the compaction-to-shear-viscosity ratio as $\phi \rightarrow 0$ by varying the threshold for the onset of two-phase flow. The results (Table~\ref{table:threshold-iteration-counts}) reveal no sensitivity of the GMRES iteration count and only a very weak sensitivity of the S block iteration count to this threshold. \begin{table} \centering \begin{tabular}{|c|rr|} \multicolumn{3}{c}{\textbf{Threshold for melt transport: Number of linear solver iterations}} \\ \hline $K_\text{threshold}$ & GMRES iterations & average S block iterations \\ \hline $10^{-6}$ & 124 & 248\\ $10^{-8}$ & 124 & 252\\ $10^{-10}$ & 124 & 255\\ $10^{-12}$ & 124 & 262\\ $10^{-14}$ & 124 & 290\\ \hline \end{tabular} \caption{Iteration counts for a linear solver tolerance of $10^{-14}$, and 887939 Stokes degrees of freedom (98304 mesh cells). } \label{table:threshold-iteration-counts} \end{table} Finally, we also want to provide a direct comparison to to the method of \cite{dannberg2016compressible}. Due to the strong dependence on problem size, we had to reduce the resolution, increase the threshold for the onset of two-phase flow and increase the solver tolerance of the model for this comparison, and we also removed the temperature dependence of viscosity. The results in Table~\ref{table:threshold-comparison} show both overall lower iteration counts and lower sensitivity to model parameters for the formulation developed in this study. They highlight that also for realistic application cases such as melt migration below mid-ocean ridges, our new method performs substantially better than the one developed in \cite{dannberg2016compressible}, and is feasible for accurately modelling the interface between regions with and without melt. 
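To make explicit what the two iteration counts reported in the tables measure, the following is a small, generic Python/SciPy sketch; it is not the solver implemented in ASPECT, and the matrices are placeholders. An outer GMRES solve is preconditioned by an operator that itself performs an inner iterative solve for a Schur-complement-type block: the outer callback counts GMRES iterations, the inner counter accumulates the S block iterations, and their ratio is the ``average S block iterations'' quoted above.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Placeholder "velocity" block and "Schur complement" block
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csr")
S = sp.diags([4.0 + np.linspace(0.0, 1.0, n)], [0], format="csr")
b = np.ones(n)

outer_iters = 0   # GMRES iterations
inner_iters = 0   # accumulated "S block" iterations

def count_outer(resid):
    global outer_iters
    outer_iters += 1

def apply_preconditioner(r):
    # The preconditioner itself runs an inner iterative solve with S;
    # in practice this calls for a flexible outer Krylov method.
    global inner_iters
    def count_inner(xk):
        global inner_iters
        inner_iters += 1
    z, _ = spla.cg(S, r, callback=count_inner)
    return z

M = spla.LinearOperator((n, n), matvec=apply_preconditioner)
x, info = spla.gmres(A, b, M=M, restart=50, callback=count_outer)

print("GMRES iterations:", outer_iters)
print("average S block iterations:", inner_iters / max(outer_iters, 1))
\end{verbatim}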
\begin{table} \centering \begin{tabular}{|c|r|rr|rr|} \multicolumn{6}{c}{\textbf{Threshold for melt transport: Number of linear solver iterations}} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{\textbf{\cite{dannberg2016compressible}}} & \multicolumn{2}{c|}{\textbf{this study}} \\ \hline $K_\text{threshold}$ & $\phi_\text{threshold}$ & GMRES iterations & avg. S block iterations & GMRES iterations & avg. S block iterations \\ \hline $10^{0}$ & $10^{-2}$ & 1496 & 10 & 62 & 30\\ $10^{-1}$ & $2.15 \cdot 10^{-3}$ & 3471 & 10 & 63 & 110\\ $10^{-2}$ & $4.64 \cdot 10^{-4}$ & 12600 & 10 & 64 & 137\\ $10^{-3}$ & $10^{-4}$ & 42272 & 10 & 64 & 166\\ $10^{-4}$ & $2.15 \cdot 10^{-5}$ & 95869 & 10 & 64 & 166\\ $10^{-5}$ & $4.64 \cdot 10^{-6}$ & -- & -- & 64 & 173\\ \hline \end{tabular} \caption{Iteration counts for a linear solver tolerance of $10^{-8}$, and 62404 Stokes degrees of freedom (6144 mesh cells). Entries marked with `--' indicate that there was no convergence reached after 100000 GMRES iterations.} \label{table:threshold-comparison} \end{table} \subsubsection{Scaling behaviour of the implemented solver} \label{sec:scaling} In practice, not only the number of iterations, but also the wallclock time per iteration controls the computational cost of a model time step. Therefore we present scaling tests for the models of this section and Section~\ref{sec:3d-application} in Figure~\ref{fig:combined_scaling}. All scaling tests were done on Intel Xeon (Skylake) cores connected by an Intel Omnipath network at the Stampede 2 system of the Texas Advanced Computing Center (TACC). Both models show a linear strong scaling to about 50,000 degrees of freedom (DoFs) per core (considering only solid velocity, fluid pressure, and compaction pressure DoFs); beyond that the efficiency drops significantly. The weak scaling results suggest a slightly less than optimal, but still acceptable scaling with model size, which leads to an increase of Stokes solver time by about a factor of 2.7 when increasing the model size by a factor of 64 (from 5 million DoFs to 327 million DoFs in 2-D, and from 6 million DoFs to 396 million DoFs in 3-D). These results are consistent with the slight increase in Schur complement iterations with model size discussed in Section~\ref{sec:problem-size} and show that our solver
# Dual voice coil speaker: Each coil get an inductor?

1. Dec 25, 2014

### BeautifulLight

1st order crossover, desired low-pass frequency: 100 Hz. At 8 ohms, that'd be about 12 mH. But what happens with a DVC speaker? Does each coil get a 12 mH inductor? I'd like to think so... The speaker is 8+8 ohms and will be paralleled to a set of 8 ohm extended range speakers, showing my amplifier 4 ohms per channel. *Edit: Oh my, a 12 mH inductor is nearly $40.00 USD! Can I use an unpolarized electrolytic capacitor in parallel to each coil? They are only $3.75 each.

Last edited: Dec 25, 2014

2. Dec 26, 2014

### Baluncore

If a DVC speaker is 8+8 then you can run its two coils in parallel as 4 ohms, in series as 16 ohms, or use only one of the coils as an 8 ohm speaker with half the power rating. Any which way, you will only need one crossover network, unless there is a crossover power handling problem. We do not know the power involved here. That does not seem to make sense. You are misunderstanding something or confusing us by mixing a two-channel stereo discussion with a one-channel discussion. The series inductor required to block higher frequencies should be in series with the woofer. You could use a capacitor across the VC, but that possibility would depend on the circuit you are using for your crossover network and the possibility of self-resonances.

3. Dec 26, 2014

### meBigGuy

What power levels do you want to run at? Post a sketch of the hookup you are proposing. How many channels, how many speakers, where you want the crossover.

4. Dec 26, 2014

### Averagesupernova

Not uncommon at all to run dual voice coil subs with one coil per channel. A properly set up system with crossovers will not let the amp see your setup as 4 ohms.

5. Dec 26, 2014

### jim hardy

yep.

6. Dec 27, 2014

### BeautifulLight

Amp: 2 x 20 watts RMS. Extended/full range speakers: 8 ohms each (single voice coil, or SVC for short). The manual that came with my amplifier didn't specify whether or not that 2 x 20 watts figure was at 4 or 8 ohms. I assume 4 ohms, as it'd make a good selling point (20 x 2 at 4 ohms sounds better than 10 x 2 at 8 ohms). The manual just stated the amplifier was stable down to 4 ohms. Anyways, the full range speakers sounded pretty good alone, but I wasn't getting anything under 100 Hz, or at least not very much, so I decided to add a subwoofer. The ideal thing would be to purchase another (more powerful) amplifier, but I'm broke and my room is only 12 x 12, so it's not going to take much. I'll just use the amplifier I have. I have a few options, but the ideal thing to do is to run 1 DVC 8+8 ohm subwoofer. Take one 8 ohm coil from my subwoofer, parallel it to one of my 8 ohm full ranges, and connect it to one of my amplifier's outputs. This would show my amplifier 4 ohms, which it can safely be run at. Now do the same thing for the other channel. I can parallel the coils of my subwoofer and run it to either of my amp's outputs (either left or right), but that'd only give me bass from the left or right channel. The same applies if I'd wire it in series. I opted for a DVC 8+8 ohm so I'd only have to buy 1 subwoofer. It'd get me both left and right, and get me the most power my amp can put out. I can't afford to buy two 12 mH inductors to wire to each coil, so I'm asking if I can use two electrolytic capacitors in parallel to each coil of my subwoofer. This would save me $74 USD. Also, take that 20 x 2 with a grain of salt.
According to the audio gurus over on diyaudio, it's more like 12 watts, and swapping out the supplied 12 V, 2 amp power supply for a 12 V, 5 amp PSU is recommended. I forgot to mention: I will be wiring a 500 µF cap in series with each full range. My iPod is refusing to show an upright image.

Attached Files: image.jpg (41.9 KB), image.jpg (48.3 KB)

Last edited: Dec 27, 2014

7. Dec 27, 2014

### jim hardy

Is my arithmetic right, that 200 µF would be 8 ohms at 100 Hz? That'd be the value to put in series with your smaller speakers. The object, as you know, is to force the LF into that woofer (which I presume you'll put in a good enclosure). Impedance in series with a speaker adds to its 8 ohms. So with a good crossover your amp sees more or less the same impedance at all frequencies, 8 ohms not 4. A series inductor makes the woofer "disappear" at high frequency, and a series capacitor makes the tweeter disappear at low frequency. That's why you can't just bypass your woofer with a capacitor: it'll look to the amplifier like a short at high frequency. 12 millihenries will be somewhere around 300 feet of 18 AWG on a 1 inch form. Here's a calculator to tinker with: http://www.diyaudioandvideo.com/Calculator/Inductor/ That much magnet wire will cost about the same as the inductors you considered. http://www.powerwerx.com/wire-cable... Your dual voice coil sub is a good idea. Maybe you can scrounge enough magnet wire to wind your own inductors.

8. Dec 27, 2014

### meBigGuy

Hopefully you know that you need non-polarized capacitors. To build an RC lowpass you need a series resistor for the capacitor to work against. That resistor consumes power. The resistor will appear across the amp at high frequencies, and in series with the sub at low. It will consume significant power. The smaller you make it, the more power you can get to the sub, but the less power you will get to the other speaker before the amp clips. You have a low power system, so you can maybe wind with 22 gauge wire. That is 200 ft of wire, and 3.6 ohms, on a 0.5 in dia x 1 in core. At your low power levels you could try an iron core inductor. 8 ohms, 10 watts = 1.11 amps. No idea how to design that with an iron or ferrite core. Here is a 10 mH, 3.8 ohm ferrite crossover coil for $10: http://www.mcmelectronics.com/product/555-25635 10 mH at 1.2 ohms is a bit more: http://www.mcmelectronics.com/product/555-25615

9. Dec 27, 2014

### BeautifulLight

The 100 Hz crossover point is for my woofer. The 500 µF capacitors are for my full ranges. The full ranges I have in my possession are a set of Aurasound NS-3's. These Auras + the 500 µF capacitors are called Wolf's PC Speakers. It's a simple design from a member on diyaudio. I already had the Auras and capacitors in my possession. When I purchased the woofer, the tech recommended that I cross the woofer over around 100 Hz and call it a day. When you say I can't just bypass my woofer with a capacitor, you mean I can't wire a capacitor in parallel to either coil of my woofer with the intent to low-pass the woofer at a desired frequency? I'm aware a capacitor in series with a loudspeaker will block lower frequencies and act as a high pass filter. I'm also aware that an inductor in series with a loudspeaker will block higher frequencies and act as a low pass filter. But I thought that a capacitor in PARALLEL with a loudspeaker would serve as a low pass? And an inductor in PARALLEL would serve as a high pass?
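[Editorial note: the reactance arithmetic discussed in the last few posts is easy to check numerically. The short Python snippet below is an illustration added here, not part of the thread; it evaluates the 12 mH and 200 µF values mentioned against an 8 ohm driver at a 100 Hz crossover.]

import math

def inductive_reactance(henries, hz):
    # X_L = 2*pi*f*L
    return 2 * math.pi * hz * henries

def capacitive_reactance(farads, hz):
    # X_C = 1 / (2*pi*f*C)
    return 1 / (2 * math.pi * hz * farads)

f = 100.0  # desired crossover frequency, Hz
print(inductive_reactance(12e-3, f))    # ~7.5 ohms: 12 mH series coil vs. an 8 ohm woofer coil
print(capacitive_reactance(200e-6, f))  # ~8.0 ohms: 200 uF series cap vs. an 8 ohm full range
# Exact first-order values for 8 ohms at 100 Hz:
print(8 / (2 * math.pi * f))            # L = R/(2*pi*f), about 12.7 mH
print(1 / (2 * math.pi * f * 8))        # C = 1/(2*pi*f*R), about 199 uF

[So the 200 µF figure is essentially right for the full-range speakers, and 12 mH is just shy of the exact 12.7 mH value for the woofer.]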
I assume when you say short, you're saying that ...uh, an escape route for electrons to flow ...so they (high frequencies) would flow through the speaker (and not the cap). So
similar trend is observed for $f_{esc}$, further highlighting the role of dust and gas mass in the escape of Ly$\alpha$ photons. At $z\sim 2.2$, \citet{mathee2016} also observe the $f_{esc}$ anti-correlation from Figure \ref{fig:6}, although only when stacking galaxies. Their massive objects showing high $f_{esc}$, which they associate with dusty gas outflows, seem to lie below the M$_*$-SFR sequence at $z\sim 2$. These inferences, combined with the significantly higher $f_{esc}$ they measure for larger apertures, make sense in a Ly$\alpha$ diffuse halo scheme, which we do not observe due to our aperture size. Studies on the dependence of $f_{esc}$ on M$_*$ also explore $1.9<z<3.6$ (\citealt{hagen2014}). Their $W_{Ly\alpha}$-selected LAEs follow a trend similar to the one we find at $z\sim 4$. In summary, the evidence for an anti-correlation between $W_{Ly\alpha}$ (or $f_{esc}$) and M$_*$ is significant, but the scatter seems to depend on measurement methodology and sample selection. \begin{figure*} \centering \includegraphics[width=7in]{fig5.pdf} \caption{Detail of the M$_*$ and SFR of our galaxies. Left: M$_*$-SFR plane location of our objects. Non-detections are shown in gray. Low, medium, and high $W_{Ly\alpha}$ detections are plotted as cyan circles, blue diamonds, and magenta pentagons, respectively. The values of M$_*$ and SFR we show come from SED fitting of 3D-HST photometry using FAST under the assumption of constant SFHs. The dashed lines indicate the minimum and maximum M$_*$ that galaxies can reach assuming that they have been forming stars at a constant rate. Right: M$_*$ (top) and SFR (bottom) histograms of the 625 galaxies in our sample (grey) and 120 detections (black). For better comparison, we divide the histogram values for the full sample by five. Note that inferences on the fraction of detections must take into account that incompleteness is dependent on both M$_*$ and SFR. In this paper, we use the exponential W$_{Ly\alpha}$ distribution normalization ($A$; Equations \ref{mass1} and \ref{mass2}) as a proxy for the dependence of the Ly$\alpha$ fraction on M$_*$. Our measurements are more than $2\sigma$ consistent with a decrease in the fraction going to higher M$_*$ (see Figure \ref{fig:7}).} \label{fig:5} \end{figure*} Inferences on the M$_*$ distribution of LAEs are not as evident. \citet{hagen2014} do not find their $1.9<z<3.6$ Ly$\alpha$ luminosity-selected LAE number distribution to depend on M$_*$. Their results agree with the $z\sim 3.1$ narrowband-selected survey of \citet{mclinden2014}. However, we show in Figure \ref{fig:6} that spectroscopic completeness is not independent of M$_*$. Therefore, most LAEs survey follow-ups could have higher incompleteness toward lower M$_*$. Since our Bayesian analysis takes into account our completeness for every object, we can test the significance of this claim. The coefficient $A_{\mbox{\scriptsize M}_*}$ in Equation (\ref{mass1}) represents the exponential fraction of LAE dependence on M$_*$. As evidenced by Equation (\ref{mass12}), our measurements are more than $2\sigma$ consistent with a decrease in the fraction going to higher M$_*$. In the scheme of an exponential model, this translates to an LAE distribution dominated by lower M$_*$ galaxies. This result complements the much more significant anti-correlation between $W_{0}$ and M$_*$ we find (see Equation \ref{mass22}). 
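As a concrete illustration of the kind of model encoded in Equations (\ref{mass1}) and (\ref{mass2}), the short Python sketch below draws rest-frame $W_{Ly\alpha}$ values from an exponential distribution whose normalization $A$ (the fraction of emitters) and scale $W_0$ decrease with M$_*$. The linear-in-$\log$M$_*$ parameterization and all numerical coefficients are illustrative placeholders, not the fitted values discussed in the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ew_distribution_params(log_mstar, A0=0.8, dA=-0.2,
                           W0_ref=60.0, dW=-0.5, log_mref=9.0):
    # Toy M*-dependent parameters of the exponential W_Lya model:
    # p(W) = A/W0 * exp(-W/W0) for W > 0, plus a fraction (1 - A)
    # of non-emitters. The coefficients are illustrative only.
    x = log_mstar - log_mref
    A = np.clip(A0 + dA * x, 0.0, 1.0)   # fraction of Lya emitters
    W0 = W0_ref * 10.0 ** (dW * x)       # e-folding scale in Angstrom
    return A, W0

def sample_ew(log_mstar, n=1):
    A, W0 = ew_distribution_params(log_mstar)
    is_emitter = rng.random(n) < A
    return np.where(is_emitter, rng.exponential(W0, size=n), 0.0)

# Example: EWs for a low-mass and a high-mass galaxy
print(sample_ew(8.5, 5))
print(sample_ew(10.5, 5))
\end{verbatim}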
\begin{figure} \centering \includegraphics[width=3.4in]{fig6.pdf} \caption[Ly$\alpha$ Equivalent Width and Escape Fraction dependence on Stellar Mass]{Top: plot of our $W_{Ly\alpha}$ as a function of M$_*$. Bottom: Plot of our Ly$\alpha$ escape fractions as a function of M$_*$. The detections corresponding to the low-, medium-, and high-mass subsamples are shown as blue circles, green diamonds, and red pentagons, respectively. We observe both Ly$\alpha$ diagnostics, $W_{Ly\alpha}$ and $f_{esc}$, to anti-correlate with M$_*$. The upper limits associated with non-detections are plotted as gray triangles. We use these non-detections to give a rough estimate of our 40\% completeness (dashed). Note how our completeness is not independent of M$_*$, which has to be considered when making inferences on the M$_*$ distribution and characteristic $W_{Ly\alpha}$ ($f_{esc}$) of LAEs. We also show in yellow the typical rest-frame selection threshold of $W_{Ly\alpha}>20$\AA \ imposed by narrowband surveys (e.g. \citealt{gronwall2007,guaita2010,adams2011}).} \label{fig:6} \end{figure} \begin{figure*} \centering \includegraphics[width=7in]{fig7.pdf} \caption{Rest-frame $W_{Ly\alpha}$ distributions of the low-, medium-, and high-mass bins, from left to right. We use Monte Carlo simulations to characterize the exponential $W_{Ly\alpha}$ distribution dependence on M$_*$ following Equations (\ref{mass1}) and (\ref{mass2}). We can then simulate $W_{Ly\alpha}$ distributions for every subsample and obtain 1$\sigma$ constraints (shaded contours). Keep in mind that these constraints are given by collections of curves resulting from our Monte Carlo simulations. Our best results correspond to the median distribution we expect by considering every object in the bin. We also plot as dotted lines the completeness-corrected counterparts of the median distributions. In contrast to \citet{oyarzun2016}, where the modeling is based on the characteristic M$_*$ of each bin, the results here take into account the M$_*$ of every galaxy. As we show in Equations (\ref{mass12}) and (\ref{mass22}), we find both parameters, the fraction of Ly$\alpha$-emitting galaxies and characteristic $W_{Ly\alpha}$ scale, to anti-correlate with M$_*$ with significances higher than 2$\sigma$ and 6$\sigma$, respectively.} \label{fig:7} \end{figure*} \begin{figure} \centering \includegraphics[width=3.4in]{fig9.pdf} \caption{Plot of our $W_{Ly\alpha}$ (top panel) and $f_{esc}$ (bottom panel) as a function of intrinsic SFR. Detections corresponding to our low-, medium-, and high-mass bins are shown as the blue circles, green diamonds, and red pentagons, respectively. We find both measurements to anti-correlate with the SFR we measure from SED fitting. For the plot in the bottom panel, we use the same value for SFRs when determining the escape fractions and galaxy SFRs. For reference, we show our non-detection upper limits as gray triangles. We use these non-detections to give a rough estimate of our 40\% completeness (dashed), confirming how strongly our sensitivity depends on SFR.} \label{fig:9} \end{figure} \subsection{SFR} \label{5.2} High-redshift galaxies have been observed to follow a correlation between SFR and M$_*$, known as the star-forming main sequence (e.g., \citealt{keres2005,finlator2007,stark2009,gonzalez2011,whitaker2012}; see our Figure \ref{fig:5}). In terms of the underlying physics, more massive objects dominate gas accretion in their neighborhood, feeding and triggering star-formation. 
Such gas infall seems to dominate over galaxy growth at high redshift (\citealt{keres2005,finlator2007}). This scheme implies that more massive objects form stars at higher rates, at least down to our observational limitations and modeling of high-redshift ISM. Given our results on M$_*$ from the previous section, we expect similar trends between $W_{Ly\alpha}$ ($f_{esc}$) and SFR (Figure \ref{fig:5}). Even more, star-forming galaxies have a higher neutral gas mass, which can hamper the escape of Ly$\alpha$ photons from galaxies (\citealt{verhamme2006}). In fact, it has also been suggested that photoelectric absorption rules Ly$\alpha$ depletion, even over dust attenuation (\citealt{reddy2016}). In this section, we explore any Ly$\alpha$ dependence on SFR within our dataset. We remark that our SFRs come from SED fitting of 3D-HST photometry using FAST (see Section \ref{2.3}), i.e., they have typical associated timescales of 100 Myr (\citealt{kennicutt1998}). We stress that our derived SFRs differ from 3D-HST SFRs, since our calculation assumes cSFHs instead of exponentially declining SFHs (\citealt{skelton2014}). We show the $W_{Ly\alpha}$ and $f_{esc}$ dependence on SFR in Figures \ref{fig:5} and \ref{fig:9}. In the latter, we include upper limits for our non-detections to give an insight into how our incompleteness depends on SFR. A clear anti-correlation between $W_{Ly\alpha}$ ($f_{esc}$) and SFR is observed. These results come as no surprise, as they have been previously reported. Most studies of $W_{Ly\alpha}$ dependence on SFR involve uncorrected SFRs (\citealt{pettini2002,shapley2003,yamada2005,gronwall2007,tapken2007,ouchi2008}; all compiled in \citealt{verhamme2008}). Even without dust correction, the anti-correlation is still present in these studies (refer to Figure 19 in \citealt{verhamme2008} and Section \ref{5.4} of this work). Based on a $z\sim 2$ H$\alpha$ emitters sample, \citet{mathee2016} also observe a clear anti-correlation between Ly$\alpha$ $f_{esc}$ and SFR. Interestingly, they do not only observe such a trend in their individual objects, but likewise on their stacks when using different apertures (galaxy diameters of 12 and 24 kpc). As their dataset includes H$\alpha$ fluxes, they can recover SFRs and $f_{esc}$ using H$\alpha$ luminosities. The fact that they observe similar trends with such a different sample suggests that the anti-correlation between $W_{Ly\alpha}$ ($f_{esc}$) and SFR is not only independent of redshift, but also observational constraints like aperture and methodology for recovering SFRs. Their comparison, however,
\bar{\nabla}_I(\lambda)).\end{align*} By \cite[Lemma 2.2 (1)]{A2}, we have $R\uHom(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\lambda)) = 0$ as desired. Parts (2)-(4): Let $V=R\uHom(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\mu))$. Lemma \ref{lem:vectorbundlehelp} (2) gives an isomorphism $R\uHom(\mathcal{V}_I(\mu), \bar{\nabla}_I(\mu)\langle\delta_\mu\rangle)\cong \mathbbm{k}$, so a similar argument as above along with part (1) imply there are $n_1, \ldots, n_k >0$ so that $\mathbbm{k} \in V\star V\langle n_1\rangle \star\cdots\star V\langle n_k\rangle$. Thus, \cite[Lemma 2.2 (2)]{A2} yields parts (2), (3), and (4). That is, for $i<0$, $\co^i(V)=\uHom^i(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\mu)) = 0$. We have $\co^0(V)=\uHom(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\mu)) \cong \mathbbm{k}$ so in particular, $\Hom(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\mu)\langle m\rangle) = 0$ for $m\neq 0$. For $i>0$, $\co^i(V) = \uHom^i(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\mu))$ is concentrated in positive degrees, so for $m\leq 0$, we have $\Hom^i(\bar{\nabla}_I(\mu), \bar{\nabla}_I(\mu)\langle m\rangle) = 0$. \end{proof} \begin{remark} A similar strategy as in this section using the partial order $\wtles_I$ on $\mathbb{X}^+_I$ proves the collection $(A_I(\lambda), \lambda\in\mathbb{X}^+_I)$ is also graded quasi-exceptional. However, this partially ordered set cannot be refined to be isomorphic to $(\mathbb{Z}_{\geq}, \leq)$ (except when $I = \Sigma$), so the results of \cite{BEZ2} will not apply to give an associated t-structure. \end{remark} By the same techniques, the proper standard objects also form a quasi-exceptional collection with respect to the opposite order on $\mathbb{X}^+_I$. \begin{prop} \label{prop:stdqex}The objects $(\bar{\Delta}_I(\lambda), \lambda\in\mathbb{X}^+_I)$ satisfy: \begin{enumerate} \item $R\uHom(\bar{\Delta}_I(\lambda), \bar{\Delta}_I(\mu)) = 0$ for all $\mu<\lambda$, \item $\uHom^i(\bar{\Delta}_I(\mu), \bar{\Delta}_I(\mu)) = 0$ for $i<0$, \item $\uHom(\bar{\Delta}_I(\mu), \bar{\Delta}_I(\mu)) \cong \mathbbm{k}$, \item If $i>0$ and $n\leq 0$, then $\Hom^i (\bar{\Delta}_I(\mu), \bar{\Delta}_I(\mu)\langle n \rangle) = 0$. \end{enumerate} \end{prop} \begin{cor} \label{cor:mut} Let $\lambda\in\mathbb{X}^+_I$. \begin{enumerate} \item $\mathrm{D}^I_{< \lambda}$ coincides with the triangulated category generated by the collection $(\bar{\nabla}_I(\gamma)\langle m\rangle,\gamma<\lambda \textup{ with } \gamma\in\mathbb{X}^+_I, m\in\mathbb{Z})$. \item $\mathrm{D}^I_{< \lambda}$ coincides with the triangulated category generated by the collection $(\bar{\Delta}_I(\gamma)\langle m\rangle, \gamma<\lambda \textup{ with } \gamma\in\mathbb{X}^+_I, m\in\mathbb{Z})$. \item For all $\mathcal{G}\in \mathrm{D}^I_{< \lambda}$, we have $\Hom(\mathcal{G}, \bar{\nabla}_I(\lambda)) = \Hom(\bar{\Delta}_I(\lambda), \mathcal{G}) = 0$. \end{enumerate} \end{cor} \begin{proof} Part (1) can be proven by induction on $(\mathbb{X}^+_I, \leq)$ using the first distinguished triangle in Lemma \ref{lem:dtri}. Part (2) is similar. Now, part (3) follows from the first two parts along with part (1) of Propositions \ref{prop:cosqex} and \ref{prop:stdqex}. \end{proof} \begin{remark} The distinguished triangles in Lemma \ref{lem:dtri} along with Corollary \ref{cor:mut} justify referring to the collection $(\bar{\nabla}_I(\lambda), \lambda\in\mathbb{X}^+_I)$ as the $\leq$-{\em{mutation}} of the collection $(A_I(\lambda), \lambda\in\mathbb{X}^+_I)$, although we were not able to perform mutation directly. 
\end{remark} For each $\mu\in\mathbb{X}^+_I$, we get a morphism \begin{equation}\label{eq:pstdpcosmap}\bar{\Delta}_I(\mu)\rightarrow \bar{\nabla}_I(\mu)\end{equation} by composing the maps $\bar{\Delta}_I(\mu)\rightarrow A_I(\mu)\langle -\delta_\mu\rangle$ and $A_I(\mu)\langle -\delta_\mu\rangle\rightarrow\bar{\nabla}_I(\mu)$ from the triangles in Lemma \ref{lem:dtri}. Next, we must confirm that our two quasi-exceptional sets above are dual to each other in the sense of \cite[2.2]{BEZ2}. See also \cite[Definition 2.6]{A2}. \begin{prop}\label{prop:dual} The collection of objects $(\bar{\Delta}_I(\lambda), \lambda\in\mathbb{X}^+_I)$ satisfy \begin{enumerate} \item $\bar{\Delta}_I(\lambda)\cong \bar{\nabla}_I(\lambda) \,\mathrm{ mod }\, \mathrm{D}^I_{<\lambda}$ \item If $\lambda>\mu$, then $R\uHom(\bar{\Delta}_I(\lambda), \bar{\nabla}_I(\mu)) = 0$ \end{enumerate} That is, the quasi-exceptional collection $(\bar{\nabla}_I(\lambda), \lambda\in\mathbb{X}^+_I)$ is dualizable with dual collection $(\bar{\Delta}_I(\lambda), \lambda\in\mathbb{X}^+_I)$. \end{prop} \begin{proof} Part (1) is an immediate consequence of Lemma \ref{lem:dtri}. For part (2), the first distinguished triangle in Lemma \ref{lem:dtri} and that $\mu<\lambda$ imply that $\bar{\nabla}_I(\mu)\in\mathrm{D}^I_{<\lambda}$. Hence, the required vanishing follows from Corollary \ref{cor:mut} part (3). \end{proof} Now we come to the main theorem of this section which defines the {\it exotic} t-structure on $\mathrm{D}^\mathrm{b}\mathrm{Coh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I)$. In the statement, we use the notation $\langle S \rangle$ for a set of objects $S$ to mean the full subcategory generated by extensions of objects in $S$. \begin{thm}\label{thm:exotict-str} There is a $t$-structure $(^{E}{\mathcal{D}}^{\leq 0}, ^{E}{\!\mathcal{D}}^{\geq 0})$ on $\mathcal{D}:=\mathrm{D}^\mathrm{b}\mathrm{Coh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I)$ given by \[^{E}{\mathcal{D}}^{\leq 0} = \langle\{ \bar{\Delta}_I(\lambda)\langle m\rangle[d] \,|\, d\geq0, \lambda\in\mathbb{X}^+_I, m\in\mathbb{Z}\} \rangle\] \[^{E}{\mathcal{D}}^{\geq 0} = \langle\{ \bar{\nabla}_I(\lambda)\langle m\rangle[d] \,|\, d\leq0, \lambda\in\mathbb{X}^+_I, m\in\mathbb{Z}\}\rangle.\] Moreover, $\upmu{I*}$ is t-exact with respect to these exotic t-structures, and the heart $\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I) \subset \mathrm{D}^\mathrm{b}\mathrm{Coh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I)$ is stable under $\langle 1 \rangle$ and contains $\bar{\nabla}_I(\lambda)$ and $\bar{\Delta}_I(\lambda)$ for all $\lambda\in\mathbb{X}^+_I$. \end{thm} \begin{proof}Propositions \ref{prop:cosqex} and Corollary \ref{prop:dual} yield that the objects $(\bar{\nabla}_I(\lambda), \mathbb{X}^+_I)$ form a dualizable graded quasi-exceptional collection with dual collection $(\bar{\Delta}_I(\lambda), \mathbb{X}^+_I)$. Moreover, Corollary \ref{lem:cosgen} says that they also generate our category, and our set $(\mathbb{X}^+_I, <)$ is well-ordered. Thus, \cite[Proposition 1]{BEZ2} proves we have a t-structure. (Technically, \cite[Proposition 1]{BEZ2} is considering the ungraded case, but the same argument works for the graded case as well, see \cite[Proposition 4]{BEZ}.) It's clear the heart is stable under $\langle 1\rangle$, and that $\upmu{I*}$ is t-exact by definition of the two t-structures and our definition of the proper (co) standards. 
Note that \cite[Proposition 1]{BEZ2} does not guarantee that proper standards and proper costandards are in the heart. However, it is known that $\nabla(\lambda)$ and $\Delta(\lambda)$ (for all $\lambda\in\mathbb{X}$) are in $\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}})$ by \cite[Corollary 3.10]{MR}, so the same is true here by exactness of $\upmu{I*}$. \end{proof} \begin{remark} \begin{enumerate} \item Exactness of $\upmu{I*}$ along with \cite[Proposition 3.12, (1)]{MR} imply the generalized Andersen--Jantzen sheaf $A_I(\lambda)$ is exotic for all $\lambda\in\mathbb{X}$. Similarly, $\mathcal{V}_I(\lambda)$ for $\lambda\in\mathbb{X}^+_I$ is exotic since $\mathcal{V}(\lambda)$ has a filtration by line bundles. \item It is easy to see that the functor $\mu_{I*}: \mathrm{D}^\mathrm{b}\mathrm{Coh}^{G\times\mathbb{G}_{\mathrm{m}}}(\widetilde{\mathcal{N}}^I)\rightarrow \mathrm{D}^\mathrm{b}\mathrm{Coh}^{G\times\mathbb{G}_{\mathrm{m}}}(\mathcal{N})$ is also exact taking exotic sheaves to perverse coherent sheaves. \item Lemma \ref{lem:inducedaj} proves that the ``induction" functor $\mathrm{D}^\mathrm{b}\mathrm{Coh}^{G\times\mathbb{G}_{\mathrm{m}}}(G\times^{P_I}\mathcal{N}_L)\rightarrow \mathrm{D}^\mathrm{b}\mathrm{Coh}^{G\times\mathbb{G}_{\mathrm{m}}}(\widetilde{\mathcal{N}}^I)$ is exact as well taking perverse coherent sheaves to exotic sheaves since the generalized Andersen--Jantzen sheaves are exotic. \end{enumerate} \end{remark} \section{Study of the heart}\label{Sec: heart} The t-structure in \cite[Proposition 1]{BEZ2} arising from a quasi-exceptional collection is obtained by gluing or recollement. For each $\lambda\in \mathbb{X}^+_I$, let $i_\lambda: \mathrm{D}^I_{<\lambda}\rightarrow\mathrm{D}^I_{\leq\lambda}$ and $\Pi_\lambda: \mathrm{D}^I_{\leq\lambda}\rightarrow \mathrm{D}^I_{\leq\lambda}/\mathrm{D}^I_{<\lambda}$ denote the natural inclusion and quotient functors. Then \cite[Lemma 4]{BEZ2} guarantees existence of left and right adjoints to both $i_\lambda$ and $\Pi_\lambda$ which we denote by $i^L_\lambda$, $i^R_\lambda$, $\Pi^L_\lambda$, and $\Pi^R_\lambda$. That is, for each $\lambda\in \wts^+_I$, we have a recollement diagram \[ \begin{tikzpicture} \node (a) at (0,0) {$\mathrm{D}^I_{<\lambda}$}; \node (b) at (4,0) {$\mathrm{D}^I_{\leq\lambda}$}; \node (c) at (8,0) {$\mathrm{D}^I_{\leq\lambda}/\mathrm{D}^I_{<\lambda}$}; \draw[-latex] (a) -- (b); \draw[-latex, above] (a) to node {$i_\lambda$} (b); \draw[-latex, above] (b) to node {$\Pi_\lambda$} (c); \draw[-latex,bend right=30, above] (b) to node {$i_\lambda^L$} (a); \draw[-latex,bend left=30, above] (b) to node {$i_\lambda^R$} (a); \draw[-latex,bend right=30, above] (c) to node {$\Pi_\lambda^L$} (b); \draw[-latex,bend left=30, above] (c) to node {$\Pi_\lambda^R$} (b); \end{tikzpicture} \] Each of the above categories has induced compatible t-structures such that $i_\lambda, \Pi_\lambda$ are exact, $i^R_\lambda, \Pi_\lambda^R$ are left exact, and $i^L_\lambda, \Pi_\lambda^L$ are right exact, see \cite[Section 1.4, Proposition 1.4.16]{BBD}. 
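For later use, we record the standard adjunction triangles attached to such a recollement (these are the usual ones of \cite[Section 1.4]{BBD}, written in the notation above): for any $\mathcal{F}\in\mathrm{D}^I_{\leq\lambda}$, the units and counits of the four adjunctions fit into functorial distinguished triangles
\[
\Pi^L_\lambda\Pi_\lambda\mathcal{F}\rightarrow\mathcal{F}\rightarrow i_\lambda i^L_\lambda\mathcal{F}\rightarrow\Pi^L_\lambda\Pi_\lambda\mathcal{F}[1],
\qquad
i_\lambda i^R_\lambda\mathcal{F}\rightarrow\mathcal{F}\rightarrow\Pi^R_\lambda\Pi_\lambda\mathcal{F}\rightarrow i_\lambda i^R_\lambda\mathcal{F}[1].
\]
These triangles produce the objects $\Pi^R_\lambda\Pi_\lambda(-)$ and $\Pi^L_\lambda\Pi_\lambda(-)$ appearing just below.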
Now, Lemma \ref{lem:dtri} and Corollary \ref{cor:mut} part (3) imply \[\bar{\nabla}_I(\lambda)\cong \Pi_\lambda^R\Pi_\lambda(A_I(\lambda)\langle-\delta_\lambda\rangle) \hspace{.5cm}\textup{ and }\hspace{.5cm}\bar{\Delta}_I(\lambda)\cong\Pi_\lambda^L\Pi_\lambda (A_I(\lambda)\langle -\delta_\lambda\rangle).\] For each $\lambda\in \mathbb{X}^+_I$, we also define objects \begin{equation} \label{eq:(co)std}\nabla_I(\lambda):= \Pi_\lambda^R\Pi_\lambda(\mathcal{V}_I(\lambda)\langle-2\delta^I_{w_I\lambda}-\delta_\lambda\rangle)\hspace{.5cm}\textup{ and }\hspace{.5cm}\Delta_I(\lambda):=\Pi_\lambda^L\Pi_\lambda (\mathcal{V}_I(\lambda)\langle -\delta_\lambda\rangle),\end{equation} and refer to them as costandard and standard objects respectively. Recall also the natural map from \eqref{eq:pstdpcosmap} $\bar{\Delta}_I(\lambda)\rightarrow \bar{\nabla}_I(\lambda)$; denote the image by $\mathscr{L}_I(\lambda)$. Then \cite[Proposition 2]{BEZ2} guarantees that $\mathscr{L}_I(\lambda)$ is irreducible, and any irreducible in $\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I)$ is isomorphic to $\mathscr{L}_I(\lambda)\langle m\rangle$ for some $\lambda\in\mathbb{X}^+_I, m\in\mathbb{Z}$. The following proposition shows $\upmu{I*}:\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}})\rightarrow\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I)$ factors through a Serre quotient. \begin{prop} The exact functor $\upmu{I*}:\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}})\rightarrow\mathrm{ExCoh}^{G \times \mathbb{G}_m}(\widetilde{\mathcal{N}}^I)$ satisfies $\upmu{I*}(\mathscr{L}(\lambda)) = 0$ in case $\lambda\not\in-\mathbb{X}^+_I$. \end{prop} \begin{proof} By definition, the object $\mathscr{L}(\lambda)$ is image of a nonzero map $h: \Delta(\lambda)\rightarrow \nabla(\lambda)$. Hence, using Lemma \ref{lem:stdtostd}, $\upmu{I*}(\mathscr{L}(\lambda))$ is image of the map \[\upmu{I*}(h): \bar{\Delta}(\dm_I(\lambda))\la2\delta^I_{\mathrm{ant}_I(\lambda)}-\delta^I_\lambda\rangle\rightarrow \bar{\nabla}(\dm_I(\lambda))\langle\delta^I_\lambda\rangle,\] where $\mathrm{ant}_I(\lambda)$ denotes the unique element in $-\mathbb{X}^+_I\cap W_I(\lambda)$. Properties of dualizable quasi-exceptional collections (namely Proposition \ref{prop:stdqex} part (3) together with Proposition \ref{prop:dual} part (1)) guarantee that $\Hom(\bar{\Delta}(\dm_I(\lambda))\langle m\rangle, \bar{\nabla}(\dm_I(\lambda))\langle n\rangle)=0$ unless $m=n$. In other words, $\upmu{I*}(\mathscr{L}(\lambda)) = 0$ unless $\lambda\in-\mathbb{X}^+_I$. \end{proof} \subsection{Properly Stratified Categories} Suppose $\mathcal{A}$ is a $\mathbbm{k}$-linear abelian category. Assume $\mathcal{A}$ is \textit{graded}, that is, there is an automorphism $\la1\rangle:\mathcal{A}\rightarrow\mathcal{A}$ which acts as a \textit{shift the grading} functor. In particular, $\langle 1\rangle$ permutes $\mathrm{Irr}(\mathcal{A})$, the set of (isomorphism classes of) irreducible objects in $\mathcal{A}$. Let $\Omega = \mathrm{Irr}(\mathcal{A})/\mathbb{Z}$, and for each $\gamma\in\Omega$, choose a representative simple object $L_\gamma$ in $\mathcal{A}$. Then any simple object in $\mathcal{A}$ is isomorphic to $L_\gamma\langle n\rangle$ for some $\gamma\in\Omega$ and $n\in\mathbb{Z}$. Assume $\Omega$ is partially ordered with respect to $\leq$, and assume for any $\gamma\in\Omega$, the set $\{\xi\in\Omega | \xi\leq\gamma\}$ is finite. 
Given a finite order ideal $\Gamma\subset \Omega$, we let $\mathcal{A}_\Gamma\subset\mathcal{A}$ denote the Serre subcategory of $\mathcal{A}$ generated by the collection of simple objects $\{L_\gamma\langle n\rangle | \gamma\in\Gamma, n\in\mathbb{Z}\}$. We simplify notation in the special case $\mathcal{A}_{\leq \gamma} := \mathcal{A}_{\{\xi\in\Omega | \xi\leq \gamma\}}$.
## Ensemble Methods

### Framework for Ensemble Learning

You have several methods for melding results from many weak learners into one high-quality ensemble predictor. These methods closely follow the same syntax, so you can try different methods with minor changes in your commands.

You can create an ensemble for classification or regression using fitensemble. To train an ensemble using fitensemble, use this syntax:

ens = fitensemble(X,Y,model,numberens,learners)

• X is the matrix of data. Each row contains one observation, and each column contains one predictor variable.
• Y is the vector of responses, with the same number of observations as the rows in X.
• model is a character vector, such as 'bag', naming the type of ensemble.
• numberens is the number of weak learners in ens from each element of learners. So the number of elements in ens is numberens times the number of elements in learners.
• learners is either a character vector, such as 'tree', naming a weak learner, a weak learner template, or a cell array of such templates.

For all classification or nonlinear regression problems, follow these steps to create an ensemble:

#### Put Predictor Data in a Matrix

All supervised learning methods start with a data matrix, usually called X in this documentation. Each row of X represents one observation. Each column of X represents one variable, or predictor.

#### Prepare the Response Data

You can use a wide variety of data types for response data.

• For regression ensembles, Y must be a numeric vector with the same number of elements as the number of rows of X.
• For classification ensembles, Y can be any of the following data types. The list also gives the method of including missing entries.
  • Numeric vector — missing entry: NaN
  • Categorical vector — missing entry: <undefined>
  • Character array — missing entry: row of spaces
  • Cell array of character vectors — missing entry: ''
  • Logical vector — missing entry: (not possible to represent)

fitensemble ignores missing values in Y when creating an ensemble.

For example, suppose your response data consists of three observations in the following order: true, false, true. You could express Y as:

• [1;0;1] (numeric vector)
• nominal({'true','false','true'}) (categorical vector)
• [true;false;true] (logical vector)
• ['true ';'false';'true '] (character array, padded with spaces so each row has the same length)
• {'true','false','true'} (cell array of character vectors)

Use whichever data type is most convenient. Because you cannot represent missing values with logical entries, do not use logical entries when you have missing values in Y.

#### Choose an Applicable Ensemble Method

fitensemble uses one of these algorithms to create an ensemble.

• For classification with two classes:
  • 'AdaBoostM1'
  • 'LogitBoost'
  • 'GentleBoost'
  • 'RobustBoost' (requires Optimization Toolbox™)
  • 'LPBoost' (requires Optimization Toolbox)
  • 'TotalBoost' (requires Optimization Toolbox)
  • 'RUSBoost'
  • 'Subspace'
  • 'Bag'
• For classification with three or more classes:
  • 'AdaBoostM2'
  • 'LPBoost' (requires Optimization Toolbox)
  • 'TotalBoost' (requires Optimization Toolbox)
  • 'RUSBoost'
  • 'Subspace'
  • 'Bag'
• For regression:
  • 'LSBoost'
  • 'Bag'

'Bag' applies to all methods.
When using 'Bag', indicate whether you want a classifier or regressor with the type name-value pair set to 'classification' or 'regression'.

For descriptions of the various algorithms, see Ensemble Algorithms.

This table lists characteristics of the various algorithms. In the table titles:

• Regress.: Regression
• Classif.: Classification
• Preds.: Predictors
• Imbalance: Good for imbalanced data (one class has many more observations than the other)
• Stop: Algorithm self-terminates
• Sparse: Requires fewer weak learners than other ensemble algorithms

| Algorithm | Regress. | Binary Classif. | Binary Classif. Multi-Level Preds. | Classif. 3+ Classes | Class Imbalance | Stop | Sparse |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bag | × | × |  | × |  |  |  |
| LogitBoost |  | × | × |  |  |  |  |
| GentleBoost |  | × | × |  |  |  |  |
| RobustBoost |  | × |  |  |  |  |  |
| LPBoost |  | × |  | × |  | × | × |
| TotalBoost |  | × |  | × |  | × | × |
| RUSBoost |  | × |  | × | × |  |  |
| LSBoost | × |  |  |  |  |  |  |
| Subspace |  | × |  | × |  |  |  |

RobustBoost, LPBoost, and TotalBoost require an Optimization Toolbox license. Try TotalBoost before LPBoost, as TotalBoost can be more robust.

Suggestions for Choosing an Appropriate Ensemble Algorithm

• Regression: Your choices are LSBoost or Bag. See General Characteristics of Ensemble Algorithms for the main differences between boosting and bagging.
• Binary Classification: Try AdaBoostM1 first, with these modifications:
  • Skewed data (many more observations of one class): RUSBoost
  • Categorical predictors with over 31 levels: LogitBoost or GentleBoost
  • Label noise (some training data has the wrong class): RobustBoost
  • Many observations: Avoid LPBoost, TotalBoost, and Bag
• Multiclass Classification: Try AdaBoostM2 first, with these modifications:
  • Skewed data (many more observations of one class): RUSBoost
  • Many observations: Avoid LPBoost, TotalBoost, and Bag

For details of the algorithms, see Ensemble Algorithms.

General Characteristics of Ensemble Algorithms

• Bag generally constructs deep trees. This construction is both time consuming and memory-intensive. This also leads to relatively slow predictions.
• Boost algorithms generally use very shallow trees. This construction uses relatively little time or memory. However, for effective predictions, boosted trees might need more ensemble members than bagged trees. Therefore it is not always clear which class of algorithms is superior.
• Bag can estimate the generalization error without additional cross validation. See oobLoss.
• Except for Subspace, all boosting and bagging algorithms are based on tree learners. Subspace can use either discriminant analysis or k-nearest neighbor learners.

For details of the characteristics of individual ensemble members, see Characteristics of Classification Algorithms.

#### Set the Number of Ensemble Members

Choosing the size of an ensemble involves balancing speed and accuracy.

• Larger ensembles take longer to train and to generate predictions.
• Some ensemble algorithms can become overtrained (inaccurate) when too large.

To set an appropriate size, consider starting with several dozen to several hundred members in an ensemble, training the ensemble, and then checking the ensemble quality, as in Test Ensemble Quality. If it appears that you need more members, add them using the resume method (classification) or the resume method (regression). Repeat until adding more members does not improve ensemble quality.

Tip: For classification, the LPBoost and TotalBoost algorithms are self-terminating, meaning you do not have to investigate the appropriate ensemble size. Try setting numberens to 500. The algorithms usually terminate with fewer members.
#### Prepare the Weak Learners

Currently the weak learner types are:

• 'Discriminant' (recommended for Subspace ensemble)
• 'KNN' (only for Subspace ensemble)
• 'Tree' (for any ensemble except Subspace)

There are two ways to set the weak learner type in the ensemble.

• To create an ensemble with default weak learner options, pass in the character vectors as the weak learner. For example:

        % or
        ens = fitensemble(X,Y,'Subspace',50,'KNN');

• To create an ensemble with nondefault weak learner options, create a nondefault weak learner using the appropriate template method. For example, if you have missing data, and want to use trees with surrogate splits for better accuracy:

        templ = templateTree('Surrogate','all');

To grow trees with leaves containing a number of observations that is at least 10% of the sample size:

    templ = templateTree('MinLeafSize',size(X,1)/10);

Alternatively, choose the maximal number of splits per tree:

    templ = templateTree('MaxNumSplits',4);

While you can give fitensemble a cell array of learner templates, the most common usage is to give just one weak learner template.

For examples using a template, see Train Ensemble With Unequal Classification Costs and Surrogate Splits.

Decision trees can handle NaN values in X. Such values are called "missing". If you have some missing values in a row of X, a decision tree finds optimal splits using nonmissing values only. If an entire row consists of NaN, fitensemble ignores that row. If you have data with a large fraction of missing values in X, use surrogate decision splits. For examples of surrogate splits, see Train Ensemble With Unequal Classification Costs and Surrogate Splits.

Common Settings for Tree Weak Learners

• The depth of a weak learner tree makes a difference for training time, memory usage, and predictive accuracy. You control the depth with these parameters:
  • MaxNumSplits: The maximal number of branch node splits is MaxNumSplits per tree. Set large values of MaxNumSplits to get deep trees. The default for bagging is size(X,1) - 1. The default for boosting is 1.
  • MinLeafSize: Each leaf has at least MinLeafSize observations. Set small values of MinLeafSize to get deep trees. The default is 1 for classification and 5 for regression.
  • MinParentSize: Each branch node in the tree has at least MinParentSize observations. Set small values
\section{Introduction}

Decades after their discovery, the fine details of the mechanisms behind jet formation and its connection to the accretion disc are still unclear. Revealing the disc-jet connection would help answer major open questions concerning accreting black holes of all sizes, their growth and the role they play in galaxy evolution. Conical compact jet models have been successful in reproducing the flat or slightly inverted radio spectra usually seen in X-ray binary sources \citep{Corbeletal2000, Fenderetal2000, CorbelFender2002}. However, they all require a dissipation process to compensate for the adiabatic losses \citep{BlandfordKonigl1979}. One proposed process is the conversion of jet kinetic energy to internal energy through internal shocks. Internal shock jet models have been proposed to model the multi-wavelength emission from $\gamma$-ray bursts \citep{ReesMeszaros1994, DaigneMochkovitch1998}, active galactic nuclei \citep{Rees1978a, Spadaetal2001, BottcherDermer2010} and microquasars \citep{KaiserSunyaevSpruit2000,JamilFenderKaiser2010, Malzac2013}. One key point of these models is that their resulting spectral energy distributions (SEDs) are very sensitive to the shape of the assumed fluctuations of the jet velocity \citep{Malzac2014}. \citet{Malzac2013} has shown that internal shocks powered by flicker noise fluctuations of the bulk Lorentz factor can entirely compensate for the adiabatic expansion losses. Interestingly, the X-ray power spectrum of X-ray binaries, which traces the variability of the accretion flow in the vicinity of the compact object, is close to a flicker noise process \citep{Lyubarskii1997, Kingetal2004, MayerPringle2006}.

\mbox{GX~{339-4}}\ is a recurrent X-ray transient and is believed to harbour a black hole in a binary system with a low-mass companion star. Although the black hole mass, the system inclination angle and the distance are still uncertain, they are constrained to lie between 5.8 and 10 $M_{\odot}$\ \citep{Hynesetal2003, Munozdaraisetal2008, Shidatsuetal2011}, $20^\circ$ and $50^\circ$ \citep{Milleretal2006, DoneDiazTrigo2010, Shidatsuetal2011}, and 6 and 15 kpc \citep{Hynesetal2004, Zdziarskietal2004, Shidatsuetal2011}, respectively. The source exhibits multi-wavelength variability on a broad range of timescales \citep{Motchetal1982, Fenderetal1999, Corbeletal2003, Dunnetal2008, Gandhi2009, Casellaetal2010, Corbeletal2013}. In addition, it shows evidence of relativistic jets \citep{Fenderetal1997,Corbeletal2000, Markoffetal2003, Gandhietal2008}.

The data we used in the present work are part of a multi-wavelength study of \mbox{GX~{339-4}} \citep{CadolleBeletal2011, Corbeletal2013}, and in particular of the first mid-infrared study of the source, published in \citet{Gandhietal2011} and performed on 2010 March 11. \mbox{GX~{339-4}}\ was observed with the \textit{Wide-field Infrared Survey Explorer} \citep[\textit{WISE};][]{Wrightetal2010} satellite in 4 bands ($1.36 \times 10^{13}$, $2.50 \times 10^{13}$, $6.52 \times 10^{13}$ and $8.82 \times 10^{13}$ Hz, i.e. 22, 12, 4.6 and 3.4~$\mu$m, respectively W4, W3, W2 and W1), at 13 epochs, sampled at multiples of the satellite orbital period of 95 minutes and with a shortest sampling interval of 11~s, when \textit{WISE}\ caught the source on two consecutive scans. Radio data were obtained with the \textit{Australia Telescope Compact Array} (\textit{ATCA}) during two days (closest to, but not simultaneous with, the \textit{WISE}\ data), on 2010 March 7 and 2010 March 14.
The mean fluxes are $9.1 \pm 0.1$ and $9.7 \pm 0.1$~mJy at 5.5 and 9 GHz, respectively. X-ray data were nearly simultaneous with \textit{WISE}, taken between epochs 12 and 13 with the \textit{Rossi X-ray Timing Explorer}\ (\textit{RXTE}) satellite.

\citet{Gandhietal2011} confirms the detection in the mid-infrared of a synchrotron break associated with the compact jet in \mbox{GX~{339-4}} \citep{CorbelFender2002}, and reports the first clear detection of its strong variability. This detection of the jet's intrinsic variability and the overall properties of \mbox{GX~{339-4}}\ make it an ideal source to test our model.

The objective of this paper is to determine whether an internal shock jet model driven by accretion flow variability can reproduce the spectral and timing observations of an X-ray binary source in the hard spectral state, which is known to be associated with compact jets. Our work differs from previous internal shock jet models in that we use the observed X-ray power spectral density (PSD) of the studied source, \mbox{GX~{339-4}}, to constrain the fluctuations of the bulk Lorentz factor $\gamma$ of the ejecta constituting the jet. In Section~\ref{sec:ishmodel}, we introduce the internal shock jet model used to perform our simulations and the assumptions chosen to model the source. Section~\ref{sec:sed} and Section~\ref{sec:timing} present the spectral and timing analyses carried out during this study, and the results obtained. We conclude this paper with a discussion of these results and suggestions for future developments in Section~\ref{sec:discussion}.

\section{Internal shock model}
\label{sec:ishmodel}

\citet{Malzac2014} presents a newly developed numerical code which simulates the hierarchical merging and the emission of the ejecta constituting a jet. In this model, a new shell of gas is ejected at each time step $\Delta t$, comparable to the dynamical timescale at $r_{dyn}$, the initial radius of the ejecta. The Lorentz factor of each created shell varies, depending on the time of ejection. Its fluctuation follows a specified PSD shape. Throughout the duration of the simulation, the injected shells -- and any subsequent shells resulting from mergers -- are tracked until they interact and merge with other ejecta. The ejecta lose internal energy via adiabatic losses when propagating outwards. However, during mergers, a fraction of their kinetic energy is converted into internal energy. The details of the physics and the description of the main parameters of the model are presented in the original paper.

The aim of this study is to investigate the possibility of reproducing the broad-band spectra and infrared light curves of \mbox{GX~{339-4}}\ measured in \citet{Gandhietal2011} with such a jet model, using the X-ray PSD as input for the fluctuations of the bulk Lorentz factor $\gamma$ of the jet. Following \citet{Gandhietal2011}, we take, as initial parameters representative of the source, a mass of the central object of 10 $M_{\odot}$\ and a distance of 8 kpc \citep{Zdziarskietal2004, Shidatsuetal2011}. We let our simulations run for $t_{simu} = 10^5\,\mathrm{s}$ \mbox{($\sim 1$ day)}, to allow the jet to develop. Due to the uncertainty in the inclination angle $\theta$, we examine different values between 20 and 50 degrees. We set the jet opening angle $\phi$ to $1^\circ$. We also simulate a counter-jet in this study; however, its contribution to the total spectral energy distribution is less than $10\%$ in the energy range of interest.
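The prescription that the fluctuation of the shell Lorentz factors ``follows a specified PSD shape'' can be illustrated with a simple random-phase Fourier surrogate. The sketch below is only illustrative and is not the code of \citet{Malzac2014}, which evolves the shells and their mergers in full; the 10~s time step and the mean Lorentz factor in the example call are arbitrary choices made here for illustration, while the fractional rms of 35.6 per cent anticipates the value quoted below.

\begin{verbatim}
import numpy as np

def shell_lorentz_factors(n_shells, dt, psd_shape, gamma_mean,
                          frac_rms, seed=0):
    # Draw a time series of shell Lorentz factors whose fluctuations in
    # (gamma - 1) follow the prescribed one-sided PSD shape, with a given
    # mean and fractional rms (random-phase surrogate method).
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n_shells, d=dt)
    amp = np.zeros_like(f)
    amp[1:] = np.sqrt(psd_shape(f[1:]))     # |FT| proportional to sqrt(PSD)
    spec = amp * np.exp(2j * np.pi * rng.random(f.size))
    spec[0] = 0.0                           # zero-mean fluctuation
    x = np.fft.irfft(spec, n=n_shells)
    x *= frac_rms * (gamma_mean - 1.0) / x.std()
    return gamma_mean + x                   # a real run must keep gamma > 1

# Example: flicker-noise (1/f) fluctuations of gamma - 1
gam = shell_lorentz_factors(2**16, 10.0, lambda f: 1.0 / f, 2.0, 0.356)
\end{verbatim}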
The total power available to the jet is an important parameter of the model. To estimate it, we follow the method described in \citet{Kordingetal2006a} and use equations (6) and (8) of that paper to relate the observed X-ray luminosity of an X-ray binary source to the power available to its jets:
\begin{align}
P_{jet} & \approx 1.57 \times 10^{37} \left(\frac{L_{2-10 \mathrm{keV}}}{10^{36} \mathrm{erg\,s}^{-1}}\right)^{0.5}\, \mathrm{erg\,s}^{-1}
\label{equ:pjet}
\end{align}
\citet{Gandhietal2011} reports an X-ray luminosity during the observations of $L_{2-10 \, \mathrm{keV}} = 2.0 \times 10^{37} \mathrm{erg}\, \mathrm{s}^{-1}$. As a consequence, we estimate the total power of the jet during the observations to be $P_{jet} \simeq 0.05 \, L_{Edd}$.

\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{xray_psd}
\caption[X-ray PSD]{X-ray power spectral density (PSD) of \mbox{GX~{339-4}} in the \mbox{3-20 keV} band, used to constrain the fluctuations of the bulk Lorentz factor of the ejecta. The PSD was extracted from the RXTE PCA observation with ObsId 95409-01-09-03, which was quasi-simultaneous with WISE (goodtime exposure $\sim$ 1360~s). Standard procedures were used for computing the PSD (for details, see section 4.2 of \citealt{Gandhietal2010}).}
\label{fig:psd}
\end{figure}

Finally, the most important parameter of our model is the distribution of the fluctuations of the jet's bulk Lorentz factor. We choose the fluctuations of the kinetic energy $\gamma - 1$ to follow the shape of the quasi-simultaneous X-ray PSD of \mbox{GX~{339-4}}, observed by \textit{RXTE} and shown in Fig.~\ref{fig:psd}, as the X-ray PSD is thought to trace the variability of the accretion flow in X-ray binary sources. Moreover, the fractional rms amplitude of $\gamma - 1$ is set equal to that of the X-ray PSD, in this case 35.6\%. By imposing that the fluctuations of the jet's bulk Lorentz factor follow the X-ray PSD, we connect the physics of the jet to the variability of the inner part of the accretion flow.

\begin{table}
\centering
\caption{Parameters explored}
\begin{tabular}{l | l}
Parameters & Range of values\\
\hline
Inclination angle, $\theta$ & $20^{\circ}$, $30^{\circ}$, $40^{\circ}$, $45^{\circ}$, $50^{\circ}$\\
Mean Lorentz factor, $\gamma_{mean}$ & 1.5, 2, 4\\
Electron equipartition, $\xi_{e}$ & 0.5, 1\\
Proton equipartition, $\xi_{p}$ & 0, 0.5, 1\\
Ejecta scheme & constant shell kinetic energy,\\
 & constant shell mass,\\
 & random shell mass\\
Shock propagation scheme & slow, fast
\end{tabular}
\label{tab:params}
\end{table}

In order to find the preferred set of parameters, i.e. the one which correctly reproduces the broad-band spectra of the source, we have investigated the following parameters of the model: the mean jet Lorentz factor $\gamma_{mean}$, the electron and proton equipartition factors $\xi_{e}$ and $\xi_{p}$, the ejecta scheme, and the shock propagation scheme. $\gamma_{mean}$ sets the overall amplitude of the spectra. The model provides three methods to generate the ejected shells: the ejecta have either a constant kinetic energy, or a constant mass, or their masses vary randomly.
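For concreteness, inserting the quoted 2-10 keV luminosity into equation~(\ref{equ:pjet}) gives
\begin{equation*}
P_{jet} \approx 1.57 \times 10^{37} \left(\frac{2.0 \times 10^{37}}{10^{36}}\right)^{0.5} \mathrm{erg\,s}^{-1} \approx 7 \times 10^{37}\, \mathrm{erg\,s}^{-1},
\end{equation*}
which, taking $L_{Edd} \simeq 1.26 \times 10^{39}\,\mathrm{erg\,s}^{-1}$ for a 10~$M_{\odot}$ black hole, corresponds to $P_{jet}/L_{Edd} \approx 0.05$--$0.06$, in line with the value adopted above.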
# NAG FL Interface f08hpf (zhbevx)

## 1 Purpose

f08hpf computes selected eigenvalues and, optionally, eigenvectors of a complex $n×n$ Hermitian band matrix $A$ of bandwidth $\left(2{k}_{d}+1\right)$. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues.

## 2 Specification

Fortran Interface

    Subroutine f08hpf ( jobz, range, uplo, n, kd, ab, ldab, q, ldq, vl, vu, il, iu, abstol, m, w, z, ldz, work, rwork, iwork, jfail, info)
    Integer, Intent (In) :: n, kd, ldab, ldq, il, iu, ldz
    Integer, Intent (Inout) :: jfail(*)
    Integer, Intent (Out) :: m, iwork(5*n), info
    Real (Kind=nag_wp), Intent (In) :: vl, vu, abstol
    Real (Kind=nag_wp), Intent (Out) :: w(n), rwork(7*n)
    Complex (Kind=nag_wp), Intent (Inout) :: ab(ldab,*), q(ldq,*), z(ldz,*)
    Complex (Kind=nag_wp), Intent (Out) :: work(n)
    Character (1), Intent (In) :: jobz, range, uplo

C Header Interface

    #include <nag.h>
    void f08hpf_ (const char *jobz, const char *range, const char *uplo, const Integer *n, const Integer *kd, Complex ab[], const Integer *ldab, Complex q[], const Integer *ldq, const double *vl, const double *vu, const Integer *il, const Integer *iu, const double *abstol, Integer *m, double w[], Complex z[], const Integer *ldz, Complex work[], double rwork[], Integer iwork[], Integer jfail[], Integer *info, const Charlen length_jobz, const Charlen length_range, const Charlen length_uplo)

The routine may be called by the names f08hpf, nagf_lapackeig_zhbevx or its LAPACK name zhbevx.

## 3 Description

The Hermitian band matrix $A$ is first reduced to real tridiagonal form, using unitary similarity transformations. The required eigenvalues and eigenvectors are then computed from the tridiagonal matrix; the method used depends upon whether all, or selected, eigenvalues and eigenvectors are required.

## 4 References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia https://www.netlib.org/lapack/lug

Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912

Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore

## 5 Arguments

1: $\mathbf{jobz}$ Character(1) Input

On entry: indicates whether eigenvectors are computed.

${\mathbf{jobz}}=\text{'N'}$: only eigenvalues are computed.

${\mathbf{jobz}}=\text{'V'}$: eigenvalues and eigenvectors are computed.

Constraint: ${\mathbf{jobz}}=\text{'N'}$ or $\text{'V'}$.

2: $\mathbf{range}$ Character(1) Input

On entry: if ${\mathbf{range}}=\text{'A'}$, all eigenvalues will be found. If ${\mathbf{range}}=\text{'V'}$, all eigenvalues in the half-open interval $\left({\mathbf{vl}},{\mathbf{vu}}\right]$ will be found. If ${\mathbf{range}}=\text{'I'}$, the ilth to iuth eigenvalues will be found.

Constraint: ${\mathbf{range}}=\text{'A'}$, $\text{'V'}$ or $\text{'I'}$.

3: $\mathbf{uplo}$ Character(1) Input

On entry: if ${\mathbf{uplo}}=\text{'U'}$, the upper triangular part of $A$ is stored. If ${\mathbf{uplo}}=\text{'L'}$, the lower triangular part of $A$ is stored.

Constraint: ${\mathbf{uplo}}=\text{'U'}$ or $\text{'L'}$.

4: $\mathbf{n}$ Integer Input

On entry: $n$, the order of the matrix $A$.

Constraint: ${\mathbf{n}}\ge 0$.

5: $\mathbf{kd}$ Integer Input

On entry: if ${\mathbf{uplo}}=\text{'U'}$, the number of superdiagonals, ${k}_{d}$, of the matrix $A$.
If ${\mathbf{uplo}}=\text{'L'}$, the number of subdiagonals, ${k}_{d}$, of the matrix $A$. Constraint: ${\mathbf{kd}}\ge 0$. 6: $\mathbf{ab}\left({\mathbf{ldab}},*\right)$Complex (Kind=nag_wp) array Input/Output Note: the second dimension of the array ab must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$. On entry: the upper or lower triangle of the $n×n$ Hermitian band matrix $A$. The matrix is stored in rows $1$ to ${k}_{d}+1$, more precisely, • if ${\mathbf{uplo}}=\text{'U'}$, the elements of the upper triangle of $A$ within the band must be stored with element ${A}_{ij}$ in ${\mathbf{ab}}\left({k}_{d}+1+i-j,j\right)\text{​ for ​}\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,j-{k}_{d}\right)\le i\le j$; • if ${\mathbf{uplo}}=\text{'L'}$, the elements of the lower triangle of $A$ within the band must be stored with element ${A}_{ij}$ in ${\mathbf{ab}}\left(1+i-j,j\right)\text{​ for ​}j\le i\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,j+{k}_{d}\right)\text{.}$ On exit: ab is overwritten by values generated during the reduction to tridiagonal form. The first superdiagonal or subdiagonal and the diagonal of the tridiagonal matrix $T$ are returned in ab using the same storage format as described above. 7: $\mathbf{ldab}$Integer Input On entry: the first dimension of the array ab as declared in the (sub)program from which f08hpf is called. Constraint: ${\mathbf{ldab}}\ge {\mathbf{kd}}+1$. 8: $\mathbf{q}\left({\mathbf{ldq}},*\right)$Complex (Kind=nag_wp) array Output Note: the second dimension of the array q must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$ if ${\mathbf{jobz}}=\text{'V'}$, and at least $1$ otherwise. On exit: if ${\mathbf{jobz}}=\text{'V'}$, the $n×n$ unitary matrix used in the reduction to tridiagonal form. If ${\mathbf{jobz}}=\text{'N'}$, q is not referenced. 9: $\mathbf{ldq}$Integer Input On entry: the first dimension of the array q as declared in the (sub)program from which f08hpf is called. Constraints: • if ${\mathbf{jobz}}=\text{'V'}$, ${\mathbf{ldq}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$; • otherwise ${\mathbf{ldq}}\ge 1$. 10: $\mathbf{vl}$Real (Kind=nag_wp) Input 11: $\mathbf{vu}$Real (Kind=nag_wp) Input On entry: if ${\mathbf{range}}=\text{'V'}$, the lower and upper bounds of the interval to be searched for eigenvalues. If ${\mathbf{range}}=\text{'A'}$ or $\text{'I'}$, vl and vu are not referenced. Constraint: if ${\mathbf{range}}=\text{'V'}$, ${\mathbf{vl}}<{\mathbf{vu}}$. 12: $\mathbf{il}$Integer Input 13: $\mathbf{iu}$Integer Input On entry: if ${\mathbf{range}}=\text{'I'}$, il and iu specify the indices (in ascending order) of the smallest and largest eigenvalues to be returned, respectively. If ${\mathbf{range}}=\text{'A'}$ or $\text{'V'}$, il and iu are not referenced. Constraints: • if ${\mathbf{range}}=\text{'I'}$ and ${\mathbf{n}}=0$, ${\mathbf{il}}=1$ and ${\mathbf{iu}}=0$; • if ${\mathbf{range}}=\text{'I'}$ and ${\mathbf{n}}>0$, $1\le {\mathbf{il}}\le {\mathbf{iu}}\le {\mathbf{n}}$. 14: $\mathbf{abstol}$Real (Kind=nag_wp) Input On entry: the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as converged when it is determined to lie in an interval $\left[a,b\right]$ of width less than or equal to $abstol+ε max(|a|,|b|) ,$ where $\epsilon$ is the machine precision. 
If abstol is less than or equal to zero, then $\epsilon {‖T‖}_{1}$ will be used in its place, where $T$ is the tridiagonal matrix obtained by reducing $A$ to tridiagonal form. Eigenvalues will be computed most accurately when abstol is set to twice the underflow threshold , not zero. If this routine returns with ${\mathbf{info}}>{\mathbf{0}}$, indicating that some eigenvectors did not converge, try setting abstol to . See Demmel and Kahan (1990). 15: $\mathbf{m}$Integer Output On exit: the total number of eigenvalues found. $0\le {\mathbf{m}}\le {\mathbf{n}}$. If ${\mathbf{range}}=\text{'A'}$, ${\mathbf{m}}={\mathbf{n}}$. If ${\mathbf{range}}=\text{'I'}$, ${\mathbf{m}}={\mathbf{iu}}-{\mathbf{il}}+1$. 16: $\mathbf{w}\left({\mathbf{n}}\right)$Real (Kind=nag_wp) array Output On exit: the first m elements contain the selected eigenvalues in ascending order. 17: $\mathbf{z}\left({\mathbf{ldz}},*\right)$Complex (Kind=nag_wp) array Output Note: the second dimension of the array z must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ if ${\mathbf{jobz}}=\text{'V'}$, and at least $1$ otherwise. On exit: if ${\mathbf{jobz}}=\text{'V'}$, then • if ${\mathbf{info}}={\mathbf{0}}$, the first m columns of $Z$ contain the orthonormal eigenvectors of the matrix $A$ corresponding to the selected eigenvalues, with the $i$th column of $Z$ holding the eigenvector associated with ${\mathbf{w}}\left(i\right)$; • if an eigenvector fails to converge (${\mathbf{info}}>{\mathbf{0}}$), then that column of $Z$ contains the latest approximation to the eigenvector, and the index of the eigenvector is returned in jfail. If ${\mathbf{jobz}}=\text{'N'}$, z is not referenced. Note:  you must ensure that at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ columns are supplied in the array z; if ${\mathbf{range}}=\text{'V'}$, the exact value of m is not known in advance and an upper bound of at least n must be used. 18: $\mathbf{ldz}$Integer Input On entry: the first dimension of the array z as declared in the (sub)program from which f08hpf is called. Constraints: • if ${\mathbf{jobz}}=\text{'V'}$, ${\mathbf{ldz}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$; • otherwise ${\mathbf{ldz}}\ge 1$. 19: $\mathbf{work}\left({\mathbf{n}}\right)$Complex (Kind=nag_wp) array Workspace 20: $\mathbf{rwork}\left(7×{\mathbf{n}}\right)$Real (Kind=nag_wp) array Workspace 21: $\mathbf{iwork}\left(5×{\mathbf{n}}\right)$Integer array Workspace 22: $\mathbf{jfail}\left(*\right)$Integer array Output Note: the dimension of the array jfail must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$. On exit: if ${\mathbf{jobz}}=\text{'V'}$, then • if ${\mathbf{info}}={\mathbf{0}}$, the first m elements of jfail are zero; • if ${\mathbf{info}}>{\mathbf{0}}$, jfail contains the indices of the eigenvectors that failed to converge. If ${\mathbf{jobz}}=\text{'N'}$, jfail is not referenced. 23: $\mathbf{info}$Integer Output On exit: ${\mathbf{info}}=0$ unless the routine detects an error (see Section 6). ## 6Error Indicators and Warnings ${\mathbf{info}}<0$ If ${\mathbf{info}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated. ${\mathbf{info}}>0$ The algorithm failed to converge; $⟨\mathit{\text{value}}⟩$ eigenvectors did not converge. Their indices are stored in array jfail. 
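As an illustration (not part of the NAG interface), the band-storage convention described above for the array ab can be reproduced in NumPy; SciPy's eigvals_banded happens to expect the same LAPACK-style storage, which makes it convenient for checking the packing. Zero-based indices are used below, so the rule ab(kd+1+i-j, j) = A_ij becomes ab[kd+i-j, j] = A[i, j].

```python
import numpy as np
from scipy.linalg import eigvals_banded

def to_band_upper(A, kd):
    """Pack the upper triangle of a Hermitian band matrix A (bandwidth kd)
    into LAPACK-style band storage with uplo = 'U':
    ab[kd + i - j, j] = A[i, j] for max(0, j - kd) <= i <= j."""
    n = A.shape[0]
    ab = np.zeros((kd + 1, n), dtype=A.dtype)
    for j in range(n):
        for i in range(max(0, j - kd), j + 1):
            ab[kd + i - j, j] = A[i, j]
    return ab

# Random Hermitian matrix of bandwidth kd; compare eigenvalues computed
# from the banded storage with those from the dense matrix.
rng = np.random.default_rng(0)
n, kd = 8, 2
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = np.triu(np.tril(B, kd), -kd)      # keep only diagonals -kd..kd
A = 0.5 * (A + A.conj().T)            # make the matrix Hermitian
ab = to_band_upper(A, kd)
assert np.allclose(eigvals_banded(ab, lower=False), np.linalg.eigvalsh(A))
```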
## 7 Accuracy

The computed eigenvalues and eigenvectors are exact for a nearby matrix $\left(A+E\right)$, where
$$ \|E\|_2 = O(\epsilon) \|A\|_2 , $$
and $\epsilon$ is the machine precision. See Section 4.7 of Anderson et al. (1999) for further details.

## 8 Parallelism and Performance

f08hpf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.

f08hpf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.

Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

The total number of floating-point operations is proportional to ${k}_{d}{n}^{2}$ if ${\mathbf{jobz}}=\text{'N'}$, and is proportional to ${n}^{3}$ if ${\mathbf{jobz}}=\text{'V'}$ and ${\mathbf{range}}=\text{'A'}$, otherwise the number
point energy is lowered from the sphaleron energy. At some critical turning point energy, this trend must reverse in order to connect the bounce solution to the kink-antikink at very long periods. As noted above, this model is straightforward to integrate numerically. The results are shown in figure~\ref{pend-fig}, for three different values of the parameter~$\lambda$. The plot shows the action of Euclidean classical solutions of the theory as functions of the period. As indicated by the derivative of the action~(\ref{dS-db}), the slope of a curve on such a plot is the turning point energy of the corresponding classical solution. The dotted line is the sphaleron solution, which, as a static configuration, has the same energy at any value of the period. At the critical value of~$\lambda = 1/16$, the bounce still increases monotonically in period as its turning point energy is decreased, giving the single curve of solutions starting from the sphaleron at point~$Q$. Above this value of~$\lambda$, the bounce initially decreases in period as its turning point energy is decreased, then reaches a bifurcation point, after which the bounce period begins to increase again as the turning point energy is further decreased. For~$\lambda = 3/32$, the bounce begins at the sphaleron at point $R$, then proceeds to the bifurcation point~$X$ as the turning point energy is lowered, after which further decreases in turning point energy take one towards longer periods. As~$\lambda$ is increased, the length of the branch~$RX$ increases, becoming the branch~$SY$ at~$\lambda = 1/8$. \begin{figure} \begin{center} \setlength {\unitlength} {1cm} \begin{picture}(0,0) \put(-1.0,4.0){$S_E$} \put(5.75,-0.5){$\beta$} \end{picture} \epsfysize=7.5cm \epsfxsize=12.0cm \epsfbox{pend.eps} \end{center} \vspace{0.75cm} \caption{Curves of action vs.\ period for periodic ``bounce'' solutions in the modified pendulum potential~(\ref{V-1d}), in units where~$E_S = 1$. The slope of a curve is the turning point energy. The straight dotted line represents the static sphaleron. The curve merging with the sphaleron at point~$Q$ represents bounces with the critical value of the parameter~$\lambda = 1/16$. Above this value of~$\lambda$, the curves split at bifurcation points. At~$\lambda = 3/32$, for example, the bounces pass from point~$R$ at the sphaleron, towards shorter periods, until reaching point~$X$, beyond which further decreases in the turning point energy take one towards longer periods. The branch~$RX$ grows longer for larger values of~$\lambda$, becoming the branch~$SY$ for~$\lambda = 1/8$. \label{pend-fig}} \end{figure} The (linearized) stability of a classical solution is determined by the number of eigenmodes of the curvature of the action~$\delta^2 S$ (evaluated at the solution of interest) with negative eigenvalues. Bifurcations, such as~$X$ or~$Y$ in figure~\ref{pend-fig}, correspond to points in the space of solutions where a curvature eigenvalue passes through zero. When different branches of solutions merge at a bifurcation such as~$Y$, the branch of solutions with larger action has one more negative mode than the lower branch. In the case of the pendulum, the negative mode of the upper branch can be identified with a deformation which moves the turning points further in or out, while holding the period fixed. Such a deformation causes no first order change in action, since the momentum vanishes at the turning point. But to second order, such a deformation can cause the action to change. 
The eigenvalue of this mode passes through zero at the bifurcation point, and becomes positive on the lower branch of solutions emerging from the bifurcation. Points~$Q$, $R$, or~$S$ represent bifurcations where a branch of periodic bounce solutions emerges from the static sphaleron. Each point on the branch~$SY$ (or~$RX$) represents a one-parameter family of solutions related by time translations, {\em i.e.},~$x(t{-}t_0)$. The time derivative~$\dot x(t{-}t_0)$ is a zero mode in the small fluctuation spectrum and represents a deformation which is an infinitesimal shift in~$t_0$. The sphaleron, being static, has no such zero mode. At periods longer than that of the bifurcation~$S$ (or~$R$) the sphaleron has three negative modes: one is the static unstable mode, corresponding to a deformation~$x \to x + \epsilon$ with time-independent~$\epsilon$, while the other two are periodic oscillations,~$\delta x \propto \sin (2\pi t/\beta)$ or $\cos (2\pi t/\beta)$. At the bifurcation point, the two oscillating negative modes become zero modes, and at shorter periods, positive modes of the sphaleron. On the branch~$SY$ (or~$RX$) of bounces emerging from the sphaleron, one of the zero modes remains the time-translation zero mode of the bounce, while the other becomes a negative mode --- the same negative mode which later passes through zero at the bifurcation~$Y$ (or~$X$) at the other end of this branch of solutions. The bounce solutions also have one additional negative mode resembling the static unstable mode of the sphaleron. This connection between bifurcations and stability of solutions will be important for the discussion in section IX\@. Several figures illustrating the ``topography'' of the action near bifurcations of the $SU(2)$-Higgs sphaleron, and a more detailed discussion of the negative and zero modes of the various solutions, appear in Section VIII\@. At any temperature, the solution which characterizes the most probable barrier crossings (and whose action controls the rate of barrier crossing) is the bounce, or sphaleron, with the smallest action for period~$\beta=1/T$. As is clear from figure~\ref{pend-fig}, this means that the branches, such as~$RX$ and~$SY$, which stay above the action of the sphaleron, do not control the rate of barrier crossings at any temperature. Instead, for~$\lambda > 1/16$, there is an abrupt change from sphaleron-dominated barrier crossing to bounce-dominated crossing at the temperature where the curve of bounces crosses the sphaleron line. Physically, this is a transition between classical thermally-activated transitions over the barrier, for temperatures above the crossover, and quantum tunneling with a most-probable energy~$E < E_S$ for temperatures below the crossover.% \footnote{In the leading WKB approximation, the transition rate is discontinuous at the crossover. This discontinuity is smoothed into a narrow transition region of width~${\mathcal O}(g^2)$ in the exact transition rate.} In contrast, for~$\lambda < 1/16$, as one increases the temperature from zero the most probable energy for tunneling grows until it reaches~$E_S$, at which point the turning points of the most probable tunneling path merge at the top of the barrier (and the WKB transition rate smoothly interpolates between quantum tunneling and classical thermal activation~\cite{Affleck_81}). \section{The $SU(2)$-Higgs Model With Spherical Symmetry} We now turn to the $SU(2)$~gauge theory in $3{+}1$~dimensions with a single Higgs scalar in the fundamental representation. 
This model represents the bosonic sector of the Standard Model of the weak interactions in the limit of small weak mixing angle. The action may be written in the form% \footnote{For convenience, we have rescaled the gauge and Higgs fields so that the action has an overall~$1/g^2$. The conventional Higgs vacuum expectation value is~$v=2M_W/g$, and the usual quartic coupling is~$\lambda = \frac{1}{8}g^2 M_H^2 / M_W^2$. As usual, the weak fine-structure constant is~$\alpha_W = g^2/4\pi$. In Minkowski space (with a space-like metric), the usual action is minus (\ref{action-4d}).} \begin{equation} S = \frac{1}{g^2} \int d^4 x \left[ -\frac{1}{2} \mbox{Tr} (F_{\mu\nu}F^{\mu\nu}) + (D_\mu\Phi)^\dagger D^\mu\Phi + \frac{M_H^2}{8 M_W^2}(\Phi^\dagger\Phi - 2M_W^2)^2 \right]. \label{action-4d} \end{equation} The scalar field~$\Phi$ is an $SU(2)$~doublet with covariant derivative~$D_\mu = (\partial_\mu + A_\mu)$, where~$A_\mu \equiv A^a_\mu \tau^a / 2 i$. The gauge field strength is the commutator~$F_{\mu\nu} = [D_\mu, D_\nu]$. The action~(\ref{action-4d}) has an explicit $SU(2)_L$~gauge symmetry, represented by $SU(2)$~matrices acting on the Higgs doublet~$\Phi$ (or its conjugate~$\Phi^c \equiv -i\tau^2\Phi^*$) from the left. It also has a custodial~$SU(2)_R$ global symmetry, given by $SU(2)$ matrices multiplying the matrix~$(\Phi, \Phi^c)$ from the right. Spherically symmetric field configurations are those for which the effect of a rotation can be undone by a gauge transformation combined with a custodial~$SU(2)_R$ transformation. Imposing spherical symmetry reduces the four-dimensional theory to a two-dimensional theory~\cite{Ratra&Yaffe_88,Yaffe_89}, which can be parameterized in terms of six real fields~$\alpha$, $\beta$, $\mu$, $\nu$, $a_0$, and $a_1$. In terms of these two-dimensional fields, one may write the original fields of the four-dimensional theory as \begin{mathletters} \begin{eqnarray} A_0 (\vec{r},t) & = & a_0 \, \hat{r} \cdot \vec{\tau} / 2 i\,, \\ \vec{A}(\vec{r},t) & = & \frac{1}{2i} \left[ \frac{1}{r} (\alpha{-}1)\, \vec{\tau} \times \hat{r} + \frac{1}{r} \beta \,(\vec{\tau} - (\hat{r} \cdot \vec{\tau})\hat{r}) + a_1 (\vec{\tau} \cdot \hat{r}) \, \hat{r} \right] \label{defs-2d}, \\ \Phi (\vec{r}, t) & = & (\mu + i \nu \vec{\tau} \cdot \hat{r}) \, \xi \,. \end{eqnarray}% \end{mathletters} Here $\hat{r} \equiv \vec{r}/r$~is a unit radial vector, and $\xi$~is a unit doublet which can be rotated arbitrarily by combined global~$SU(2)_L$ and custodial~$SU(2)_R$ transformations. It is helpful to define complex linear combinations~$\chi \equiv \alpha + i \beta$ and~$\phi \equiv \mu + i \nu$. The field~$\chi$ then represents the spherically symmetric degrees of freedom of the tangential gauge fields, and~$\phi$ those of the Higgs field. After imposing spherical symmetry in this form, the two-dimensional theory which results has a $U(1)$~gauge symmetry, which is the subgroup which remains of the original $SU(2)_L$~group. As elements of the original $SU(2)$~group, these $U(1)$~gauge transformations are given
the maximum in our definition. The choice of defining the composition of the extremal graph on exactly $m+1$ vertices relative to $m$ edges is not arbitrary; it corresponds to the largest possible size of the lex component in an extremal graph. The following theorem proves the stronger claim that there is a single size of the lex component in a $J$-extremal graph on $m$ edges once $n$ is large enough. \begin{theorem} The graph $\S{n}{\l(m)}{m}$ on $n$ vertices and $m$ edges is $J$-extremal for $n\geq\l (m)$. \end{theorem} \begin{proof} Since we have required $n\geq \l(m)$, we may always add or remove $c=m+1-n$ isolated vertices to the graph $\S{n}{\l(m)}{m}$ to produce the graph $\S{m+1}{\l(m)}{m}$, in accordance with the sign of $c$. A simple counting argument on these isolates yields \[ \bigj{\S{n}{\l(m)}{m}}=3^{-c}\bigj{\S{m+1}{\l(m)}{m}}, \] whereby the $J$-extremality of $\S{n}{\l(m)}{m}$ is now seen to be a consequence of the $J$-extremality of $\S{m+1}{\l(m)}{m}$, which holds by the definition of $\l(m)$. \end{proof} Having established the significance of $\l(m)$ in the construction of $J$-extremal graphs, we will concentrate on establishing an upper bound on $\l(m)$. As the number of homomorphisms to $J$ is so closely related to the number of independent sets in the lex component, the matter of bounding $\l(m)$ from above can be approached only once reasonable bounds are found for $i(\L(n,m))$. Before embarking on these calculations, we pause to investigate some of the properties of the lex graph. The lex graph $L(n,m)$ is related to (and sometimes isomorphic to) the \emph{split graph}. We define the \emph{split graph} $S(n,k)$ to be the graph with $n$ vertices given by the join of $K_k$ and $E_{n-k}$. Thus, the number of edges in $S(n,k)$ is \[ e(S(n,k))=\binom{k}2+k(n-k). \] If, in the split graph $S(n,k)=K_k\vee E_{n-k}$, we label the vertices of the $K_k$ with $\set{1,2,\ldots,k}$ and those of the $E_{n-k}$ with $\set{k+1,k+2,\ldots,n}$, then we can use this labelling to describe the structure of the lex graph. If $m=\binom{k}2+k(n-k)+w$, we note that the lex graph with $n$ vertices and $m$ edges is \[ L(n,k,w):=L(n,m)=S(n,k)+\setof{(k+1)x}{k+2\leq x\leq k+w+2}. \] Given $n$ and $m$, we would like to determine $k$ and $w$ so that $L(n,m)=L(n,k,w)$. Throughout this paper, we assume that $0\leq w\leq n-k-2$ so that there is no ambiguity about the values of $k$ and $w$ for a given $n$ and $m$. Essentially, the lex graph $L(n,k,w)$ consists of the split graph $S(n,k)$ along with a star with $w$ edges inside the $E_{n-k}$. (See Figure~\ref{fig:lex}.) We note that in the threshold code of the lex graph, the center of the star of size $w$ in $L(n,k,w)$ corresponds to the lone one inside the string of initial zeroes. From this point forward, we refer to the two types of vertices in the lex graph as those from the \emph{complete part} and those from the \emph{empty part} (even though the empty part may contain a star). \begin{figure} \includegraphics{isoind-18.pdf}\hspace{.5in}\includegraphics{isoind-19.pdf} \caption{Schematics of the split graph and the lex graph}\label{fig:lex} \end{figure} Given the lex graph $L(n,k,w)$, it is easy to determine the number of independent sets in it. We state this count as the following lemma. \begin{lemma}\label{lem:lexind} For the lex graph $L(n,k,w)$, we have \[ i(L(n,k,w))=2^{n-k-1}+2^{n-k-w-1}+k.
\] \end{lemma} \begin{proof} Note first that if any of the vertices in the complete part of the lex graph $L(n,k,w)$ are included in an independent set, then no other vertices can be in the independent set since these vertices are dominating. There are $k$ such independent sets. If, on the other hand, none of the vertices in the complete side are in an independent set $I$ and the center of the star of size $w$ in the empty part is also not in $I$, then there are $2^{n-k-1}$ such independent sets (including the empty set) since the rest of the vertices form an independent set. If the center of the star of size $w$ in the empty set is in $I$, then none of the other vertices of the star can be in $I$, and so there are $2^{n-k-w-1}$ such $I$. These account for all independent sets in $L(n,k,w)$, proving the lemma. \end{proof} By extension we may also determine $\j(\S{n}{n^\prime}{m})$ in a general form. Noting that isolates can be mapped to any vertex of $J$ by an element of $\Hom(\S{n}{n^\prime}{m})$, we have by Lemma~\ref{lem:lexind}, \[ \j(\S{n}{n^\prime}{m})=3^{n-n^\prime}\left[k+2^{n^\prime-k-1}+2^{n^\prime-k-w-1}\right] \] where $k=k(n^\prime,m)$ and $w=w(n^\prime,m)$. Now, we are finally ready to state and prove the main theorem of the paper, which gives an upper bound on $\l(m)$. \begin{theorem}\label{thm:main} For any $m\in \mathbb{N}$, we have \[ \l(m) \leq \frac{5+\sqrt{9+24m}}2. \] \end{theorem} \begin{proof} Fix $m\in \mathbb{N}$. The proof will roughly go as follows: starting with $R(m+1,m+1;m)=L(m+1,m)$, we remove vertices from the lex component of the graph $R(m+1,n;m)$ (either one or two at a time), and create a $R(m+1,n';m)$ for some $n'<n$. If we show that the number of homomorphisms into $J$ from this new graph is more than the earlier ones, then we can continue this process. In the end, we will determine how long this process can be run, giving the bound in the statement of the theorem. Note that in removing a vertex from the lex component of $R(m+1,n;m)$, we can think of forming the lex component of $R(m+1,n-1;m)$ by removing a vertex from the empty side of the lex component of $R(m+1,n;m)$ and then distributing the edges adjacent to this removed vertex. If there is enough room to build a star of the right size in the empty part, then the complete part remains unchanged in size. If not, then the complete part increases in size. Our proof splits according to which of these occur. In order to describe the above process precisely, we need to introduce a bit of notation. Fix some $n\leq m+1$. Let $R=R(m+1,n;m)$. In each step, we compare $j(R)$ to either $j(R')$ or $j(R')$ and $j(R'')$, where $R'=R(m+1,n-1;m)$ and $R''=R(m+1,n-2;m)$. If we show that $j(R')>j(R)$ or $j(R'')>\max\set{j(R),j(R')}$, then we show that either $\ell(m)\leq n-1$ or $\ell(m)\leq n-2$, respectively, and then run through the process again. We note that in comparing $j(R)$ with either $j(R')$ or $j(R'')$, we need only to compare the lex component of $R$, which is $L(n,m)$, with the lex component of $R'$ along with one isolated vertex, or the lex component of $R''$ along with two isolated vertices. This is equivalent to dividing each of these by $3^{m+1-n}$. To this end, let $G=L(n,m)$, $G'=L(n-1,m)\cup E_1$, and $G''=L(n-2,m)\cup E_2$. Suppose that $k$ and $w$ are integers so that $L(n,m)=L(n,k,w)$. Also, let $w',k',w'',k''$ be integers such that $L(n-1,m)=L(n-1,k',w')$ and $L(n-2,m)=L(n-2,k'',w'')$. 
Note that if $n=m+1$, then $G=L(n,m)=K_{1,m}=L(n,1,0)$, so that $k=1$ and $w=0$, and $G'=L(n-1,m)\cup E_1=L(n-1,1,1)\cup E_1$, so that $k'=1$ and $w'=1$ (provided $m\geq 3$). If $m$ is large, and we continue to remove vertices from the lex component, then the star inside of the empty part continues to grow while the empty part shrinks. Eventually (after about $m/2$ steps), there will be no more room in the empty part, and so $k$ increases to two and $w$ resets to either $0$ or $1$. The process speeds up now since the removal of any vertex leaves two edges that need to be placed in the star. So far, each step of this process either leaves $k$ unchanged or increases it by one. However, as the process continues, $k$ can increase by more than one. Our process will work provided the change in $k$ is at most one at every step. This barrier will give the bound in the theorem, as will be made precise below. Finally, we are ready to outline the cases of the argument. They are split essentially so that we can determine $w'$, $k'$, $w''$, and $k''$ in terms of $n$ and $k$. Throughout, we will keep track of conditions on $n$ and $k$ that are required. \begin{description} \item[Case 1 ($k=k'$)] Note that in this case, there is room in the empty side of $G=L(n,m)$ to place the $k$ edges that are incident with the new isolated vertex in $G'$. \begin{description} \item[Subcase 1a ($w\geq 1$)] In this case, we compare $j(G)$ and $j(G')$. By definition, we know that $w\leq n-k-2$ which
The family of sets $\{A_x((\phi^n)_n)\}$ for $x\in \cV((\phi^n)_n)$ (resp.\ for $x\in \bB$) forms a convex partition of $\cV((\phi^n)_n)$ (resp.\ $\bB$). \end{proposition} \begin{example} \label{ } We give an example of $(\phi^n)_n$-components which shows that, while in dimension $1$ the $(\phi^n)_n$-components form a countable partition of $\cV$ made of open intervals, even in dimension two and with constant sequence $\phi^n=\phi$ the non-trivial $(\phi^n)_n$-components can have less than full dimension and can be uncountable. Let $\Gamma$ be the half disk $\Gamma:=\{ (x,y)\in \bR^2: x^2+y^2\leq 1, y\geq 0\}$ and define $\phi:=d_\Gamma$. Then the family of all $\phi$-components forms the following partition of $\bR^2$: $$\{ \accentset{\circ}{\Gamma}, (-1,1)\times \{0\}, (-1,1)\times (-\infty,0) \} \cup \cF \cup \cG \cup \cH^+ \cup \cH^{-} , \text{ where}$$ \begin{itemize} \item $\cF$ is the family of all singletons $\{(x,y)\}$ such that $x^2+y^2=1, y\geq 0$ \item $\cG$ is the family of all half lines $\{t(x,y):t>1\}$ such that $x^2+y^2=1, y\geq 0$ \item $\cH^+$ is the family of all half lines $\{(1,0)+t(x,y):t> 0\}$ such that $x^2+y^2=1, y< 0 \leq x$ \item $\cH^{-}$ is the family of all half lines $\{(-1,0)+t(x,y):t> 0\}$ such that $x^2+y^2=1, y< 0 , x\leq 0$. \end{itemize} \end{example} The following key proposition shows that these sets are intimately linked with the components of the domain of $\cM(\mu,\nu)$. \begin{proposition}\label{prop:as_aff_comp} Fix $x\in \bB$ and $(\phi^n)_{n\in \bN} \subseteq \cC$. If $\an{\nu -\mu, \phi^n}\to 0$ then for any $\theta \in \cM(\mu,\nu) $ and disintegration $\theta=\mu\otimes \gamma$, $ \gamma(x, \cdot)$ is concentrated on $\overline{A_x((\phi^n)_n)}$ for $\mu$ a.e. $x$. \end{proposition} Proposition \ref{prop:as_aff_comp} has the following interesting corollary. \begin{corollary} \label{MeasEq} If $\an{\nu -\mu, \phi^n}\to 0$ for $(\phi^n)_{n\in \bN} \subseteq \cC $ then $\mu_{|\bB \setminus \overline{\cV((\phi^n)_n)}}=\nu_{|\bB \setminus \overline{\cV((\phi^n)_n)}}$. \end{corollary} \begin{proof} Recall that by Propositions \ref{prop:PropOfA_x} and \ref{prop:as_aff_comp}, the sets $A_x$ form a convex partition such that for $\theta=\mu\otimes \gamma\in \cM(\mu,\nu)$, $\gamma(x,\overline{A_x})=1$ $\mu(dx)$-a.e. In particular, $A_x=\{x\}$ if $x\in \bB\setminus \cV$ and $A_x\subseteq \cV$ if $x\in \cV$ so that $ \gamma(x, \cdot)=\delta_x$ for $\mu$ a.e. $x\in \bB\setminus \cV$ and $\gamma(x,\cdot)$ is concentrated on $\bar{\cV}$ for $\mu$ a.e. $x\in \cV$. Consequently, if $B\subseteq \bB $ is Borel we get that \begin{align*} \nu(B)=\int_{\bB} \mu(dx) \gamma(x,B)= \int_\cV \mu(dx) \gamma(x,B\cap \bar{\cV}) +\int_{\bB \setminus \cV} \mu(dx) \delta_x(B) \end{align*} and if moreover $B\subseteq \bB \setminus \bar{\cV}$ we get $ \nu(B)=\int_{\bB \setminus \cV} \mu(dx) 1_B(x)=\mu(B) .$ \end{proof} We now easily obtain the result about the convex order stated in the introduction. \begin{proof}[Proof of Corollary \ref{MeasEqintro}] Let $\phi_n$ be the infimal convolution between $\phi$ and $n|| \cdot ||$, then $\phi_n \in \cC$ and $\phi_n \uparrow \phi$ (see \cite[Proposition 2.2.4 Chapter 1 and Proposition 3.1.4 Chapter 4]{HiLe93V1}). Let $a$ be an affine function such that $a\leq \phi_1$ and $\lambda \in \cM$, then $(\phi_n)^- \leq |a| \in L^1(\lambda)$ and $\phi_n^+ \uparrow \phi^+$ and so by dominated convergence and by monotone convergence we get that $\int \phi_n d\lambda \uparrow \int \phi d\lambda$. 
Since we assumed $\int \phi d \mu=\int \phi d \nu<\infty$, we get that $\an{\nu -\mu, \phi^n}\to 0$. Since $\phi$ is strictly convex and $\phi_n \to \phi$, we get that $\cV((\phi^n)_n)=\emptyset$, and thus Corollary \ref{MeasEq} gives the claim. \end{proof} We close this section with an application of Proposition \ref{prop:as_aff_comp} to our motivating examples. \begin{example}[Examples \ref{ex:discrete}--\ref{ex:mixed} continued.] \label{ex:convex_components} We continue the discussion of our motivating examples. Recall the pairs of measures in convex order: $\mu^k\preceq_c\nu^k$ and $\tilde \mu^k\preceq_c\nu^k$. We study the sets $A((\phi^n))_x$ for $\an{\nu -\mu, \phi^n}\to 0$. In fact, for this example, we restrict our attention to constant sequences $\phi^n=\phi$ with $\an{\nu -\mu, \phi}=0$. Consider $\phi((t,s))=f(t)$ for a strictly convex and \Lz $f$ so that, in particular, $A(\phi)_{(t,s)}=\{t\}\times \bR$. Analogously, consider $\psi((t,s))=g(s)$, where $g$ is Lipschitz, convex, equal to $0$ on $[-1,1]$ and strictly convex on $(-\infty,-1)$ and on $(1,\infty)$. It follows that $A(\psi)_{(t,s)}=\bR\times (-1,1)$ for $s\in (-1,1)$ and $t\in \bR$. It is easy to compute the difference of integrals of $\phi$ or $\psi$ against our measures. First, $$\an{\nu^k-\mu^k, \phi}=\iint \mu^k(dx) \gamma^k(x,dy) (\phi(y)-\phi(x))=0,$$ where $\theta^k=\mu^k\otimes \gamma^k\in\cM(\mu^k,\nu^k)$ was exhibited in Examples \ref{ex:discrete}--\ref{ex:continuous}, and the above follows since $\gamma^k((t,s),\cdot )$ is concentrated on $\{t\} \times \{-1,1\}$ and $\phi$ is constant on $\{t\} \times \bR$. Second, we have $\an{\nu^k-\mu^k, \psi}=0$ simply since $\mu^k,\nu^k$ are supported on $[0,1]\times [-1,1]$. By Proposition \ref{prop:as_aff_comp}, for any $\theta\in \cM(\mu^k,\nu^k)$ with disintegration $\theta=\mu^k\otimes \gamma$ we have that $\gamma((t,s),\cdot)$ is supported on $$\overline{A(\phi)_{(t,s)}\cap A(\psi)_{(t,s)}}= \{t\}\times [-1,1]$$ $\mu(d(t,s))$--a.e. Combined with our explicit construction of $\gamma^k$ this shows that Theorem \ref{thm:main_result} holds with $C_{(t,s)}=\{t\}\times (-1,1)$ for $s\in (-1,1)$ and $(t,0)$ in the support of $\mu^k$ and $C_{(t,s)}=\{(t,s)\}$ otherwise, $k\leq \infty$. Similarly to the above, $\an{\nu^k-\tilde\mu^k, \psi}=0$ and also $\an{\nu^k-\tilde\mu^k, \chi}=0$, where $\chi((t,s))=h(t)$ for a Lipschitz, convex function $h$ equal to $0$ on $[0,1]$ and strictly convex on $(-\infty,0)$ and on $(1,\infty)$, so that $A(\chi)_{(t,s)}=(0,1)\times \bR$ for $t\in (0,1)$, while $A(\chi)_{(0,s)}=\{0\}\times \bR$ and $A(\chi)_{(1,s)}=\{1\}\times \bR$ since the point $x$ has to belong to the relative interior of $A(\phi)_x$, see \eqref{defA_x}. It follows that $C_{(t,s)}(\tilde\mu^k,\nu^k)\subset (0,1)\times (-1,1)$ for $(t,s)\in (0,1)\times (-1,1)$ and $C_{(0,s)}(\tilde\mu^k,\nu^k)\subset \{0\}\times (-1,1)$, $C_{(1,s)}(\tilde\mu^k,\nu^k)\subset \{1\}\times (-1,1)$. From our example of $\tilde \theta^k\in \cM(\tilde \mu^k,\nu^k)$ we see that the inclusions may not be strict, so that the convex components are indeed as asserted in Example \ref{ex:mixed}. \end{example} \subsection{Convex components describing support of martingale transports}\label{sec:convex_components} We saw in Proposition \ref{prop:as_aff_comp} that for any martingale transport $\theta\in\cM(\mu,\nu)$ the mass from $x$ is diffused within $\overline{A_x((\phi^n)_n)}$ for $\mu$--a.e.\ $x$. This holds for any sequence $(\phi^n)_{n\in \bN} \subseteq \cC$ with $\an{\nu -\mu, \phi^n}\to 0$.
In consequence, one may be inclined to ask if $\gamma(x,\cdot)$, where $\theta=\mu\otimes \gamma$, is concentrated on the intersection of $\overline{A_x((\phi^n)_n)}$ over all such sequences $(\phi^n)_{n\in \bN} \subseteq \cC$? This, in general, is false. Indeed, for a fixed $x$, we can typically find a sequence $(\phi^n)_{n\in \bN} \subseteq \cC$ such that $\overline{A_x((\phi^n)_n)}$ is too small. In other words, the union over all $(\phi^n)_{n\in \bN} \subseteq \cC$ such that $\an{\nu -\mu, \phi^n}\to 0$ of the $\mu$-null set $\cN_{\mu}((\phi_n)_n)$ on which it does not happen that $\gamma(x,\cdot)$ is concentrated on $\overline{A_x((\phi^n)_n)}$ is not a $\mu$-null set. To understand this we come back to Examples \ref{ex:discrete}--\ref{ex:mixed}. \begin{example}[Examples \ref{ex:discrete}--\ref{ex:continuous} continued.] \label{ex:Ax_toosmall} We continue the discussion of our motivating examples and their convex components as computed in Example \ref{ex:convex_components}. We argue that $C_z$ may not be defined as the intersection of $A((\phi^n))$ over all sequences with $\an{\nu^k -\mu^k, \phi^n}\to 0$. Fix $x_0\in (0,1)$. Now let $\phi^n((x,y))=(y-n(x-x_0))^+$ so that $\phi^n$ is affine on $\{x\}\times [-1,1]$ for $x\notin (x_0-\frac{1}{n},x_0+\frac{1}{n})$. It follows that $\an{\nu^k -\mu^k, \phi^n}\to 0$ for any $2\leq k\leq \infty$. However $\phi^n((x_0,y))=y^+$ from which we see that $A((\phi^n))_{(x_0,0)}=\{(x_0,0)\}$ and hence $$\bigcap_{(\phi^n)_{n\in \bN} \subseteq \cC: \an{\nu^k -\mu^k, \phi^n}\to 0}\overline{A((\phi^n))}_{(x,0)}=\{(x,0)\}\subsetneq C_x(\mu^k,\nu^k),\quad 2\leq k\leq \infty.$$ \end{example} To circumvent the above problem, we would need to restrict ourselves to a suitable countable family of sequences of functions in $\cC$. This can be achieved by considering a suitably defined essential intersection instead of the simple intersection above. To describe our construction we need some additional definitions. Let $CL(\bB)$ be the set of non-empty closed subsets of $\bB$. There is number of well understood topologies one may put on $CL(\bB)$. For our purposes it is most convenient to equip $CL(\bB)$ with the Wijsman topology \cite{Wijsman:66} which is the weak topology generated by mappings $d_{\cdot}(x):CL(\bB)\to \bR$, $x\in \bB$. This topology is weaker than the Vietoris topology and stronger than the Fell topology. To us, is has two main advantages. First, it makes $CL(\bB)$ into a Polish space as shown by Beer \cite{Beer:91}. Second, it generates the Effros $\sigma$--algebra which implies that weak measurability of closed--valued multifunctions can be treated similarly to regular functions, see \cite{Beer:91} and the references therein. Finally, let $\cc$ be the set of closed convex subsets of $\bB$. Then $\cc$ is a closed subset of $CL(\bB)$ and hence also Polish with its Wijsman topology \cite{Wijsman:66}. We equip it with partial ordering given by set inclusion. Then, in a recent work, Larsson \cite{Larsson:17} showed that one can build a strictly increasing measurable map from $\cc$ to $\bR$. With such a map, one can follow the usual arguments, to establish existence of essential infimum of a family of $\cc$--valued
end-goal. Closeness to the end goal is the percentage of the task completed, e.g. 20\%, 40\%, 60\% and so on. In the greedy approach, the accuracy increases steadily and we progress faster towards the end goal. The greedy approach reaches the end goal with slightly better accuracy, but it consumes many resources and performs many updates. If we instead choose the DSOC approach, the overall accuracy improves more slowly and progress towards the end goal is slower, but fewer resources are used and fewer updates are performed. In the DSOC approach, we therefore reach the end goal with fewer updates and only slightly lower accuracy compared to the greedy model.

\section{INTRODUCTION}

Autonomous systems such as self-driving cars, autonomous aerial systems, smart restaurants and smart traffic lights have attracted a lot of interest from both academia and industry~\cite{transportresearch}~\cite{boubin2019managingdeprecated}. Many leading companies, such as Google, Uber, Intel, Apple, Tesla and Amazon, have invested significantly in researching and building autonomous systems~\cite{selfdriving}. An autonomous system is a critical decision-making system which makes decisions without human intervention. It comprises complex technologies which learn the environment, make decisions and accomplish the goal~\cite{transportresearch}.

In this paper, we focus on "Microservices Orchestration" for coordinating multiple autonomous applications that are working towards a common objective. Microservices is an architectural style which structures an application as a collection of different services that are loosely coupled, independently deployable and highly maintainable. Large, complex applications can be deployed using microservices, where each service has a distinct logical function and contributes to the larger application~\cite{micro_article}. When working towards a particular goal, we might need to deploy multiple applications, each of which takes up a sub-task and coordinates with the other applications in order to complete the overall task efficiently. Efficient coordination of multiple different applications is crucial for building fully autonomous systems. Configuring, controlling and keeping track of each microservice individually would be hard~\cite{orchestrate}. An efficient way to track and manage multiple applications is to use an orchestrator. Orchestration is a process which involves automated configuration, management and coordination of multiple computer systems and application software~\cite{it_orchestrate}. There are various orchestration tools for microservices, such as Ansible~\cite{ansible}, Kubernetes~\cite{kubernetes} and Docker Swarm~\cite{swarm_docker}.

\begin{figure}
\centering
\includegraphics[width=8cm]{figures/intro_graph.pdf}
\caption{Why Coordination and Patching?}
\label{fig:introduc}
\end{figure}

When working with artificial-intelligence-based applications, the performance of each microservice may degrade over time, and updated code or an updated machine learning model is needed to restore the performance~\cite{naveeniotonline}. When multiple applications seek an update, allowing all of the updates would degrade bandwidth, increase throughput and may not yield much performance gain~\cite{naveeniotonline}. If any microservice application needs an update, it would be tedious to identify the individual application and perform the update.
While performing such updates, we need to consider individual application performance, progress towards the end goal and overall system performance~\cite{7886112}. It is practically impossible to weigh every application's performance parameters by hand and pick the model to be updated at run time~\cite{poster_Aiiot}. A patch may be a code update that fixes a bug or improves performance, or a machine learning model update; throughout the rest of this paper we refer to such a patch as a "Classifier". Figure~\ref{fig:introduc} approximates the usage of classifiers and patching in real-world autonomous systems such as self-driving cars, smart traffic systems, aerial vehicles and smart surveillance. These autonomous applications make use of roughly 40--140 classifiers in total, of which at least 40 percent receive frequent classifier updates to improve performance~\cite{naveeniotonline}. The update frequency of individual applications is estimated from a literature survey of updates shipped through incremental software releases~\cite{autopilot}~\cite{dji}~\cite{trlights}. Of the total updates, at least 50 percent are correlated: for example, an update to one application's model impacts the performance of another interdependent model or code fragment. When multiple applications coordinate towards a common objective, the choice of which update to apply significantly impacts system performance and the rate of progress towards the end goal. This paper proposes a Docker Swarm Orchestration Component, called "DSOC", which is responsible for orchestrating multiple applications and efficiently prioritizing classifier updates. To the best of our knowledge, this is the first work to propose an efficient method of using Docker Swarm to coordinate multiple AI-based applications while handling classifier updates. \section{Related Work} To the best of our knowledge, no approaches with the focus of this paper (efficient patching for coordinating edge applications) have been published so far. In this section, we discuss work that is closely related to our research problem. Lele Ma et al.~\cite{service_handoff} proposed efficient service hand-off across edge servers using Docker container migration; they explain in depth how Docker features can be leveraged, and their migration algorithm for service hand-off offers insights into the process of patching an application. Taherizadeh et al.~\cite{dml_scaling} proposed an auto-scaling method for time-critical cloud applications that considers both system-level and application-level monitoring, building a dynamic multi-level autoscaling system with Kubernetes as the orchestrator. Kaewkasi et al.~\cite{7886112} built an ant colony optimization based scheduling algorithm that outperforms the built-in scheduler of Docker Swarm; this work highlights the importance of carefully considering resource utilization and available resources when coordinating applications. \section{System Architecture} As depicted in Figure~\ref{Design2}, the system architecture consists of three main components: Application, Coalescer and Stratagem. Applications are lightweight, containerized units deployed to achieve a particular sub-task. The main focus of this research is to choose updates efficiently when a group of different applications coordinates to achieve a common objective; a minimal sketch of such an update-prioritization policy is given below.
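The paper does not prescribe a concrete scoring rule for choosing among pending updates; the following sketch is purely illustrative (all field names, weights and the bandwidth budget are hypothetical and not part of DSOC) and shows one way a Coalescer-style component could rank classifier updates by expected gain, migration cost and correlation with other applications.

\begin{verbatim}
from dataclasses import dataclass
from typing import List

@dataclass
class UpdateRequest:
    """A pending classifier update reported for one application (hypothetical fields)."""
    app_name: str
    expected_accuracy_gain: float   # e.g. 0.04 means +4 accuracy points
    update_cost_mb: float           # size of the code/model diff to migrate
    correlated_apps: int            # number of interdependent applications affected

def prioritize_updates(requests: List[UpdateRequest],
                       budget_mb: float) -> List[UpdateRequest]:
    """Greedily pick the most valuable updates that fit a bandwidth budget.

    Illustrative only: a real policy would also weigh progress towards
    the end goal and current system load.
    """
    def score(r: UpdateRequest) -> float:
        # accuracy gain per megabyte migrated, boosted when the update
        # benefits several correlated applications at once
        return r.expected_accuracy_gain * (1 + r.correlated_apps) / max(r.update_cost_mb, 1e-6)

    selected, used = [], 0.0
    for r in sorted(requests, key=score, reverse=True):
        if used + r.update_cost_mb <= budget_mb:
            selected.append(r)
            used += r.update_cost_mb
    return selected
\end{verbatim}

Under such a rule, an update that improves several correlated applications is favoured over an isolated update of equal size, which is consistent with the observation above that at least half of all updates are correlated.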
Such a group of applications working towards a common objective is called a "Swarm"~\cite{swarm_docker}. The single point of contact for the applications is the "Coalescer", the orchestrator unit in our design, which coordinates the applications so that they achieve a particular goal. The Coalescer has several responsibilities: it tracks application changes, processes migration requests, and monitors progress and performance per application. If the performance of any application degrades, the Coalescer ensures that the expected performance of that application is restored. In short, the Coalescer handles the coordination of multiple applications and their updates, and makes sure the overall performance of the system is preserved. Stratagem is the component that records application changes and updates an application with suitable code or a suitable model in order to satisfy its performance criteria. Stratagem prioritizes updates based on different performance metrics and migrates only the required difference between the current code/model and the updated code/model to the appropriate application. \begin{figure} \centering \includegraphics[width=8.5cm]{figures/Microservice.pdf} \caption{Interaction of different nodes in DSOC} \label{figX} \end{figure} Figure~\ref{figX} shows the logical components used to build the Coalescer and Stratagem. A "Manager" is a logical component of the Coalescer; a Manager node tracks one or more applications and makes sure their performance stays optimal. A "Worker" is a logical component of Stratagem; a Manager creates one Worker node per deployed application to track its changes and to verify that the performance and progress expectations of that application are being met over time. Although autonomous systems such as self-driving cars, aerial vehicles, smart traffic systems and smart restaurants are becoming increasingly popular~\cite{swarm_docker}, existing work has not focused on building a DSOC-style component that progresses efficiently towards the goal using a strategy that is easy to deploy and maintain. An autonomous application deployed in production will comprise several smaller applications coordinating to achieve an end goal~\cite{IOT_growth}; this is a microservices-based architecture in which each independent component has a logical function and contributes towards the end goal~\cite{micro_article}. Building such a system, one that tracks and makes timely progress towards the end goal, is crucial. During deployment, there may be updates to individual applications that improve their performance. If all the update-hungry applications are allowed to update their models, it would lead to increased throughput and bandwidth
# -*- coding: utf-8 -*- """Ferramentas para manipulação de imagens utilizando a biblioteca scipy Neste modulo estão implementadas diversas funções úteis para o processamento de imagens medicas. Estas foram baseadas nos exemplos presentes no seguinte link: http://www.scipy-lectures.org/advanced/image_processing/index.html Sendo eles: 2.6.1. Opening and writing to image files 2.6.2. Displaying images 2.6.3. Basic manipulations 2.6.4. Image filtering 2.6.5. Feature extraction 2.6.6. Measuring objects properties: ndimage.measurements Outro link importante: https://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html Estão também as funções separadas por categorias: - Displaysing images - Inserting noises - Applying filters - Applying mathematical morphologies - Applying segmentation methods License ------- *THE BEERWARE LICENSE* (Revision 42): Italo Fernandes wrote this code. As long as you retain this notice, you can do whatever you want with this stuff. If we meet someday, and you think this stuff is worth it, you can buy me a beer in return. Author ------ Italo Gustavo Sampaio Fernandes Contact: [email protected] Git: www.github.com/italogfernandes Janeiro de 2018 Organization ------------ FEDERAL UNIVERSITY OF UBERLANDIA Faculty of Electrical Engineering Biomedical Engineering Lab Examples ------- Os exemplos de uso foram immplementados na função test, está é chamada se o arquivo for executado diretamente e não importado:: $ python scipy_toolbox.py É preciso selecionar qual exemplo você deseja visualizar a execução. Leia o código da função test() para mais informações. """ from scipy import misc # Opening and clossing import matplotlib.pyplot as plt # Showing the images import numpy as np # Image manipulation as nparray from scipy import ndimage # Gaussian filter # ------------------------------------------------------------------------------ ## Atributes # ------------------------------------------------------------------------------ noise_names = ('uniform', 'gaussian', 'rayleight', 'exponential', 'gamma', 'salt_and_pepper') """str tuple: tuple with the possible noises that you can add to a image""" noise_params = {'uniform':{'low':0.0,'high':80.0,'amount':1.0}, 'gaussian':{'mean':5.0,'std':30.0,'amount':1.0}, 'rayleight':{'scale':20.0,'amount':1.0}, 'exponential':{'scale':5.0,'amount':1.0}, 'gamma':{'shape':1.0,'scale':8.0,'amount':1.0}, 'salt_and_pepper':{'s_vs_p':0.5,'amount':0.004}} """dict: for each possible noise, here there are the params and default values""" # ------------------------------------------------------------------------------ filter_names = ('gaussian', 'uniform', 'median', 'maximum', 'minimum', 'sharpening', 'percentile', 'wiener', 'sobel') """str tuple: tuple of all implemented filters, that can be used with filter_image""" filter_params = {'gaussian': {'sigma': 3}, 'uniform': {'size': 3}, 'median': {'size': 3}, 'maximum': {'size': 3}, 'minimum': {'size': 3}, 'sharpening': {'alpha': 30, 'filter_sigma': 1}, 'percentile': {'percentile':75,'size': 3}, 'wiener':{}, 'sobel':{}} """dict: for each possible filter, here there are the params and default values""" # ------------------------------------------------------------------------------ mathematical_morphologies_names = ('erosion', 'dilation', 'opening', 'closing', 'propagation', 'reconstruction', 'open/close', 'full_reconstruction') """str tuple: tuple of all implemented mathematical morphologie that you can apply""" mathematical_morphologies_options = ('binary','grey') """str tuple: tuple of option for each 
mathematical morphologie""" # ------------------------------------------------------------------------------ segmentation_names = ('histogram', 'spectral_clustering') """str tuple: tuple of all implemented segmentation methods""" segmentation_params = {'histogram':{'mode':'mean'}, 'spectral_clustering':{'qnt_clusters':4}} """dict: for each possible segmentation, here there are the params and default values""" # ------------------------------------------------------------------------------ ## Functions # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ ## Displaying images (2.6.2) # ------------------------------------------------------------------------------ def show_image(input_image,title=None, colorbar=False): if input_image.dtype == np.float64: min_value = min(input_image.ravel()) max_value = max(input_image.ravel()) else: min_value = np.iinfo(input_image.dtype).min max_value = np.iinfo(input_image.dtype).max im = plt.imshow(input_image, cmap=plt.cm.gray,clim=(min_value, max_value)) if colorbar: plt.colorbar(orientation='vertical') if not title is None: plt.title(title) return im def show_hist(input_image,title=None): if input_image.dtype == np.float64: min_value = min(input_image.ravel()) max_value = max(input_image.ravel()) else: min_value = np.iinfo(input_image.dtype).min max_value = np.iinfo(input_image.dtype).max plt.hist(input_image.ravel(), bins=256, range=(min_value, max_value), normed=True) if not title is None: plt.title(title) def show_image_and_hist(input_image,im_title=None, hist_title=None,colorbar=True): plt.subplot(2,1,1) im = show_image(input_image,im_title) plt.subplot(2,1,2) show_hist(input_image,hist_title) if colorbar: plt.colorbar(im, orientation='horizontal') def show_images_and_hists(input_images,titles=[],hist_titles=[],colorbar=True): qnt = len(input_images) if not titles == [] and hist_titles == []: hist_titles = ['Histogram of ' + title for title in titles] for i in range(qnt): plt.subplot(2,qnt,i+1) im = show_image(input_images[i],titles[i] if not titles == [] else None) plt.subplot(2,qnt,qnt+i+1) show_hist(input_images[i],hist_titles[i] if not hist_titles == [] else None) if colorbar: plt.colorbar(im, orientation='horizontal') # ------------------------------------------------------------------------------ ## Inserting Noise # ------------------------------------------------------------------------------ def insert_uniform_noise(input_image, low=0, high=80, amount=1.0): return (input_image + amount * np.random.uniform(low,high,input_image.shape)) def insert_gaussian_noise(input_image, mean=5, std=30, amount=1.0): return (input_image + amount * np.random.normal(mean,std,input_image.shape)) def insert_rayleight_noise(input_image, scale=20, amount=1.0): return (input_image + amount * np.random.rayleigh(scale,input_image.shape)) def insert_exponential_noise(input_image, scale=5, amount=1.0): return (input_image + amount * np.random.exponential(scale,input_image.shape)) def insert_gamma_noise(input_image, shape=1, scale=8, amount=1.0): return (input_image + amount * np.random.gamma(shape,scale,input_image.shape)) def insert_salt_and_pepper_noise(input_image,s_vs_p=0.5,amount=0.004): if input_image.dtype == np.float64: min_value = min(input_image.ravel()) max_value = max(input_image.ravel()) else: min_value = np.iinfo(input_image.dtype).min max_value = np.iinfo(input_image.dtype).max output_image = np.copy(input_image) # Salt mode num_salt = 
np.ceil(amount * input_image.size * s_vs_p) coords = [np.random.randint(0, i - 1, int(num_salt)) for i in input_image.shape] output_image[coords] = max_value # Pepper mode num_pepper = np.ceil(amount* input_image.size * (1. - s_vs_p)) coords = [np.random.randint(0, i - 1, int(num_pepper)) for i in input_image.shape] output_image[coords] = min_value return (output_image) def insert_noise(input_image, noise_type, show_result=False, *args, **kwargs): """ Insert a selected noise to a image. Parameters ---------- input_image : nparray Represents the image that you want to add noise. filter_type: str Must in one of: 'uniform', 'gaussian', 'rayleight', 'exponential', 'gamma', 'salt_and_pepper' show_result: Boolean If True, the result is plotted using matplotlib, default is False. *args: Arguments of the selected noise, see details for more information. **kwargs: The key arguments of the selected noise, see details for more information. Returns ------- nparray The image with the noise as the same format of the input. Details ------- Arguments for the noise and the default values: ================= ===================================== Filter Kwargs ================= ===================================== 'uniform' 'low':0.0,'high':80.0,'amount':1.0 'gaussian' 'mean':5.0,'str':30.0,'amount':1.0 'rayleight' 'scale':20.0,'amount':1.0 'exponential' 'scale':5.0,'amount':1.0 'gamma' 'shape':1.0,'scale':8.0,'amount':1.0 'salt_and_pepper' 's_vs_p':0.5,'amount':0.004 ================= ===================================== This details also are defined in this module as a argument. """ if noise_type not in noise_names: raise(NotImplemented) if noise_type == 'uniform': output_image = insert_uniform_noise(input_image,*args, **kwargs) elif noise_type == 'gaussian': output_image = insert_gaussian_noise(input_image,*args,**kwargs) elif noise_type == 'rayleight': output_image = insert_rayleight_noise(input_image,*args,**kwargs) elif noise_type == 'exponential': output_image = insert_exponential_noise(input_image,*args,**kwargs) elif noise_type == 'gamma': output_image = insert_gamma_noise(input_image,*args,**kwargs) elif noise_type == 'salt_and_pepper': output_image = insert_salt_and_pepper_noise(input_image,*args,**kwargs) if show_result: show_images_and_hists([input_image,output_image], titles=['Input', 'Output Image - %s%s\n%s - %s' % ("Noise: ", noise_type, str(args), str(kwargs))],colorbar=True) output_image = output_image.astype(input_image.dtype) # input format return output_image # ------------------------------------------------------------------------------ ## Image filtering # ------------------------------------------------------------------------------ def sharpenning_filter(input_image, alpha = 30, filter_sigma=1,show_result=False): filter_blurred_f = ndimage.gaussian_filter(input_image, filter_sigma) sharpened = input_image + alpha * (input_image - filter_blurred_f) if show_result: show_images_and_hists([input_image,sharpened],['Input', 'Sharpened']) return sharpened def sobel_filter(input_image): sx = ndimage.sobel(input_image, axis=0, mode='constant') sy = ndimage.sobel(input_image, axis=1, mode='constant') sob = np.hypot(sx, sy) return sob def apply_filter(input_image, filter_type, show_result=False, *args, **kwargs): """ Apply a selected filter to a image. 
Parameters ---------- input_image : nparray Represents the image to be filtered filter_type: str Must in one of: 'gaussian', 'uniform', 'median', 'maximum', 'minimum', 'sharpening', 'percentile', 'wiener', 'sobel' show_result: Boolean If True, the result is plotted using matplotlib, default is False. *args: Arguments of the selected filter, see details for more information. **kwargs: The key arguments of the selected filter, see details for more information. Returns ------- nparray The filtered image as the same format of the input. Details ------- Arguments for the filters and the default values: ============= =========================== Filter Kwargs ============= =========================== 'gaussian' sigma: 3 'uniform' size: 3 'median' size: 3 'maximum' size: 3 'minimum' size: 3 'sharpening' alpha: 30, filter_sigma: 1 'percentile' percentile: 75, size: 3 'wiener' NotImplement 'sobel' None ============= =========================== This details also are defined in this module as a argument. """ if filter_type not in filter_names: raise(NotImplemented) if filter_type == 'gaussian': output_image = ndimage.gaussian_filter(input_image, *args, **kwargs) elif filter_type == 'uniform': output_image = ndimage.uniform_filter(input_image, *args, **kwargs) elif filter_type == 'median': output_image = ndimage.median_filter(input_image, *args, **kwargs) elif filter_type == 'maximum': output_image = ndimage.maximum_filter(input_image, *args, **kwargs) elif filter_type == 'minimum': output_image = ndimage.minimum_filter(input_image, *args, **kwargs) elif filter_type == 'sharpening': output_image = sharpenning_filter(input_image, *args, **kwargs) elif filter_type == 'percentile': output_image = ndimage.percentile_filter(input_image, *args, **kwargs) elif filter_type == 'wiener': # TODO: finish the wiener filter raise(NotImplemented) elif filter_type == 'sobel': output_image = sobel_filter(input_image) if show_result: show_images_and_hists([input_image,output_image], titles=['Input', 'Output Image - %s%s\n%s - %s' % ("Filter: ", filter_type, str(args),str(kwargs))], colorbar=True) output_image = output_image.astype(input_image.dtype) # input format return output_image # -----------file------------------------------------------------------------------- ## Mathematical Morphologie # ------------------------------------------------------------------------------ def apply_math_morphologie(input_image, morph_type ,morph_op='binary', size=3, show_result=False): """ Apply a mathematical morphologie to a image. Parameters ---------- input_image : nparray Represents the image that you want to apply the morphologie morph_type: str Must be one of: 'erosion', 'dilation', 'opening', 'closing', 'propagation', 'reconstruction', 'open/close', 'full_reconstruction' morph_op: str 'binary' or 'grey', representing the two types of supported image size: int The morphologie structure is a matrix of ones with sizeXsize show_result: Boolean If True, the result is plotted using matplotlib, default is False. Returns ------- nparray The image after morphologia, as the same format of the input. Note ---- Some
csd.units) nt.assert_array_almost_equal(C_i, csd) def test_StepiCSD_01(self): """test using non-standard SI units 1""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(21) * 1E-4 * pq.m # source coordinates z_i = z_j # current source density magnitude C_i = np.zeros(z_i.size) * pq.A / pq.m**3 C_i[7:12:2] += np.array([-.5, 1., -.5]) * pq.A / pq.m**3 # source radius (delta, step) R_i = np.ones(z_i.size) * 1E-3 * pq.m # source height (cylinder) h_i = np.ones(z_i.size) * 1E-4 * pq.m # conductivity, use same conductivity for top layer (z_j < 0) sigma = 0.3 * pq.S / pq.m sigma_top = sigma # flag for debug plots plot = False # get LFP and CSD at contacts phi_j, C_i = get_lfp_of_cylinders(z_j, z_i, C_i, R_i, h_i, sigma, plot) step_input = { 'lfp': phi_j * 1E3 * pq.mV / pq.V, 'coord_electrode': z_j, 'diam': R_i.mean() * 2, 'sigma': sigma, 'sigma_top': sigma, 'h': h_i, 'tol': 1E-12, # Tolerance in numerical integration 'f_type': 'gaussian', 'f_order': (3, 1), } step_icsd = icsd.StepiCSD(**step_input) csd = step_icsd.get_csd() self.assertEqual(C_i.units, csd.units) nt.assert_array_almost_equal(C_i, csd) def test_StepiCSD_02(self): """test using non-standard SI units 2""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(21) * 1E-4 * pq.m # source coordinates z_i = z_j # current source density magnitude C_i = np.zeros(z_i.size) * pq.A / pq.m**3 C_i[7:12:2] += np.array([-.5, 1., -.5]) * pq.A / pq.m**3 # source radius (delta, step) R_i = np.ones(z_i.size) * 1E-3 * pq.m # source height (cylinder) h_i = np.ones(z_i.size) * 1E-4 * pq.m # conductivity, use same conductivity for top layer (z_j < 0) sigma = 0.3 * pq.S / pq.m sigma_top = sigma # flag for debug plots plot = False # get LFP and CSD at contacts phi_j, C_i = get_lfp_of_cylinders(z_j, z_i, C_i, R_i, h_i, sigma, plot) step_input = { 'lfp': phi_j, 'coord_electrode': z_j * 1E3 * pq.mm / pq.m, 'diam': R_i.mean() * 2 * 1E3 * pq.mm / pq.m, 'sigma': sigma, 'sigma_top': sigma, 'h': h_i * 1E3 * pq.mm / pq.m, 'tol': 1E-12, # Tolerance in numerical integration 'f_type': 'gaussian', 'f_order': (3, 1), } step_icsd = icsd.StepiCSD(**step_input) csd = step_icsd.get_csd() self.assertEqual(C_i.units, csd.units) nt.assert_array_almost_equal(C_i, csd) def test_StepiCSD_03(self): """test using non-standard SI units 3""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(21) * 1E-4 * pq.m # source coordinates z_i = z_j # current source density magnitude C_i = np.zeros(z_i.size) * pq.A / pq.m**3 C_i[7:12:2] += np.array([-.5, 1., -.5]) * pq.A / pq.m**3 # source radius (delta, step) R_i = np.ones(z_i.size) * 1E-3 * pq.m # source height (cylinder) h_i = np.ones(z_i.size) * 1E-4 * pq.m # conductivity, use same conductivity for top layer (z_j < 0) sigma = 0.3 * pq.S / pq.m sigma_top = sigma # flag for debug plots plot = False # get LFP and CSD at contacts phi_j, C_i = get_lfp_of_cylinders(z_j, z_i, C_i, R_i, h_i, sigma, plot) step_input = { 'lfp': phi_j, 'coord_electrode': z_j, 'diam': R_i.mean() * 2, 'sigma': sigma * 1E3 * pq.mS / pq.S, 'sigma_top': sigma * 1E3 * pq.mS / pq.S, 'h': h_i, 'tol': 1E-12, # Tolerance in numerical integration 'f_type': 'gaussian', 'f_order': (3, 1), } step_icsd = icsd.StepiCSD(**step_input) csd = 
step_icsd.get_csd() self.assertEqual(C_i.units, csd.units) nt.assert_array_almost_equal(C_i, csd) def test_StepiCSD_units_04(self): """test non-continous z_j array""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(21) * 1E-4 * pq.m # source coordinates z_i = z_j # current source density magnitude C_i = np.zeros(z_i.size) * pq.A / pq.m**3 C_i[7:12:2] += np.array([-.5, 1., -.5]) * pq.A / pq.m**3 # source radius (delta, step) R_i = np.ones(z_i.size) * 1E-3 * pq.m # source height (cylinder) h_i = np.ones(z_i.size) * 1E-4 * pq.m # conductivity, use same conductivity for top layer (z_j < 0) sigma = 0.3 * pq.S / pq.m sigma_top = sigma # flag for debug plots plot = False # get LFP and CSD at contacts phi_j, C_i = get_lfp_of_cylinders(z_j, z_i, C_i, R_i, h_i, sigma, plot) inds = np.delete(np.arange(21), 5) step_input = { 'lfp': phi_j[inds], 'coord_electrode': z_j[inds], 'diam': R_i[inds] * 2, 'sigma': sigma, 'sigma_top': sigma, 'h': h_i[inds], 'tol': 1E-12, # Tolerance in numerical integration 'f_type': 'gaussian', 'f_order': (3, 1), } step_icsd = icsd.StepiCSD(**step_input) csd = step_icsd.get_csd() self.assertEqual(C_i.units, csd.units) nt.assert_array_almost_equal(C_i[inds], csd) def test_SplineiCSD_00(self): """test using standard SI units""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(21) * 1E-4 * pq.m # source coordinates z_i = z_j # current source density magnitude C_i = np.zeros(z_i.size) * pq.A / pq.m**3 C_i[7:12:2] += np.array([-.5, 1., -.5]) * pq.A / pq.m**3 # source radius (delta, step) R_i = np.ones(z_i.size) * 1E-3 * pq.m # source height (cylinder) h_i = np.ones(z_i.size) * 1E-4 * pq.m # conductivity, use same conductivity for top layer (z_j < 0) sigma = 0.3 * pq.S / pq.m sigma_top = sigma # construct interpolators, spline method assume underlying source # pattern generating LFPs that are cubic spline interpolates between # contacts so we generate CSD data relying on the same assumption f_C = interp1d(z_i, C_i, kind='cubic') f_R = interp1d(z_i, R_i) num_steps = 201 z_i_i = np.linspace(float(z_i[0]), float( z_i[-1]), num_steps) * z_i.units C_i_i = f_C(np.asarray(z_i_i)) * C_i.units R_i_i = f_R(z_i_i) * R_i.units h_i_i = np.ones(z_i_i.size) * np.diff(z_i_i).min() # flag for debug plots plot = False # get LFP and CSD at contacts phi_j, C_i = get_lfp_of_cylinders(z_j, z_i_i, C_i_i, R_i_i, h_i_i, sigma, plot) spline_input = { 'lfp': phi_j, 'coord_electrode': z_j, 'diam': R_i * 2, 'sigma': sigma, 'sigma_top': sigma, 'num_steps': num_steps, 'tol': 1E-12, # Tolerance in numerical integration 'f_type': 'gaussian', 'f_order': (3, 1), } spline_icsd = icsd.SplineiCSD(**spline_input) csd = spline_icsd.get_csd() self.assertEqual(C_i.units, csd.units) nt.assert_array_almost_equal(C_i, csd, decimal=3) def test_SplineiCSD_01(self): """test using standard SI units, deep electrode coordinates""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(10, 31) * 1E-4 * pq.m # source coordinates z_i = z_j # current source density magnitude C_i = np.zeros(z_i.size) * pq.A / pq.m**3 C_i[7:12:2] += np.array([-.5, 1., -.5]) * pq.A / pq.m**3 # source radius (delta, step) R_i = np.ones(z_i.size) * 1E-3 * pq.m # source height (cylinder) h_i = np.ones(z_i.size) * 
1E-4 * pq.m # conductivity, use same conductivity for top layer (z_j < 0) sigma = 0.3 * pq.S / pq.m sigma_top = sigma # construct interpolators, spline method assume underlying source # pattern generating LFPs that are cubic spline interpolates between # contacts so we generate CSD data relying on the same assumption f_C = interp1d(z_i, C_i, kind='cubic') f_R = interp1d(z_i, R_i) num_steps = 201 z_i_i = np.linspace(float(z_i[0]), float( z_i[-1]), num_steps) * z_i.units C_i_i = f_C(np.asarray(z_i_i)) * C_i.units R_i_i = f_R(z_i_i) * R_i.units h_i_i = np.ones(z_i_i.size) * np.diff(z_i_i).min() # flag for debug plots plot = False # get LFP and CSD at contacts phi_j, C_i = get_lfp_of_cylinders(z_j, z_i_i, C_i_i, R_i_i, h_i_i, sigma, plot) spline_input = { 'lfp': phi_j, 'coord_electrode': z_j, 'diam': R_i * 2, 'sigma': sigma, 'sigma_top': sigma, 'num_steps': num_steps, 'tol': 1E-12, # Tolerance in numerical integration 'f_type': 'gaussian', 'f_order': (3, 1), } spline_icsd = icsd.SplineiCSD(**spline_input) csd = spline_icsd.get_csd() self.assertEqual(C_i.units, csd.units) nt.assert_array_almost_equal(C_i, csd, decimal=3) def test_SplineiCSD_02(self): """test using non-standard SI units""" # set some parameters for ground truth csd and csd estimates., e.g., # we will use same source diameter as in ground truth # contact point coordinates z_j = np.arange(21)
the given ATRS $\atrsone$. The construction of the tree grammar $\tgone$ itself closely follows the algorithm outlined by \citet{Jones:TCS:07}. Recall that the \nth{$i$} rule $l_i \to r_i \in \atrsone$ constitutes dead code if the \nth{$i$} component $Z_i$ of the collecting semantics of $\atrsone$ is empty, by Lemma~\eref{l:colsem:aux}{deadcode}. Based on the constructed tree grammar, the implementation identifies rule $l_i \to r_i$ as dead code when $\tgone$ does not define a production $\RL{i} \to \termone$ and thus $Z_i = \varnothing$. All such rules are eliminated, in accordance with Proposition~\ref{p:usablerules}. On the remaining rules, our implementation performs instantiation as follows. We suppose \emph{$\epsilon$-productions} $N \to M$, for non-terminals $M$, have been eliminated by way of a standard construction, preserving the set of terms generated from non-terminals in $\tgone$. Thus productions in $\tgone$ have the form $N \to \funone(\seq[k]{\termone})$. Fix a rule $l_i \to r_i \in \atrsone$. The primary goal of this stage is to get rid of head variables, with respect to the $\eta$-saturated ATRS $\etasaturate{\atrsone}$, thereby enabling uncurrying so that the ATRS $\atrsone$ can be brought into functional form. For all such head variables \lstinline[style=atrs,mathescape]|z|, then, we construct a set of binders \[ \{ {\lstinline[style=atrs,mathescape]|z|_i} \mapsto \mathsf{fresh}(\funone(\seq[k]{\termone})) \mid {\lstinline[style=atrs,mathescape]|z|_i} \to \funone(\seq[k]{\termone}) \in \tgone \} \tkom \] where the function $\mathsf{fresh}$ replaces non-terminals by fresh variables, discarding binders whose right-hand side contains defined symbols. For variables \lstinline[style=atrs,mathescape]|z| which do not occur in head positions, we construct such a binder only if the production ${\lstinline[style=atrs,mathescape]|z|_{i}} \to \funone(\seq[k]{\termone})$ is unique. With respect to the tree grammar of Figure~\ref{fig:cfa}, for the head variables \lstinline[style=atrs,mathescape]|f| and \lstinline[style=atrs,mathescape]|g| of rule~\ref{atrsrev':C1} the implementation generates the binders \begin{align*} \{ {\lstinline[style=atrs,mathescape]|f|_1} \mapsto {\lstinline[style=atrs,mathescape]!C2!}, {\lstinline[style=atrs,mathescape]|f|_1} \mapsto {\lstinline[style=atrs,mathescape]!C1(f',C3(x'))!} \} \text{ and } \{ {\lstinline[style=atrs,mathescape]|g|_1} \mapsto {\lstinline[style=atrs,mathescape]!C3(x'')!} \} \tpkt \end{align*} The product-combination of all such binders then gives a set of substitutions $\{\sigma_{i,1},\dots,\sigma_{i,i_k}\}$ that leads to sufficiently many instantiations $l_i\sigma_{i,j} \to r_i\sigma_{i,j}$ of rule $l_i \to r_i$, by Lemma~\eref{l:colsem:aux}{instantiate}. Our implementation replaces every rule $l_i \to r_i \in \atrsone$ by instantiations constructed this way. The definition of binders was chosen to keep the number of computed substitutions minimal, and hence the generated head-variable-free ATRS small. Putting things together, we see that the instantiation is sufficiently exhaustive, and thus the overall transformation is complexity reflecting and preserving by Theorem~\ref{t:instantiate}. By \lstinline[style=strategy,mathescape]|cfaDCE| we denote the variation of \lstinline[style=strategy,mathescape]|cfa| that performs dead code elimination, but no instantiations.
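The product-combination step is easy to picture in isolation. The following is a minimal Python sketch (purely illustrative; it is not \hoca's actual implementation, which operates on proper term representations rather than strings) of how the binder sets above combine into substitutions.

\begin{lstlisting}[language=Python]
from itertools import product

def instantiation_substitutions(binders):
    """Combine per-variable binder sets into full substitutions.

    `binders` maps each variable to its set of candidate right-hand sides
    computed from the tree grammar.  One substitution (a dict) is produced
    per element of the cartesian product, mirroring the product-combination
    described above.
    """
    variables = sorted(binders)
    for choice in product(*(binders[v] for v in variables)):
        yield dict(zip(variables, choice))

# The two binder sets computed for the rule discussed above yield two substitutions:
subs = list(instantiation_substitutions(
    {"f_1": ["C2", "C1(f',C3(x'))"], "g_1": ["C3(x'')"]}))
# [{'f_1': 'C2', 'g_1': "C3(x'')"},
#  {'f_1': "C1(f',C3(x'))", 'g_1': "C3(x'')"}]
\end{lstlisting}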
\subsection{Combining Transformations}\label{s:strategy} \begin{figure} \centering \begin{lstlisting}[style=strategy,frame=single]
simplify = simpATRS; toTRS; simpTRS
  where
    simpATRS = exhaustive inline(lambda-rewrite);
               exhaustive inline(match);
               exhaustive inline(constructor);
               usableRules
    toTRS    = cfa; uncurry; usableRules
    simpTRS  = exhaustive ((inline(decreasing); usableRules) <> cfaDCE)
\end{lstlisting} \shortv{\nocaptionrule}\caption{Transformation Strategy in \protect\hoca.}\label{fig:strategy} \end{figure} We have now seen all the building blocks underlying our tool \hoca.\@ But \emph{in which order} should we apply the introduced program transformations? In principle, one could try to blindly iterate the proposed techniques and hope that a FOP can cope with the output. Since transformations are closed under composition, the blind iteration of transformations is sound, although seldom effective. In short, a \emph{strategy} is required that combines the proposed techniques in a sensible way. There is no clear notion of a perfect strategy. After all, we are interested in non-trivial program properties. However, it is clear that any sensible strategy should at least (i) yield overall a transformation that is effectively computable, (ii)~not defeat its purpose by generating TRSs whose runtime complexity bears no relation to the complexity of the analysed program, and (iii)~produce ATRSs that FOPs are able to analyse. In Figure~\ref{fig:strategy} we render the current transformation strategy underlying our tool \hoca.\@ More precisely, Figure~\ref{fig:strategy} defines a transformation \lstinline[style=strategy,mathescape]|simplify| based on the following \emph{transformation combinators}: \begin{varitemize} \item \lstinline[style=strategy,mathescape]|$\transone_1$;$\transone_2$| denotes the composition $\transone_2 \circ \underline{\transone_1}$, where $\underline{\transone_1}(\atrsone) = \transone_1(\atrsone)$ if defined and $\underline{\transone_1}(\atrsone) = \atrsone$ otherwise; \item the transformation \lstinline[style=strategy,mathescape]|exhaustive $\transone$| iterates the transformation $\transone$ until it is inapplicable on the current problem; and \item the operator \lstinline[style=strategy,mathescape]|<>| implements left-biased choice: \lstinline[style=strategy,mathescape]|$\transone_1$ <> $\transone_2$| applies transformation $\transone_1$ if successful, otherwise $\transone_2$ is applied. \end{varitemize} It is easy to see that all three combinators preserve the two crucial properties of transformations, viz., complexity reflection and complexity preservation. The transformation \lstinline[style=strategy,mathescape]|simplify| depicted in Figure~\ref{fig:strategy} is composed of three transformations \lstinline[style=strategy,mathescape]|simpATRS|, \lstinline[style=strategy,mathescape]|toTRS| and \lstinline[style=strategy,mathescape]|simpTRS|, each itself defined from the transformations \lstinline[style=strategy,mathescape]|inline($P$)| and \lstinline[style=strategy,mathescape]|cfa| described in Sections~\ref{s:impl:inline} and~\ref{s:impl:cfa}, respectively, the transformation \lstinline[style=strategy,mathescape]|usableRules| which implements the aforementioned computationally cheap, unification-based criterion from~\cite{GTSK:FROCOS:05} to eliminate dead code (see Section~\ref{s:simpl:de}), and the transformation \lstinline[style=strategy,mathescape]|uncurry|, which implements the uncurrying transformation from Section~\ref{s:simpl:uncurrying}.
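To make the operational reading of these combinators concrete, the following Python sketch models a transformation as a function that returns the transformed problem, or None when it is not applicable. The sketch is purely illustrative, and the function names in the final comment are placeholders rather than \hoca\ identifiers.

\begin{lstlisting}[language=Python]
from typing import Callable, Optional, TypeVar

A = TypeVar("A")                      # an (A)TRS simplification problem
Trans = Callable[[A], Optional[A]]    # None signals "not applicable"

def seq(t1: Trans, t2: Trans) -> Trans:
    """t1; t2 : apply t1 if it succeeds (otherwise keep the input), then t2."""
    def run(p):
        q = t1(p)
        return t2(q if q is not None else p)
    return run

def exhaustive(t: Trans) -> Trans:
    """exhaustive t : iterate t until it is no longer applicable."""
    def run(p):
        while (q := t(p)) is not None:
            p = q
        return p                      # succeeds, possibly after zero applications
    return run

def choice(t1: Trans, t2: Trans) -> Trans:
    """t1 <> t2 : left-biased choice, try t1 first and fall back to t2."""
    def run(p):
        q = t1(p)
        return q if q is not None else t2(p)
    return run

# Schematically, the last line of the strategy in the figure then reads:
# simpTRS = exhaustive(choice(seq(inline_decreasing, usable_rules), cfa_dce))
\end{lstlisting}

Since the combinators only compose given transformations, complexity reflection and complexity preservation of the components carry over unchanged, as noted above.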
The first transformation in our chain, \lstinline[style=strategy,mathescape]|simpATRS|, performs inlining driven by the specific shape of the input ATRS obtained by defunctionalisation, followed by syntax-driven dead code elimination. The transformation \lstinline[style=strategy,mathescape]|toTRS| will then translate the intermediate ATRSs to functional form by the uncurrying transformation, using control flow analysis to instantiate head variables sufficiently and further eliminate dead code. The transformation \lstinline[style=strategy,mathescape]|simpTRS| then simplifies the obtained TRS by controlled inlining, applying syntax-driven dead code elimination where possible and resorting to the more expensive version based on control flow analysis in case the simplification stalls. To understand the sequencing of transformations in \lstinline[style=strategy,mathescape]|simpTRS|, observe that the strategy \lstinline[style=strategy,mathescape]|inline(decreasing)| is interleaved with dead code elimination. Dead code elimination, both in the form of \lstinline[style=strategy,mathescape]|usableRules| and \lstinline[style=strategy,mathescape]|cfaDCE|, potentially restricts the set $\narrowings{p}{l \to r}$, and might in consequence facilitate the transformation \lstinline[style=strategy,mathescape]|inline(decreasing)|. Importantly, the rather expensive, flow-analysis-driven dead code analysis is only performed in case both \lstinline[style=strategy,mathescape]|inline(decreasing)| and its cheaper cousin \lstinline[style=strategy,mathescape]|usableRules| fail. \shortlongv{ The overall strategy \lstinline[style=strategy,mathescape]|simplify| is well-defined on all inputs obtained by defunctionalisation, i.e., terminating~\cite{EV}. }{ To see termination, it suffices to realize that all exhaustive applications of transformations in \lstinline[style=strategy,mathescape]|simplify| are terminating: \begin{varitemize} \item For \lstinline[style=strategy,mathescape]|inline(match)| this claim is immediate by the shape of input ATRSs. Each application of \lstinline[style=strategy,mathescape]|inline(match)| removes one occurrence of a closure-constructor obtained from the transformation of a match-expression in right-hand sides. \item Similarly, exhaustive application of \lstinline[style=strategy,mathescape]|inline(constructor)| is terminating, since at each step the number of defined symbols in right-hand sides is reduced. \item For iterated application of \lstinline[style=strategy,mathescape]|inline(lambda-rewrite)| the claim is less obvious. Intuitively, termination holds because the rewritings performed on right-hand sides correspond to steps with respect to a very restricted fragment of \PCF, which is itself terminating: the simply typed $\lambda$-calculus. Note that the restriction to rewrites is essential: as soon as we allow inlining by narrowing, termination is not guaranteed. \item Concerning the final case, by way of contradiction suppose that \[ \lstinline[style=strategy,mathescape]{(inline(decreasing); usableRules) <> cfaDCE} \tkom \] is applied infinitely often. Dead code elimination cannot be the culprit; hence \lstinline[style=strategy,mathescape]|inline(decreasing)| must then be applied infinitely often. In such a sequence, the case \emph{proper inlining} underlying the definition of the predicate \lstinline[style=strategy,mathescape]|decreasing| cannot hold infinitely often, as the number of defined symbols in right-hand sides decreases after each application.
Hence ultimately, an infinite application of \lstinline[style=strategy,mathescape]|inline(decreasing)| has to happen due to the \emph{size decreasing} condition. But in such a sequence, the multiset of sizes of right-hand sides is decreasing with respect to the multiset extension of the strict order $>$ on naturals, which itself is well-founded. Contradiction! \end{varitemize} } Although we cannot give precise bounds on the runtime complexity in general, in practice the number of applications of inlinings is sufficiently controlled to be of practical relevance. Importantly, the way inlining and instantiation is employed ensures that the sizes of all intermediate TRSs are kept under tight control. \section{Introduction} \label{s:intro} \usetikzlibrary{calc} \usetikzlibrary{backgrounds} \newcommand{\ibox}[5][ibox]{ \def#4*0.25{#4*0.25} \coordinate (#1) at #2 {}; \coordinate (#1-east) at ($#2 + (0.5*#3,0)$) {}; \coordinate (#1-west) at ($#2 + (-0.5*#3,0)$) {}; \coordinate (#1-northwest) at ($#2 + (-0.5*#3,0.5*#4)$) {}; \coordinate (#1-northeast) at ($#2 + (0.5*#3,0.5*#4)$) {}; \coordinate (#1-southwest) at ($#2 + (-0.5*#3,-0.5*#4)$) {}; \coordinate (#1-southeast) at ($#2 + (0.5*#3,-0.5*#4)$) {}; \draw[fill=white] (#1-northwest) -- (#1-northeast) -- ($(#1-southeast)+(0,#4*0.25)$) -- ($(#1-southeast)+(-#4*0.25,0)$) -- (#1-southwest) -- (#1-northwest); \draw[fill=black!10!white] ($(#1-southeast)+(0,#4*0.25)$) -- ($(#1-southeast)+(-#4*0.25,0)$) -- ($(#1-southeast)+(-#4*0.25,#4*0.25)$) -- ($(#1-southeast)+(0,#4*0.25)$); \node[anchor=north west,inner sep=2pt, txtlabel] (#1-text) at (#1-northwest) {\scriptsize\textsf{#5}}; } Automatically checking programs for correctness has attracted the attention of the computer science research community since the birth of the discipline. Properties of interest are not necessarily functional, however, and among the non-functional ones, noticeable cases are bounds on the amount of resources (like time, memory and power) programs need when executed. Deriving upper bounds on the resource consumption of programs is indeed of paramount importance in many cases, but becomes undecidable as soon as the underlying programming language is non-trivial. If the units of measurement become concrete and close to the
actually want is the last element of the steal array. Remember our magic step for this problem? It was that when deciding whether to steal from house i, all we had to do was compare loot[i] + (most you could steal from house i+2 on) and (most you could steal from house i+1 on). But note that steal[i] tells us the best we can do from the last i+1 houses. This suggests we can find steal[i] using the following technique:

# we are trying to decide whether to rob the house with loot[-i-1] or not
steal[i] = max( loot[-i-1] + steal[i - 2], steal[i - 1] )

This won’t give us steal[0] or steal[1], but these are our base cases. Note how much simpler our code becomes:

# a dynamic programming solution
def houseRobber(loot):
    if len(loot) == 0:
        return 0
    if len(loot) == 1:
        return loot[0]
    # go from the end back to the beginning
    loot.reverse()
    # the base cases: take everything in the only house, or
    # rob the house with more valuables
    steal = [loot[0], max(loot[0], loot[1])]
    for currValue in loot[2:]:
        # steal[-2] and steal[-1] are the second-to-last and last
        # elements of steal
        take = currValue + steal[-2]
        leave = steal[-1]
        steal.append(max(take, leave))
    return steal[-1]

This is much tidier, but we can reduce this code even more. Notice that steal[i] depends only on the current value of the loot and the last two values of steal. This means we don’t need to keep the whole steal array. Instead, we just need to keep track of the last two entries.

Warning: At the moment, we are still going to accomplish our goal of finding the maximum value that our robber could take. If you want to be able to produce a list of houses for the robber to pilfer to achieve that maximum, the steal array is incredibly useful: there is a “backtracking” algorithm to reconstruct which houses the robber should steal from, and a sketch of it is shown a bit further below, just before the non-reversed version of the code. The optimized version below reduces the space complexity of our solution from O(N) to O(1), but at the cost of the ability to efficiently reconstruct the list of houses that have been hit. Our new code is:

def houseRobber(loot):
    if len(loot) == 0:
        return 0
    if len(loot) == 1:
        return loot[0]
    # go from the end back to the beginning
    loot.reverse()
    # the base cases: take everything in the only house, or
    # rob the house with more valuables
    oldBest, newBest = loot[0], max(loot[0], loot[1])
    for currValue in loot[2:]:
        # oldBest and newBest play the roles of steal[-2] and steal[-1]
        take = currValue + oldBest  # oldBest is the same as leaving the most recent house
        oldBest = newBest
        newBest = max(take, oldBest)
    return newBest

So far we have implemented the bottom-up technique literally. We have made our way from the “end of the street” and worked backward. However, all that matters in this problem is which houses are next to each other. There isn’t actually a reason to reverse the houses: we could just as well consider steal[i] as being the most we could steal from the first i+1 houses (instead of the last i+1 houses). The only reason we did it that way is that it’s a little easier to conceptualize “running out of houses” at the end of the list.
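Here is the backtracking reconstruction promised above. This sketch is not from the original article; it rebuilds one optimal set of house numbers from the reversed-loot steal array (it assumes the loot values are non-negative, and the helper name is just for illustration):

def reconstructHouses(loot):
    "Return one optimal set of house indices to rob (illustrative sketch)"
    if len(loot) == 0:
        return []
    rev = list(reversed(loot))
    if len(rev) == 1:
        return [0]
    # rebuild the full steal array, exactly as in the version above
    steal = [rev[0], max(rev[0], rev[1])]
    for currValue in rev[2:]:
        steal.append(max(currValue + steal[-2], steal[-1]))
    # walk backward through steal: if the value did not change, house i was skipped;
    # otherwise house i was robbed and we jump two entries back
    chosen = []          # indices into the reversed list
    i = len(rev) - 1
    while i >= 2:
        if steal[i] == steal[i - 1]:
            i -= 1
        else:
            chosen.append(i)
            i -= 2
    if i == 1:
        chosen.append(1 if rev[1] > rev[0] else 0)
    elif i == 0:
        chosen.append(0)
    # translate reversed indices back into positions on the street
    return sorted(len(loot) - 1 - j for j in chosen)

The code that follows returns to the space-optimized robber, this time without reversing the loot at all.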
def houseRobber(loot):
    if len(loot) == 0:
        return 0
    if len(loot) == 1:
        return loot[0]
    # the base cases: take everything in the only house, or
    # rob the house with more valuables
    oldBest, newBest = loot[0], max(loot[0], loot[1])
    for currValue in loot[2:]:
        # oldBest and newBest play the roles of steal[-2] and steal[-1]
        take = currValue + oldBest  # oldBest is the same as leaving the most recent house
        oldBest = newBest
        newBest = max(take, oldBest)
    return newBest

We can make our problem look a little more like a standard problem with the following observation: if we initialize oldBest and newBest to zero and go through every house, then the first two passes through the loop set oldBest and newBest to the correct values. This simplifies our base cases and makes the problem more familiar:

# eliminate base cases and simplify
def houseRobber(loot):
    oldBest, newBest = 0, 0
    # go through every house
    for currValue in loot:
        take = currValue + oldBest
        oldBest = newBest
        newBest = max(take, oldBest)
    return newBest

## Similar problems

This problem is a recursively defined series in disguise, that is, a series where you calculate the next value using the previous values. One of the most famous examples is the Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, ... where the first two numbers are 0 and 1. After the first two numbers, we get the next number by summing the previous two numbers in the sequence. For example, the seventh number in the sequence is 8 because 5 + 3 = 8. The mathematical expression for the Fibonacci numbers F[n] is given by $F[n] = F[n-1] + F[n-2], \qquad n \geq 2$ where F[0] = 0 and F[1] = 1. This definition leads to a recursive function call:

# Warning: this function is O(2^n)
def fib(n):
    "Returns the nth Fibonacci number"
    if n == 0 or n == 1:
        return n
    return fib(n - 1) + fib(n - 2)

Because each call to fib(n) (except the base cases n=0 and n=1) calls fib two more times, we are not surprised to find an O(2^n) running time. We can memoize this function (and this is good practice!), but we can also write an iterative or dynamic programming solution:

# This is O(n)
def fib(n):
    "Returns the nth Fibonacci number in reasonable time"
    if n == 0 or n == 1:
        return n
    old, new = 0, 1
    for _ in range(n - 1):
        old, new = new, old + new
    return new

Note how similar the program is to houseRobber. Many problems that involve making a choice between two different paths, such as robbing house i or not, will give rise to these recursive sequences. This is a good pattern to know! The overall approach:

1. Usually we are iterating through our problem and have to decide whether or not to include the current element. In this case our decision was whether or not to rob a house, but in other instances it might be whether or not to pack a certain item, or whether we use a road to get to our destination.
2. Start by relating the problem to “smaller” instances of the problem. If it helps, write a recursive solution first (although usually these will have bad run times). Usually these smaller instances are related to the different possible choices you could have made. In this case, the smaller instances were “making the best value from the remaining houses”.
3. Look at how many of the “smaller” instances you have to use to solve the next instance. In this case, we only used the last two instances. This can help you write the solution as an iterative solution.

## Tell us…

Have you ever gotten a question like this in a technical interview? What strategy did you take to solve it?
Let us know on the CodeSignal forum!

## CodeSignal Explainer: Prefix Sums

Prefix sums, or cumulative sums, allow us to quickly find the sum of any contiguous slice of an array. As a quick example, suppose you copy down some driving directions. They tell you to drive down Penny Lane for 3 miles, then make a left onto Abbey Road, which you travel down for 5 miles before continuing on Lake Shore Drive for 4 miles, and so on. We convert these directions into the number of miles driven on each road.

# 3 miles on Penny Lane
# 5 miles on Abbey Road
# 4 miles on Lake Shore Drive
# 6 miles on Sunset Boulevard
# 2 miles where the streets have no name
distances = [3, 5, 4, 6, 2]

The prefix sum for the distances is:

prefix_distances = [3, 8, 12, 18, 20]

This tells us that it took 3 miles to get to the end of the first direction (Penny Lane), and 8 miles in total to get to the end of the second direction (Abbey Road). If we want to know how long it took to get from the end of Abbey Road (mile 8 of our trip) to the end of Sunset Boulevard (mile 18), we do one subtraction on prefix_distances rather than adding up the entries of distances in between:

# distance between end of Sunset Blvd
# Question about linear algebraic groups split vs isotropic I am reading notes on linear algebraic groups and I'm getting confused with some definitions and I would appreciate any clarification. They define $$G$$ to be split if there exists a maximal torus $$T$$ of $$G$$ that is split. Then later on they say: If there is no split torus contained in $$G$$ then $$G$$ is said to be anisotropic. Otherwise $$G$$ is said to be isotropic. If $$G$$ is isotropic then there exists a maximal torus $$T$$ contained in $$G$$ unique up to conjugation. 1) With this definition, to me it looks like split and isotropic mean the same thing... What am I missing here? 2) When $$G$$ is split we have the decomposition of the Lie algebra of $$G$$ as $$\mathfrak{g} = \mathfrak{t} \oplus \oplus_{\alpha \in \Phi(G,T)} \mathfrak{g}_{\alpha}$$ where $$\mathfrak{t}$$ is the Lie algebra of $$T$$ (and the rest with usual notation of roots of $$T$$ in $$G$$ with root spaces). But when isotropic we have $$\mathfrak{g} = \mathfrak{m} \oplus \oplus_{\alpha \in \Phi(G,T)} \mathfrak{g}_{\alpha}$$ where $$\mathfrak{m}$$ is the $$0$$ eigenspace. If someone could also explain (or provide me with some idea) me where this difference is coming from, I would greatly appreciate it. Thank you. PS Further clarification regarding the second question: when $$G$$ is split we have that the Lie algebra of $$G$$ has the decomposition $$\mathfrak{g} = \mathfrak{t} \oplus \oplus_{\alpha \in \Phi(G,T)} \mathfrak{g}_{\alpha}$$ where $$\mathfrak{t}$$ is the Lie algebra of $$T$$ and each $$\mathfrak{g}_{\alpha}$$ is $$1$$ dimensional. However in the situation between split and anisotropic, my understanding is that we don't have exactly the same situation. In the above notation we have $$\mathfrak{m}$$ is not (necessarily?) the Lie algebra of $$T$$ and each $$\mathfrak{g}_{\alpha}$$ is not (necessarily?) $$1$$ dimensional anymore. I guess I was hoping I could get some idea on why this happens to be the case... (Even though maybe the only difference is that when there is a maximal split torus the situation is "nice" and not as nice otherwise) The point is that being split is one extreme: $$G$$ contains a split maximal torus. Being anisotropic is the other extreme: $$G$$ contains no split torus. Being isotropic(which is not really a term I've ever heard one use, and I work with algebraic groups a lot) is just that it's somewhere between these two: $$G$$ contains a split torus, but perhaps no split maximal torus. So, for example: • The group $$\mathrm{GL}_n$$ over any field $$k$$ is split. Indeed it contains the diagonal split torus $$\mathbb{G}_m^n$$. Of course, be careful that this does not mean that every torus or even maximal torus in $$\mathrm{GL}_n$$ is split. In fact, the maximal tori in $$\mathrm{GL}_n$$ are of the form $$\displaystyle \prod_{i=1}^m \mathrm{Res}_{L_i/k}\mathbb{G}_{m,L_i}$$ where $$L_i/k$$ are finite separable extensions and $$\displaystyle \sum_i^m [L_i:k]=n$$. • Let $$D$$ be a central division algebra over $$k$$. Consider the reductive group over $$k$$, usually denoted $$D^\times$$, given by sending a $$k$$-algebra $$R$$ to $$(R\otimes_k D)^\times$$. This group has a split connected center: $$Z(D^\times)\cong\mathbb{G}_m$$. The group $$D^\times/Z(D^\times)$$ is anisotropic. • Let $$D$$ be as in the last example and and assume that $$\dim_k D>1$$. Then, $$D^\times$$ is isotropic but not split. 
Indeed, $$D^\times$$ is not anisotropic since it contains the split torus $$Z(D^\times)=\mathbb{G}_m$$ but it is not split (or even quasi-split). Indeed, if $$D^\times$$ were split then the same would be true of $$D^\times/Z(D^\times)$$ but the latter being anisotropic and split would imply that $$D^\times/Z(D^\times)$$ has a maximal torus of rank $$0$$. By reductiveness this implies that $$D^\times/Z(D^\times)$$ is trivial, which is absurd. More concretely, the dimension of a maximal split torus (i.e. the rank) in $$D^\times$$ is $$n$$ since it's a form of $$\mathrm{GL}_n$$, and so the rank of $$D^\times/Z(D^\times)$$ is $$n-1$$. Since $$\dim_k D>1$$ we see that $$D^\times/Z(D^\times)$$ is anisotropic of positive rank so not split. As for your Lie algebra question, I'm not exactly sure what you have written there. Can you clarify? • Thank you for this very clear answer! Now it makes sense! I have added some clarification regarding the Lie algebra part. I would greatly appreciate any comment regarding this, if you happen to have any, as well! – Takeshi Gouda Apr 16 at 13:19 The other answer is very good. I just add some more examples, on the Lie algebra level because I'm more familiar with it. I leave it to you to write down the corresponding linear algebraic groups. However, I think your root space decomposition for the non-split case is a little off. I would expect something like $$\mathfrak{g} \simeq \mathfrak{z}(\mathfrak{a}) \oplus \bigoplus_{\lambda \in R(\mathfrak{a})} \mathfrak{g}_\lambda$$ where $$\mathfrak a$$ is a maximal split torus (I say "torus" for short instead of "toral subalgebra" even in the Lie algebra setting), and $$\mathfrak{z}(\mathfrak{a})$$ is its centraliser (= $$0$$-eigenspace), which probably is your $$\mathfrak{m}$$ (even though you should be aware that it's not the $$0$$-eigenspace of the entire maximal torus you call $$T$$). But then also the root system $$R(\mathfrak a)$$ is what is often called a "rational" root system, again not of the maximal torus $$T$$, but of the maximal split torus $$\mathfrak{a}$$. Such an $$R(\mathfrak a)$$ can be empty (in the anisotropic case) or non-reduced i.e. e.g. of type $$BC_n$$. And yes, the root spaces $$\mathfrak{g}_\lambda$$ can have dimension $$> 1$$. Namely, let's look at the following three Lie algebras over $$\Bbb R$$: $$\mathfrak{g}_1 = \mathfrak{sl}_3(\Bbb R) = \lbrace \begin{pmatrix} a & c & e\\ f & b & d\\ h & g & -a-b \end{pmatrix} : a, ..., h \in \mathbb{R} \rbrace$$; $$\mathfrak{g}_2 = \mathfrak{su}_{1,2} := \lbrace \begin{pmatrix} a+bi & c+di & ei\\ f+gi & -2bi & -c+di\\ hi & -f+gi & -a+bi \end{pmatrix} : a, ..., h \in \mathbb{R} \rbrace$$; $$\mathfrak{g}_3 = \mathfrak{su}_{3} := \lbrace \begin{pmatrix} ia & c+di & g+hi\\ -c+di & ib & e+fi\\ -g+hi & -e+fi & -ai-bi \end{pmatrix} : a, ..., h \in \mathbb{R} \rbrace$$. They are all simple and $$8$$-dimensional -- indeed, they all have isomorphic complexification $$(\mathfrak{g}_i)_\Bbb C = \Bbb C \otimes_\Bbb R \mathfrak{g}_i \simeq \mathfrak{sl}_3(\Bbb C)$$, meaning that they are "real forms" of $$\mathfrak{sl}_3$$, or expressed with root systems, real forms of type $$A_2$$. (And over $$\Bbb R$$, they are the only such forms up to isomorphism.) Indeed, in each of them, the diagonal matrices would form a maximal torus (or rather, toral subalgebra in the Lie algebra setting), which is two-dimensional; and its roots in the complexification form a root system of type $$A_2$$; but only for the first one is this torus a split torus. So $$\mathfrak{g}_1$$ is split. 
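For concreteness, in the split case the root space decomposition can be written out completely explicitly (this display is just the standard picture, added for illustration): taking $$\mathfrak{t}$$ to be the diagonal trace-zero matrices and $$E_{ij}$$ the matrix units, $$\mathfrak{g}_1 = \mathfrak{sl}_3(\Bbb R) = \mathfrak{t} \oplus \bigoplus_{i \neq j} \Bbb R\, E_{ij},$$ with the six roots $$\varepsilon_i - \varepsilon_j$$ ($$i \neq j$$), and every root space is $$1$$-dimensional, which is exactly the "nice" situation described in the question.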
On the other extreme, $$\mathfrak{g}_3$$ is anisotropic, as it contains no split torus $$\neq 0$$ at all: The only maximal split torus is $$\mathfrak a = 0$$, and $$R(0) = \emptyset$$; hence, $$\mathfrak{g}_3 = \mathfrak m = \mathfrak z(0)$$. These are the extreme cases. Now $$\mathfrak{g}_2$$ lies between them, so it would be "isotropic" in your nomenclature. (Indeed, this one has a much more special property called quasi-split, vaguely meaning that it's closer to being split than to being anisotropic, but there are more complicated examples of something that is not quasi-split and not anisotropic either.) In this answer I mentioned the maximal split tori of it, which have dimension $$1$$; the most obvious one being $$\mathfrak{a} = \begin{pmatrix} a & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -a \end{pmatrix}.$$ Notice that its $$0$$-eigenspace = centraliser (what you seem to call $$\mathfrak{m}$$) is $$\mathfrak{a} \oplus \mathfrak{t}_0$$ where $$\mathfrak{t}_0 = \begin{pmatrix} bi & 0 & 0\\ 0 & -2bi & 0\\ 0 & 0 & bi \end{pmatrix}$$; in this case (this is a special feature of quasi-split Lie algebras though and not true for all "isotropic" cases), this sum happens to be exactly a maximal (but "only half split") torus and becomes the standard maximal split = split maximal torus in the complexification $$(\mathfrak{g}_{2})_\mathbb{C} \simeq \mathfrak{sl}_3(\mathbb{C})$$. (In general, this $$\mathfrak
# Does a 1MOhm Resistor Provide any ESD Protection?

#### intoxicated_pilot Joined Feb 11, 2021 5

I have very limited knowledge of electricity, so please bear with me on this one....

So I understand that a 1MOhm resistor is included in ESD wrist straps as a standard safety measure. My question is: does this 1MOhm resistor serve any additional purpose in terms of ESD protection? I've heard an equal number of people claim that it does and that it doesn't. The theory I've heard is that the 1MOhm resistor slows the static discharge enough to prevent damage to ICs and other sensitive components.

For my application, I will be using two ESD mats (two-layer), each connected to ground without a 1MOhm resistor. I will also be connected to ground via a wrist strap with a 1MOhm resistor. Per this document from Desco: "ANSI/ESD S6.1 recommends that a non-resistor ground cord be used to ground worksurfaces and floor mats. However, cord may have a 1 megohm resistor for non-ESD purposes. Selection of the ground cord is determined by user needs and specifications." So my understanding is that the 1MOhm resistor is not required for the mats, simply because the top rubber layer provides more than enough resistance to slow the current?

Additionally, I'm wondering if connecting the chassis of the PC I'm building to ground via a 1MOhm resistor would provide additional protection (rather than grounding via the PSU connected straight to mains). Say, for example, the motherboard I'm installing happens to have a static charge built up; it will likely discharge through the case to ground. Assuming the 1MOhm resistor would slow the discharge, could this provide some additional protection?

Yes, I know this is a bit excessive in terms of ESD protection, but we all know that ESD can cause premature or intermittent failures that are difficult to troubleshoot, to say the least, and if I can do everything I can to prevent this on every system that I build (not cheap $), I'd rather take the additional steps. I don't intend to start the decades-old debate regarding the actual risks involved with ESD, but rather want to answer the question for my own understanding of the subject. Thanks!

#### MrChips Joined Oct 2, 2009 24,998

Providing ESD protection is essential. It is not being excessive. 1MΩ is a typical series resistance and it works. This resistor is not about the effectiveness of ESD grounding. It is there to protect personnel from being electrocuted in the event that the grounding strap inadvertently comes into contact with a high live voltage.

Grounding of electrical equipment to EARTH is required unless the equipment has a double layer of insulation from AC mains. If you are building your own equipment, a direct connection between the chassis and the EARTH connection is required (i.e. no resistors). In other words, in North America, the AC mains connection has three wires. The EARTH connection is the GREEN cable that is connected to the round pin on a standard 3-pin plug. The green wire must be bonded to the chassis of the PSU. Your computer must also have a connection from the computer chassis to the chassis of the PSU. The motherboard COMMON GROUND should be connected to this same EARTH ground. North America: BLACK - LINE VOLTAGE, WHITE - NEUTRAL, GREEN - EARTH

#### crutschow Joined Mar 14, 2008 28,507

Yes, the 1 megohm resistor is just to prevent electrocution of the person wearing the wrist strap. It has nothing to do with ESD protection.
Thread Starter
#### intoxicated_pilot Joined Feb 11, 2021 5

(quoting crutschow's and MrChips' replies above)

So I did some more research on the subject, and found a pretty detailed (but rather complicated) answer. I'll copy parts of his answer below. I guess I'm just trying to get a better explanation of why this theory is or isn't true. It's hard to pick a side when an equal number of people online are giving conflicting answers. But if you could provide any further explanation on this, it would be much appreciated. I like to understand the "why" side of things whenever possible. Thanks!

-------------
https://electronics.stackexchange.com/questions/265920/should-there-really-be-1-mΩ-resistance-between-an-anti-static-wrist-strap-and-a

"The reason for the acceptable range of 1M to 10M is to current limit static discharge for wrist straps. In addition it reduces current to live voltages."

"It does this by bleeding body charge slowly to the same potential of the case or gnd reference that the 1M resistor is clamped to, while static charge may be generated by motion or change in body capacitance with a fixed charge, V=Q/C."

"For Electrostatic Over Stress or EOS prevention all surfaces must be "Static Dissipative" to prevent rapid discharge."
------------------

#### MrChips Joined Oct 2, 2009 24,998

Sorry, I have not read anywhere that people are in disagreement about the purpose of the 1MΩ resistor.

1) For the purpose of ESD protection it does not matter whether the resistor is 0Ω or 10MΩ. The 0Ω reference is made as being similar to a person touching a piece of metal or a metal chassis that is grounded. Yes, you will feel a tingle or even observe an electrical discharge spark, but that does not occur here.

2) The purpose of the ESD wrist strap is to bring the body of the person wearing the strap to EARTH potential. If the person is wearing and using the ESD wrist strap correctly, the body will be at EARTH potential. Any physical activity such as shuffling feet on carpeted floors, combing hair, or changing clothing may attempt to cause a build-up of static electricity. If the body is properly grounded by continually wearing the ESD wrist strap, there will be no build-up of electrostatic charge. There is no discharge spark to observe because there is no high voltage.
The ESD wrist strap is to prevent the build-up of electrostatic charge that may cause harm to sensitive electronic components.

3) Comments made that the 1MΩ is there to discharge the high voltage slowly via a tiny bleed current are irrelevant. The 1MΩ resistor has nothing to do with the discharge rate. The purpose of the 1MΩ resistor is to protect the person from electrocution in the event that the grounding strap comes into contact with a continuous high-voltage source (which would likely be 120-240VAC mains). It limits the current to a safe value from voltage sources up to 5kV.

Summary: The purpose of the ESD wrist strap is to prevent build up of electrostatic charge that may cause damage to sensitive
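To put rough numbers on points 2) and 3) above, here is a quick back-of-the-envelope sketch in Python. The 100 pF body capacitance is just the usual human-body-model ballpark I'm assuming; it is not a figure from this thread.

```python
R = 1e6   # series resistance in the wrist-strap cord, ohms

# Personnel safety: fault current if the strap touches a live conductor.
for v in (120, 240, 5000):
    print(f"{v:>5} V across 1 Mohm -> {1e3 * v / R:.2f} mA")
# 0.12 mA, 0.24 mA and 5 mA respectively, consistent with "safe up to 5 kV" above.

# Static dissipation: body modelled as ~100 pF discharging through the strap.
C = 100e-12                # assumed body capacitance, farads
tau = R * C                # time constant = 100 microseconds
print(f"tau = {tau*1e6:.0f} us; essentially discharged after ~{5*tau*1e3:.1f} ms")
```

Either way the body reaches earth potential in well under a millisecond, which is consistent with the point above that the resistor is there for personnel safety rather than for the effectiveness of ESD grounding.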
discrepancy between the data around MJD = 57100--57350 and those around MJD = 56000 is the cause of the large rate of change shown for the OAOWFC data in Table \ref{tab:res}. If we assume that the faint magnitudes around MJD = 57100--57350 are caused by calibration problems and evaluate the rate of change since the second UKIDSS observation without these faint data, the rate of change is derived to be $-0.021\pm0.007$ mag yr$^{-1}$. This is consistent with the values derived from 2MASS and UKIDSS observations.
\begin{table}
\centering
\caption{Observed rates of color change.}
\label{tab:ccr}
\begin{tabular}{ccc}
\tablewidth{0pt}
\hline \hline
Object & Color & Rate of change \\
 & & (mag yr$^{-1}$)\\
\hline
OH25.1$-$0.3 & $H-K$ & $+0.009\pm0.010$ \\
 & $J-K$ & $-0.002\pm0.017$ \\
OH53.6$-$0.2 & $H-K$ & $+0.008\pm0.006$ \\
 & $J-K$ & $+0.013\pm0.006$ \\
OH77.9+0.2 & $H-K$ & $-0.002\pm0.006$ \\
 & $J-K$ & $+0.007\pm0.007$ \\
\hline
\end{tabular}
\end{table}
As for OH77.9+0.2, no periodic variability like that of OH25.1$-$0.3 is seen in the OAOWFC light curve. Although the overall magnitudes observed by OAOWFC are slightly fainter than those expected from the 2MASS-UKIDSS trend, the OAOWFC magnitudes are significantly brighter than the 2MASS magnitude. Since the brightening trend is seen only in the target light curve, it can be concluded that OH77.9+0.2 also brightened since the 2MASS era. The rate of change derived from the OAOWFC light curve is consistent with those derived from the 2MASS and UKIDSS observations, as in Table \ref{tab:res}. Therefore, the magnitude offset between the 2MASS-UKIDSS trend and the OAOWFC light curve may be caused by systematic error (see \S\ref{sec:oaowfcunc}). As shown in Table \ref{tab:res}, we have multi-color multi-epoch data for three objects, OH25.1$-$0.3, OH53.6$-$0.2, and OH77.9+0.2, enabling us to study their color changes. Figure~\ref{fig:colorchange} shows the time variations of $(H-K)$ and $(J-K)$ colors between the 2MASS and UKIDSS observations. The rates of the color change are summarized in Table \ref{tab:ccr}. Conventionally, we have expected that a non-variable OH/IR star becomes brighter and bluer as its mass-loss rate drops at the end of the AGB evolution. However, contrary to this picture, we find that the three non-variable OH/IR stars kept their colors constant or became redder over a few thousand days. This is the second finding of this study. OH25.1$-$0.3 and OH77.9+0.2 do not show a significant color change in either $(H-K)$ or $(J-K)$. On the other hand, the color of OH53.6$-$0.2 in the UKIDSS observation is redder than that in the 2MASS data. The significance levels of the color changes are 1.4 and 2.2$\sigma$ in $(H-K)$ and $(J-K)$, respectively.
\section{Origin of the Brightness Evolution} \label{sec:disc}
To investigate the origin of the observed long-term brightness evolution, we make a simple model based on the conventional picture and compare it with the observed rates of brightening and color change.
\subsection{A simple model}
In the conventional picture, it is thought that non-variable OH/IR stars have ceased the dust supply to the circumstellar dust shell, and the dust shell is expected to expand its inner cavity. Therefore, we assume a spherical dust shell created by a stellar wind with a constant expansion velocity, $v_{\rm exp}$, and a constant dust mass-loss rate, $\dot{M}_{\rm d}$, and that its inner radius, $r_{\rm in}$, is increasing with time, $t$, after the cessation of dust production.
At a wavelength, $\lambda$, the apparent magnitude of a star with an intrinsic apparent magnitude, $m_{\lambda,{\rm i}}$ is written as, \begin{equation} m_\lambda = m_{\lambda,{\rm i}} + A_{\lambda,{\rm c}} + A_{\lambda,{\rm i}}, \label{eq:mag} \end{equation} where $A_{\lambda,{\rm c}}$ and $A_{\lambda,{\rm i}}$ are the extinction caused by the circumstellar dust and interstellar dust, respectively. Here, we omit the thermal emission from the circumstellar dust shell, because it is difficult to model without mid- to far-infrared data simultaneously taken with the NIR data. This assumption is thought to be reasonable for a situation where enough time elapsed since the cessation of the dust supply and hot dust grains ($\gtrsim$700 K) disappeared (e.g. $t\gtrsim$10 yr for a star with $L=10^4 {\rm L}_{\sun}$ and $v_{\rm exp}=10$ km s$^{-1}$). If such hot dust grains exist in the dust shell, their thermal emission can contribute to the NIR brightness especially at longer wavelengths. Such a possible contribution of the dust emission will be addressed in \S\ref{sec:beyond}. $A_{\lambda,{\rm c}}$ is described as, \begin{equation} A_{\lambda,{\rm c}} = -2.5 \log{[\exp{(-\sigma_{\lambda} N)}]}\approx1.09\sigma_{\lambda} N, \label{eq:ext} \end{equation} where $\sigma_{\lambda}$ and $N$ are the absorption cross-section and the column density of dust grains in the dust shell. $\dot{M_{\rm d}}$ is related with the number density of dust grains, $n_{\rm d}$, as, \begin{equation} \dot{M_{\rm d}} = 4\pi r^2 v_{\rm exp} m_{\rm d} n_{\rm d}, \end{equation} where $r$ is the distance from the star, and $m_{\rm d}$ is the mass of a single dust particle. Since we assume a constant $v_{\rm exp}$ and a constant $\dot{M}_{\rm d}$, $n_{\rm d} r^2$ becomes a constant, $C=\dot{M_{\rm d}}(4\pi v_{\rm exp}m_{\rm d})^{-1}$. Then, $N$ is written as, \begin{equation} N = \int_{r_{\rm in}}^{r_{\rm out}} n_{\rm d}(r) dr = C\left(\frac{1}{r_{\rm in}}-\frac{1}{r_{\rm out}}\right), \end{equation} where $r_{\rm in}$ and $r_{\rm out}$ are the inner and outer radii of the dust shell. If we assume that $r_{\rm out}$ is sufficiently larger than $r_{\rm in}$, $N$ is approximated to be \begin{equation} N \approx \frac{C}{r_{\rm in}} = \frac{C}{r_{\rm in,0}+v_{\rm exp}t}, \label{eq:approxN} \end{equation} where $r_{\rm in,0}$ is the inner radius of the dust shell at $t$=0. By substituting Equation~(\ref{eq:approxN}) to Equation~(\ref{eq:ext}) and introducing a crossing-time parameter, $t_{\rm cross}$ = $r_{\rm in,0}/v_{\rm exp}$, we get an expression of $A_{\lambda,{\rm c}}$ as, \begin{equation} A_{\lambda,{\rm c}}(t) = \frac{A_{\lambda,{\rm c}}(0)}{1+t/t_{\rm cross}}. \label{eq:ext-last} \end{equation} If we assume that the properties of the central star and interstellar extinction (i.e. $m_{\lambda, {\rm i}}$ and $A_{\lambda,{\rm i}}$) are constant, the rate of change of the apparent magnitude is described as, \begin{equation} \frac{dm_\lambda}{dt}\left(t\right)=\frac{dA_{\lambda,{\rm c}}}{dt} = -\frac{A_{\lambda,{\rm c}}(0)}{t_{\rm cross}\left(1+t/t_{\rm cross}\right)^{2}}. \label{eq:CR} \end{equation} Another important observable is the rates of color change. In the model, the color in a wavelength range of $\lambda$--$\lambda'$ is, \begin{equation} C_{\lambda\lambda'} = C_{\lambda\lambda',{\rm i}} + E_{\lambda\lambda',{\rm c}} + E_{\lambda\lambda',{\rm i}}, \label{eq:CCEE} \end{equation} where $C_{\lambda\lambda',{\rm i}}$ is the intrinsic color of the central star. 
$E_{\lambda\lambda',{\rm c}}$ and $E_{\lambda\lambda',{\rm i}}$ are the color excess in $\lambda$--$\lambda'$ caused by the circumstellar and interstellar dust, respectively. If we assume that the stellar properties and interstellar extinction (i.e. $C_{\lambda\lambda',{\rm i}}$ and $E_{\lambda\lambda',{\rm i}}$) are constant, the rate of color change is described by using Equations~(\ref{eq:ext}) and (\ref{eq:CCEE}) as, \begin{equation} \frac{dC_{\lambda\lambda'}}{dt}\left(t\right) = \left(\frac{\sigma_{\lambda}}{\sigma_{\lambda'}}-1\right)\frac{dA_{\lambda',c}}{dt}. \label{eq:dct} \end{equation} The wavelength dependence of the absorption cross-section is often expressed as a power-law function. If the power-law index is written as $p$ $(\sigma_\lambda\propto\lambda^{p})$, the rate of color change is written as, \begin{equation} \frac{dC_{\lambda\lambda'}}{dt}\left(t\right) = \left[\left(\frac{\lambda}{\lambda'}\right)^{p}-1\right]\frac{dA_{\lambda',c}}{dt}. \label{eq:dct2} \end{equation} We compare the observed results with this model in the following sections. \subsection{{\it K}-band brightening rate}\label{sec:disc-brighteningrate} The {\it K}-band brightening rate can be calculated based on Equation~(\ref{eq:CR}) by substituting appropriate values of the crossing time and initial {\it K}-band circumstellar extinction, $A_{K,{\rm c}}(0)$, for $t_{\rm cross}$ and $A_{\lambda,{\rm c}}(0)$, respectively. According to the SED analysis by \citet{1992ApJ...389..400J}, the inner radii of OH/IR stars are 1.3--15$\times10^{12}$ m, and the expansion velocities are in a range of 2.3--24.2 km s$^{-1}$. The combinations of these values lead to crossing times of 2.8--24 yr. The actual expansion velocities of the six objects can be estimated from the velocity differences of the OH-maser emission peaks. Based on the observations by \citet{HH85}, their expansion velocities are estimated to be 11.0--14.7 km s$^{-1}$ and are actually in the parameter range above. The optical depth at 9.7 $\mu$m, $\tau_{9.7}$, the peak of the silicate feature, is also given by \citet{1992ApJ...389..400J} and ranges from 0.03 to 19.6. The extinction in the {\it K} band can be estimated from the 9.7-$\mu$m optical depth to be $\lesssim$10 mag by assuming the silicate opacity proposed for OH/IR stars by \citet{1999MNRAS.304..389S}. \begin{figure} \epsscale{1.15} \plotone{Fig13.pdf} \vspace{-0.2cm} \caption{Comparison of {\it K}-band brightening rates between observation and model. Horizontal axis shows the elapsed time from the cessation of dust production. Thin solid and dashed curves show models with initial extinction $A_{K, {\rm c}}(0)$ = 10 mag and crossing time $t_{\rm cross}$ = 2.8 and 24 yr, respectively. Thick solid line shows the maximum rates under the assumption of $A_{K, {\rm c}}(0)$ = 10 mag. Shaded region shows the expected range in the model. {\it K}-band
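For orientation only, Equation~(\ref{eq:CR}) can be evaluated directly for the bracketing parameters quoted above ($A_{K,{\rm c}}(0)=10$ mag, $t_{\rm cross}=2.8$ and 24 yr). The short script below is an illustrative sketch and is not part of the analysis of this paper.
\begin{verbatim}
# dm/dt = -A_Kc(0) / [ t_cross * (1 + t/t_cross)^2 ]   (Eq. eq:CR)
A0 = 10.0                        # assumed initial K-band circumstellar extinction [mag]
for t_cross in (2.8, 24.0):      # bracketing crossing times [yr]
    for t in (0.0, 10.0, 50.0):  # time since the end of dust production [yr]
        rate = -A0 / (t_cross * (1.0 + t / t_cross)**2)
        print(f"t_cross={t_cross:4.1f} yr  t={t:4.0f} yr  dm_K/dt={rate:+8.4f} mag/yr")
\end{verbatim}
The resulting rates decay from $\sim$0.4--3.6 mag yr$^{-1}$ at $t=0$ to the $10^{-2}$ mag yr$^{-1}$ level after a few tens of years, i.e. the order of magnitude of the observed rates in Table~\ref{tab:res}.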
$B_R(G)$ denotes the ball in $\ensuremath{\mathbb{R}}^3$ with center at a point $G\in \ensuremath{\mathbb{R}}^3$ and radius $R$. The ball centered at the origin of a coordinates system $\{\V e_1, \V e_2, \V e_3\}$ will be simply denoted by $B_R$. If $A$ is an open set of $\ensuremath{\mathbb{R}}^n$, $s\in \ensuremath{\mathbb{R}}$ and $p\in [1,\infty]$, then $L^p(A)$, $W^{s,p}(A)$, $W^{s,p}_0(A)$ denote the Lebesgue and (generalized) Sobolev spaces, with norms $\norm{\cdot}_{L^p(A)}$ and $\norm{\cdot}_{W^{s,p}(A)}$, respectively\footnote{Unless confusion arises, we shall use the same symbol for spaces of scalar, vector and tensor functions. }. For a bounded, Lipschitz domain $A$, with outward unit normal $\V n$, we will often use the following well-known Helmholtz-Weyl decomposition (e.g., \cite[Section III.1]{Ga}): \begin{equation}\label{eq:HW} L^q(A) = H_q(A) \oplus G_q(A), \end{equation} where $q\in (1,\infty)$, $H_q(A):=\{\V u\in L^q(A):\; \mathop{\mathrm{div}} \V u=0\text{ in }A\text{, and } \V u\cdot \V n=0 \text{ on }\partial A\}$ ($\mathop{\mathrm{div}} \V u$ and $\V u \cdot \V n$ have to be understood in the sense of distributions), and $G_q(A):=\{w \in L^q(A):\; w=\nabla \pi, \text{ for some }\pi\in W^{1,q}(A)\}$. In the case of $q=2$, we will simply write $H(A)$ and $G(A)$, respectively. If $(X, \norm{\cdot}_X)$ is a Banach space, for an interval $I$ in $\ensuremath{\mathbb{R}}$ and $1\le p<\infty$, $L^p(I;X)$ (resp. $W^{k,p}(I;X)$, $k\in \ensuremath{\mathbb{N}}$) will denote the space of functions $f$ from $I$ to $X$ for which $\left(\int_I \norm{f(t)}^p_X\; \d t\right)^{1/p}<\infty$ (resp. $\sum^k_{\ell=0}\left(\int_I \norm{\partial^\ell_t f (t)}^p_X\; \d t\right)^{1/p}<\infty$). Similarly, $C^k(I;X)$ indicates the space of functions which are $k$-times differentiable with values in $X$, and having $\max_{t\in I}\norm{\partial^\ell_t \cdot}_X < \infty$, for all $\ell = 0,1,...,k$. Finally, $C_w(I;X)$ is the space of functions $f$ from $I$ to $X$ such that that the map $t \in I \mapsto \phi(f(t))\in \ensuremath{\mathbb{R}}$ is continuous for all bounded linear functionals $\phi$ defined on $X$. We conclude this section by recalling the following Gr\"onwall-type lemma that will be used in the paper. For its proof, we refer the interested reader to \cite{Ma2}. \begin{lemma}\label{lem:gronwall1} Suppose that a function $y\in L^\infty(0,\infty)$, $y\ge 0$, satisfies the following inequality for a.~a. $s\ge 0$ and all $t\ge s$: \begin{equation*} y(t)\le y(s)-k\int_s^ty(\tau)\,d\tau+\int_s^tF(\tau)\,d\tau\,. \end{equation*} Here, $k>0$, and $F\in L^q(a,\infty)\cap L^1_{{\rm loc}}(0,\infty)$, for some $a>0$ and $q\in [1,\infty)$, satisfies $F(t)\ge 0$ for a.~a. $t\ge 0$. Then $$ \lim_{t\to\infty}y(t)=0\,. $$ If $F\equiv 0$, then $$ y(t)\le y(s)\,{\rm e}^{-k(t-s)} \,,\ \ \mbox{for all $t\ge s$}\,. $$ \end{lemma} We are now ready to introduce the equations governing the motion of the system of rigid bodies with a liquid-filled gap. \section{A preliminary mathematical formulation of the problem}\label{sec:preliminary_formulation} Consider $\mathcal B_1:=\mathcal V_1\setminus \overline{\mathcal V}$, with $\mathcal V_1$ and $\mathcal V$ bounded domains in $\ensuremath{\mathbb{R}}^3$, $\overline{\mathcal V}\subset \mathcal V_1$, $\overline{B_R(G)}\subset \mathcal V$, and $\mathcal B_2:=B_R(G)$. 
Let us denote $\mathcal C:=\partial \mathcal V$, $\mathcal S:=\partial \mathcal B_2$~\footnote{$\mathcal S$ is the sphere in $\ensuremath{\mathbb{R}}^3$ centered at $G$ with radius $R$.}, and ${\mathscr{L}}:=\mathcal V\setminus \overline{B_R(G)}$ be the volume occupied by the liquid at each time. Throughout the paper, we will assume that ${\mathscr{L}}$ is of class $C^2$. Let $\mathcal{F}\equiv\{G, \V e_1,\V e_2,\V e_3\}$ be the {\em non-inertial} reference frame with origin at $G$, and axes coinciding with the {\em central axes of inertia} of the coupled system $\mathscr S_C$; these axes are directed along the eigenvectors of the inertia tensor $\T I_{C}$ of $\mathscr S_C$ with respect to $G$, and with corresponding (positive and time-independent) eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ (also called {\em central moment of inertia}). Let us denote by $\T I_{\mathcal B}$ the inertia tensor of the rigid body $\mathcal B_1$ with respect to $G$. Since $\mathcal B_2$ is a homogeneous rigid ball with center at $G$, then any axis passing through its center is also a central axis of inertia. Thus, the inertial tensor of $\mathcal B_2$ with respect to $G$ is simply $\lambda (\V e_1\otimes \V e_1+\V e_2\otimes \V e_2+\V e_3\otimes \V e_3)$ with $\lambda=2/5\; mR^2$, and $m$ the mass of the rigid ball. With respect to the reference frame $\mathcal F$, all the volumes considered above are time-independent. The following system of differential equations describes the dynamics of the given system in the reference frame $\mathcal F$ \footnote{We refer to \cite{Ma}, and \cite{Ma2} for more details about this kind of formulation obtained for similar problems in liquid-solid interactions. }. \begin{equation}\label{eq:motion} \begin{aligned} &\left.\begin{split} &\rho\left(\frac{\partial \V u}{\partial t}+\V v\cdot \nabla \V u+\V \omega_1\times \V u\right) =\mathop{\mathrm{div}} \T T(\V u,p) \\ &\mathop{\mathrm{div}} \V u=0 \end{split}\right\}&&\text{ on }\mathscr{L}\times (0,\infty), \\ &\T I_{\mathcal B}\cdot\dot{\V \omega}_1 +\V \omega_1\times \T I_{\mathcal B}\cdot \V \omega_1 =-\int_{ \mathcal C}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma \qquad &&\text{ in }(0,\infty), \\ &\lambda(\dot{\V \omega}_2 +\V \omega_1\times \V \omega_2) =-\int_{\mathcal S}\V x\times \T T(\V u,p)\cdot \V n\; \d \sigma \qquad &&\text{ in }(0,\infty), \\ &\V u=\V \omega_1\times \V x\qquad &&\text{ on }\mathcal C, \\ &\V u=\V \omega_2\times \V x\qquad &&\text{ on }\mathcal S. \end{aligned} \end{equation} Here, $\V u,p, \mu$ and $\rho$ denote the Eulerian absolute velocity and pressure of the liquid, its shear viscosity and (constant) density, respectively. In addition, $\V v$ indicates the Eulerian velocity of the liquid relative to $\mathcal B_1$ \begin{equation}\label{eq:relative_velocity} \V v:=\V u-\V \omega_1\times \V x. \end{equation} We notice that $\mathop{\mathrm{div}} \V v=0$, and it enjoys the following boundary conditions \begin{equation}\label{eq:bc_v} \V v=\V 0\qquad \text{on }\mathcal C,\quad \text{and }\quad \V v\cdot \V n=0\qquad \text{on }\mathcal S. \end{equation} Moreover, $\T T(\V u,p)$ denotes the Cauchy stress tensor for a viscous incompressible fluid \begin{equation}\label{eq:Cauchy_stress} \T T(\V u,p):=-p\T 1+2\mu \T D(\V u), \qquad \text{where } \; \T D(\V u):=\frac 12 (\nabla \V u+(\nabla \V u)^T). \end{equation} Finally, $\V \omega_1$ and $\V \omega_2$ are the angular velocities of $\mathcal B_1$ and $\mathcal B_2$, respectively. 
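As a quick sanity check of the value $\lambda=2/5\; mR^2$ used above, the central moment of inertia of the homogeneous ball $\mathcal B_2$ can be recomputed symbolically. The following short script is only an illustrative sketch (the symbols $m$, $R$ and the density $\rho$ refer to the ball; nothing else in it is part of the formulation).
\begin{verbatim}
# Check that a homogeneous ball of mass m and radius R has central moment
# of inertia (2/5) m R^2, i.e. the eigenvalue lambda of its inertia tensor.
import sympy as sp

r, th, ph, R, rho = sp.symbols('r theta phi R rho', positive=True)
dV  = r**2 * sp.sin(th)                              # spherical volume element
m   = sp.integrate(rho*dV, (r, 0, R), (th, 0, sp.pi), (ph, 0, 2*sp.pi))
# I_33 = int rho*(x^2 + y^2) dV, with x = r sin(th) cos(ph), y = r sin(th) sin(ph)
I33 = sp.integrate(rho*(r*sp.sin(th))**2*dV, (r, 0, R), (th, 0, sp.pi), (ph, 0, 2*sp.pi))
print(sp.simplify(I33/m))                            # prints 2*R**2/5
\end{verbatim}
By symmetry the same value is obtained for the other two diagonal entries, so that the inertia tensor of $\mathcal B_2$ with respect to $G$ is indeed $\lambda (\V e_1\otimes \V e_1+\V e_2\otimes \V e_2+\V e_3\otimes \V e_3)$ with $\lambda=2/5\; mR^2$, as stated.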
Equations \eqref{eq:motion}$_{1,2}$ with \eqref{eq:relative_velocity} and \eqref{eq:Cauchy_stress} are the {\em Navier-Stokes equations} in the non-inertial reference frame $\mathcal F$. These equations describe the dynamics of the liquid. Equations \eqref{eq:motion}$_{3,4}$ are the {\em balances of angular momentum} (with respect to $G$) of $\mathcal B_1$ and $\mathcal B_2$, respectively. In particular, the surface integrals in \eqref{eq:motion}$_{3,4}$ represent the total torque exerted by the liquid on the cavity surface $\mathcal C$ and on the sphere $\mathcal S$, respectively. The equations of motion are augmented with the {\em no-slip} boundary conditions \eqref{eq:motion}$_{5,6}$ at $\mathcal C$ and $\mathcal S$, respectively. Equations \eqref{eq:motion} feature a combination of {\em dissipative} and {\em conservative} components. The {\em dissipative} role is played by the liquid variable through equations \eqref{eq:motion}$_{1,2,5,6}$. The {\em conservative} feature, on the other hand, comes from the coupling with the equations \eqref{eq:motion}$_{3,4}$ describing the dynamics of the solids. As a matter of fact, the energy dissipates only in the liquid variable (see equation \eqref{eq:energy} below), and the total angular momentum (with respect to $G$) of the whole system is conserved at all times (see equation \eqref{eq:conservation0} below). These properties are satisfied for ``sufficiently regular'' solutions.
\begin{lemma}[Energy Balance] \label{lem:energy} Consider $t_0\ge 0$, and assume that the quadruple $(\V u, p, \V \omega_1,\V \omega_2)$ satisfies the following regularity properties for all $T>0$: \begin{equation}\label{eq:regularity}\begin{split} &\V u\in C^0([t_0, t_0+T];W^{1,2}(\mathscr{L})\cap H(\mathscr{L})) \cap L^2(t_0,t_0+T;W^{2,2}(\mathscr{L})), \\ &\quad\frac{\partial \V u}{\partial t}\in L^2(t_0,t_0+T;L^2(\mathscr{L})),\; \quad p\in L^2(t_0,t_0+T;W^{1,2}(\mathscr{L})),\; \\ &\qquad \qquad\qquad\qquad\V \omega_1,\V \omega_2 \in W^{1,\infty}(t_0,t_0+T). \end{split}\end{equation} If $(\V u,p,\V \omega_1,\V \omega_2)$ satisfies \eqref{eq:motion} a.e. in $(t_0,\infty)$, then the following {\em energy balance} holds. \begin{equation}\label{eq:energy} \frac 12 \frac{\d }{\d t}\left[\rho \norm{\V u}_{L^2(\mathscr{L})}^2+\V \omega_1\cdot \T I_{\mathcal B}\cdot \V \omega_1 +\lambda |\V \omega_2|^2\right]+2\mu\norm{\T D(\V u)}^2_{L^2(\mathscr{L})}=0. \end{equation} \end{lemma}
\begin{proof} Taking the $L^2$-inner product of \eqref{eq:motion}$_1$ with $\V u$, we find that \[ \frac \rho2 \frac{\d }{\d t}\norm{\V u}_{L^2(\mathscr{L})}^2+\int_{\mathscr{L}}(\V v\cdot \nabla \V u)\cdot \V u\; \d V -\int_{\mathscr{L}}\V u\cdot \mathop{\mathrm{div}}\T T\; \d V=0. \] Since $\mathop{\mathrm{div}} \V v=\mathop{\mathrm{div}} \V u=0$ by \eqref{eq:motion}$_2$, using \eqref{eq:bc_v} and Gauss' Theorem, we can infer the following \[ \int_{\mathscr{L}}(\V v\cdot \nabla \V u)\cdot \V u\; \d V=0. \] Indeed, $(\V v\cdot \nabla \V u)\cdot \V u=\frac 12\, \V v\cdot \nabla |\V u|^2$, and after integration by parts only the boundary term $\frac 12\int_{\partial \mathscr{L}}|\V u|^2\, \V v\cdot \V n\; \d \sigma$ remains, which vanishes by \eqref{eq:bc_v}. By \eqref{eq:motion}$_{5,6}$ and \eqref{eq:Cauchy_stress}, and again by Gauss' Theorem, we get \[ \frac \rho2 \frac{\d }{\d t}\norm{\V u}_{L^2(\mathscr{L})}^2 -\V \omega_1\cdot\int_{\mathcal C}\V x\times \T T\cdot \V n\; \d \sigma -\V \omega_2\cdot\int_{\mathcal S}\V x\times \T T\cdot \V n\; \d \sigma +2\mu\norm{\T D(\V u)}^2_{L^2(\mathscr{L})}=0. \] From the latter displayed equation, \eqref{eq:energy} immediately follows by using \eqref{eq:motion}$_{3,4}$ dot-multiplied by $\V \omega_1$ and $\V \omega_2$, respectively.
\end{proof} With the same hypotheses of the previous lemma, we can show the following. \begin{lemma}[Conservation of total angular momentum]\label{lem:conservation} If the quadruple $(\V u,p,\V \omega_1,\V \omega_2)$ satisfies \eqref{eq:regularity} for some $t_0\ge 0$, and \eqref{eq:motion} a.e.
Theory Of Plate Tectonics Worksheet Answer Key, Section 3. Record in Table 2, Column C. The map shows the major tectonic plates. Geology, paleoclimatology, and paleomagnetism were the sciences that Wegener used to formulate his Continental Drift hypothesis, but Wegener was not officially schooled in any of them.

2) Describe how new seafloor is created. Concept 2: Explain the differences among divergent, convergent, and transform plate boundaries, including the major processes that occur at these boundaries. How tectonic plates are interacting can be determined by land features. In this plate tectonics worksheet, students answer questions about plate tectonics including topics such as the lithosphere, the asthenosphere, rising and sinking convection currents, continental drift and the types of boundaries. NGSS MS-ESS2-3 Plate Tectonics Worksheet with Answer Key. The questions are set up in a standardized format with multiple-choice answers.

Plate tectonics is the movement of the Earth's crust through convection currents that occur in the mantle. Is the surface found under the ocean floor also made of dense rock like basalt? Subduction is one of the two major processes of plate tectonics, the other being seafloor spreading. This world map of tectonic plates is a two-page PDF suitable for printing on legal-size (8.5 x 14 inch) paper. Close examination of a globe often results in the observation that most of the continents seem to fit together like a puzzle: the west African coastline seems to snuggle nicely into the east coast of South America and the Caribbean sea; and a similar fit appears across the Pacific. [Map legend: compass rose; scale bars in miles and kilometers; labeled features include the Tasman Hot Spot, the Mariana Trench, and the Tonga Trench.] The plates are all moving in different directions and at different speeds (from 2 cm to 10 cm per year--about the speed at which your fingernails grow) in relationship to each other. Plate tectonics, the gradual drift of continents across the Earth's surface that causes earthquakes, began around 2. Earthquakes and volcanoes are examples of LARGE-SCALE SYSTEMS (word scramble: GRELA-CSLEA SSMTYE). The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which float on the fluid-like (visco-elastic solid) asthenosphere. According to the generally accepted plate-tectonics theory, scientists believe that Earth's surface is broken into a number of shifting slabs or plates, which average about 50 miles in thickness.

In the map of North America (figure 14), where are the mountain ranges located? Using what you have learned about plate tectonics, try to answer the following questions: What is the geologic origin of the Cascades Range? The Cascades are a chain of volcanoes in the Pacific Northwest. Locate the boundary line between the North American plate and the Eurasian plate as well as the. I will investigate how new crust is.

Resources for Teachers can be found under the Chapter #1 Copymaster. Physical Setting/Earth Science Reference Tables, 2011 Edition (Erosion; Weathering). LESSON 2: Layers of the Earth; LESSON 3: Plate Tectonics; LESSON 4: Properties of Minerals; LESSON 5: Rock Cycle; LESSON 6: Weathering, Erosion, and Deposition; LESSON 7: Hydrosphere: Water on Earth; LESSON 8: Water Cycle; LESSON 9: The Atmosphere and Air Pressure; LESSON 10: Climate Zones; LESSON 11: Key Influences on Climate.
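To put the plate speeds quoted above (2 to 10 cm per year) in perspective, here is a quick illustrative calculation; it is not part of the worksheet itself.

```python
# Distance travelled by a plate moving at the quoted speeds over a million years.
for cm_per_year in (2, 10):
    km_per_million_years = cm_per_year * 1_000_000 / 100_000   # 1 km = 100,000 cm
    print(f"{cm_per_year} cm/yr  ->  {km_per_million_years:.0f} km per million years")
# 2 cm/yr -> 20 km, 10 cm/yr -> 100 km per million years.
```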
these homes, Spanish is the language spoken—about 16 million children. No two learners are the same. The teacher still broadcasts the information. Yet for many, the thought of repeating the same task between the hours of 9 and 5 every day sounds deeply unpleasant. You don't need to look further than yourself and your immediate circle of friends to spot the differences in your learning styles. That's okay as long as, in class, you give your teacher your full attention and try to contribute to discussions as much as possible. You could also try recording yourself running through lists of information or working through a complicated theory or topic. Just imagine the possibilities if we can digitise the feedback loop and put it in the hands of every teacher in the form of a tablet. Their best pilots were not able to control the aircraft. No two people in the world are the same. The influence also may lead to some ongoing errors in English, which will become evident with time and repeated use by students who have the same native language. Solitary learners are introspective, independent and very self-aware. Customised learning in the classroom can be done efficiently through a credible assessment-feedback-remedial loop. Isn't it the same with children at school? All of this leads to improved thinking skills, which in turn boosts the learning process. To learn more about what we offer, please read on. Even two snowflakes forming right next to each other experience the trip differently. Add to that the influence of the time of year in which we were born, as shown by the star-signs, and then the year itself. In school, logical learners might be involved in math, robotics or computers. At school, aural learners may play musical instruments, write songs, or sing with a choir. The students are simply there to learn through lectures and direct instruction, and the focus is mainly on passing tests and assessments. Depending on the nature of the wrong answer, the teacher would correct the students. New concepts can be especially difficult for students to grasp and may need to be retaught or revisited for several days. In the same way, learners automatically transfer knowledge acquired in one language to another language as soon as they have learned sufficient vocabulary in the new language. Isn't it true that no two students are the same and yet the teaching is designed for the class, much like the average cockpit?

Dyslexia is one of the most common specific learning difficulties; some estimates indicate that 10-15% of the population struggles with some form of it. While you study, use your class notes and then push yourself to explore further, find new links, understand what surrounds the content. Verbal learners learn best through the words they hear. When learning new skills or topics, they'd rather jump in straight away. For example, imagine your way through the physical actions of a chemistry experiment. This 'clicker' mechanism ensured that the feedback was live and the remediation was immediate. Consequently, many Polynesian learners do not readily hear, or reproduce in their spoken English, the difference between each sound in the pair. We appreciated it when our individual needs were met. The officials tried to find out the cause of these mishaps. If you're a solitary learner, your first impulse will already be to retreat and manage your studying on your own. Note that in some states (California, for example) the preferred term is EL, which in the future might become more widely adopted. It's also been thoroughly debunked. This learning theory emphasizes problem-solving, as well. Don't speak too fast, and if a student tells you they didn't understand what you said, never, ever repeat the same thing in a louder voice. No one of us, save the students and the librarian, is to express and be held accountable for a general education, even as a "general education" is the ultimate goal for the students. The operating principle was 'adjustable.' Isn't it time that the systems be made for the child? (2) Find a live course or a "synchronous cohort" of learners taking the same course at the same time; the camaraderie can serve as a huge motivator. No two days are the same. Some people flourish in a job situation where they know exactly what is expected. For the most part, that's always been the case with education: expecting all children in the same grade to master the same work at the same level and pace. If something is bothering this type of student, they often find a walk or run is enough to clear their head. No two days are the same for teachers who love the job. Two-way bilingual education (dual-language immersion): about half of the students are native English speakers; half are English-language learners who speak the same native tongue. At least two top international MBA business schools require all incoming students to know a second language in addition to English. These learners do best in the classroom when the whiteboard is in use, and they tend to have beautiful revision notes. They also tend to enjoy video games that allow them to practice strategy and problem-solving. For instance, Carl Wieman, a Nobel Prize winner, used the wireless 'clicker' system. For physical learners, I'd recommend staying active throughout periods of study.

A lively and energetic speech will make it easier to recall details later on. They tend to do well in the classroom because they have great listening skills. Turning this picture into reality would require practical tools to ensure that every child is treated as an individual and not as a cog in the wheel. No matter what type of learner you are, at EF Academy the learning experience is catered to suit each individual student. [1] Unlike individual learning, people engaged in collaborative
\left(0,1,1,2\right)-\langle n_c\rangle^{eq}\left(1,1,1,1\right)$ and ${\bf v}_s = \left(0,1,-1,0\right)$, with the equilibrium occupation number $\langle n_c\rangle^\mathrm{eq}$. This yields the expressions for the time-dependent expectation value of the charge $\langle n_c \rangle(t)= \left(0,1,1,2\right) \cdot \mathbf{P}(t)$ and the spin $\langle n_s \rangle(t)= \left(0,1,-1,0\right) \cdot \mathbf{P}(t)$ on the dot \begin{eqnarray*} \langle n_c \rangle(t) & = & \langle n_c\rangle^\mathrm{eq}\left(1-e^{-t/\tau_c}\right)+\langle n_c \rangle^\mathrm{in}e^{-t/\tau_c}\\ \langle n_s \rangle(t) & = & \langle n_s \rangle^\mathrm{in}e^{-t/\tau_s}\ , \end{eqnarray*} with the initial occupation numbers indicated by the superscript "in". The spin relaxation time can be measured by preparing the dot in a spin-polarized state by applying a magnetic field and looking at the spin relaxation after the field has been switched off. If we retain only the first-order contribution in $\Gamma$, the inverse relaxation times are given by,~\cite{splett08-1} \begin{subequations} \begin{eqnarray} \gamma_\mathrm{s}^{(1)} & = & \Gamma\left[1-f(\epsilon)+f(\epsilon+U)\right]\\ \gamma_\mathrm{c}^{(1)} & = & \Gamma\left[1+f(\epsilon)-f(\epsilon+U)\right], \end{eqnarray} \end{subequations} with the Fermi functions $f(\omega)$. \begin{figure}[t] \includegraphics[width=3.in]{fig2.eps} \caption{Charge relaxation rate, $\gamma_\mathrm{c}$, and spin relaxation rate, $\gamma_\mathrm{s}$, in first and second order in the tunnel coupling $\Gamma$ for $U=10\Gamma$ as a function of the dot's level position $\epsilon$ in units of $\Gamma$, $k_\mathrm{B}T=1.5\Gamma$.} \label{fig_rel} \end{figure} For vanishing Coulomb interaction, $U=0$, the relaxation times are independent of the level position $\epsilon$ and both equal to the inverse of the tunnel coupling $\Gamma$. In the presence of a finite $U$, when the level position is in the interval $-U<\epsilon<0$ the dot is predominantly singly occupied. In this parameter range, $\gamma_\mathrm{c}$ and $\gamma_\mathrm{s}$ differ strongly, even though in first order in the tunnel coupling a change in the spin occupation goes always along with a change in the charge occupation. There are two possible states for the dot to relax to single occupation, namely $|\uparrow\rangle, \ |\downarrow\rangle$, and consequently the charge relaxation time is decreased by a factor $1/2$. On the other hand, in first order in tunneling the spin is blocked to a value different from its equilibrium value $0$ due to the lack of spin-flipping processes leading to an enhancement of $\tau_\mathrm{s}$. This is shown in Fig. (\ref{fig_rel}), where the inverse relaxation times for spin, $\gamma_\mathrm{s}$, and charge, $\gamma_\mathrm{c}$, are plotted as a function of the new dot's level position after the switching, for finite $U$. Note that spin and charge have independent dynamics; this is not the case in the presence of a Zeeman splitting in the dot. In fact, in the presence of a Zeeman splitting, $\epsilon_\uparrow\neq\epsilon_ \downarrow$, and for infinite $U$, the first order relaxation rates are given by $1/\tau_1=1+\sqrt{f(\epsilon_\uparrow)f(\epsilon_\downarrow)}$ and $1/\tau_2=1-\sqrt{f(\epsilon_\uparrow)f(\epsilon_\downarrow)}$.\\ When we include tunneling contributions in second order in the tunnel coupling, we find corrections of different origin to the relaxation times. 
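Before turning to these corrections, it is convenient to tabulate the first-order rates explicitly. The following short script is an illustrative sketch only (parameters as in Fig.~\ref{fig_rel}: $U=10\Gamma$, $k_\mathrm{B}T=1.5\Gamma$, with energies and rates in units of $\Gamma$).
\begin{verbatim}
# First-order rates (in units of Gamma):
#   gamma_s = 1 - f(eps) + f(eps + U),   gamma_c = 1 + f(eps) - f(eps + U)
import numpy as np

U, kT = 10.0, 1.5                              # in units of Gamma
f = lambda w: 1.0 / (np.exp(w / kT) + 1.0)     # Fermi function

for eps in (-15.0, -5.0, 0.0, 5.0):
    gs = 1.0 - f(eps) + f(eps + U)
    gc = 1.0 + f(eps) - f(eps + U)
    print(f"eps = {eps:+5.1f}: gamma_s = {gs:.3f}, gamma_c = {gc:.3f}")
# Deep in the singly occupied regime (-U < eps < 0) gamma_c approaches 2 while
# gamma_s is strongly suppressed, i.e. the spin blockade discussed above.
\end{verbatim}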
Cotunneling, describing real tunneling processes via energetically forbidden intermediate states, leads to coherent processes flipping the spin of the dot, $W_\mathrm{sf}$, or to transitions between zero and double occupation, $W_{0\mathrm{d}}$ and $W_{\mathrm{d}0}$. These terms, which agree with results from a standard second-order perturbation expansion, contain expressions $\phi(\epsilon)=\frac{1}{2\pi}\mathrm{Re}\Psi\left(\frac{1}{2}+i\frac{\beta\epsilon}{2\pi}\right)$ and $\sigma(\epsilon,U)=\Gamma[\phi(\epsilon+U)-\phi(\epsilon)]$, where $\Psi(x)$ is the digamma function and $\beta=1/(k_{\text{B}}T)$ the inverse temperature, primes denote derivatives with respect to $\epsilon$. These cotunneling rates are given by \begin{eqnarray*} W_\mathrm{sf} & = & -\frac{\Gamma}{\beta}\left[\Gamma\phi''(\epsilon)+\Gamma\phi''(\epsilon+U)-\frac{2}{U}\sigma'(\epsilon,U)\right]\\ W_{\mathrm{d}0} & = & - \frac{2 \Gamma} {e^{\beta(2\epsilon+U)}-1} \left[\Gamma\phi'(\epsilon)+\Gamma\phi'(\epsilon+U)-\frac{2}{U}\sigma(\epsilon,U)\right], \end{eqnarray*} and $W_{0\mathrm{d}} = \exp[\beta(2\epsilon+U)] W_{\mathrm{d}0}$. In addition to the cotunneling terms virtual tunneling processes appear, which can be captured in terms of renormalization of the level position, $\epsilon\rightarrow\epsilon+\sigma(\epsilon,U)$, and the tunnel coupling, $\Gamma\rightarrow\Gamma[1+\sigma_\Gamma(\epsilon,U)]$,~\cite{comment_renormalization} with \begin{eqnarray}\label{eq_gammaren} \sigma_\Gamma (\epsilon,\Gamma,U) & = & -\left[\Gamma\phi'(\epsilon)+\Gamma\phi'(\epsilon+U)-\frac{2}{U} \sigma(\epsilon,\Gamma,U) \right]\nonumber \\ && \times \frac{1-f(\epsilon)-f(\epsilon+U)}{1-f(\epsilon)+f(\epsilon+U)}\ . \end{eqnarray} These corrections are due to an interplay of tunneling and interaction and vanish for $U=0$. We verified that they can also be identified in the second-order linear conductance. The corrections to the relaxation times read \begin{subequations} \begin{eqnarray} \gamma_\mathrm{s}^{(2)} & = & \sigma(\epsilon,U) \frac{\partial}{\partial\epsilon}\gamma_\mathrm{s}^{(1)}+\sigma_\Gamma(\epsilon,U)\gamma_\mathrm{s}^{(1)}+2W_\mathrm{sf}\\ \gamma_\mathrm{c}^{(2)} & = & \sigma(\epsilon,U)\frac{\partial}{\partial\epsilon}\gamma_\mathrm{c}^{(1)}+ \sigma_\Gamma(\epsilon,U)\gamma_\mathrm{c}^{(1)}\nonumber\\ & &+2 \frac{ f(\epsilon+U)W_{0\mathrm{d}} + \left(1-f(\epsilon)\right)W_{\mathrm{d}0} } {1-f(\epsilon)+f(\epsilon+U)} \ . \end{eqnarray} \end{subequations} The influence of the corrections on the relaxation times is shown in Fig.~\ref{fig_rel}. Both relaxation rates are enhanced due to the higher order tunneling processes. Importantly, the charge and spin relaxation dynamics remain governed by a single time scale, each. \section{Harmonic modulation} We now consider a harmonic modulation of the level position $\epsilon$ with frequency $\Omega$, which we assume to be much smaller than $\Gamma$, cf. Fig.~\ref{fig_model}(b). We perform a systematic expansion in powers of $\Omega$.~\cite{splett06,cavaliere09} The order in the expansion in $\Omega$, is indicated by a superscript: $(\mathrm{i})$ (instantaneous) for zeroth order and $(\mathrm{a})$ (adiabatic) for first order. While the instantaneous term for the average occupation of the dot, $\langle n \rangle^{(i)}$, immediately adjusts to the applied gate voltage, its first-order correction, $\langle n \rangle^{(a)}$ accounts for a slight lagging behind. 
In the ac regime, the quantum dot can be modeled by its charge relaxation resistance, $R_q$, in series with its electrochemical capacitance, $C_\mu$.~\cite{buttiker93} We are interested in the frequency-dependent admittance, $G(\Omega)$, which is defined by the linear relation $I(\Omega)=G(\Omega)V(\Omega)$. Therefore we consider the linear regime in the gate potential controlling the level position $\epsilon$. The expansion of the admittance of the dot \begin{equation} G(\Omega) = -i\Omega C_\mu+\Omega^2C_\mu^2R_q \ , \end{equation} for small frequencies enables us to identify these quantities and to compute the $RC$ time. To compare with this, we insert the adiabatic expansion of the occupation number into the charge continuity equation $I(t) = -e\ \partial\langle n\rangle/\partial t$ (Ref. \onlinecite{note_displacement}) to arrive at \begin{equation} \frac{1}{\tau_{RC}} = -\frac{\partial \langle n \rangle^{(i)}/\partial t}{\langle n \rangle^{(a)}} \, . \end{equation} To obtain the average dot occupation, we expand the master equation in powers of $\Omega$, \begin{subequations} \label{eq_master_ad} \begin{eqnarray} 0 & = & \mathbf{W}^{(i)}\mathbf{P}^{(i)} \\ \frac{d \mathbf{P}^{(i)}}{dt} & = & \mathbf{W}^{(i)}\mathbf{P}^{(a)}+\mathbf{W}^{(a)}\mathbf{P}^{(i)}+ \partial\mathbf{W}^{(i)}\frac{d \mathbf{P}^{(i)}}{dt}\ . \end{eqnarray} \end{subequations} We refer to Ref.~\onlinecite{splett06} for the evaluation of the adiabatic correction of the kernel, $\mathbf{W}^{(a)}$. The instantaneous occupation probabilities, $\mathbf{P}^{(i)}$, and their first adiabatic correction, $\mathbf{P}^{(a)}$, are obtained from the master equation Eq.~(\ref{eq_master_ad}) by matching orders in the tunnel coupling. We find that the instantaneous probabilities $ \mathbf{P}^{(i)}$ start in zeroth order in the tunnel coupling, while $\mathbf{P}^{(a)}$ starts in minus first order in the tunnel coupling, with $\mathbf{P}^{(a,-1)}$ being proportional to the small factor $\Omega/\Gamma$. A rigorous expansion in the tunnel coupling exists for the inverse of the $RC$-time, $\gamma_{RC}=1/\tau_{RC}$. Close to the resonances, $\epsilon=0$ and $\epsilon=-U$, both the change in the instantaneous occupation and the adiabatic correction are governed by lowest order in the tunnel coupling, which leads to the same relaxation time as found for the charge relaxation time due to fast switching, $\gamma_{RC}^{(1)} =\gamma_{c}^{(1)}\equiv\gamma^{(1)}$. In the Coulomb-blockaded regions, however, both $\partial \langle n \rangle^{(i,0)}/\partial t$ and $\langle n \rangle^{(a,-1)}$ are exponentially suppressed, and the expression for $\gamma_{RC}^{(1)}$ becomes meaningless. Instead, the next-order corrections in $\Gamma$ need to be taken into account. For the time variation in the instantaneous occupation number $\partial \langle n \rangle^{(i,1)}/\partial t$, the non-exponentially suppressed contribution to this higher-order correction is due to quantum fluctuations, namely cotunneling processes in which the final dot state is not changed as compared to the initial one. Their rates are $W_{0\rightarrow\sigma\rightarrow 0}=W_{\sigma\rightarrow 0\rightarrow \sigma}=-\frac{\Gamma^2}{\beta}\phi''(\epsilon)$ and $W_{\sigma\rightarrow d\rightarrow \sigma}=W_{d\rightarrow \sigma\rightarrow d}=-\frac{\Gamma^2}{\beta}\phi''(\epsilon+U)$. The higher-order correction $\langle n \rangle^{(a,0)}$, on the other hand, remains exponentially suppressed. As a result, the relaxation rate diverges.
This type of process presumably contributes only to the short-time dynamics after a step pulse and does not enter the long-time behavior described by the exponential relaxation. We find for the difference between the $RC$ rate $\gamma_{RC}$ and the charge relaxation rate $\gamma_{c}$ \begin{eqnarray} \frac{\gamma_{RC}^{(2)}-\gamma_{c}^{(2)}}{\gamma^{(1)}} &=& \Gamma \frac{\left[2-\langle n\rangle^{(i,0)} \right]\phi''(\epsilon)+\langle n\rangle^{(i,0)}\phi''(\epsilon+U)}{\partial\langle n\rangle^{(i,0)}/\partial\epsilon}\ .\nonumber\\ \end{eqnarray} Although this difference is only due to second-order rates, it is substantial in the Coulomb-blockade regions where first-order processes are exponentially suppressed. This is shown in Fig.~\ref{fig_compare_times}, where the inverse $RC$-time and the relaxation rate after a single switching event are compared, taking into account tunneling processes up to second order. This difference is a consequence of the different time scales captured in the two expansions. Whereas in the response to the step we take into account the long-time limit but omit high frequencies, the ac response is sensitive only to the frequency at which it is tested. When taking $U=0$ this result agrees well with the respective limit of Ref.~\onlinecite{buttiker93}
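As an elementary cross-check of how $C_\mu$ and $R_q$ are identified from the small-$\Omega$ expansion of the admittance given above, one can expand the admittance of a plain series $R_qC_\mu$ circuit. The following sketch (same sign convention as above) is for illustration only.
\begin{verbatim}
# Low-frequency expansion of G(Omega) = 1/( R_q + 1/(-i Omega C_mu) ):
# expected  G = -i Omega C_mu + Omega^2 C_mu^2 R_q + O(Omega^3).
import sympy as sp

Omega, Rq, Cmu = sp.symbols('Omega R_q C_mu', positive=True)
G = 1 / (Rq + 1 / (-sp.I * Omega * Cmu))
print(sp.series(G, Omega, 0, 3))
# -I*C_mu*Omega + C_mu**2*R_q*Omega**2 + O(Omega**3)   (up to term ordering)
\end{verbatim}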
r""" Return the root vertex of ``self``. If ``self`` is the Yang-Baxter graph of the partition `[p_1,p_2,\dots,p_k]`, then this is the vertex `(p_k-1,p_k-2,\dots,0,\dots,p_1-1,p_1-2,\dots,0)`. EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(4)] sage: Y = YangBaxterGraph(root=(1,0,2,1,0), operators=ops) sage: Y.root() (1, 0, 2, 1, 0) sage: Y = YangBaxterGraph(root=(0,1,0,2,1,0), operators=ops) sage: Y.root() (0, 1, 0, 2, 1, 0) sage: Y = YangBaxterGraph(root=(1,0,3,2,1,0), operators=ops) sage: Y.root() (1, 0, 3, 2, 1, 0) sage: Y = YangBaxterGraph(partition=[3,2]) sage: Y.root() (1, 0, 2, 1, 0) """ return self._root def successors(self, v): r""" Return the successors of the vertex ``v``. EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(4)] sage: Y = YangBaxterGraph(root=(1,0,2,1,0), operators=ops) sage: Y.successors(Y.root()) [(1, 2, 0, 1, 0)] sage: sorted(Y.successors((1, 2, 0, 1, 0))) [(1, 2, 1, 0, 0), (2, 1, 0, 1, 0)] """ return [a for (a,b) in self._successors(v)] def plot(self, *args, **kwds): r""" Plots ``self`` as a digraph. EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(4)] sage: Y = YangBaxterGraph(root=(1,0,2,1,0), operators=ops) sage: Y.plot() Graphics object consisting of 16 graphics primitives sage: Y.plot(edge_labels=False) Graphics object consisting of 11 graphics primitives """ if "edge_labels" not in kwds: kwds["edge_labels"] = True if "vertex_labels" not in kwds: kwds["vertex_labels"] = True return self._digraph.plot(*args, **kwds) def vertices(self, sort=False): r""" Return the vertices of ``self``. INPUT: - ``sort`` -- boolean (default ``False``) whether to sort the vertices EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(3)] sage: Y = YangBaxterGraph(root=(0,2,1,0), operators=ops) sage: Y.vertices(sort=True) [(0, 2, 1, 0), (2, 0, 1, 0), (2, 1, 0, 0)] """ if sort: return sorted(self) return list(self) def edges(self): r""" Return the (labelled) edges of ``self``. EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(3)] sage: Y = YangBaxterGraph(root=(0,2,1,0), operators=ops) sage: Y.edges() [((0, 2, 1, 0), (2, 0, 1, 0), Swap-if-increasing at position 0), ((2, 0, 1, 0), (2, 1, 0, 0), Swap-if-increasing at position 1)] """ return self._digraph.edges() def vertex_relabelling_dict(self, v, relabel_operator): r""" Return a dictionary pairing vertices ``u`` of ``self`` with the object obtained from ``v`` by applying the ``relabel_operator`` along a path from the root to ``u``. Note that the root is paired with ``v``. 
INPUT: - ``v`` -- an object - ``relabel_operator`` -- function mapping a vertex and a label to the image of the vertex OUTPUT: - dictionary pairing vertices with the corresponding image of ``v`` EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(3)] sage: Y = YangBaxterGraph(root=(0,2,1,0), operators=ops) sage: def relabel_operator(op, u): ....: i = op.position() ....: return u[:i] + u[i:i+2][::-1] + u[i+2:] sage: Y.vertex_relabelling_dict((1,2,3,4), relabel_operator) {(0, 2, 1, 0): (1, 2, 3, 4), (2, 0, 1, 0): (2, 1, 3, 4), (2, 1, 0, 0): (2, 3, 1, 4)} """ relabelling = {self._root:v} for (u,w,i) in self._edges_in_bfs(): relabelling[w] = relabel_operator(i, relabelling[u]) return relabelling def relabel_vertices(self, v, relabel_operator, inplace=True): r""" Relabel the vertices ``u`` of ``self`` by the object obtained from ``u`` by applying the ``relabel_operator`` to ``v`` along a path from ``self.root()`` to ``u``. Note that the ``self.root()`` is paired with ``v``. INPUT: - ``v`` -- tuple, Permutation, CombinatorialObject - ``inplace`` -- if ``True``, modifies ``self``; otherwise returns a modified copy of ``self``. EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(3)] sage: Y = YangBaxterGraph(root=(0,2,1,0), operators=ops) sage: def relabel_op(op, u): ....: i = op.position() ....: return u[:i] + u[i:i+2][::-1] + u[i+2:] sage: d = Y.relabel_vertices((1,2,3,4), relabel_op, inplace=False); d Yang-Baxter graph with root vertex (1, 2, 3, 4) sage: Y.vertices(sort=True) [(0, 2, 1, 0), (2, 0, 1, 0), (2, 1, 0, 0)] sage: e = Y.relabel_vertices((1,2,3,4), relabel_op); e sage: Y.vertices(sort=True) [(1, 2, 3, 4), (2, 1, 3, 4), (2, 3, 1, 4)] """ from copy import copy relabelling = self.vertex_relabelling_dict(v, relabel_operator) Y = self if inplace else copy(self) Y._root = relabelling[Y._root] Y._digraph.relabel(relabelling, inplace=True) if inplace is False: return Y def relabel_edges(self, edge_dict, inplace=True): r""" Relabel the edges of ``self``. INPUT: - ``edge_dict`` -- a dictionary keyed by the (unlabelled) edges. EXAMPLES:: sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator sage: ops = [SwapIncreasingOperator(i) for i in range(3)] sage: Y = YangBaxterGraph(root=(0,2,1,0), operators=ops) sage: def relabel_op(op, u): ....: i = op.position() ....: return u[:i] + u[i:i+2][::-1] + u[i+2:] sage: Y.edges() [((0, 2, 1, 0), (2, 0, 1, 0), Swap-if-increasing at position 0), ((2, 0, 1, 0), (2, 1, 0, 0), Swap-if-increasing at position 1)] sage: d = {((0,2,1,0),(2,0,1,0)):17, ((2,0,1,0),(2,1,0,0)):27} sage: Y.relabel_edges(d, inplace=False).edges() [((0, 2, 1, 0), (2, 0, 1, 0), 17), ((2, 0, 1, 0), (2, 1, 0, 0), 27)] sage: Y.edges() [((0, 2, 1, 0), (2, 0, 1, 0), Swap-if-increasing at position 0), ((2, 0, 1, 0), (2, 1, 0, 0), Swap-if-increasing at position 1)] sage: Y.relabel_edges(d, inplace=True) sage: Y.edges() [((0, 2, 1, 0), (2, 0, 1, 0), 17), ((2, 0, 1, 0), (2, 1, 0, 0), 27)] """ if inplace: Y = self else: from copy import copy Y = copy(self) digraph = Y._digraph for (u,v,i) in digraph.edges(): digraph.set_edge_label(u,v,edge_dict[u,v]) if not inplace: return Y ##### Yang-Baxter Graphs defined by a partition ########################### class YangBaxterGraph_partition(YangBaxterGraph_generic): def __init__(self, partition): r""" A class to model the Yang-Baxter graph of a partition. 
The Yang-Baxter graph defined by a partition `[p_1,\dots,p_k]` is the labelled directed graph with vertex set obtained by bubble-sorting `(p_k-1,p_k-2,\dots,0,\dots,p_1-1,p_1-2,\dots,0)`; there is an arrow from `u` to `v` labelled by `i` if `v` is obtained by swapping the `i`-th and `(i+1)`-th elements of `u`. .. note:: This is a lazy implementation: the digraph is only computed when it is needed. EXAMPLES:: sage: Y = YangBaxterGraph(partition=[3,2,1]); Y Yang-Baxter graph of [3, 2, 1], with top vertex (0, 1, 0, 2, 1, 0) sage: loads(dumps(Y)) == Y True AUTHORS: - Franco Saliola (2009-04-23) """ self._partition = partition beta = sorted(self._partition, reverse=True) root = sum([tuple(range(b)) for b in beta], tuple())[::-1] operators = [SwapIncreasingOperator(i) for i in range(sum(partition)-1)] super(YangBaxterGraph_partition, self).__init__(root, operators) def __repr__(self): r""" EXAMPLES:: sage: Y = YangBaxterGraph(partition=[3,2]) sage: Y.__repr__() 'Yang-Baxter graph of [3, 2], with top vertex (1, 0, 2, 1, 0)' """ return "Yang-Baxter graph of %s, with top vertex %s" % (self._partition, self._root) def __copy__(self): r""" Return a copy of ``self``. EXAMPLES:: sage: Y = YangBaxterGraph(partition=[3,2]); Y Yang-Baxter graph of [3, 2], with top vertex (1, 0, 2, 1, 0) sage: B = copy(Y); B Yang-Baxter graph of [3, 2], with top vertex (1, 0, 2, 1, 0) sage: Y is B False sage: Y == B True """ from copy import copy Y = self.__class__(self._partition) Y._digraph = copy(self._digraph) return Y @lazy_attribute def _digraph(self): r""" Constructs the underlying digraph and stores the result as an attribute. EXAMPLES:: sage: Y = YangBaxterGraph(partition=[2,1]) sage: Y._digraph Digraph on 2 vertices sage: Y.edges() [((0, 1, 0), (1, 0, 0), Swap positions 0 and 1)] """ digraph = super(YangBaxterGraph_partition, self)._digraph for (u, v, op) in digraph.edges(): digraph.set_edge_label(u, v, SwapOperator(op.position())) return digraph @lazy_attribute def _vertex_ordering(self): r""" Return a list of the vertices of ``self``, sorted using Pythons ``sorted`` method. EXAMPLES:: sage: Y = YangBaxterGraph(partition=[3,2]) sage: Y._vertex_ordering [(1, 0, 2, 1, 0), (1, 2, 0, 1, 0), (1, 2, 1, 0, 0), (2, 1, 0, 1, 0), (2, 1, 1, 0, 0)] """ return sorted(self._digraph.vertices()) def __iter__(self): r""" Iterate over the vertices ``self``. .. NOTE:: The vertices are first sorted using Python's sorted command. EXAMPLES:: sage: Y = YangBaxterGraph(partition=[3,2]) sage: list(Y.__iter__()) [(1, 0, 2, 1, 0), (1, 2, 0, 1, 0), (1, 2, 1, 0, 0), (2, 1, 0, 1, 0), (2, 1, 1, 0, 0)] """ for v in self._vertex_ordering: yield v def _swap_operator(self, operator, u): r""" Return the image of ``u`` under ``operator``. INPUT: - ``i`` -- positive integer between 1 and len(u)-1, inclusive - ``u`` -- tuple, list, permutation, CombinatorialObject, .... EXAMPLES:: sage: Y = YangBaxterGraph(partition=[3,1]) sage:
# 2021 Women's NCAA Championships Day 2 Ups/Downs: UVA & NC State Put 4 Up

##### by Jared Anderson, March 18th, 2021

For those unfamiliar with swimming terminology, the concept of "Ups" and "Downs" is a good way to track which teams performed best at prelims. In prelims, swimmers qualify for one of two finals heats: the top 8 finishers make the A final, and places 9 through 16 make the B final. In finals, swimmers are locked into their respective final, meaning a swimmer in the B heat (spots 9-16) can only place as high as 9th or as low as 16th, even if they put up the fastest or slowest time of any heat in the final. With that in mind, we'll be tracking "Ups" and "Downs" after each prelims session to help project team scoring opportunities for that night's finals. "Up" refers to swimmers in the A final, and "Down" to swimmers in the B final.

### 2021 NCAA WOMEN'S SWIMMING & DIVING CHAMPIONSHIPS

It was a big day for the ACC – Virginia and NC State lead the nation with 4 A finalists apiece on day 2 of the women's NCAA Swimming & Diving Championships. Virginia also has one B finalist, tying for the national lead with 5 individual scorers set up for tonight. Cal also has five, though Virginia has twice as many A finalists. The Cavaliers have 4 up and 1 down, while Cal has 2 up and 3 down.

A few other notes:

• 22 teams should score individually tonight, barring DQs. 13 teams will be represented in an A final – and that's in a session that only had 24 total A final spots up for grabs.
• Mid-major programs were seeded to score just two swimmers across the entire meet. But two mid-major swimmers, Houston's Ioanna Sacha and Akron's Sarah Watson, qualified to score in the 200 IM alone tonight. Neither was seeded to score.
• These numbers don't include 1-meter diving (happening this afternoon) or either of tonight's relays. The 200 free relay and 400 medley relay will run as timed final events in tonight's session.

### Day 2 Ups/Downs

| Team | Total | 500 Free | 200 IM | 50 Free |
|---|---|---|---|---|
| Virginia | 4/1 | 1/1 | 2/0 | 1/0 |
| NC State | 4/0 | 1/0 | 1/0 | 2/0 |
| Cal | 2/3 | 1/2 | 1/0 | 0/1 |
| Ohio State | 2/1 | 1/0 | 1/0 | 0/1 |
| Georgia | 2/1 | 1/0 | 1/0 | 0/1 |
| Texas | 2/0 | 1/0 | 1/0 | 0/0 |
| Michigan | 2/0 | 1/0 | 0/0 | 1/0 |
| Stanford | 1/1 | 1/0 | 0/0 | 0/1 |
| Alabama | 1/1 | 0/1 | 0/0 | 1/0 |
| Missouri | 1/1 | 0/0 | 0/0 | 1/1 |
| UNC | 1/0 | 0/0 | 0/0 | 1/0 |
| Northwestern | 1/0 | 0/0 | 0/0 | 1/0 |
| Wisconsin | 1/0 | 0/0 | 1/0 | 0/0 |
| Florida | 0/4 | 0/1 | 0/2 | 0/1 |
| Louisville | 0/2 | 0/0 | 0/1 | 0/1 |
| Tennessee | 0/2 | 0/1 | 0/1 | 0/0 |
| Kentucky | 0/2 | 0/0 | 0/2 | 0/0 |
| Nebraska | 0/1 | 0/1 | 0/0 | 0/0 |
| Virginia Tech | 0/1 | 0/1 | 0/0 | 0/0 |
| Houston | 0/1 | 0/0 | 0/1 | 0/0 |
| Akron | 0/1 | 0/0 | 0/1 | 0/0 |
| USC | 0/1 | 0/0 | 0/0 | 0/1 |

### Day 2 Prelims Scoring

No official points enter the books until tonight's finals. But if every swimmer were to hold their place from prelims, here's how each team would score in these three individual events tonight. Note: these do not include last night's 800 free relay, nor do they include tonight's two relays, or the 1-meter diving event.
| Rank | Team | Day 2 Individual Points |
|---|---|---|
| 1 | UVA | 79 |
| 2 | NC State | 55 |
| 3 | California | 43 |
| 4 | Michigan | 31 |
| 5 | Georgia | 31 |
| 6 | Texas | 30 |
| 7 | Ohio State | 25.5 |
| 8 | Stanford | 23 |
| 9 | Missouri | 22 |
| 10 | Alabama | 20 |
| 11 | Florida | 16 |
| 12 | Tennessee | 14 |
| 13 | Wisconsin | 13 |
| 14 | UNC | 13 |
| 15 | Northwestern | 11 |
| 16 | Louisville | 9.5 |
| 17 | USC | 9 |
| 18 | Nebraska | 5 |
| 19 | Houston | 5 |
| 20 | Akron | 4 |
| 21 | VT | 4 |
| 22 | Kentucky | 3 |

Here's what the total running points would look like tonight, including the 800 free relay, if all swimmers kept to their places from this morning:

| Rank | Team | Day 2 Individual Points | Day 1 Points | Projected Total |
|---|---|---|---|---|
| 1 | UVA | 79 | 40 | 119 |
| 2 | California | 43 | 32 | 75 |
| 3 | Texas | 30 | 30 | 60 |
| 4 | Georgia | 31 | 28 | 59 |
| 5 | NC State | 55 | – | 55 |
| 6 | Ohio State | 25.5 | 24 | 49.5 |
| 7 | Alabama | 20 | 26 | 46 |
| 8 | Michigan | 31 | 14 | 45 |
| 9 | Stanford | 23 | 22 | 45 |
| 10 | Kentucky | 3 | 34 | 37 |
| 11 | Florida | 16 | 18 | 34 |
| 12 | Missouri | 22 | – | 22 |
| 13 | Wisconsin | 13 | 8 | 21 |
| 14 | Tennessee | 14 | 2 | 16 |
| 15 | Louisville | 9.5 | 4 | 13.5 |
| 16 | UNC | 13 | – | 13 |
| 17 | Indiana | – | 12 | 12 |
| 18 | Northwestern | 11 | – | 11 |
| 19 | VT | 4 | 6 | 10 |
| 20 | Texas A&M | – | 10 | 10 |
| 21 | USC | 9 | – | 9 |
| 22 | Nebraska | 5 | – | 5 |
| 23 | Houston | 5 | – | 5 |
| 24 | Akron | 4 | – | 4 |

### Comments

Wahooswimfan (1 year ago): Looks like Texas has 1 finalist in the 1m diving; Cal none, UVA none, NCSU none; Georgia none, OSU an 11th

Tomek (1 year ago): Texas lost a potential B finalist in the last diving round, O'Neill dropped from 12 after 5 to 18 after 6 rounds

Swimmerj (1 year ago): That's such a bada photo of Walsh

Wahooswimfan (1 year ago): Amazing the balance among teams – not like some past years where a few teams were placing 2-4 swimmers in each final. And times are very close, particularly for B finals – a few tenths makes the difference.

BearlyBreathing (1 year ago): Interesting. It's not looking very balanced from where I'm sitting but maybe I'm biased. I'm a little overwhelmed by Virginia's potential dominance tonight. Really looking forward to the first individual finals in 2 years and the possibility of a Bear or four playing spoiler. Go Freaking Bears. See you all in 3 and a half hours.

Snarky (1 year ago): Which relay? The Cal 50 freestylers didn't do so well this am while NCSU put 2 in the A. With Rowe having been 21.4 earlier this season and Hanson 21.8, the Pack might give VA a run in the 200 free relay.

BearlyBreathing (1 year ago): 400 MR

Cal Swim Fan (1 year ago): As a Cal Fan I'm ecstatic UVA is killing it! But also Go Bears!!!!

swimgeek (1 year ago): UVA's A-finals are so high-end, it's kind of a game changer. 1-1-1-3 — and I'd say all 3 top seeds are pretty heavy favorites to win tonight.

Swim85 (1 year ago): That 800 Free relay is going to haunt the Pack on Saturday…. I still don't understand why they didn't swim Alons and Berkoff to keep that relay in top 3-5.

Swimfan (1 year ago): They can't do all 5. The other 4 are way better now.

Swim85 (1 year ago): They are seeded 11th in the 400 Fr Relay WITHOUT Alons and Berkoff.

Huh (1 year ago): 3 of the others will stay the same, only the 400 Fr will be better, I don't think they make up the 25+ points they would have gotten. However, it's easy to armchair this after the fact so I can't blame the coaches for doing what they thought was best.

(1 year ago): Don't forget that they lost Heather MacCausland for this meet. So, they had to do something to shift their relays around. They were honestly a sprinter short of being able to wedge someone in there without sacrificing a relay – so they just decided to go all-in on 4. We can look back at how it shakes out post-meet, but their hand was forced a little.

Swim85 (1 year ago): Yes I know that.
When I saw the news, I was thinking Hansson would be able to replace MacCausland in the 400 free relay. She's split 21.8 in the relay and has the range to go up to 200. MacCausland was big for them at ACC, but Hansson can easily match a 48 free split.

VASWAMMER (1 year ago): Sending positive vibes to Heather. She had a great ACCs and it's a tough blow to not be able to compete in NCAAs a second year in a row.

Say's Phoebe (1 year ago): It should also be noted that in some ways it changes the event load for Alons
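For readers who want to reproduce the projected-points arithmetic behind the tables above, here is a small sketch. It assumes the standard NCAA championship individual scoring (20-17-16-15-14-13-12-11 for the A final, 9-7-6-5-4-3-2-1 for the B final); the team and placements in the example are made up.

```python
# Project individual points from prelims places, assuming standard
# NCAA championship individual scoring (an assumption, see note above).
A_FINAL_POINTS = [20, 17, 16, 15, 14, 13, 12, 11]   # places 1-8 ("ups")
B_FINAL_POINTS = [9, 7, 6, 5, 4, 3, 2, 1]           # places 9-16 ("downs")

def projected_points(prelim_places_by_team):
    """prelim_places_by_team: dict mapping team -> list of prelims places (1-16)."""
    totals = {}
    for team, places in prelim_places_by_team.items():
        pts = 0
        for p in places:
            if 1 <= p <= 8:
                pts += A_FINAL_POINTS[p - 1]
            elif 9 <= p <= 16:
                pts += B_FINAL_POINTS[p - 9]
        totals[team] = pts
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

# A team with prelims places 1, 5 and 12 projects to 20 + 14 + 5 = 39 points.
print(projected_points({"Example U": [1, 5, 12]}))
```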
projective space $\mathbb{C}P^{n}(4)$ with holomorphic sectional curvature $4$. In addition, in section \ref{sec6} we consider the eigenvalues of the closed eigenvalue problem of the Laplacian on compact Riemannian manifolds without boundary. In the last section, we give some gap conjectures for consecutive eigenvalues of the Dirichlet problem on complete Riemannian manifolds. As a further point of interest, we provide some examples to support the conjectures proposed in that section. \vskip5mm \section{Some Technical Lemmas}\label{sec2} \vskip3mm In order to give the proofs of theorem \ref{thm1.1} and theorem \ref{thm1.2}, we prove some key lemmas in this section. First, we recall the following general formula, which Chen, Zheng and Yang established in \cite{CZY} by means of an algebraic inequality (see lemma 2.1 in \cite{CZY}). \begin{lem}\label{lem2.1} Let $(M^{n},g)$ be an $n$-dimensional complete Riemannian manifold and $\Omega$ a bounded domain with piecewise smooth boundary $\partial\Omega$ on $M^{n}$. Assume that $\lambda_{i}$ is the $i^{\text{th}}$ eigenvalue of the Dirichlet problem \eqref{Eigenvalue-Problem} and $u_{i}$ is an orthonormal eigenfunction corresponding to $\lambda_{i}$, $i = 1,2,\cdots$, such that \begin{equation*} \left\{ \begin{aligned}\Delta u_{i}=-\lambda_{i} u_{i},\ \ \ \ \ \ \ & in\ \ \ \ \Omega, \\ u_{i}=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ & on\ \ \partial\Omega, \\ \int_{\Omega} u_{i}u_{j}dv=\delta_{ij},\ \ & for~any \ i,j=1,2,\cdots. \end{aligned} \right. \end{equation*} Then, for any function $h(x)\in C^{3}(\Omega)\cap C^{2}(\overline{\Omega})$ and any integers $k,i \in \mathbb{Z}^{+}$ with $k>i\geq1$, the eigenvalues of the Dirichlet problem \eqref{Eigenvalue-Problem} satisfy \begin{equation}\begin{aligned}\label{general-formula-1}&((\lambda_{k+2}-\lambda_{i})+(\lambda_{k+1}-\lambda_{i}))\|\nabla hu_{i}\|^{2} \\&\leq2\sqrt{(\lambda_{k+2}-\lambda_{i})(\lambda_{k+1}-\lambda_{i})\||\nabla h|^{2}u_{i}\|^{2}} +\|2\langle\nabla h,\nabla u_{i}\rangle+u_{i}\Delta h\|^{2}, \end{aligned}\end{equation} where \begin{equation*} \|h(x)\|^{2} =\int_{\Omega}h^{2}(x)dv. \end{equation*} \end{lem} For the closed eigenvalue problem, the same method gives the following: \begin{lem}\label{lem2.2} Let $(M^{n},g)$ be an $n$-dimensional closed Riemannian manifold. Assume that $\overline{\lambda}_{i}$ is the $i^{\text{th}}$ eigenvalue of the eigenvalue problem \eqref{Eigen-Prob-closed} and $u_{i}$ is an orthonormal eigenfunction corresponding to $\overline{\lambda}_{i}$, $i =0, 1,2,\cdots$, such that \begin{equation*} \left\{ \begin{aligned}\Delta u_{i}=-\overline{\lambda}_{i} u_{i},\ \ \ \ \ \ \ & in\ \ \ \ M^{n}, \\ \int_{M^{n}} u_{i}u_{j}dv=\delta_{ij},\ \ & for~any \ i,j=0,1,2,\cdots. \end{aligned} \right. \end{equation*} Then, for any function $h(x)\in C^{3}(M^{n})$ and any integers $k,i \in \mathbb{Z}$ with $k>i\geq0$, the eigenvalues of the closed eigenvalue problem \eqref{Eigen-Prob-closed} satisfy \begin{equation}\begin{aligned}\label{general-formula-1-closed}&((\overline{\lambda}_{k+2}-\overline{\lambda}_{i})+(\overline{\lambda}_{k+1}-\overline{\lambda}_{i}))\|\nabla hu_{i}\|^{2} \\&\leq2\sqrt{(\overline{\lambda}_{k+2}-\overline{\lambda}_{i})(\overline{\lambda}_{k+1}-\overline{\lambda}_{i})\||\nabla h|^{2}u_{i}\|^{2}} +\|2\langle\nabla h,\nabla u_{i}\rangle+u_{i}\Delta h\|^{2}, \end{aligned}\end{equation} where \begin{equation*} \|h(x)\|^{2} =\int_{M^{n}}h^{2}(x)dv.
\end{equation*} \end{lem} \begin{proof}Recall that the proof of lemma \ref{lem2.1} given by Chen-Zheng-Yang in \cite{CZY} relies on the Rayleigh-Ritz inequality and the method of Lagrange multipliers in real Banach spaces. By the same strategy as the one in \cite{CZY}, it is not difficult to prove this lemma, provided one takes into account that the eigenvalues are now counted from $0$. We omit the details here.\end{proof} By applying lemma \ref{lem2.1}, we have \begin{lem}\label{lem2.3}Let $\rho $ be a constant such that, for any $i=1,2,\cdots,k,$ $\lambda_{i}+\rho>0$. Under the assumptions of lemma \ref{lem2.1}, for any real-valued functions $h_{j}\in C^{3}(\Omega)\cap C^{2}(\overline{\Omega})$, $j=1,2,\cdots,l$, we have \begin{equation}\begin{aligned}\label{general-formula-2}\sum_{j=1}^{l}\frac{a_{j}^{2}+b_{j}}{2}\left(\lambda_{k+2}-\lambda_{k+1}\right)^{2} \leq4(\lambda_{k+2}+\rho)\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2} , \end{aligned}\end{equation} where \begin{equation*}a_{j}=\sqrt{\|\nabla h_{j}u_{i}\|^{2}},\end{equation*} \begin{equation*}b_{j}=\sqrt{\||\nabla h_{j}|^{2}u_{i}\|^{2}},\end{equation*} \begin{equation}a_{j}^{2}\geq b_{j},\end{equation} and \begin{equation*} \|h(x)\|^{2}=\int_{\Omega}h^{2}(x)dv. \end{equation*} \end{lem} \begin{proof}By the assumption in this lemma, we have \begin{equation*}\begin{aligned}\frac{a_{j}^{2}-b_{j}}{2} \left(\sqrt{\lambda_{k+2}-\lambda_{i}}+\sqrt{\lambda_{k+1}-\lambda_{i}}\right)^{2}\geq0,\end{aligned}\end{equation*} which is equivalent to the following: \begin{equation}\begin{aligned}\label{gap-ineq}&a_{j}^{2}((\lambda_{k+2}-\lambda_{i})+(\lambda_{k+1}-\lambda_{i})) -2b_{j}\sqrt{(\lambda_{k+2}-\lambda_{i})(\lambda_{k+1}-\lambda_{i})}\\&\geq\frac{a_{j}^{2}+b_{j}}{2} \left(\sqrt{\lambda_{k+2}-\lambda_{i}}-\sqrt{\lambda_{k+1}-\lambda_{i}}\right)^{2}.\end{aligned}\end{equation} By \eqref{gap-ineq} and \eqref{general-formula-1}, we have \begin{equation*}\begin{aligned}\frac{a_{j}^{2}+b_{j}}{2} \left(\sqrt{\lambda_{k+2}-\lambda_{i}}-\sqrt{\lambda_{k+1}-\lambda_{i}}\right)^{2} &\leq\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2}. \end{aligned}\end{equation*} Summing over $j$ from $1$ to $l$, we obtain \begin{equation}\begin{aligned}\label{gen-for-4}\sum_{j=1}^{l}\frac{a_{j}^{2}+b_{j}}{2}\left(\sqrt{\lambda_{k+2}-\lambda_{i}}-\sqrt{\lambda_{k+1}-\lambda_{i}}\right)^{2} \leq\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2}. \end{aligned}\end{equation} Multiplying \eqref{gen-for-4} by $\left(\sqrt{\lambda_{k+2}-\lambda_{i}}+\sqrt{\lambda_{k+1}-\lambda_{i}}\right)^{2}$ on both sides, one can infer that \begin{equation*}\begin{aligned}\sum_{j=1}^{l}\frac{a_{j}^{2}+b_{j}}{2}\left(\lambda_{k+2}-\lambda_{k+1}\right)^{2} &\leq\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2}\left(\sqrt{\lambda_{k+2}-\lambda_{i}}+\sqrt{\lambda_{k+1}-\lambda_{i}}\right)^{2}\\ &=\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2}\\&\times\left(\sqrt{(\lambda_{k+2}+\rho)-(\lambda_{i}+\rho)}+\sqrt{(\lambda_{k+1}+\rho)-(\lambda_{i}+\rho)}\right)^{2}\\ &\leq4(\lambda_{k+2}+\rho)\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2}, \end{aligned}\end{equation*} which is the inequality \eqref{general-formula-2}. This finishes the proof of the lemma.
\end{proof} \begin{rem} Recall that, under the assumption that $|\nabla h|=1$, by utilizing \eqref{general-formula-1}, Chen, Zheng and Yang \cite{CZY} obtained \begin{equation}\label{gene-form-3}\left(\lambda_{k+2}-\lambda_{k+1}\right)^{2} \leq4\lambda_{k+2}\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2},\end{equation} which plays a significant role in estimating the gap $\lambda_{k+1}-\lambda_{k}$. Without this assumption, it is very difficult to compute or estimate the term $\||\nabla h|^{2}u_{i}\|$, and thus to obtain \eqref{gene-form-3}, even if $h$ is a standard coordinate function on a Euclidean space. However, we notice that the assumption $|\nabla h|=1$ can be replaced by the assumption that the trial function $h$ satisfies the following condition: \begin{equation*}\|\nabla hu_{i}\|^{2}\geq\sqrt{\||\nabla h|^{2}u_{i}\|^{2}}.\end{equation*} Under this weaker assumption, we can obtain the inequality \eqref{general-formula-2}, which plays a significant role in estimating the gap of eigenvalues of the Laplacian on general Riemannian manifolds. \end{rem} By the same method as in the proof of lemma \ref{lem2.3}, we can prove the following lemma, again counting the eigenvalues from $0$. \begin{lem}\label{lem2.4}Let $\rho $ be a constant such that, for any $i=0,1,2,\cdots,k,$ $\overline{\lambda}_{i}+\rho>0$. Under the assumptions of lemma \ref{lem2.2}, for any real-valued functions $h_{j}\in C^{2}(M^{n})$, $j=1,2,\cdots,l$, we have \begin{equation}\begin{aligned}\label{general-formula-4}\sum_{j=1}^{l}\frac{a_{j}^{2}+b_{j}}{2}\left(\overline{\lambda}_{k+2}-\overline{\lambda}_{k+1}\right)^{2} \leq4(\overline{\lambda}_{k+2}+\rho)\sum_{j=1}^{l}\|2\langle\nabla h_{j},\nabla u_{i}\rangle+u_{i}\Delta h_{j}\|^{2} , \end{aligned}\end{equation} where \begin{equation*}a_{j}=\sqrt{\|\nabla h_{j}u_{i}\|^{2}},\end{equation*} \begin{equation*}b_{j}=\sqrt{\||\nabla h_{j}|^{2}u_{i}\|^{2}},\end{equation*} \begin{equation}a_{j}^{2}\geq b_{j},\end{equation} and \begin{equation*} \|h(x)\|^{2}=\int_{M^{n}}h^{2}(x)dv. \end{equation*} \end{lem} \section{Proofs of theorem \ref{thm1.1} and theorem \ref{thm1.2}}\label{sec3} \vskip 3mm In this section, we give the proofs of theorem \ref{thm1.1} and theorem \ref{thm1.2}. First, we need the following lemma, which can be found in \cite{CC}. \begin{lem}\label{lem3.1} For an $n$-dimensional submanifold $M^{n}$ of Euclidean space $\mathbb{R}^{n+p}$, let $y=(y^{1},y^{2},\cdots,y^{n+p})$ be the position vector of a point $p\in M^{n}$ with $y^{\alpha}=y^{\alpha}(x_{1}, \cdots, x_{n})$, $1\leq \alpha\leq n+p$, where $(x_{1}, \cdots, x_{n})$ denotes a local coordinate system of $M^n$. Then, we have \begin{equation*} \sum^{n+p}_{\alpha=1}g(\nabla y^{\alpha},\nabla y^{\alpha})= n, \end{equation*} \begin{equation*} \begin{aligned} \sum^{n+p}_{\alpha=1}g(\nabla y^{\alpha},\nabla u)g(\nabla y^{\alpha},\nabla w)=g(\nabla u,\nabla w), \end{aligned} \end{equation*} for any functions $u, w\in C^{1}(M^{n})$, \begin{equation*} \begin{aligned} \sum^{n+p}_{\alpha=1}(\Delta y^{\alpha})^{2}=n^{2}H^{2}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \sum^{n+p}_{\alpha=1}\Delta y^{\alpha}\nabla y^{\alpha}= 0, \end{aligned} \end{equation*} where $H$ is the mean curvature of $M^{n}$. \end{lem} \vskip 2mm \textbf{\emph{Proof of theorem}} \ref{thm1.1}. Let $\alpha_{1},\alpha_{2},\cdots,\alpha_{n+p}$ be $(n+p)$ positive numbers.
We define $(n+p)$ scaled coordinate functions $h_{j}(x)=\alpha_{j}x^{j}$, such that \begin{equation}\label{a-b-in}a_{j}^{2}=\|\nabla h_{j}u_{i}\|^{2}\geq\sqrt{\||\nabla h_{j}|^{2}u_{i}\|^{2}}=b_{j}\geq0,\end{equation} and \begin{equation}\begin{aligned} \label{sum-3.2}\sum_{j=1}^{n+p}\int2 u_{i}\langle\nabla h_{j},\nabla u_{i}\rangle\Delta h_{j}dv&=0,\end{aligned}\end{equation} where $j=1,2,\cdots,n+p,$ and $x^{j}$ denotes the $j$-th standard coordinate function of the Euclidean space $\mathbb{R}^{n+p}$. Let $$\alpha=\min_{1\leq j\leq n+p}\{\alpha_{j}\},$$$$\overline{\alpha}=\max_{1\leq j\leq n+p}\{\alpha_{j}\},$$$$\beta=\min_{1\leq j\leq n+p}\{b_{j}\},$$ and $l=n+p$. Then, by lemma \ref{lem3.1}, we have \begin{equation}\begin{aligned}\label{left-1}\sum_{j=1}^{l}\frac{a_{j}^{2}+b_{j}}{2}&=\sum_{j=1}^{n+p}\frac{a_{j}^{2}+b_{j}}{2}\\&\geq\frac{1}{2}\left(n\alpha^{2}+ \sum_{j=1}^{n+p}b_{j}\right)\\&\geq\frac{1}{2}\left(n\alpha^{2}+(n+p)\beta\right),\end{aligned}\end{equation} \begin{equation}\label{nH}\sum^{n+p}_{j=1}(\Delta h_{j})^{2}\leq\overline{\alpha}^{2}n^{2}H^{2},\end{equation} and \begin{equation}\label{h-u}\sum_{j=1}^{n+p}\int_{\Omega}\langle\nabla h_{j},\nabla u_{i}\rangle^{2}dv\leq\overline{\alpha}^{2}\sum_{j=1}^{n+p}\int_{\Omega}\langle\nabla x^{j},\nabla u_{i}\rangle^{2}dv=\overline{\alpha}^{2}\lambda_{i}.\end{equation} Since eigenvalues are invariant under isometries, defining $$ c=\frac{1}{4}\inf_{\psi\in \Psi}\max_{\Omega}n^{2}H^{2}>0, $$ where $\Psi$ denotes the set of all isometric immersions from $M^n$ into a Euclidean space, by lemma \ref{lem3.1}, \eqref{sum-3.2}, \eqref{nH}, and \eqref{h-u}, we have \begin{equation}\begin{aligned}\label{right-1}4(\lambda_{k+2}+c)\sum_{j=1}^{n+p}\|2\langle\nabla h_{j},\nabla u_{i}\rangle +u_{i}\Delta h_{j}\|^{2}&\leq4(\lambda_{k+2}+c)\overline{\alpha}^{2}\left(4\lambda_{i}+\int_{\Omega}u^{2}_{i}n^{2}H^{2}dv\right) \\&\leq16(\lambda_{k+2}+c)\overline{\alpha}^{2}\left(\lambda_{i}+c\right).\end{aligned}\end{equation} Let $i=1$ and $\rho=c$. Then, substituting \eqref{left-1} and \eqref{right-1} into \eqref{general-formula-2}, we have \begin{equation}\begin{aligned}\label{gap1}\left(\lambda_{k+2}-\lambda_{k+1}\right)^{2} \leq\frac{32\overline{\alpha}^{2}(\lambda_{k+2}+c)}{n\alpha^{2}+(n+p)\beta}\left(\lambda_{1}+c\right). \end{aligned}\end{equation} Therefore, we deduce from \eqref{gap1} and the Cheng-Yang type inequality \eqref{Cheng-Yang-ineq} that \begin{equation*}\begin{aligned}\lambda_{k+2}-\lambda_{k+1}&\leq \sqrt{\frac{32\overline{\alpha}^{2}}{n\alpha^{2}+(n+p)\beta}}\sqrt{\lambda_{1}+c}\sqrt{\lambda_{k+2}+c}\\&\leq (\lambda_{1}+c)\sqrt{\frac{32\overline{\alpha}^{2}C_{0}(n)}{n\alpha^{2}+(n+p)\beta}}(k+1)^{\frac{1}{n}} \\&=C_{n,\Omega}(k+1)^{\frac{1}{n}},\end{aligned} \end{equation*} where $$C_{n,\Omega}=(\lambda_{1}+c)\sqrt{\frac{32\overline{\alpha}^{2}C_{0}(n)}{n\alpha^{2}+(n+p)\beta}}.$$ Therefore, we complete the proof of theorem \ref{thm1.1}. $$\eqno\Box$$ \begin{rem}In theorem \ref{thm1.1}, one can obtain an even stronger result.
Indeed, in the proof of this theorem, there exists an integer $1\leq j_{0}\leq n+p$ such that we can choose $n+p$ positive numbers $\alpha_{1},\alpha_{2},\cdots,\alpha_{n+p}$ satisfying the following: \begin{equation*}a_{j}^{2}=\|\nabla h_{j}u_{i}\|^{2}=\sqrt{\||\nabla h_{j}|^{2}u_{i}\|^{2}}=b_{j}\geq0,~where~j=1,2,\cdots,j_{0}-1,j_{0}+1,\cdots,n+p, \end{equation*} \begin{equation*}a_{j_{0}}^{2}=\|\nabla h_{j_{0}}u_{i}\|^{2}\leq\sqrt{\||\nabla h_{j_{0}}|^{2}u_{i}\|^{2}}=b_{j_{0}}\geq0, \end{equation*} and \begin{equation*}\begin{aligned} \sum_{j=1}^{n+p}\int2 u_{i}\langle\nabla h_{j},\nabla u_{i}\rangle\Delta h_{j}dv&=0.\end{aligned}\end{equation*}\end{rem} \begin{corr} Assume that $(M^{n},g)$ is an $n$-dimensional complete Riemannian manifold which is isometrically immersed into the $(n+\overline{p})$-dimensional Euclidean space $\mathbb{R}^{n+\overline{p}}$. Let $\lambda_{i}$ be the $i$-th $(i=1,2,\cdots,k)$ eigenvalue of the Dirichlet problem \eqref{Eigenvalue-Problem}. Then we have \begin{equation}\label{z1}\begin{aligned}\lambda_{k+1}-\lambda_{k}\leq C_{n,\Omega}k^{\frac{1}{n}},\end{aligned} \end{equation} where $$C_{n,\Omega}=(\lambda_{1}+c)\sqrt{\frac{32\overline{\alpha}^{2}C_{0}(n)}{n\alpha^{2}+(n+\overline{p})\beta}},$$ $C_{0}(n)$ is the same as the one in \eqref{Cheng-Yang-ineq}, and $$ c=\frac{1}{4}\inf_{\psi\in \Psi}\max_{\Omega}n^{2}H^{2}>0, $$ where $\Psi$ denotes the set of all isometric immersions from $M^n$ into the Euclidean space $\mathbb{R}^{n+\overline{p}}$. Furthermore, if $(M^{n},g)$ is an $n$-dimensional complete minimal submanifold isometrically immersed into the $(n+\overline{p})$-dimensional Euclidean space $\mathbb{R}^{n+\overline{p}}$, then the constant is given by $c=0$. \end{corr} \begin{rem}We note that, as $p$ tends to infinity, the constant $C_{n,\Omega}$ does not necessarily tend to zero. This is because, for any $j=1,2,\cdots,n+p,$ we have \begin{equation*}a_{j}^{2}=\|\nabla h_{j}u_{i}\|^{2}\geq\sqrt{\||\nabla h_{j}|^{2}u_{i}\|^{2}}=b_{j}\geq0,\end{equation*} which implies that $(n+p)\beta\leq n\overline{\alpha}^{2}$. \end{rem} \begin{rem}Usually, one chooses the standard coordinate functions to construct the trial functions used to obtain universal inequalities or estimates for the bounds of eigenvalues. In the proof of theorem \ref{thm1.1}, however, we do not choose standard coordinate functions but scaled coordinate functions satisfying certain conditions.\end{rem} From the proof of theorem \ref{thm1.1}, we have \begin{rem}If $M^{n}$ is the $n$-dimensional Euclidean space, then we have $H=0$, and thus $c=0$. Let $\alpha_{j}=1$, where $j=1,2,\cdots,n+p$; then $h_{j}=x^{j}$. Thus, we have $$\alpha=\overline{\alpha}=1,$$ and $$\sum_{j=1}^{n+p}b_{j}=n,$$ which implies that $$C_{n,\Omega}=(\lambda_{1}+c)\sqrt{\frac{32\overline{\alpha}^{2}C_{0}(n)}{n\alpha^{2}+\sum_{j=1}^{n+p}b_{j}}} =4\lambda_{1}\sqrt{\frac{C_{0}(n)}{n}}.$$ Therefore, the eigenvalue inequality \eqref{z1} generalizes the eigenvalue inequality \eqref{czy-1} given by Chen-Zheng-Yang in { \rm \cite{CZY}}. \end{rem} \textbf{\emph{Proof of theorem}} \ref{thm1.2}. By lemma \ref{lem2.2}, lemma \ref{lem2.4} and lemma \ref{lem3.1}, the proof proceeds by the same method as the proof of theorem \ref{thm1.1}.
$$\eqno\Box$$ Similarly, we have the following: \begin{corr} Let $(M^{n},g)$ be an $n$-dimensional closed Riemannian manifold, which is isometrically immersed into the $(n+\overline{p})$-dimensional Euclidean space $\mathbb{R}^{n+\overline{p}}$, and let $\overline{\lambda}_{i}$ be the $i$-th $(i=0,1,2,\cdots,k)$ eigenvalue of the closed eigenvalue problem \eqref{Eigen-Prob-closed}. Then, for any $k=1,2,\cdots$, there exist constants $\alpha^{\prime}$, $\overline{\alpha}^{\prime}$ and $b^{\prime}_{j}$, $j=1,2,\cdots,n+\overline{p}$, such that \begin{equation*}\begin{aligned}\overline{\lambda}_{k+1}-\overline{\lambda}_{k}\leq \overline{C}_{n,\Omega}k^{\frac{1}{n}},\end{aligned} \end{equation*} where $$\overline{C}_{n,\Omega}=(\overline{\lambda}_{1}+\overline{c})\sqrt{\frac{32\overline{\alpha}^{\prime2}C_{0}(n)}{n\alpha^{\prime2}+\sum_{j=1}^{n+\overline{p}}b^{\prime}_{j}}},$$ $C_{0}(n)$ is the same as the one in \eqref{Cheng-Yang-ineq}, and $$ \overline{c}=\frac{1}{4}\inf_{\psi\in \Psi}\max_{M^{n}}n^{2}H^{2}>0, $$ where $\Psi$ denotes the set of all isometric immersions from $M^n$ into the Euclidean space $\mathbb{R}^{n+\overline{p}}$. Furthermore, if $(M^{n},g)$ is an $n$-dimensional closed minimal submanifold isometrically immersed into the $(n+\overline{p})$-dimensional Euclidean space $\mathbb{R}^{n+\overline{p}}$, then \begin{equation}\begin{aligned}\overline{c}=0.\end{aligned} \end{equation} \end{corr} \section{Estimates for the Eigenvalues on the Unit Sphere and Cylinder}\label{sec4} In this
observed velocity gradients. In the past three decades a wealth of observations toward regions of star formation, as well as toward evolved stars, have revealed that the main properties that characterise collimated molecular outflows are: i) The velocity field of the outflow is described by a proportional relationship between the expansion velocity and the distance to the driving source, i.e. v$\propto$$r^{\eta}$, with $\eta$$\approx$1. ii) The collimation of the flow increases systematically with flow velocity and distance from the driving source. iii) The flow exhibits a power-law variation of mass with velocity, i.e. $m(\rm v)$$\propto$v$^{-\gamma}$ \citep[][and references therein]{Lada1996, Cabrit1997}. The most popular mechanism to explain these properties is the jet-driven bow shock \citep{Raga1993,Smith1997}. This mechanism naturally creates a positive velocity gradient since as one moves from the broader wings to the narrow apex of the bow shock the shock becomes less oblique and the net forward velocity increases. In addition, this mechanism produces more swept-up mass at low velocities since the intercepted mass flux is determined by the bow shock cross-section, which steadily grows in the bow wings while the velocity decreases \citep{Cabrit1997}. The LVC of IRAS~16342$-$3814 exhibits all the properties that characterise jet-driven bow shocks. As shown in \S\ref{spt_kin_model_section}, the velocity field of the LVC has a power-law dependence with the distance, although the power-law exponent, $\eta$=1.6, is somewhat larger than the typical values seen in other molecular outflows. The collimation of the LVC increases with velocity as it can be seen in Fig.~\ref{Fig2}. Finally, assuming optically thin emission, LTE conditions and an average excitation temperature $T_{\rm ex}$=30~K, the mass of gas expanding at a given velocity, i.e. the mass spectrum $m$(v), can be obtained from the line profile of the CO($J$=3$\rightarrow$2) emission\footnote{The value of the mass estimated in this manner would represent a lower limit of the actual value if the emission is optically thick.}. In Fig.~\ref{Fig6} we show a log-log plot of the mass as a function of the velocity offset, where we have used a fractional abundance of CO relative to H$_{2}$, $f$(CO)=3$\times$10$^{-4}$ \citep{Sahai2017}. This plot reveals that the emission between 10<v$_{\rm offset}$(km~s$^{-1}$)<100~km~s$^{-1}$ is described by a power-law $m$(v)$\propto$v$^{-\gamma}$ with $\gamma$=0.94, and between 100<v$_{\rm offset}$(km~s$^{-1}$)<150~km~s$^{-1}$ $\gamma$=3.2. This double power-law variation of the mass with velocity is exactly what observations have revealed in other molecular outflows \citep{Kuiper1981, Rodriguez1982,Lada1996}, and confirmed by numerical simulations of jet-driven molecular outflows \citep{Smith1997}. However, it should be pointed out that, if the CO($J$=3$\rightarrow$2) line is optically thick, the values of these power-law exponents represent just a lower limit. Furthermore, it cannot be ruled out that the break of the power-law exponent around v$_{\rm offset}$=100~km~s$^{-1}$ might be due to a change in the optical depth regime. In Fig.~\ref{Fig7} we show contours indicating the CO($J$=1$\rightarrow$0) emission superimposed on a moment-8 (maximum value of the spectrum for each pixel) image of the CO($J$=3$\rightarrow$2) emission. 
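Before returning to Fig.~\ref{Fig7}, we note that power-law exponents of the kind quoted above can be recovered with a simple least-squares fit in log-log space; the following sketch is purely illustrative, and the toy arrays stand in for the actual $m$(v) values derived from the CO($J$=3$\rightarrow$2) line profile.
\begin{verbatim}
import numpy as np

def mass_spectrum_slope(v_offset, mass, v_min, v_max):
    # Fit m(v) ~ v**(-gamma) over [v_min, v_max] in log-log space; return gamma.
    sel = (v_offset >= v_min) & (v_offset <= v_max)
    slope, _ = np.polyfit(np.log10(v_offset[sel]), np.log10(mass[sel]), 1)
    return -slope

# Toy double power law standing in for the measured mass spectrum:
v = np.logspace(1.0, np.log10(150.0), 60)             # 10-150 km/s
m = 1e-3 * v**-0.94 * (1.0 + (v / 100.0)**4)**-0.55   # breaks near 100 km/s
print(mass_spectrum_slope(v, m, 10.0, 100.0))    # close to the low-velocity exponent
print(mass_spectrum_slope(v, m, 100.0, 150.0))   # steeper exponent past the break
\end{verbatim}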
The CO($J$=1$\rightarrow$0) emission was averaged over the velocity range $-$10<v$_{\rm offset}$(km~s$^{-1}$)<+10, which is the typical expansion velocity of the CSE formed during the AGB phase. The brightness distribution of the emission is round and has a deconvolved size of $\sim$0$\rlap{.}^{\prime\prime}$9, which is almost twice as large as the deconvolved size of the CO($J$=3$\rightarrow$2) emission in the same velocity range (see \S3), suggesting that it is tracing material of the CSE created by the AGB wind. The existence of a relic AGB circumstellar envelope is also suggested by the low-velocity OH~1612 maser emission, which exhibits maser features distributed over a region more extended than the bipolar outflow \cite[see Fig. 1 of][]{Sahai1999}. Moreover, \cite{Murakawa2012} showed that the SED of IRAS~16342$-$3814 is best reproduced by a model that includes a disk, a torus, bipolar lobes, as well as a spherical AGB envelope. Thus, given these arguments, together with the results presented in the previous paragraph, we conclude that the LVC in IRAS~16342$-$3814 corresponds to material of the CSE that has been swept up by the jet-driven bow shock. \begin{figure*} \centering \includegraphics[angle=0,scale=0.6]{Fig7.pdf} \caption{CO emission from IRAS~16342$-$3814. The colour scale corresponds to the moment-8 (maximum value of the spectrum for each pixel) image of the CO($J$=3$\rightarrow$2) emission. The contours indicate the CO($J$=1$\rightarrow$0) emission averaged over the velocity range $-$10<v$_{\rm offset}$(km~s$^{-1}$)<+10 with values (0.007, 0.01, 0.02, 0.03, 0.05, 0.1, 0.2, 0.3) Jy~beam$^{-1}$, where the first contour represents 4 times the value of the rms. The orange dashed line delineates the region that contains emission from the EHVO. The ellipses located at the lower left corner indicate the synthesised beams of the CO($J$=3$\rightarrow$2) and CO($J$=1$\rightarrow$0) observations, respectively. The beam size of the CO($J$=3$\rightarrow$2) observations is the same as in Fig.~\ref{Fig2} and the beam size of the CO($J$=1$\rightarrow$0) observations is 1$\rlap{.}^{\prime\prime}$25$\times$1$\rlap{.}^{\prime\prime}$18 with P.A.=$-$58$^{\circ}$.} \label{Fig7}% \end{figure*} A feature revealed in these ALMA observations, and not commonly seen so clearly in other molecular outflows, is the emission associated with the HVC. Fig.~\ref{Fig6} shows that for velocity offsets higher than 150~km~s$^{-1}$ the mass spectrum exhibits a plateau followed by a sharp peak. \cite{Smith1997} obtained a similar feature in the intensity profile from their hydrodynamic simulations of a jet-driven outflow. They identified this feature with the jet's direct emission \citep[see Figs. 1 and 2 of][]{Smith1997}. The deceleration of this component suggests the presence of turbulent mixing through Kelvin-Helmholtz instabilities at the velocity shear between the jet and the ambient gas. If turbulent entrainment along the beam is the dominant process, then the jet decreases in average velocity along its length as it transfers momentum to the entrained material \citep{Chernin1994}. Given a jet velocity v$_{\rm exp}$=380~km~s$^{-1}$, a sound speed in the medium $c_{s}$$\sim$1~km~s$^{-1}$, a density of the medium $n_{\rm H_{2}}$=10$^{6}$~cm$^{-3}$ and a kinetic temperature $T_{\rm k}$$\sim$100~K, the Reynolds number of the jet is $\mathcal{R}e_{m}$>10$^{10}$. Consequently, the jet is expected to be highly turbulent and to suffer deceleration, which is indeed what the observations reveal.
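The Reynolds-number estimate above can be reproduced at the order-of-magnitude level with a kinetic-theory viscosity $\nu\sim\lambda_{\rm mfp}c_{s}$, where $\lambda_{\rm mfp}=1/(n\sigma)$; the collision cross-section $\sigma\approx10^{-15}$~cm$^{2}$ and the adopted jet length scale in the sketch below are our own illustrative assumptions, not values given in the text.
\begin{verbatim}
# Order-of-magnitude sketch: Re = L * v / nu with nu ~ lambda_mfp * c_s.
# sigma and L below are illustrative assumptions, not values from the paper.
v_jet = 380e5        # jet velocity [cm/s]  (380 km/s)
c_s   = 1e5          # ambient sound speed [cm/s]  (~1 km/s)
n_H2  = 1e6          # ambient density [cm^-3]
sigma = 1e-15        # assumed H2 collision cross-section [cm^2]
L     = 1e17         # assumed jet length scale [cm]

lambda_mfp = 1.0 / (n_H2 * sigma)     # mean free path [cm]
nu = lambda_mfp * c_s                 # kinematic viscosity [cm^2/s]
Re = L * v_jet / nu
print("Re ~ %.1e" % Re)               # of order 1e10, i.e. highly turbulent
\end{verbatim}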
We conclude that the HVC represents material entrained by the underlying jet near its axis, or it could be the molecular component of the jet itself. As mentioned earlier, molecular outflows with the characteristic properties of a jet-driven outflow have also been found in the CSE of some evolved stars \citep[e.g.][]{Bujarrabal1998,Sanchez-Contreras2000,Alcolea2001, Alcolea2007}. For these objects, the linear relationship between the expansion velocity and the distance to the driving source has been interpreted in terms of free expansion of the gas after having suffered a strong axial acceleration by a collimated wind or jet during a time period much shorter than the whole post-AGB lifetime of the source \citep{Bujarrabal1998,Alcolea2001}. In particular, \cite{Alcolea2001} derived an upper limit to the duration of this post-AGB interaction in the pre-PN OH~231.8$+$4.2 of $\sim$125~years. Our results suggest that in IRAS~16342$-$3814 we are witnessing the presence of two different processes of entrainment: one is ``prompt entrainment'', which transfers momentum through the leading bow shock, and the other is ``steady-state entrainment'', which refers to ambient gas that is entrained along the sides of the jet \citep{De-Young1986,Chernin1994}. The former would lead to the formation of the linear relationship of the velocity with the distance seen in the PV diagrams of the molecular emission of the more evolved objects, which corresponds to swept-up material in a jet-driven bow shock. The new ingredient revealed in these ALMA observations is the HVC, which is related to the driving jet. It is likely that in the molecular outflows seen in other more evolved objects the jet has long disappeared, as it is expected to last only for a couple of hundred years. This strengthens the idea that wf-nebulae are indeed undergoing the ephemeral transition during which they develop the collimated outflows that shape the CSE, leading to the formation of asymmetrical PNe. In fact, Orosz et al. (2018) recently showed a beautiful example of a decelerating outflow in the wf-nebula IRAS~18113$-$2503 traced by H$_{2}$O masers, which is likely to be directly related to the driving jet. \begin{figure*} \centering \includegraphics[angle=0,scale=0.9]{Fig8.pdf} \caption{Oscillating pattern of the positions of the peak-emission for the high-velocity
# Game Maker Studio 2 problems with too many instances

So I am trying to make a management game, a space station simulator if you will, with potentially hundreds or thousands of customers entering and leaving the station every game day. My plan for handling this is to first create a persistent object called oManager. Over the course of the game, oManager will continuously spawn instances of an object called oPerson. The part of oManager that spawns a person:

    global.dailyflow = 100;
    show_debug_overlay(true);
    var number_a_day = irandom_range(ceil(global.dailyflow*0.6),global.dailyflow);
    tourist_base = ceil(number_a_day*0.1);
    tourist_extra = ceil(tourist_base*(1+global.tourist_modifier/100));
    passerby_base = ceil(number_a_day*0.53);
    passerby_extra = ceil(passerby_base*(1+global.passerby_modifier/100));
    shopper_base = ceil(number_a_day*0.25);
    shopper_extra = ceil(shopper_base*(1+global.shopper_modifier/100));
    mercenary_base = floor(number_a_day*0.1);
    mercenary_extra = floor(mercenary_base*(1+global.mercenary_modifier/100));
        with instance_create_layer(0,0,"instances",oPerson){
            guy_type = 0;
            event_user(0);
        }
    }
    repeat(tourist_base+tourist_extra){
        with instance_create_layer(0,0,"instances",oPerson){
            guy_type = 1;
            event_user(0);
        }
    }
    repeat(passerby_base+passerby_extra){
        with instance_create_layer(0,0,"instances",oPerson){
            guy_type = 2;
            event_user(0);
        }
    }
    repeat(shopper_base+shopper_extra){
        with instance_create_layer(0,0,"instances",oPerson){
            guy_type = 3;
            event_user(0);
        }
    }
    repeat(mercenary_base+mercenary_extra){
        with instance_create_layer(0,0,"instances",oPerson){
            guy_type = 4;
            event_user(0);
        }
    }

Each oPerson instance decides how many days he stays, where he would go and how much money he spends each day, with much more to come in the future. However, the problem I am currently facing is that the game just crashes after about one game day. I think the problem is that there are just too many instances and the game cannot handle it. I tried to create only one oPerson of each type per day and it worked fine. I consider it essential to make each person an instance itself, considering they have to make pretty complex decisions, but I don't know how to make it work. The game crashes specifically when I click on a button to change room. All of the objects are persistent.

GM:S is generally able to deal with as many instances as your machine is capable of handling, both in memory and CPU usage; every engine limitation is bound to the machine the program is running on most of the time. Nowadays many games need to spawn a lot of instances simultaneously, and you can hear of projects displaying thousands of instances at once, such as this one talked about on Reddit. In my experience, GM games usually crash because of code errors/exceptions or memory/stack overflows, rarely because of the number of instances. If your game crashes when you move to another room, you must be sure the transition doesn't trigger instance spawning over and over: you would be creating hundreds of instances without noticing, and that would cause memory usage to grow until the application crashes; even worse, if object spawning happens inside a badly formatted loop statement, it may mistakenly spawn objects without stopping, causing an infinite loop in your code that leads the application to crash.
Solutions to such problems include reviewing your code to check the maximum number of instances you want to exist in the game at once: you may think your code is bulletproof and that no more than number_a_day oPerson objects will be spawned in your room, but if the spawning script is called more often than planned, it will result in uncontrolled growth of the instance count in your game. You could implement a slot system for the oManager object, which is given a maximum size and spawns an oPerson only if there's enough room in this variable (an array, a ds_map, or whatever you like to use). No objects are allowed to be spawned if it's full, and its size can be computed from the value of number_a_day, or a larger value if you want to have room for extra instances. The recommendation is to have an upper bound on how many instances are moving around in your room.

EDIT

Persistent objects in GM may cause trouble as well. If you need to move to another room but then get back to the previous one with objects in the same state they were in before leaving, you want to use persistent rooms. This way objects aren't persistent anymore (and they will not be carried along when changing room), but any time you get back to such a persistent room, everything will be in the same position and doing the same things as before you left. Resetting a persistent room is not a problem either: user arirish shows a simple snippet in this GM:S forum discussion that allows a persistent room to return to its original state (that is, as it was when the game started). This solution should work for you, as persistent instances could cause the game to crash when moved to a room (due to persistence) different from the one they were supposed to move around in.

• Hi, thank you for your comment. My setup currently works like this: the oManager object has a user_event. Every game day, this user_event spawns hundreds of oPerson objects (symbolizing the customers entering the station). In the meantime, after every game day a portion of oPerson instances will be destroyed (symbolizing the customers leaving the station). A portion of the oPerson instances will make the decision to go from one module to the other. Either way, on the frame on which this user_event is triggered, many calculations are made at the same time, and the fps spikes down to <200. – danielfang Sep 3 '18 at 21:37
• Also the error message after the crash is: X://windows/Runner.exe exited with non-zero status (-1073741819) FAILED: Run Program Complete – danielfang Sep 3 '18 at 21:45
• Mmh, check the following: you're playing too many sounds, and they're loaded in memory instead of streamed (memory limit); somewhere in your code you're trying to write/read beyond the size limit of some arrays (buffer overflow); or you're not properly deallocating possible dynamic resources (data structures, surfaces...) that accumulate and may cause a memory leak, and then the memory limit is reached. To properly solve this problem, try to play the game with a low number of oPersons and iterate through a couple of days, and debug-run the game to keep an eye on key variables and CPU usage. – liggiorgio Sep 3 '18 at 23:14

Is this code in a step event? And what do you mean by "crash"? Do you get an error, or does the game just freeze? Instance numbers in GM:S don't matter as much as the code that your instances have.
Having a huge number of instances that each have complex code can cause major slowdown (and you can follow that with show_debug_overlay(), as you did), but since you said the game instantly crashes, I'm guessing you are spawning too many instances at once and GM:S thinks you are in an infinite loop, so the game freezes (that, or you are actually spawning hundreds in a single step, which is not good). You didn't show what global.trader_modifier and global.tourist_modifier, etc. are, so I'm not exactly sure how many of these instances you are spawning, but either way, if this code is in a step event it is just not viable, since you are at the very least spawning 31 passerbys per step, which in a 30 room_speed game is 930 instances per second; and if this is in a create event, again, GM:S might consider this an infinite loop, or it's just too much. I'm not too sure how you want your game to work, but either way spawning instances more smoothly throughout your game day would probably fix the issue, and in regards to how many of these instances you can spawn, that comes down to how complex their code is. You can check how many microseconds it takes to complete the code in your instances using get_timer() at the beginning of the code and storing it in a variable (like var t = get_timer()), and at the end of the code do show_debug_message("Code timer: " + string(get_timer()-t)).
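Pulling the capping advice from the answers above into one place, here is a minimal sketch of the idea, written in Python-style pseudocode for brevity (the game itself uses GML, so this would need translating; the names and the cap value are made up):

```python
# Sketch of the "slot system" idea: the manager only spawns a person while
# the active count is below a hard cap, and frees slots at the end of each day.
MAX_ACTIVE_PERSONS = 500   # illustrative cap

class Manager:
    def __init__(self):
        self.active_persons = []

    def try_spawn(self, guy_type):
        if len(self.active_persons) >= MAX_ACTIVE_PERSONS:
            return None                       # full: skip instead of spawning forever
        person = {"guy_type": guy_type, "days_left": 3}
        self.active_persons.append(person)
        return person

    def end_of_day(self):
        # destroy people whose stay is over, freeing slots for the next day
        for p in self.active_persons:
            p["days_left"] -= 1
        self.active_persons = [p for p in self.active_persons if p["days_left"] > 0]

m = Manager()
spawned = sum(m.try_spawn(guy_type=2) is not None for _ in range(1000))
print(spawned, "spawned,", len(m.active_persons), "active")   # both capped at 500
```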
import taichi as ti import taichi_glsl as tl from argparse import ArgumentParser import numpy as np import torch import matplotlib.pyplot as plt from torchvtk.rendering import plot_tf, plot_tfs from transfer_functions import get_tf def fig_to_img(fig): fig.set_tight_layout(True) fig.set_dpi(100) fig.canvas.draw() w, h = fig.get_size_inches() * fig.get_dpi() w, h = int(w), int(h) im = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8).reshape( (h, w, 3)) plt.close(fig) return im @ti.func def low_high_frac(x: float): ''' Returns the integer value below and above, as well as the frac Args: x (float): Floating point number Returns: int, int, float: floor, ceil, frac of `x` ''' x = ti.max(x, 0.0) low = ti.floor(x) high = low + 1 frac = x - float(low) return int(low), int(high), frac @ti.func def premultiply_alpha(rgba): rgba.xyz *= rgba.w return rgba @ti.func def get_entry_exit_points(look_from, view_dir, bl, tr): ''' Computes the entry and exit points of a given ray Args: look_from (tl.vec3): Camera Position as vec3 view_dir (tl.vec): View direction as vec3, normalized bl (tl.vec3): Bottom left of the bounding box tr (tl.vec3): Top right of the bounding box Returns: float, float, bool: Distance to entry, distance to exit, bool whether box is hit ''' dirfrac = 1.0 / view_dir t1 = (bl.x - look_from.x) * dirfrac.x t2 = (tr.x - look_from.x) * dirfrac.x t3 = (bl.y - look_from.y) * dirfrac.y t4 = (tr.y - look_from.y) * dirfrac.y t5 = (bl.z - look_from.z) * dirfrac.z t6 = (tr.z - look_from.z) * dirfrac.z tmin = max(max(min(t1, t2), min(t3, t4)), min(t5, t6)) tmax = min(min(max(t1, t2), max(t3, t4)), max(t5, t6)) hit = True if tmax < 0.0 or tmin > tmax: hit = False return tmin, tmax, hit # @ti.func # def random (vec2 st) { # return fract(sin(dot(st.xy, # vec2(12.9898,78.233)))* # 43758.5453123); # } @ti.data_oriented class VolumeRaycaster(): def __init__(self, volume_resolution, render_resolution, max_samples=512, tf_resolution=128, fov=30.0, nearfar=(0.1, 100.0)): ''' Initializes Volume Raycaster. Make sure to .set_volume() and .set_tf_tex() after initialization Args: volume_resolution (3-tuple of int): Resolution of the volume data (w,h,d) render_resolution (2-tuple of int): Resolution of the rendering (w,h) tf_resolution (int): Resolution of the transfer function texture fov (float, optional): Field of view of the camera in degrees. Defaults to 60.0. nearfar (2-tuple of float, optional): Near and far plane distance used for perspective projection. Defaults to (0.1, 100.0). 
''' self.resolution = render_resolution self.aspect = render_resolution[0] / render_resolution[1] self.fov_deg = fov self.fov_rad = np.radians(fov) self.near, self.far = nearfar # Taichi Fields self.volume = ti.field(ti.f32, needs_grad=True) self.tf_tex = ti.Vector.field(4, dtype=ti.f32, needs_grad=True) self.tf_momentum = ti.Vector.field(4, dtype=ti.f32) self.render_tape = ti.Vector.field(4, dtype=ti.f32, needs_grad=True) self.output_rgba = ti.Vector.field(4, dtype=ti.f32, needs_grad=True) self.reference = ti.Vector.field(4, dtype=ti.f32) self.output_rgb = ti.Vector.field(3, dtype=ti.f32) self.valid_sample_step_count = ti.field(ti.i32) self.sample_step_nums = ti.field(ti.i32) self.entry = ti.field(ti.f32) self.exit = ti.field(ti.f32) self.rays = ti.Vector.field(3, dtype=ti.f32) self.loss = ti.field(ti.f32, (), needs_grad=True) self.max_valid_sample_step_count = ti.field(ti.i32, ()) self.max_samples = max_samples self.ambient = 0.4 self.diffuse = 0.8 self.specular = 0.3 self.shininess = 32.0 self.light_color = tl.vec3(1.0) self.cam_pos = ti.Vector.field(3, dtype=ti.f32) self.cuda = torch.device("cuda") volume_resolution = tuple(map(lambda d: d // 4, volume_resolution)) render_resolution = tuple(map(lambda d: d // 16, render_resolution)) ti.root.dense(ti.ijk, volume_resolution) \ .dense(ti.ijk, (4, 4, 4)) \ .place(self.volume) ti.root.dense(ti.ijk, (*render_resolution, max_samples)) \ .dense(ti.ijk, (16, 16, 1)) \ .place(self.render_tape) ti.root.dense(ti.ij, render_resolution) \ .dense(ti.ij, (16, 16)) \ .place(self.valid_sample_step_count, self.sample_step_nums) ti.root.dense(ti.ij, render_resolution) \ .dense(ti.ij, (16, 16)) \ .place(self.output_rgba, self.reference, self.output_rgb) ti.root.dense(ti.ij, render_resolution) \ .dense(ti.ij, (16, 16)) \ .place(self.entry, self.exit) ti.root.dense(ti.ij, render_resolution) \ .dense(ti.ij, (16, 16)).place(self.rays) ti.root.dense(ti.i, tf_resolution).place(self.tf_tex) ti.root.dense(ti.i, tf_resolution) \ .place(self.tf_tex.grad, self.tf_momentum) ti.root.place(self.cam_pos) ti.root.lazy_grad() def set_volume(self, volume): self.volume.from_numpy(volume.astype(np.float32)) def set_tf_tex(self, tf_tex): self.tf_tex.from_numpy(tf_tex.astype(np.float32)) def set_reference(self, reference): self.reference.from_numpy(reference.astype(np.float32)) @ti.func def get_ray_direction(self, orig, view_dir, x: float, y: float): ''' Compute ray direction for perspecive camera. 
Args: orig (tl.vec3): Camera position view_dir (tl.vec3): View direction, normalized x (float): Image coordinate in [0,1] along width y (float): Image coordinate in [0,1] along height Returns: tl.vec3: Ray direction from camera origin to pixel specified through `x` and `y` ''' u = x - 0.5 v = y - 0.5 up = tl.vec3(0.0, 1.0, 0.0) right = tl.cross(view_dir, up).normalized() up = tl.cross(right, view_dir).normalized() near_h = 2.0 * ti.tan(self.fov_rad) * self.near near_w = near_h * self.aspect near_m = orig + self.near * view_dir near_pos = near_m + u * near_w * right + v * near_h * up return (near_pos - orig).normalized() @ti.func def sample_volume_trilinear(self, pos): ''' Samples volume data at `pos` and trilinearly interpolates the value Args: pos (tl.vec3): Position to sample the volume in [-1, 1]^3 Returns: float: Sampled interpolated intensity ''' pos = tl.clamp(((0.5 * pos) + 0.5), 0.0, 1.0) \ * ti.static(tl.vec3(*self.volume.shape) - 1.0 - 1e-4) x_low, x_high, x_frac = low_high_frac(pos.x) y_low, y_high, y_frac = low_high_frac(pos.y) z_low, z_high, z_frac = low_high_frac(pos.z) x_high = min(x_high, ti.static(self.volume.shape[0] - 1)) y_high = min(y_high, ti.static(self.volume.shape[1] - 1)) z_high = min(z_high, ti.static(self.volume.shape[2] - 1)) # on z_low v000 = self.volume[x_low, y_low, z_low] v100 = self.volume[x_high, y_low, z_low] x_val_y_low = tl.mix(v000, v100, x_frac) v010 = self.volume[x_low, y_high, z_low] v110 = self.volume[x_high, y_high, z_low] x_val_y_high = tl.mix(v010, v110, x_frac) xy_val_z_low = tl.mix(x_val_y_low, x_val_y_high, y_frac) # on z_high v001 = self.volume[x_low, y_low, z_high] v101 = self.volume[x_high, y_low, z_high] x_val_y_low = tl.mix(v001, v101, x_frac) v011 = self.volume[x_low, y_high, z_high] v111 = self.volume[x_high, y_high, z_high] x_val_y_high = tl.mix(v011, v111, x_frac) xy_val_z_high = tl.mix(x_val_y_low, x_val_y_high, y_frac) return tl.mix(xy_val_z_low, xy_val_z_high, z_frac) @ti.func def get_volume_normal(self, pos): delta = 1e-3 x_delta = tl.vec3(delta, 0.0, 0.0) y_delta = tl.vec3(0.0, delta, 0.0) z_delta = tl.vec3(0.0, 0.0, delta) dx = self.sample_volume_trilinear(pos + x_delta) \ - self.sample_volume_trilinear(pos - x_delta) dy = self.sample_volume_trilinear(pos + y_delta) \ - self.sample_volume_trilinear(pos - y_delta) dz = self.sample_volume_trilinear(pos + z_delta) \ - self.sample_volume_trilinear(pos - z_delta) return tl.vec3(dx, dy, dz).normalized() @ti.func def apply_transfer_function(self, intensity: float): ''' Applies a 1D transfer function to a given intensity value Args: intensity (float): Intensity in [0,1] Returns: tl.vec4: Color and opacity for given `intensity` ''' length = ti.static(float(self.tf_tex.shape[0] - 1)) low, high, frac = low_high_frac(intensity * length) return tl.mix( self.tf_tex[low], self.tf_tex[min(high, ti.static(self.tf_tex.shape[0] - 1))], frac) @ti.kernel def compute_entry_exit(self, sampling_rate: float, jitter: int): ''' Produce entry, exit, rays, mask buffers Args: sampling_rate (float): Sampling rate (multiplier to Nyquist criterium) jitter (int): Bool whether to apply jitter or not ''' for i, j in self.entry: # For all pixels max_x = ti.static(float(self.render_tape.shape[0])) max_y = ti.static(float(self.render_tape.shape[1])) look_from = self.cam_pos[None] view_dir = (-look_from).normalized() bb_bl = ti.static(tl.vec3(-1.0, -1.0, -1.0)) # Bounding Box bottom left bb_tr = ti.static(tl.vec3(1.0, 1.0, 1.0)) # Bounding Box bottom right x = (float(i) + 0.5) / max_x # Get pixel centers in range 
(0,1) y = (float(j) + 0.5) / max_y # vd = self.get_ray_direction(look_from, view_dir, x, y) # Get exact view direction to this pixel tmin, tmax, hit = get_entry_exit_points(look_from, vd, bb_bl, bb_tr) # distance along vd till volume entry and exit, hit bool vol_diag = ti.static((tl.vec3(*self.volume.shape) - tl.vec3(1.0)).norm()) ray_len = tmax - tmin n_samples = hit * ( ti.floor(sampling_rate * ray_len * vol_diag) + 1 ) # Number of samples according to https://osf.io/u9qnz if jitter: tmin += ti.random(dtype=float) * ray_len / n_samples self.entry[i, j] = tmin self.exit[i, j] = tmax self.rays[i, j] = vd self.sample_step_nums[i, j] = n_samples @ti.kernel def raycast(self, sampling_rate: float): ''' Produce a rendering. Run compute_entry_exit first! ''' for i, j in self.valid_sample_step_count: # For all pixels for sample_idx in range(self.sample_step_nums[i, j]): look_from = self.cam_pos[None] if self.render_tape[i, j, sample_idx - 1].w < 0.99 \ and sample_idx < ti.static(self.max_samples): tmax = self.exit[i, j] n_samples = self.sample_step_nums[i, j] ray_len = (tmax - self.entry[i, j]) tmin = self.entry[i, j] + 0.5 * ray_len / n_samples # Offset tmin as t_start vd = self.rays[i, j] pos = look_from + tl.mix( tmin, tmax, float(sample_idx) / float(n_samples - 1)) * vd # Current Pos light_pos = look_from + tl.vec3(0.0, 1.0, 0.0) intensity = self.sample_volume_trilinear(pos) sample_color = self.apply_transfer_function(intensity) opacity = 1.0 - ti.pow(1.0 - sample_color.w, 1.0 / sampling_rate) # if sample_color.w > 1e-3: normal = self.get_volume_normal(pos) light_dir = (pos - light_pos).normalized() # Direction to light source n_dot_l = max(normal.dot(light_dir), 0.0) diffuse = self.diffuse * n_dot_l r = tl.reflect(light_dir, normal) # Direction of reflected light r_dot_v = max(r.dot(-vd), 0.0) specular = self.specular * pow(r_dot_v, self.shininess) shaded_color = tl.vec4( (diffuse + specular + self.ambient) * sample_color.xyz * opacity *
fundamental issues relating to potential manufacturing, operational and any other possible deficiencies. At the time of writing this submission it has become apparent that CASA recognised this deficiency in terms of their understanding of the data that was provided on 3 November. On 18 November CASA wrote to RA-Aus seeking instruction on how to identify the 28 engine related issues referred to earlier in this submission. It is of serious concern that CASA not only fails to provide a basis for its decision, but also does not understand the data provided by RA-Aus and has acted on a flawed understanding of the issues.
Impacts on industry, aviation and RA-Aus
Whilst the impacts on industry should not be an overriding factor when related to safety and the decisions made, consideration must be given to the potential financial and reputational damage caused to industry by the issuing of this consultative document with insufficient analysis of data and short response times. Moreover, the ability of a crippled industry to cope with and implement growing requirements in terms of safety should be a consideration. This is certainly the case when proposed actions may indirectly and adversely impact the industry's ability to sufficiently address future safety related issues. RA-Aus has approximately 1000 affected aircraft on its register and charges $130 for the annual registration of each. With around three times as many pilots as aircraft, it could be argued that there will be around 3000 affected members, each of whom pays $210 per year to maintain their pilot certificate.1 If all those affected chose to discontinue their relationship with RA-Aus as a result of these restrictions then the worst case loss of income may be in the order of $760,000 per annum. With an operating budget of approximately $2.5m per annum (compared to some $180m for CASA to administer around the same number of private pilots), if even a small proportion of these pilots and aircraft owners left the association then the ability of RA-Aus to administer its safety related functions on behalf of the Government would be severely impacted.2 If Jabiru, the aircraft and engine manufacturer, were to fail, this would result in a worsening of the situation and a deterioration of safety standards. Not only would RA-Aus struggle to maintain its activities in relation to improving safety in light aircraft, current Jabiru owners would have no access to ongoing support or spare parts for their aircraft. That is to say, if Jabiru failed as a result of CASA's actions, owners would no longer be able to maintain their aircraft to a safe standard due to a lack of ongoing support from the manufacturer. With US media outlets already publicising CASA's blunt approach to the problem, the news is reaching foreign shores. Indeed, at the time of writing this submission RA-Aus has become aware of at least two foreign manufacturers that have cancelled orders, which will have a notable financial impact on Jabiru and affect its ability to address the many concerns that CASA may have.
1 Given that more than two thirds of RA-Aus’ 170+ flight training facilities use Jabiru aircraft or engines, the true number of affected pilots may indeed be much higher.
2 While CASA is not transparent in terms of how it allocates its funding, it should be noted that its total operating budget is in the order of $180m annually.
With similar numbers of private pilot licence holders (although again, CASA is not transparent in terms of how many are active) to RA-Aus’ active pilot community, it is clear that RA-Aus performs comparable functions in a much more efficient manner than CASA. CASA has been cited in the Forsyth Report as being adversarial with industry and, some five months after the publication of this report, appears to be maintaining that style of approach despite it being ineffective, as noted in the same document. The manner in which this matter has been handled to date is a standout example of the type of behaviour for which the Forsyth Report reserved its strongest criticism. That the CASA personnel involved either didn’t recognise or didn’t care that their actions constitute that kind of behaviour is of great concern and suggests that CASA has made no effort to address the significant concerns of the report in the five months since its publication. CASA states on its website that its mission is “To enhance and promote aviation safety through effective regulation and by encouraging the wider aviation community to embrace and deliver higher standards of safety”, yet the actions outlined above contradict this mission. Indeed, the actions taken on this occasion can hardly be described as encouraging a positive result. In addition to these impacts, the aircraft types in question form a large part of the fleet used for training purposes. Flight training is the first opportunity for RA-Aus (and any flying body) to impress the need for high levels of safety on new pilots. With two thirds of the RA-Aus flight training facilities relying on Jabiru for their operations, this safety message can no longer be promulgated to pilots. CASA will argue that the proposed restrictions do not eliminate pilot training; they simply restrict it to dual pilot operations. That is, pilot training can continue but trainee pilots cannot go solo using Jabiru aircraft. This sentiment further reinforces the lack of understanding of the industry on the part of CASA, the body responsible for regulating it. Pilots are required, by law and under the RA-Aus Operations Manual, to undergo solo training before being issued with a licence or pilot certificate. This is a fundamental requirement of any training regime, whether it be administered by RA-Aus, CASA or another body. The simple fact is that you cannot become qualified to fly an aircraft in Australia without conducting solo flying time. Thus, the restriction renders flying schools unable to provide such training and effectively shuts them down. Before any student is permitted to undertake a solo flight of any type, competency in managing emergency situations, which include engine failures, must be demonstrated. This is a requirement of both the CASA and RA-Aus flight training syllabi and is intended to equip pilots with the knowledge required to safely cope with such an event. Thus the recommendation to restrict solo flight training operations in Jabiru powered aircraft is a position that RA-Aus patently disagrees with and vigorously opposes. The negative effects of the proposed restrictions must include the potential loss of income and the threat to the livelihoods of those Australians who are employed in the industry.
These include, but are not limited to, the direct impact on manufacturing (including the sub-contractors involved in Jabiru’s manufacturing processes), the employment of aircraft maintainers in the industry (a sector already crumbling under pressure) and the pilots and instructors who have devoted significant amounts of time and money to gaining their flying credentials. With dwindling opportunities in the sector, there is a significant possibility that these people, especially pilots, will leave the country to seek work elsewhere, contributing to the existing decline in aviation expertise in Australia. For private operators of the aircraft the ramifications are equally significant. With many individuals purchasing these aircraft with the intention of using them as a two-seat vehicle, the proposed restrictions effectively render them unsuitable for this type of operation. Furthermore, each and every aircraft is required to have a warning sticker attached to the instrument panel, visible to all occupants, stating that the aircraft does not comply with the standard safety regulations and that all persons fly in the aircraft at their own risk. In addition to this, many operators use their aircraft for work related purposes such as cattle spotting, observing fences and checking dam levels. For regional Australia, where these types of operations are common, the implications of the restrictions will extend well beyond aviation and into other struggling sectors such as agriculture. With regional employment already suffering, this proposed restriction has the potential to worsen an already difficult situation. There is also a significant chance that many
(i.e., $\mathbf{Z}^{\textrm{H}} \mathbf{Z} = \mathbf{I}_N$). Consequently, $\mathbf{Z}$ does not alter the transmit power. In addition, applying $\mathbf{Z}^{\textrm{H}}$ at the receiver does not alter the noise statistics. As in the case of CP-DSSS, we also assume that $\mathbf{Z}$ is circulant. This property will prove useful for efficient despreading in the FD. The expander matrix, $\mathbf{E}_L$, performs the upsample function and is formed by taking $\mathbf{I}_{N/L}$ and inserting $L-1$ rows of zeros after each row, resulting in a matrix with dimensions $N \times (N/L)$. The received signal from the $K$ UEs at the $m^{\textrm{th}}$ antenna after CP removal is expressed as \begin{equation} \mathbf{y}_m^{\textrm{UL}} = \sum_{k=1}^K \mathbf{H}_{m,k} \mathbf{x}_k^{\textrm{UL}} + \mathbf{w}_m, \label{eq:02} \end{equation} where $\mathbf{H}_{m,k}$ is the $N \times N$ circulant convolutional channel matrix between antenna $m$ and user $k$ and $\mathbf{w}_m$ is the receiver noise ($\mathbf{w}_m \sim \mathcal{CN}( \mathbf{0}, \sigma^2_w \mathbf{I}_N )$) at the $m^{\textrm{th}}$ antenna. Let $\mathbf{h}_{m,k}$ represent the channel impulse response vector between antenna $m$ and user $k$, which is of length $L_h$. $\mathbf{H}_{m,k}$ is formed by first taking $\mathbf{h}_{m,k}$ and appending $N-L_h$ zeros to form the base vector ${\mathbf{h}_{m,k} }_{(0)}$. We then take downward cyclic shifts of ${ \mathbf{h}_{m,k} }_{(0)}$ to create \begin{equation} \mathbf{H}_{m,k}= \begingroup \setlength\arraycolsep{4pt} \begin{bmatrix} { \mathbf{h}_{m,k} }_{(0)} & { \mathbf{h}_{m,k} }_{(1)} & \dots & { \mathbf{h}_{m,k} }_{(N-2)} & { \mathbf{h}_{m,k} }_{(N-1)} \end{bmatrix} \endgroup, \label{eq:03} \end{equation} where the parenthetical subscript represents the number of cyclical shifts applied to the base vector. We will also use the parenthetical subscript later in the paper to cyclically shift the expander matrix, $\mathbf{E}_L$. In order to process the received signal in the FD, we take the $N$-point DFT of the received signal vector in \eqref{eq:02} after substituting \eqref{eq:01}. This results in the expression \begin{equation} \tilde{ \mathbf{y} }_m^{\textrm{UL}} = \sum_{k=1}^K \mathbf{\Lambda}_{m,k} \mathbf{\Omega} \frac{ 1 }{ \sqrt{L} } \mathbf{T}_{N,L} \tilde{ \mathbf{s} }_{k,1}^{\textrm{UL}} + \tilde{ \mathbf{w} }_m, \label{eq:04} \end{equation} where the tilde represents the FD representation of the vectors and $\mathbf{\Lambda}_{m,k}$, $\mathbf{\Omega}$, and $ \mathbf{T}_{N,L} / \sqrt{L}$ are the FD representations of $\mathbf{H}_{m,k}$, $\mathbf{Z}$, and $\mathbf{E}_L$, respectively. The FD representation of a matrix may not be a common term, but it is applicable in cases where a matrix can be decomposed using a DFT matrix and an inverse DFT (IDFT) matrix. It is helpful to realize that the DFT decomposition is similar to a singular value decomposition (SVD). In fact, the two are related for circulant matrices since the absolute values of the FD representation elements are equal to the singular values of the SVD \cite{Gray:2006}. Any circulant $N \times N$ matrix (e.g., $\mathbf{H}_{m,k}$ and $\mathbf{Z}$) is diagonalized by the $N$-point DFT matrix, $\mbox{\boldmath$\cal F$}_N$. Consequently, the $\mathbf{\Lambda}_{m,k}$ and $\mathbf{\Omega}$ matrices are diagonal. We note that $\mbox{\boldmath$\cal F$}_N$ is scaled such that $\mbox{\boldmath$\cal F$}_N^{-1} = \mbox{\boldmath$\cal F$}_N^{\textrm{H}}$.
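Although tangential to the development here, the diagonalization property just stated is easy to verify numerically. The following sketch is for illustration only (the sizes $N$, $L_h$ and the random impulse response are arbitrary test values, and NumPy's DFT convention is assumed); it builds a circulant matrix from downward cyclic shifts of a zero-padded impulse response and checks that the scaled DFT matrix diagonalizes it, with eigenvalues given by the $N$-point DFT of the base vector:
\begin{verbatim}
import numpy as np

# Illustrative check: F H F^{-1} is diagonal, with diagonal = DFT of the base vector.
N, L_h = 16, 4                                    # arbitrary test sizes
rng = np.random.default_rng(0)
h = rng.standard_normal(L_h) + 1j * rng.standard_normal(L_h)
h0 = np.concatenate([h, np.zeros(N - L_h)])       # base vector h_{(0)}
H = np.column_stack([np.roll(h0, s) for s in range(N)])  # downward cyclic shifts

F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary DFT matrix, F^{-1} = F^H
Lambda = F @ H @ F.conj().T                       # should come out diagonal
assert np.allclose(Lambda, np.diag(np.fft.fft(h0)), atol=1e-10)
\end{verbatim}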
Hence, $\mathbf{H}_{m,k} = \mbox{\boldmath$\cal F$}_N^{-1} \mathbf{\Lambda}_{m,k} \mbox{\boldmath$\cal F$}_N$ and $\mathbf{Z} = \mbox{\boldmath$\cal F$}_N^{-1} \mathbf{\Omega} \mbox{\boldmath$\cal F$}_N$ \cite{Gray:2006}. It follows that taking the $N$-point DFT of the columns of $\mathbf{H}_{m,k}$ results in $\mbox{\boldmath$\cal F$}_N \mathbf{H}_{m,k} = \mathbf{\Lambda}_{m,k} \mbox{\boldmath$\cal F$}_N$, where $\mathbf{\Lambda}_{m,k}$ is a diagonal matrix containing the eigenvalues of $\mathbf{H}_{m,k}$. Let $\lambda_{m,k,i}$ represent the $i^{\textrm{th}}$ value along the diagonal of $\mathbf{\Lambda}_{m,k}$. The eigenvalues can also be obtained by taking the N-point DFT of the channel impulse response $\mathbf{h}_{m,k}$, which is used to form $\mathbf{H}_{m,k}$. For more efficient computation, it is noted that all of the FD conversions can be performed with the Fast Fourier Transform (FFT) instead of the DFT. We use the fact that $\mathbf{\Omega}$ is diagonal to show that $\mathbf{\Lambda}_{m,k}$ and $\mathbf{\Omega}$ are commutable. As a result, despreading the received signal is performed in the FD by multiplying the received signal by $\mathbf{\Omega}^*$ since $\mathbf{\Omega}^* \mathbf{\Omega} = \mathbf{I}_N$ because $\mathbf{Z}$ is unitary. Based on this result, future uses of \eqref{eq:04} will drop the $\mathbf{\Omega}$ spreading term in order to focus on the processing after despreading. The expander matrix, $\mathbf{E}_L$, is not circulant, but due to its structure, it can still be factored with the DFT and IDFT matrices. Because $\mathbf{E}_L$ is not a square matrix, the size of the DFT and IDFT matrices differ, i.e., $\mathbf{E}_L = \mbox{\boldmath$\cal F$}_N^{-1} ( \mathbf{T}_{N,L} / \sqrt{L} ) \mbox{\boldmath$\cal F$}_{N/L}$, where $\mbox{\boldmath$\cal F$}_{N/L}$ is the $N/L$-point DFT matrix. The matrix $\mathbf{T}_{N,L}$ is the $N \times (N/L)$ vertical tiling matrix, where $\mathbf{T}_{N,L} = [ \mathbf{I}_{N/L} \ \mathbf{I}_{N/L} \ \dots \ \mathbf{I}_{N/L} ]^{\textrm{T}}$. We note here that $\tilde{\mathbf{s}}^{\textrm{UL}}_{k,1} = \mbox{\boldmath$\cal F$}_{N/L} \mathbf{s}^{\textrm{UL}}_{k,1}$. Due to the tiling matrix structure, the FD representation of the $N/L$ symbols will be replicated $L$ times in the available spectrum with an amplitude scaling of $1/\sqrt{L}$. \section{Single-stream Detection} \label{Single_Stream_Detection} Single-stream detection in the FD follows a similar approach to the MRC-MMSE detector presented in \cite{Kenney:TWC:2021}. As such, we define the vector of received signals for the $n^{\textrm{th}}$ bin as $\tilde{\mathbf{y}}_{:,n}^{\textrm{UL}} = [ \tilde{y}_{1,n}^{\textrm{UL}} \ \tilde{y}_{2,n}^{\textrm{UL}} \ \dots \ \tilde{y}_{M,n}^{\textrm{UL}} ]^{\textrm{T}}$. One key observation for single-stream operation is that there are $L$ frequency bins of $\tilde{ \mathbf{y} }_m^{\textrm{UL}}$ that have components of $\tilde{s}_{k,1,n}^{\textrm{UL}}$, where the third index specifies the frequency bin. Each of those bins are spaced by $N/L$ bins due to the structure of the tiling matrix, $\mathbf{T}_{N,L}$. Hence, we define the composite vector of received signals as $\overline{\mathbf{y}}_{:,n}^{\textrm{UL}} = [ ( \tilde{\mathbf{y}}^{\textrm{UL}}_{:,n} )^{\textrm{T}} \ ( \tilde{\mathbf{y}}^{\textrm{UL}}_{:,n+N/L} )^{\textrm{T}} \ \dots \ ( \tilde{\mathbf{y}}^{\textrm{UL}}_{:,n+(L-1)N/L} )^{\textrm{T}} ]^{\textrm{T}}$, for $n=0, 1, ..., N/L-1$. 
In order to express the equation for the composite received vector, $\overline{\mathbf{y}}_{:,n}^{\textrm{UL}}$, we now construct the FD composite channel matrix. The FD channel matrix for bin $n$ is based on the $n^{\textrm{th}}$ diagonal element of each $\mathbf{\Lambda}_{m,k}$ matrix and is defined as \begin{equation} \mathbf{A}_n = \begin{bmatrix} \lambda_{1,1,n} & \lambda_{1,2,n} & \dots & \lambda_{1,K,n} \ \\ \lambda_{2,1,n} & \lambda_{2,2,n} & \dots & \lambda_{2,K,n} \ \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_{M,1,n} & \lambda_{M,2,n} & \dots & \lambda_{M,K,n} \ \\ \end{bmatrix}. \label{eq:05} \end{equation} We note that the eigenvalues that form $\mathbf{A}_n$ are each a sum of the $L_h$ elements of the channel impulse response, $\mathbf{h}_{m,k}$, after being rotated according to the DFT coefficients. Here, we assume that $\mathbf{h}_{m,k}$ is composed of zero-mean complex Gaussian random variables. Since the channel gain is normalized to unity, the elements of $\mathbf{A}_n$ are also zero-mean complex Gaussian random variables with unit variance. Given that the rows of $\mathbf{A}_n$ span all of the users and each row corresponds to a specific antenna, the rows of $\mathbf{A}_n$ are zero-mean, complex Gaussian vectors with covariance $\mathbf{I}_K$. Consequently, the matrix $\mathbf{A}_n^{\textrm{H}} \mathbf{A}_n$ is a central complex Wishart matrix with $M$ degrees of freedom following Definition 2.3 of \cite{Verdu:Random_Matrix_Theory}. The composite channel matrix for single-stream processing in the FD is defined as $\overline{\mathbf{A}}_n =[ \mathbf{A}_n^{\textrm{T}} \ \mathbf{A}_{n+N/L}^{\textrm{T}} \ \dots \ \mathbf{A}_{n+(L-1)N/L}^{\textrm{T}} ]^{\textrm{T}}$, for $n=0, 1, ..., N/L-1$. Note that $\overline{\mathbf{A}}_n$ is an $ML \times K$ matrix. Since each of the constituent channel matrices is taken from a different part of the spectrum, they appear as though they are from unique antennas. By extension, the matrix $\overline{\mathbf{A}}_n^{\textrm{H}} \overline{\mathbf{A}}_n$ is a central complex Wishart matrix with $ML$ degrees of freedom. In effect, we have created $ML$ virtual antennas from the $M$ physical antennas. By increasing the number of virtual antennas by a factor of $L$, the single-stream processing has the ability to process a large number of UEs simultaneously. In fact, the number of UEs can exceed the number of physical antennas as long as the following condition is met: $K < ML$. The FD composite noise vector is defined as $\overline{\mathbf{w}}_{:,n} = [ ( \tilde{\mathbf{w}}_{:,n} )^{\textrm{T}} \ (\tilde{\mathbf{w}}_{:,n+N/L} )^{\textrm{T}} \ \dots \ ( \tilde{\mathbf{w}}_{:,n+(L-1)N/L} )^{\textrm{T}} ]^{\textrm{T}}$. We can now express the FD composite received vectors for $n=0, 1, ..., N/L-1$ as \begin{equation} \overline{\mathbf{y}}_{:,n}^{\textrm{UL}} = \frac{1}{\sqrt{L}} \overline{\mathbf{A}}_n \tilde{\mathbf{s}}_{:,1,n}^{\textrm{UL}} + \overline{\mathbf{w}}_{:,n}, \label{eq:06} \end{equation} where $\tilde{\mathbf{s}}_{:,1,n}^{\textrm{UL}}$ is the FD vector of the transmitted symbols corresponding to bin $n$ defined as $\tilde{\mathbf{s}}_{:,1,n}^{\textrm{UL}} = [ \tilde{s}_{1,1,n}^{\textrm{UL}} \ \tilde{s}_{2,1,n}^{\textrm{UL}} \ \dots \ \tilde{s}_{K,1,n}^{\textrm{UL}} ]^{\textrm{T}}$. Now that we have a compact expression for $\overline{\mathbf{y}}_{:,n}^{\textrm{UL}}$, we can
The three general cases are presented in Figure \ref{fig3}. \begin{figure} \begin{centering} \includegraphics[scale=1]{kepler.eps} \caption[Possible Kepler trajectories]{Possible Kepler trajectories: an ellipse, parabola, and hyperbola }\label{fig3} \end{centering} \end{figure} Returning to Kepler's laws, the first is now obvious. The only closed trajectories, or orbits, are elliptical. Kepler's second law is a statement of conservation of angular momentum. Indeed, introduce polar coordinates such that $q_1=r\cos\phi$, $q_2=r\sin\phi$, and note that along the trajectory $${\ell}={\cal L}_2=q_1(t)p_2(t)-q_2(t)p_1(t)= q_1\frac{dq_2}{dt}-q_2\frac{dq_1}{dt}=r^2\frac{d\phi}{dt}.$$ The area traced out from time $0$ to time $t$ is $A(t)=\frac12\int_{\phi(0)}^{\phi(t)}\!r^2(\phi)\,\mathrm{d}\phi$. Differentiating with respect to time: $\frac{d}{dt}A(t)=\frac{1}{2}r^2\frac{d\phi(t)}{dt}=\frac{\ell}{2}$, so the rate is constant. Note that Kepler's third law is only valid for closed trajectories: ellipses. We may write the period $T$ of such an orbit in terms of the constants of the motion. Explicit evaluation for $\phi(0)=0$, $\phi(T)=2\pi$, yields $A(T)=\frac{\ell T}{2}$ as the area of the ellipse. Kepler's third law follows easily from equation (\ref{keplerconics}) and the simple calculus expression for the area of an ellipse. \subsection{A Kepler analogue on the 2-sphere} \label{kepler_analogues} There are analogues of the Kepler problem on spaces of nonzero constant curvature that are also superintegrable. We consider a 2-sphere analogue and show that superintegrability yields information about the trajectories, and that in a particular limit we recover the Euclidean space problem. It is convenient to consider the 2-sphere as a two-dimensional surface embedded in Euclidean 3-space. Let $s_1,s_2,s_3$ be standard Cartesian coordinates. Then the equation \begin{equation}\label{sphere}s_1^2+s_2^2+s_3^2=1\end{equation} defines the unit sphere. The embedding phase space is now six-dimensional with conjugate momenta $p_1,p_2,p_3$. The phase space for motion on the 2-sphere will be a four-dimensional submanifold of this Euclidean phase space. One of the constraints is (\ref{sphere}). Since the tangent vector to any trajectory constrained to the sphere is orthogonal to the normal vector, we have the additional phase space constraint $s_1p_1+s_2p_2+s_3p_3=0$. The Hamiltonian is \begin{equation}\label{hamiltoniansphere}{\cal H}={\cal J}_1^2+{\cal J}_2^2+{\cal J}_3^2+\frac{\alpha s_3}{\sqrt{s_1^2+s_2^2}}\end{equation} with $\alpha<0$ and ${\cal J}_1=s_2p_3-s_3p_2$, where ${\cal J}_2$, ${\cal J}_3$ are obtained by cyclic permutations of $1,2,3$. If the universe has constant positive curvature, this would be a possible model for planetary motion about the Sun \cite{Segal}. Note that the ${\cal J}_k$ are angular momentum generators, although ${\cal J}_1,{\cal J}_2$ are not constants of the motion. Due to the embedding, we have $${\cal H}'=p_1^2+p_2^2+p_3^2+\frac{\alpha s_3}{(s_1^2+s_2^2+s_3^2)\sqrt{s_1^2+s_2^2}}=\frac{{\cal H} +(s_1p_1+s_2p_2+s_3p_3)^2}{s_1^2+s_2^2+s_3^2},$$ so we can use the usual Euclidean Poisson bracket $\{{\cal F}, {\cal G}\}=\sum_{i=1}^3(-\partial_{s_i}{\cal F}\partial_{p_i}{\cal G}+\partial_{p_i}{\cal F}\partial_{s_i}{\cal G})$ for our computations if at the end we restrict to the unit sphere. (Note that we have normalized the parameters so that $m/2=1$.)
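As a numerical sanity check on this setup (a sketch with arbitrary test values for $\alpha$ and the initial phase-space point, not code from the text), one can integrate the flow generated by ${\cal H}$ with this Euclidean bracket, $\dot s_j=\partial{\cal H}/\partial p_j$, $\dot p_j=-\partial{\cal H}/\partial s_j$, and confirm that the constraints $s_1^2+s_2^2+s_3^2=1$ and $\sum_j s_jp_j=0$, together with ${\cal H}$ and ${\cal J}_3$, are preserved along the trajectory:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

alpha = -2.0   # arbitrary attractive coupling (alpha < 0), for illustration

def rhs(t, y):
    s, p = y[:3], y[3:]
    rho2 = s[0]**2 + s[1]**2
    # H = |s x p|^2 + alpha*s3/sqrt(s1^2+s2^2) = |s|^2|p|^2 - (s.p)^2 + potential
    dHdp = 2.0*np.dot(s, s)*p - 2.0*np.dot(s, p)*s
    dHds = 2.0*np.dot(p, p)*s - 2.0*np.dot(s, p)*p
    dHds[0] += -alpha*s[2]*s[0]*rho2**-1.5
    dHds[1] += -alpha*s[2]*s[1]*rho2**-1.5
    dHds[2] += alpha*rho2**-0.5
    return np.concatenate([dHdp, -dHds])   # ds/dt = dH/dp, dp/dt = -dH/ds

# Arbitrary admissible initial point: |s| = 1, s.p = 0, away from the poles.
s0 = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])
p0 = np.array([0.0, 1.3, 0.0])
sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([s0, p0]),
                rtol=1e-10, atol=1e-12)

def invariants(y):
    s, p = y[:3], y[3:]
    J = np.cross(s, p)
    H = np.dot(J, J) + alpha*s[2]/np.hypot(s[0], s[1])
    return H, J[2], np.dot(s, s), np.dot(s, p)   # H, J3, |s|^2, s.p

print(invariants(sol.y[:, 0]))
print(invariants(sol.y[:, -1]))   # should match the previous line to high accuracy
\end{verbatim}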
The Hamilton equations for the trajectories $s_j(t), p_j(t)$ in phase space are $ \frac{ds_j}{dt}=\{ {\cal H},s_j\}$, $ \frac{dp_j}{dt}=\{ {\cal H},p_j\}$, $j=1,2,3$. The classical basis for the constants is \begin{equation}\label{constants6} {\cal L}_1=2{\cal J}_1 {\cal J}_3-\frac{\alpha s_1}{\sqrt{s_1^2+s_2^2}},\ {\cal L}_2=2{\cal J}_2{\cal J}_3-\frac{\alpha s_2}{\sqrt{s_1^2+s_2^2}},\ {\cal X}={\cal J}_3.\end{equation} The structure and Casimir relations are \begin{equation}\label{structure6}\{{\cal X},{\cal L}_1\}=-{\cal L}_2,\ \{{\cal X},{\cal L}_2\}={\cal L}_1,\ \{{\cal L}_1,{\cal L}_2\}=4({\cal H}-2{\cal X}^2){\cal X}, \end{equation} \begin{equation}\label{classcasimir6}{\cal L}_1^2 + {\cal L}_2^2+4 {\cal X}^4 -4{\cal H} {\cal X}^2-\alpha^2=0. \end{equation} Here, $({{\cal L}_1},{{\cal L}_2})$ transforms as a vector with respect to rotations about the $s_3$-axis. The Casimir relation expresses the square of the length of this vector in terms of the other constants of the motion: ${\cal L}_1^2+{\cal L}_2^2=\kappa^2$, $\kappa^2=\alpha^2+4{\cal H}{\cal X}^2-4 {\cal X}^4$, where $\kappa\ge 0$. We choose the $s_1,s_2,s_3$ coordinate system so that the vector points in the direction of the positive $s_1$ axis: $({\cal L}_1,{\cal L}_2)=(\kappa,0)$. Then \begin{equation}\label{symmetries}{\cal J}_1{\cal J}_3=\frac{{\alpha}s_1}{2\sqrt{s_1^2+s_2^2}}+\frac{\kappa}{2},\quad {\cal J}_2{\cal J}_3=\frac{{\alpha}s_2}{2\sqrt{s_1^2+s_2^2}},\end{equation} \[{\cal J}_3={\cal X},\quad {\cal J}_1^2+{\cal J}_2^2+{\cal J}_3^2={\cal H}-\frac{{\alpha}s_3}{\sqrt{s_1^2+s_2^2}}.\] Substituting the first three equations into the fourth, we obtain: \begin{equation}\label{cone2}\left({\cal H}{\cal X}^2-(\frac{\alpha^2}{4}+\frac{\kappa^2}{4})-{\cal X}^4\right)^2(s_1^2+s_2^2)-\alpha^2(\frac{\kappa s_1}{2}+{\cal X}^2s_3)^2=0\end{equation} For fixed values of the constants of the motion equation (\ref{cone2}) describes a cone. Thus the orbit lies on the intersection of this cone with the unit sphere $s_1^2+s_2^2+s_3^2=1$, a conic section. This is the spherical geometry analog of Kepler's first law. A convenient way to view the trajectories is to project them onto the $(s_1,s_2)$-plane: $(s_1,s_2,s_3)\to (s_1,s_2)$. The projected points describe a curve in the unit disc $s_1^2+s_2^2\le 1$. This curve is defined by \begin{equation}\label{projectedtraj}[{\cal H}{\cal X}^2-(\frac{\alpha^2}{4}+\frac{\kappa^2}{4})-{\cal X}^4]^2(s_1^2+s_2^2)-\alpha^2(\frac{\kappa s_1}{2}\pm{\cal X}^2\sqrt{1-s_1^2-s_2^2})^2=0.\end{equation} The plus sign corresponds to the projection of the trajectory from the northern hemisphere, the minus sign to projection from the southern hemisphere. Notice that the potential has an attractive singularity at the north pole: $s_1=s_2=0,s_3=1$ and a repulsive singularity at the south pole. In polar coordinates $s_1=r\cos\phi$, $s_2=r\sin\phi$ the equation of the projected orbit on the $(s_1,s_2)$-plane is given by \begin{equation}\label{polarorbit} r^2=\frac{4{\cal X}^4}{4{\cal X}^4+(-\alpha+\kappa\cos\phi)^2},\quad 0\le\phi<2\pi.\end{equation} \begin{theorem} For nonzero angular momentum, the projection sweeps out area $A(t)$ in the plane at a constant rate $\frac{dA}{dt}={\cal X}$ with respect to the origin (0,0). 
\end{theorem} \begin{theorem} For nonzero angular momentum, the period of the orbit is \begin{equation}\label{polarperiod}T=\frac{4{\cal X}^3\pi \left(\frac{1}{4{\cal X}^4+(\kappa-\alpha)^2}\sqrt{\frac{4{\cal X}^4+(\kappa-\alpha)^2}{4{\cal X}^4+(\kappa+\alpha)^2}} +\frac{1}{{\cal X}^4+(\kappa+\alpha)^2}\right)}{\sqrt{2\sqrt{\frac{4{\cal X}^4+(\kappa-\alpha)^2}{4{\cal X}^4+(\kappa+\alpha)^2}}+2\frac{4{\cal X}^4+\alpha^2-\kappa^2}{4{\cal X}^4+(\alpha+\kappa)^2}}}.\end{equation} \end{theorem} \subsubsection{Contraction to the Euclidean space Kepler problem} \label{limit} The sphere model can be considered as describing a 2-dimensional bounded ``universe'' of radius $1$. Suppose an observer is situated ``near'' the attractive north pole. This observer uses a system of units with unit length $\epsilon$ where $0<\epsilon\ll 1$. The observer is unable to detect that she is on a 2-sphere; to her the universe appears flat. In her units, the coordinates are \begin{equation} \label{contractioncoords} \begin{array}{l}s_1=\epsilon x,\quad s_2=\epsilon y,\quad s_3= 1+O(\epsilon^2),\\ p_1=\frac{p_x}{\epsilon},\quad p_2=\frac{p_y}{\epsilon},\quad p_3= -(xp_x+yp_y)+O(\epsilon^2).\end{array}\end{equation} Here, $\epsilon^2$ is so small that to the observer, it appears that the universe is the plane $s_3=1$ with local Cartesian coordinates $(x,y)$. We define new constants $\beta,{ h}$ by \begin{equation} \alpha=\beta/\epsilon, \qquad {\cal H}-{\cal X}^2=h/\epsilon^2. \label{contractionparam}\end{equation} Substituting into (\ref{hamiltoniansphere}), we find $\frac{1}{\epsilon^2}(p_x^2+p_y^2)+\frac{\beta}{\epsilon^2\sqrt{x^2+y^2}}=\frac{ h}{\epsilon^2}$. Multiplying both sides of this equation by $\epsilon^2$ we obtain the Hamiltonian for the Euclidean Kepler system: \begin{equation}\label{KeplerHam} p_x^2+p_y^2+\frac{\beta}{\sqrt{x^2+y^2}}={ h}.\end{equation} Using the same procedure we find that the constants of the motion become ${\cal X}=xp_y-yp_x$, ${\cal L}_1=\frac{e_1}{\epsilon}$, ${\cal L}_2=\frac{e_2}{\epsilon}$, where $(e_1,e_2)$ is the Laplace-Runge-Lenz vector for the Kepler system: $e_1=-2{\cal X}p_y-\frac{\beta x}{\sqrt{x^2+y^2}}$, $e_2=2{\cal X}p_x-\frac{\beta y}{\sqrt{x^2+y^2}}$. The same procedure applied to the structure equations yields \[ \{{\cal X},{e}_1\}=-{e}_2,\ \{{\cal X},{e}_2\}={e}_1,\ \{{e}_1,{e}_2\}=4{h}{\cal X},\ {e}_1^2 + {e}_2^2 -4{h} {\cal X}^2-\beta^2=0. \] Thus the length of the Laplace vector is $k=\kappa/\epsilon$ where $k^2=4h{\cal X}^2+\beta^2$. To the observer, the trajectories lie in the plane $s_3=1$, and equation (\ref{projectedtraj}) for the paths of the trajectories becomes \begin{equation}\label{Keplertraj}[{h}{\cal X}^2-(\frac{\beta^2}{4}+\frac{k^2}{4})]^2(x^2+y^2)-\beta^2(\frac{kx}{2}+{\cal X}^2)^2=0.\end{equation} The solutions of the Kepler trajectory equations (\ref{Keplertraj}) are the usual conic sections: intersections of a plane and a cone. \medskip \subsubsection{Trajectory determination} To obtain a plot of a trajectory, defined by its constants of the motion and parametrized by time, we can integrate Hamilton's equations numerically. To do this it is necessary to identify a point in six-dimensional phase space that lies on the trajectory, to serve as an initial point for integration. Thus for each set of constants of the motion we must find a distinguished point. First we take the case ${\cal X}\ne 0$. We project the orbits onto the ${s}_1-{s}_2$ unit disk. From (\ref{polarorbit}), we see that ${r}^2$ is minimized when ${\phi}=0$.
This is the perihelion of the orbit, which implies that ${q}_1={r}({0})=\sqrt{\frac{4{\cal X}^4}{4{\cal X}^4+(-{\alpha}+{\kappa})^2}}$, ${q}_2=0$, ${p}_1=0$, ${p}_2=\frac{{\cal X}}{{q}_1}$, ${q}_3=\frac{(-{\alpha}+{\kappa})}{\sqrt{4{\cal X}^4+(-{\alpha}+{\kappa})^2}}$ and ${p}_3=0$. Note that aphelion occurs at ${q}_1=-{r}({\pi})$, where ${r}(\pi)=\sqrt{\frac{4{\cal X}^4}{4{\cal X}^4+(-{\alpha}-{\kappa})^2}}$, with ${q}_2=0$, ${p}_1=0$, ${p}_2=-\frac{{\cal X}}{{r}(\pi)}$. The Casimir relation implies $\kappa^2=\alpha^2+4({\cal H}-{\cal X}^2){\cal X}^2>0$. If ${\cal H}={\cal X}^2$, then $\alpha^2=\kappa^2$ and $-\alpha=\kappa$, so the aphelion radius is precisely unity, while the perihelion radius is less than unity. If ${\cal H}<{\cal X}^2$, we have $-\alpha>\kappa$, which implies that aphelion occurs in the northern hemisphere. If ${\cal H}>{\cal X}^2$, we have $-\alpha<\kappa$, which implies that aphelion occurs in the southern hemisphere. In the case of zero angular momentum, ${\cal X}=0$, (\ref{symmetries}) implies ${s}_2=0$ and ${\kappa}=-{\alpha}$, so ${\cal H}={{\cal J}_2}^2+\frac{{\alpha}{s}_3}{{s}_1}$. The projection onto the disk is a segment of the line $s_2=0$, so the motion occurs on a segment of a great circle on the 2-sphere. All trajectories crash into the north pole in finite time. \medskip {\bf Classification of trajectories} {\bf Case 1:} The trajectory is contained within the northern hemisphere. We have ${r}^2=\frac{4{\cal X}^4}{4{\cal X}^4+(-{\alpha}+{\kappa}\cos{\phi})^2}<1$ for all ${\phi}$, and ${\cal X}^2>{\cal H}$. {\bf Case 2:} The projection has one point of tangency with the circle; the trajectory is contained in the closure of a hemisphere, with $\frac{\alpha}{\kappa}=-1$, so ${\cal H}={\cal X}^2$. {\bf Case 3:} The projection has two
# The continuous random variable X has the following density function: f(x) = kx^2 for 0 < x < 2, and 0 otherwise. Find the value of k
###### Question: The continuous random variable X has the density function f(x) = kx^2 for 0 < x < 2 and f(x) = 0 otherwise. Find the value of k. Select one: a. 8 b. 16/3 c. 3/8 d. None of these e. 3/16 f. 1/16 g. 8/3 h. 1/8 i. 4
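For completeness (this worked step is not part of the original post), the value of $k$ follows from requiring the density to integrate to one: $\int_0^2 kx^2\,dx = \tfrac{8k}{3} = 1$, so $k = \tfrac{3}{8}$, which corresponds to option c.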
# -*- coding:utf-8 -*- # Copyright xmuspeech (Author: Snowdar 2020-01-05, JFZhou 2020-01-08) # Update Multi-task learning. Author: Zheng Li 2020-10 import os import sys import logging import numpy as np import pandas as pd # import libs.support.kaldi_common as kaldi_common # Used to interact with shell from .kaldi_dataset import KaldiDataset # Logger logger = logging.getLogger(__name__) logger.addHandler(logging.NullHandler()) class ChunkSamples(): def __init__(self, dataset: KaldiDataset, chunk_size: int, chunk_type='speaker_balance', chunk_num_selection=0, scale=1.5, overlap=0.1, drop_last_size=20, seed=1024): ''' Parameters: self.dataset: the object which contain the dicts such as utt2spk, utt2spk_int and so on. self.chunk_size: the number of frames in a chunk. self.chunk_type: which decides how to chunk the feats for training. chunk_num_selection: -1->suggestion scale, 0->max, >0->specify. self.overlap: the proportion of overlapping for every chunk. ''' self.dataset = dataset self.chunk_size = chunk_size self.chunk_type = chunk_type self.chunk_num_selection = chunk_num_selection self.scale = scale self.overlap = overlap self.drop_last_size = drop_last_size assert 0 <= self.overlap < 1 np.random.seed(seed) # chunk_samples: table [[]] if self.chunk_type == "full_length": self.head = ['utt-id', 'ark-path', 'class-label'] else: self.head = ['utt-id', 'ark-path', 'start-position', 'end-position', 'class-label'] self.chunk_samples = self.__sample() def __sample(self): # JFZhou: speaker_balance and sequential. chunk_samples = [] if self.chunk_type == 'speaker_balance': spk2chunks = {} total_chunks = 0 # max number of chunks for all speakers max_chunk_num = 0 # chunk num of every utterance chunk_counter = {} for key in self.dataset.spk2utt.keys(): utt_selected = self.dataset.spk2utt[key] # chunk num of every speaker spk_chunk_num = 0 for utt in utt_selected: ark_path = self.dataset.feats_scp[utt] num_frames = self.dataset.utt2num_frames[utt] if num_frames < self.chunk_size: # logger.warn('The num frames {0} of {1} is less than chunk size {2}, so ' # 'skip it.'.format(utt, num_frames, self.chunk_size)) chunk = "{0} {1} {2} {3} {4}".format( utt + '-0', ark_path, 0, num_frames - 1, self.dataset.utt2spk_int[utt]) if key in spk2chunks.keys(): spk2chunks[key].append(chunk) else: spk2chunks[key] = [chunk] total_chunks += 1 spk_chunk_num += 1 else: chunk_counter[utt] = 0 offset = 0 overlap_size = int(self.overlap * self.chunk_size) while offset + self.chunk_size <= num_frames: chunk = "{0} {1} {2} {3} {4}".format( utt + '-' + str(chunk_counter[utt]), ark_path, offset, offset + self.chunk_size - 1, self.dataset.utt2spk_int[utt]) offset += self.chunk_size - overlap_size if key in spk2chunks.keys(): spk2chunks[key].append(chunk) else: spk2chunks[key] = [chunk] chunk_counter[utt] += 1 total_chunks += 1 spk_chunk_num += 1 if offset + self.drop_last_size < num_frames: chunk = "{0} {1} {2} {3} {4}".format( utt + '-' + str(chunk_counter[utt]), ark_path, num_frames - self.chunk_size, num_frames - 1, self.dataset.utt2spk_int[utt]) spk2chunks[key].append(chunk) total_chunks += 1 spk_chunk_num += 1 chunk_counter[utt] += 1 if spk_chunk_num > max_chunk_num: max_chunk_num = spk_chunk_num for key in spk2chunks.keys(): chunk_selected = spk2chunks[key] if self.chunk_num_selection == 0: num_chunks_selected = max_chunk_num elif self.chunk_num_selection == -1: num_chunks_selected = int(total_chunks // len(self.dataset.spk2utt) * self.scale) else: num_chunks_selected = self.chunk_num_selection num_chunks = 
len(chunk_selected) if num_chunks < num_chunks_selected: # valid rather than validation # Make up the insufficient chunks valid_utts = [utt for utt in self.dataset.spk2utt[key] if self.dataset.utt2num_frames[utt] >= self.chunk_size] utts = np.random.choice(valid_utts, num_chunks_selected - num_chunks, replace=True) for utt in utts: start = np.random.randint( 0, self.dataset.utt2num_frames[utt] - self.chunk_size + 1) end = start + self.chunk_size - 1 chunk_selected.append("{0} {1} {2} {3} {4}".format( utt + '-' + str(chunk_counter[utt]), self.dataset.feats_scp[utt], start, end, self.dataset.utt2spk_int[utt])) chunk_counter[utt] += 1 else: chunk_selected = np.random.choice( spk2chunks[key], num_chunks_selected, replace=False) for chunk in chunk_selected: chunk_samples.append(chunk.split()) elif self.chunk_type == 'sequential': for utt in self.dataset.feats_scp.keys(): ark_path = self.dataset.feats_scp[utt] num_frames = self.dataset.utt2num_frames[utt] if num_frames < self.chunk_size: # logger.warn('The num frames {0} of {1} is less than chunk size {2}, ' # 'so skip it.'.format(utt, num_frames, self.chunk_size)) # print([utt + '-0', ark_path, 0, num_frames - 1, self.dataset.utt2spk_int[utt]]) chunk_samples.append([utt + '-0', ark_path, 0, num_frames - 1, self.dataset.utt2spk_int[utt]]) else: chunk_counter = 0 offset = 0 overlap_size = int(self.overlap * self.chunk_size) while offset + self.chunk_size <= num_frames: chunk_samples.append([utt + '-' + str(chunk_counter), ark_path, offset, offset + self.chunk_size - 1, self.dataset.utt2spk_int[utt]]) chunk_counter += 1 offset += self.chunk_size - overlap_size if offset + self.drop_last_size < num_frames: chunk_samples.append([utt + '-' + str(chunk_counter), ark_path, num_frames - self.chunk_size, num_frames - 1, self.dataset.utt2spk_int[utt]]) # every_utt for validation set elif self.chunk_type == "every_utt": chunk_selected = [] for utt in self.dataset.utt2spk.keys(): ark_path = self.dataset.feats_scp[utt] num_frames = self.dataset.utt2num_frames[utt] if num_frames < self.chunk_size: logger.warn('The num frames {0} of {1} is less than chunk size {2}, so skip it.'.format( utt, num_frames, self.chunk_size)) else: for chunk_counter in range(0, self.chunk_num_selection): start = np.random.randint( 0, self.dataset.utt2num_frames[utt] - self.chunk_size + 1) end = start + self.chunk_size - 1 chunk_selected.append("{0} {1} {2} {3} {4}".format( utt + '-' + str(chunk_counter), self.dataset.feats_scp[utt], start, end, self.dataset.utt2spk_int[utt])) for chunk in chunk_selected: chunk_samples.append(chunk.split()) # 自己通过valid_data指定的验证集(测试集),不分割chunk elif self.chunk_type == "full_length": for i, utt in enumerate(self.dataset.utt2spk.keys()): ark_path = self.dataset.feats_scp[utt] chunk_samples.append([utt, ark_path, self.dataset.utt2spk_int[utt]]) else: raise TypeError( "Do not support chunk type {0}.".format(self.chunk_type)) return chunk_samples def save(self, save_path: str, force=True): if os.path.exists(save_path) and not force: raise ValueError( "The path {0} is exist. 
Please rm it by yourself.".format(save_path)) save_dir = os.path.dirname(save_path) if not os.path.exists(save_dir): os.makedirs(save_dir) data_frame = pd.DataFrame(self.chunk_samples, columns=self.head) data_frame.to_csv(save_path, sep=" ", header=True, index=False) class ChunkSamplesMultiTask(): def __init__(self, dataset:KaldiDataset, chunk_size:int, chunk_type='speaker_balance', chunk_num_selection=0, scale=1.5, overlap=0.1, drop_last=False, seed=1024): ''' Parameters: self.dataset: the object which contain the dicts such as utt2spk, utt2spk_int and so on. self.chunk_size: the number of frames in a chunk. self.chunk_type: which decides how to chunk the feats for training. chunk_num_selection: -1->suggestion scale, 0->max, >0->specify. self.overlap: the proportion of overlapping for every chunk. ''' self.dataset = dataset self.chunk_size = chunk_size self.chunk_type = chunk_type self.chunk_num_selection = chunk_num_selection self.scale = scale self.overlap = overlap self.drop_last = drop_last assert 0<= self.overlap < 1 np.random.seed(seed) # chunk_samples: table [[]] self.head = ['utt-id', 'ark-path', 'start-position', 'end-position', 'class-label', 'ali-path'] #Zheng Li 2020-10 self.chunk_samples = self.__sample() def __sample(self): # JFZhou: speaker_balance and sequential. chunk_samples = [] if self.chunk_type == 'speaker_balance': spk2chunks = {} total_chunks = 0 max_chunk_num = 0 chunk_counter = {} for key in self.dataset.spk2utt.keys(): utt_selected = self.dataset.spk2utt[key] spk_chunk_num = 0 for utt in utt_selected: ark_path = self.dataset.feats_scp[utt] num_frames = self.dataset.utt2num_frames[utt] ali_path = self.dataset.ali_scp[utt] #Zheng Li 2020-10 if num_frames < self.chunk_size: logger.warn('The num frames {0} of {1} is less than chunk size {2}, so skip it.'.format(utt, num_frames, self.chunk_size)) else: chunk_counter[utt] = 0 offset = 0 overlap_size = int(self.overlap * self.chunk_size) while offset + self.chunk_size <= num_frames: chunk = "{0} {1} {2} {3} {4} {5}".format(utt+'-'+str(chunk_counter[utt]),ark_path,offset,offset+self.chunk_size-1,self.dataset.utt2spk_int[utt],ali_path) #Zheng Li 2020-10 offset += self.chunk_size - overlap_size if key in spk2chunks.keys(): spk2chunks[key].append(chunk) else: spk2chunks[key] = [chunk] chunk_counter[utt] += 1 total_chunks += 1 spk_chunk_num += 1 if not self.drop_last and offset + overlap_size < num_frames: chunk = "{0} {1} {2} {3} {4} {5}".format(utt+'-'+str(chunk_counter[utt]),ark_path,num_frames-self.chunk_size,num_frames-1,self.dataset.utt2spk_int[utt],ali_path) #Zheng Li 2020-10 total_chunks += 1 spk_chunk_num += 1 chunk_counter[utt] += 1 if spk_chunk_num > max_chunk_num: max_chunk_num = spk_chunk_num for key in spk2chunks.keys(): chunk_selected = spk2chunks[key] if self.chunk_num_selection==0: num_chunks_selected = max_chunk_num elif self.chunk_num_selection==-1: num_chunks_selected = int(total_chunks//len(self.dataset.spk2utt)*self.scale) else: num_chunks_selected = self.chunk_num_selection num_chunks = len(chunk_selected) if num_chunks < num_chunks_selected: valid_utts = [ utt for utt in self.dataset.spk2utt[key] if self.dataset.utt2num_frames[utt] >= self.chunk_size ] utts = np.random.choice(valid_utts,num_chunks_selected-num_chunks,replace=True) for utt in utts: start = np.random.randint(0, self.dataset.utt2num_frames[utt]-self.chunk_size+1) end = start + self.chunk_size - 1 chunk_selected.append("{0} {1} {2} {3} {4} 
{5}".format(utt+'-'+str(chunk_counter[utt]),self.dataset.feats_scp[utt],start,end,self.dataset.utt2spk_int[utt],self.dataset.ali_scp[utt])) #Zheng Li 2020-10 chunk_counter[utt] += 1 else: chunk_selected = np.random.choice(spk2chunks[key],num_chunks_selected,replace=False) for chunk in chunk_selected: chunk_samples.append(chunk.split()) elif self.chunk_type == 'sequential': for utt in self.dataset.feats_scp.keys(): ark_path = self.dataset.feats_scp[utt] num_frames = self.dataset.utt2num_frames[utt] ali_path = self.dataset.ali_scp[utt] #Zheng Li 2020-10 if num_frames < self.chunk_size: logger.warn('The num frames {0} of {1} is less than chunk size {2}, so skip it.'.format(utt, num_frames, self.chunk_size)) else: chunk_counter = 0 offset = 0 overlap_size = int(self.overlap * self.chunk_size) while offset + self.chunk_size <= num_frames: chunk_samples.append([utt+'-'+str(chunk_counter),ark_path,offset,offset+self.chunk_size-1,self.dataset.utt2spk_int[utt],ali_path]) #Zheng Li 2020-10 chunk_counter += 1 offset += self.chunk_size - overlap_size if not self.drop_last and offset + overlap_size < num_frames: chunk_samples.append([utt+'-'+str(chunk_counter),ark_path,num_frames-self.chunk_size,num_frames-1,self.dataset.utt2spk_int[utt],ali_path]) #Zheng Li 2020-10 # every_utt for valid elif self.chunk_type == "every_utt": chunk_selected = [] for utt in self.dataset.utt2spk.keys(): ark_path = self.dataset.feats_scp[utt] num_frames = self.dataset.utt2num_frames[utt] ali_path = self.dataset.ali_scp[utt] #Zheng Li 2020-10 if num_frames < self.chunk_size: logger.warn('The num frames {0} of {1} is less than chunk size {2}, so skip it.'.format(utt, num_frames, self.chunk_size)) else: for chunk_counter in range(0, self.chunk_num_selection): start = np.random.randint(0, self.dataset.utt2num_frames[utt]-self.chunk_size+1) end = start + self.chunk_size - 1 chunk_selected.append("{0} {1} {2} {3} {4} {5}".format(utt+'-'+str(chunk_counter),self.dataset.feats_scp[utt],start,end,self.dataset.utt2spk_int[utt],self.dataset.ali_scp[utt])) #Zheng Li 2020-10 for chunk in chunk_selected: chunk_samples.append(chunk.split()) else: raise TypeError("Do not support chunk type {0}.".format(self.chunk_type)) return chunk_samples
as per \cite{TsangLai2008}. This initial solution is then made more robust in an iterative fashion as follows. We take the resulting values of $\delta h$ and $d \delta h/dr$ at the corotation location as new initial conditions for the forced differential equation. Integrating outwards then produces an updated inhomogeneous profile for $\delta h$. Once again the boundary conditions set the necessary homogeneous parts, which are generally diminished by this process. This iterative procedure ensures that the resultant solution of the master equation \eqref{eq:master_eqn} is not overly sensitive to the (arbitrary) choice of initial conditions. As a final step, it is desirable to eliminate any spurious oscillations in the so-called ``phase gradient error'' from the solution near the boundaries \citep{KorycanskyPollack1993,MirandaRafikov2019}. This is a measure of the difference between the expected wave phase, set by the characteristic WKB wavenumber, and that found from the solution as $d\,\mathrm{Arg}(\delta h)/dr$. Oscillations in this diagnostic are suggestive of unwanted incoming waves entering the domain. The amplitudes of these oscillations are reduced by using one step of the root-finding Newton-Raphson method, which adjusts the boundary condition prescription in order to minimise the phase gradient error function \citep{MirandaRafikov2019}. This, in turn, slightly updates the final solution. Although a small refinement, this step is important for the proper handling of subtle interference effects when many modes are superimposed.

\subsection{Extracting the Fourier components of the perturbing potential}
\label{subsection:extracting_potential}

In order to solve the inhomogeneous version of the master equation \eqref{eq:master_eqn}, we must supply our solver with the radial structure of the $(l,m)$ component $\Phi_{lm}$ of the perturbing potential. In the zero-eccentricity case considered previously this is simply done by computing the softened Laplace coefficients (e.g. see equation (13) of \cite{MirandaRafikov2019}). However, now that there are two intrinsic periodicities, we numerically extract each $\Phi_{lm}$ from equation \eqref{eq:fourier_expansion} via the double integral \begin{align} \label{eq:phi_mode_integral} \Phi_{lm}(r) &= \frac{1}{2\pi^2}\int_{t = 0}^{2\pi/n} \int_{\phi = 0}^{2\pi} \Phi(r,\phi,t) \cos(m\phi-l n t) \,d\phi\,dt \\ &= -\frac{GM_{\text{p}}}{2\pi^2}\int_{t = 0}^{2\pi/n} \int_{\phi = 0}^{2\pi} \frac{\cos(m\phi-l n t) \,d\phi\,dt}{\left[r^2+r_{\text{p}}^2-2 r r_{\text{p}} \cos\psi+\epsilon^2\right]^{1/2}} , \end{align} where $\psi(t) = \phi-\phi_{\text{p}}(t)$ denotes the azimuthal displacement from the planet and $\epsilon$ is the softening parameter employed to prevent singularities at the location of the planet, where $r - r_{\text{p}} = \psi = 0$. Calculation of this integral requires knowledge of the orbital trajectory of the planet $(r_{\text{p}}(t),\phi_{\text{p}}(t))$ over time. To obtain it, we solve\footnote{We employ the python package \texttt{kepler.py}, which uses an algorithm based on the work of \cite{Markley1995} and \cite{Nijenhuis1991}.} Kepler's equation for the eccentric anomaly $E(t)$ as a function of the mean anomaly $M(t) = n t$, as given by equation (2.52) of \cite{MurrayMcDermott1999}.
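For concreteness, this step amounts to the root-finding problem $M = E - e\sin E$. The short sketch below illustrates one way this can be done with a Newton iteration; the actual calculations use the \texttt{kepler.py} package mentioned in the footnote, so the function name and tolerances here are illustrative only and this is not the algorithm of \cite{Markley1995}.

# Illustrative Newton iteration for Kepler's equation M = E - e*sin(E).
# The production runs use kepler.py (Markley 1995 / Nijenhuis 1991); this
# sketch only shows the root-finding step conceptually.
import numpy as np

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    E = M if e < 0.8 else np.pi  # common starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# mean anomaly M(t) = n*t sampled over one orbital period, modest eccentricity
E_of_t = [eccentric_anomaly(M, e=0.1) for M in np.linspace(0.0, 2.0 * np.pi, 8)]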
This then provides the true anomaly of the planet such that \begin{align} \label{eq:phi_p} \cos\phi_{\text{p}}(t) = \frac{\cos E(t)-e}{1-e\cos E(t)}, ~~~~~~ \sin\phi_{\text{p}}(t) = \frac{\sqrt{1-e^2}\sin E(t)}{1-e\cos E(t)}, \end{align} and the radial position of the planet \begin{equation} \label{eq:r_p} r_{\text{p}}(t) = \frac{a(1-e^2)}{1+e \cos\phi_{\text{p}}(t)}. \end{equation} The double integral in equation \eqref{eq:phi_mode_integral} is then computed numerically using an equally weighted quadrature on a uniform grid with $(n_t, n_\phi)$ cells in $t$ and $\phi$, respectively. Note that $(n_t, n_\phi)$ must be chosen sufficiently large to avoid Nyquist--Shannon sampling artefacts for higher-order modes. As we find in the convergence study performed in \S\ref{section:convergence}, we typically want the maximum mode number for $m$ to be over 100, which suggests taking $n_t$ and $n_\phi$ at least as large as 200. The solver routine employed to integrate the master equation \eqref{eq:master_eqn} requires the $\Phi_{lm}$ to be calculated at each radial point as demanded by the adaptive step-size of its integration over $r$. Computing a double integral at each step is time-inefficient, so we resort to interpolation instead. We first calculate the $\Phi_{lm}$ on a logarithmically spaced grid defined between the inner radius $r_{\text{min}}$ and the outer radius $r_{\text{max}}$, at a set number of points, $n_r$. A cubic interpolation is then used between these points for arbitrary values of $r$ to supply $\Phi_{lm}$ to the solver of the master equation.

\subsection{Mode superposition}

The quantum number $m$ controls the azimuthal structure of the modes. The previous numerical simulations of ZZ22 exhibit sharp, caustic features with significant information conveyed by the fine details of the perturbation pattern, suggesting that we should calculate a sufficiently high number of modes in $m$; we denote this number $m_{\text{max}}$. Meanwhile, the $l$ quantum number encodes the time dependence of the pattern caused by the eccentricity of the planetary orbit. One can show that the coefficients of the modal expansion of the classical disturbing function are proportional to $e^{|l-m|}$ (see \cite{MurrayMcDermott1999}). Thus, for small eccentricities, the ``off-diagonal'' contributions are expected to drop off quickly as $l$ departs from $m$. We then set the range of $l$ that we consider about each $m$ according to the parameter $|l-m|_{\text{max}}$ --- the maximum value of $|l-m|$.

We need to be a little careful about which modes should be included in the net summation. Physically, the density waves are launched from Lindblad resonances, where the Doppler-shifted forcing frequency $\tilde{\omega}=m(\omega_{lm}-\Omega)$ matches the epicyclic frequency $\kappa$. If we consider a disc with no pressure support, we can easily equate these frequencies and find the resonant locations to be \begin{equation} \label{eq:linblad_resonances} \frac{r_{\pm}}{a} = \left(\frac{m \pm 1}{l}\right)^{2/3}, \end{equation} where the plus and minus signs correspond to the outer and inner Lindblad resonances, respectively. Now, consider the following cases. When $m=1$ and $l \geq 1$ the inner Lindblad resonance moves to $r_{-}\to 0$ and hence no resonance occurs in the inner part of the disc. However, there is still an outer Lindblad resonance. In this case, the modal solution will be wave-like exterior to the Lindblad resonance and evanescent interior to it.
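As an illustration of equation \eqref{eq:linblad_resonances} and of the mode selection described above, the sketch below tabulates the nominal resonance radii for a few $(m,l)$ pairs; the function name is illustrative, and the $l<1$ cases are simply skipped, anticipating the discussion that follows.

# Illustrative evaluation of equation (eq:linblad_resonances) in units of the
# planet semi-major axis a, for a pressureless disc. Pairs with m < 1 or l < 1
# are skipped (non-physical or apsidal cases), matching the selection in the text.
def lindblad_radii(m, l):
    if m < 1 or l < 1:
        return None
    r_inner = ((m - 1) / l) ** (2.0 / 3.0)  # r_- -> 0 when m = 1
    r_outer = ((m + 1) / l) ** (2.0 / 3.0)
    return r_inner, r_outer

for m in (1, 2, 5):
    print(m, lindblad_radii(m, l=m))  # the dominant 'diagonal' l = m terms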
Despite the lack of an outgoing wave at the interior boundary, use of a higher-order ODE solver is still able to find well-behaved solutions, which can be incorporated into the Fourier summation. Meanwhile, taking $m=1$ and $l<0$ would result in a non-physical resonance location, so such modes should be ignored. For the case $m = 1$ and $l = 0$ the ratio given by equation \eqref{eq:linblad_resonances} for $r_{-}$ is indeterminate. In fact, this resonance corresponds to the specific condition that $\Omega(r) - \kappa(r) = 0$, which is satisfied everywhere in the disc for our assumed uniform pressure setup (see Section \ref{subsection:parameters}). This is an apsidal resonance, wherein the precession frequency of disc material equals the precession frequency of the planet (which is zero in our Keplerian system). Such a secular resonance behaves quite differently from a mean-motion Lindblad resonance and will instead manifest itself as a global eccentric mode which depends sensitively on the boundary conditions. Such forced global eccentric modes have been studied by \cite{TeyssandierOgilvie2016a} and \cite{Lubow2022}, but here we neglect them since we are focusing on the launching of localised spiral features, which can be treated as a separate effect. As such, we only consider mode contributions in the range $m \geq 1$ and $l \geq 1$\footnote{In practice all the $m=1$ contributions are found to be subdominant and do not drastically affect the net morphology. Thus all these complications could be avoided in future by neglecting the $m=1$ Fourier terms. Note that this would also conveniently remove the indirect term from consideration.}.

\subsection{Parameters}
\label{subsection:parameters}

In this work we perform a range of targeted linear calculations to be compared with the numerical runs of ZZ22. For that reason, we adopt a disc setup analogous to their study, namely a uniform background disc density with $p=0$. Furthermore, as stated earlier, we adopt a globally isothermal EoS with $q=0$, such that $L_T^{-1} \to 0$ and the wave angular momentum flux is conserved away from the wave excitation region. The disc aspect ratio at the semi-major axis of the planetary orbit is taken to be $h_{\text{p}} = 0.1$ and the gravitational softening length is set to be $\epsilon = 0.3
Recall that the diameter of a set E is the value   diam(E) = sup x − y | x, y ∈ E . 72 2 The Lebesgue Measure The diameter of the empty set is assumed to be zero. 2.6.1 Let ε > 0. A family of sets {eα }α∈A is called an ε-cover of a set E ⊂ Rm if  eα and diam(eα )  ε for every α ∈ A. E⊂ α∈A In what follows, we will need only covers that are at most countable, so hereafter we assume that the set A is countable without stating this explicitly. We may assume without loss of generality that A = N. We do this in most cases, but sometimes it is convenient to use other sets of indices. It is clear that for every ε > 0, the space Rm and, consequently, every subset of Rm , has an ε-cover. For arbitrary p > 0 and E ⊂ Rm , set ∞    diam(ej ) p   {ej }j 1 is an ε-cover of E . μp (E, ε) = inf  2 j =1 Obviously, the function ε → μp (E, ε) (which may take infinite values) is decreasing, and hence the limit lim μp (E, ε) = sup μp (A, ε) ε→+0 ε>0 exists. Definition The function E → μ∗p (E) = lim μp (E, ε), ε→+0 defined on all subsets of Rm , is called the p-dimensional outer Hausdorff 10 measure. We will soon see that μ∗p is indeed an outer measure in the sense of Definition 1.4.2. Note also that, interpreting the space Rm as a subspace of Rn (n > m), we may regard every set E contained in Rm as a subset of Rn . The diameters of a set computed in the spaces Rm and Rn , obviously, coincide, so that the value μ∗p (E) does not depend on the ambient space. Thus, speaking about the outer Hausdorff measure of a set E, we may, and shall, omit reference to the space in which we regard it to be embedded. When it is necessary to specify the domain of the function μ∗p , we mention it explicitly. In this connection, note that for subsets of the space Rm , the outer measures μ∗p are of interest only for p  m, since otherwise μ∗p ≡ 0 (see the end of Sect. 2.6.6). 10 Felix Hausdorff (1868–1942)—German mathematician. 2.6  Hausdorff Measures 73 2.6.2 Let us establish the basic properties of the function μ∗p . (1) 0  μ∗p (E)  +∞, μ∗p (∅) = 0. (2) Monotonicity: if E ⊂ F , then μ∗p (E)  μ∗p (F ). These properties are obvious. (3) μ∗p is an outer measure: if E ⊂ Proof We will assume that ∞ n=1 En , then μ∗p (E)  ∗ n=1 μp (En ) < +∞, ∗ n=1 μp (En ). since otherwise the inequality in (n) question is trivial. Fix a number ε > 0 and consider ε-covers {ej }j 1 of the sets En such that ∞  diam(e(n) ) p  j 2 j =1 < μp (En , ε) + ε 2n (n = 1, 2, . . . ). (n) Obviously, the family {ej }n,j 1 is an ε-cover of E, and, therefore, μp (E, ε)  ∞  diam(e(n) ) p  j 2 n,j =1   ∞  ∞  ε μp (En , ε) + n  μ∗p (En ) + ε. 2 n=1 n=1 Passing to the limit as ε → 0, we obtain the desired result.  On sets that are sufficiently far from each other, the function μ∗p is additive. More precisely, sets E and F are called separated if   inf x − y | x ∈ E, y ∈ F > 0. (4) For separated sets, μ∗p (E ∨ F ) = μ∗p (E) + μ∗p (F ). Proof Since μ∗p (E ∨ F )  μ∗p (E) + μ∗p (F ) by the subadditivity of μ∗p , we only need to prove the reverse inequality. Let 0 < ε < inf{x − y | x ∈ E, y ∈ F }. Consider an arbitrary ε-cover {ej }j 1 of the set E ∨ F . By the choice of ε, for every index j at least one of the intersections ej ∩ E, ej ∩ F is empty, whence ∞   diam(ej ) p j =1 2    diam(ej ) p + 2 ej ∩E=∅   diam(ej ) p . 2 ej ∩F =∅ Since the families {ej }ej ∩E=∅ and {ej }ej ∩F =∅ are ε-covers of the sets E and F , respectively, we have ∞   diam(ej ) p  μp (E, ε) + μp (F, ε). 
2 j =1 74 2 The Lebesgue Measure Taking the lower boundary of the left-hand side over all ε-covers, we see that μp (E ∨ F, ε)  μp (E, ε) + μp (F, ε). To complete the proof, it suffices to let ε → 0.  (5) Let E ⊂ Rm , and let  : E → Rn be a map satisfying the Lipschitz condition:   (x) − (y)  Lx − y for x, y ∈ E, where L is a constant. Then   μ∗p (E)  Lp μ∗p (E). In particular, μ∗p ((E)) = 0 if μ∗p (E) = 0. Proof Let μ∗p (E) < +∞, and let {ej }j 1 be an ε-cover of E such that ∞   diam(ej ) p 2 j =1 < μp (E, ε) + ε. We will assume that ej ⊂ E for all j (otherwise replace ej by ej ∩ E). Since diam((ej ))  L diam(ej ), the sets (ej ) form an Lε-cover of the set (E), whence ∞     diam((ej )) p μp (E), Lε  2 j =1 L p ∞   diam(ej ) p j =1 2   < Lp μp (E, ε) + ε . Passing to the limit as ε → 0, we obtain the desired inequality.  Remark For μ∗p (E) = 0, the equality μ∗p ((E)) = 0 can be obtained under less restrictive assumptions on the map . It suffices to require that it is only locally Lipschitz (this condition is satisfied, in particular, for maps that are smooth in a neighborhood of E). Then one should split E into countably many parts on which  satisfies the Lipschitz condition (with a separate constant for each part), apply the obtained result to each of them, and then use the countable subadditivity of μ∗p . To formulate the next property, we introduce two important classes of continuous maps. Definition Let E ⊂ Rm . We say that a map  : E → Rn is a weak contraction of E if (x) − (y)  x − y for all x, y in E. We say that a continuous map  : E → Rn is expanding on E if (x) − (y)  x − y for all x, y from E. 2.6  Hausdorff Measures 75 In other words, a weak contraction is a map that satisfies the Lipschitz condition with Lipschitz constant 1. It is not necessarily invertible. However, an expanding map is invertible, and its inverse is a weak contraction. In particular, any expanding map is a homeomorphism. We emphasize that the image of a Borel set under an expanding map is again a Borel set (this is a direct corollary of the proposition from Sect. 2.3.3). (6) If  is a weak contraction of a set E, then μ∗p ((E))  μ∗p (E). For an expanding map, the reverse inequality holds. This follows immediately from Property (5). (7) If a map  preserves the distances between points of a set E, then μ∗p ((E)) = μ∗p (E). In particular, the outer Hausdorff measure is invariant under translations and orthogonal transformations. The
form of proper functions We will denote the proper functions of translation operator Ta as \ k (x ). They will be such that: T a \ k (x ) c k ,a \ k (x ) where ck,a is a proper value of operator Ta. Depending on the definition of operator Ta, we also have: T a \ k (x ) \ k (x  a ). Now, let us try and obtain the precise form of the coefficients c k ,a . In order to do this, we will apply two successive translations of modulus a1 and then a2 to the function \ k (x ). In this example we will use a1 a and a2 a so that the application of operator T a2 D T a1 results in a return to the departure point and thus leaves function \ k (x ) unchanged. We can thus write that: Ta T a1 \ k (x ) c k ,a \ k (x ) \ k (x  a ) D T a1 \ k (x ) T a2 (T a1 \ k (x )) T a2 c k ,a .\ k (x ) T a2 (T a1 \ k (x )) \ k (x  a  a ) \ k (x ) 2 Ÿ c k ,a .c k ,a 1. c k ,a .c k ,a \ k (x ) ½° ¾ °¿ 58 Solid-State Physics for Electronics T a1 \ k (x ) \ k (x  a1 ) T a1 T a2 Ta \ k (x ) D Ta1 \k (x ) 2 T a2 D T a1 An acceptable solution is c k ,a condition as e ikae ika \k (x  a1  a2 ) e ika as it is in accordance with the preceding 1. Finally, the equation for the proper values can be written as: T a \ k (x ) e ika \ k (x ). Now, let us determine the form of the proper function \ k (x ) of a semi-free electron (in first-order approximation). In order to do this, we can first of all note that the zero-order solution is of the form \ 0k (x ) e ikx . To reach the first-order 1 approximation (where the function is denoted by \ k (x ) { \ k (x )) the solution to the function of the zero order must be perturbed by a function u(x) such that: 1 \ k (x ) { \ k (x ) \ 0k (x) u (x ) e ikx u (x ). Now we need to find the form that gives u(x). To do that, we look at the proper functions \ k (x ) that can be chosen in the form \ k (x ) e ikx u (x ) with the condition that u(x) u(x + a). These functions are called Bloch functions. In effect, we have: T a \ k (x ) c k ,a \ k (x ) \ k ( x ) e ikx u ( x ) ikx c k ,a e and T a \ k (x ) u (x ) \ k (x  a ) u (x  a ). >\ k ( x )@" x x a " ½ e ikae ikx u (x ) ° °° ¾Ÿ ° x a " ° ik ( x  a ) e u (x  a ) °¿ u(x) ªe ikx u ( x ) º ¬ ¼" x ck,a e ika = The Origin of Band Structures 59 To conclude, the proper functions for the semi-free electron are in the form \ k (x ) e ikx u (x ) and are called Bloch functions. Function u(x) is periodic with a period equal to a, the lattice repeat unit. 3.2. Mathieu’s equation 3.2.1. Form of Mathieu’s equation The form of this equation is that of the Schrödinger equation where V w 0 cos 2S a x . By denoting the mass of the electron as P we thus have the following Mathieu’s equation: d ²\ dx ²  2P § 2S · x ¸\ ¨ E  w 0 cos a ¹ =² © 0. [3.1] 3.2.2. Wave function in accordance with Mathieu’s equation Following on from Bloch’s theorem, Mathieu’s equation should be written \ k (x ) e ikx u (x ), where u(x) is a periodic function of period a, the lattice repeat unit. On introducing the angular frequency Z that is tied to the repeat unit of the lattice by equation a 2S Z , it is possible to develop the periodic function u(x) as a Fourier series, such that: u (x ) ¦ A n e in Zx n where n is an integer and A n Z u (x 2S ³ 2S )e in Zx dx . Z To within a normalization factor (A0), the function \ k (x ) must therefore be in the form: \ k (x ) ª º e ikx «1  ¦ A n e in Zx » . 
¬ n z0 ¼ [3.2] 60 Solid-State Physics for Electronics In equation [3.2], the first term (associated with unit 1 of the square bracket) corresponds to the zero-order wave function of an electron (\ 0k (x ) v e ikx ) , and can perhaps be seen as the principal part of this wave function. As a consequence it is possible to write (still within the normalization term) that: \ k (x ) e ikx  ¦ A n e i ( k Zn ) x . [3.3] n z0 From this we can deduce that: w\ k ike ikx  wx 2 w \k wx 2 n z0 2 ikx k e ¦ i (k  n Z)A n e i ( k  n Z)x  [3.4] 2 ¦ (k  n Z) A n e i ( k  n Z) x . n z0 The solution for zero order energy E, written as E 0 =² k ² 2m , can be introduced into equation [3.1], so that we then obtain the energy E as a precise function of E0 and bring in the term [E – E0] (which is a small term for the first order term). This gives: d ²\ ª 2P º  « (E  E 0  w 0 cos Zx )  k ² » \ dx ² ¬ = ² ¼ [3.5] 0. Substituting equations [3.3] and [3.4] into equation [3.5] gives terms in k ²e ikx that cancel out: d ²\ dx ²  k ²\  2P ª E  E 0  w 0 cos Zx º \ ¼ =² ¬ 0, k ²e ikx and hence we can now write that: 2P ¦ A n e i ( k  n Z)x ª¬ k 2  (k  n Z)2 º¼  2 ª¬ E  E 0  w 0 cos Zx º¼ e ikx = n z0  2P =2 ¦ ª¬E  E 0  w 0 cos Zx º¼ A n e n z0 i ( k  n Z) x 0. [3.6] The Origin of Band Structures 61 In equation [3.6], the third term brings in the coefficients (E  E 0 )A n and w 0 A n , which are second-order terms, as (E  E 0 ), w 0 and > A n @n z 0 are all small terms of the first order that can be neglected in an approximate calculation. This is because to zero order, the energy is E 0 , the potential energy is V V 0 = 0, and the development of wave function is constrained to the single term for A0, so that > A n @n z 0  0. Mathieu’s equation is now reduced here to: ¦ An e i (k  n Z)x > k ²  [k  n Z]² @  n z0 2P ª E  E 0  w 0 cos Zx º e ikx ¼ =² ¬ 0. [3.7] The resolution method, i.e. to obtain coefficient Am (of the development of \ ), is to multiply equation [3.7] by e i ( k  m Z) x and then integrate over a repeat unit, in other words from 0 to a. Thus, the first step, multiplication by e i ( k  m Z) x , gives us: ¦ A
DD-ME2, and NL3$^*$.} \label{tab:GS-1} \\ \hline \hline ~ & $\beta_{2,n}$ & $\beta_{2,p}$ & $\beta_{2}$ & $\beta_{4}$ & $\beta_{6}$ & $\beta_{8}$ & $\beta_{10}$ & $R_{n}$ & $R_{p}$ & $R_{\mathrm{t}}$ & $R_{\mathrm{c}}$ & $E_{\mathrm{B}}$ \\ ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & (fm) & (fm) & (fm) & (fm) & (MeV) \\ \hline \endfirsthead \hline \hline ~ & $\beta_{2,n}$ & $\beta_{2,p}$ & $\beta_{2}$ & $\beta_{4}$ & $\beta_{6}$ & $\beta_{8}$ & $\beta_{10}$ & $R_{n}$ & $R_{p}$ & $R_{\mathrm{t}}$ & $R_{\mathrm{c}}$ & $E_{\mathrm{B}}$ \\ ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & (fm) & (fm) & (fm) & (fm) & (MeV) \\ \hline \endhead \hline \hline \endfoot PC-PK1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \\ $^{264}$Hs & 0.270 & 0.280 & 0.274 & $-$0.002 & $-$0.060 & $-$0.013 & 0.011 & 6.245 & 6.090 & 6.182 & 6.138 & 1924.415 \\ $^{266}$Hs & 0.266 & 0.276 & 0.271 & $-$0.021 & $-$0.063 & $-$0.004 & 0.014 & 6.267 & 6.101 & 6.200 & 6.148 & 1939.205 \\ $^{268}$Hs & 0.262 & 0.273 & 0.266 & $-$0.040 & $-$0.063 & ~~0.004 & 0.015 & 6.288 & 6.111 & 6.217 & 6.158 & 1953.554 \\ $^{270}$Hs & 0.257 & 0.269 & 0.261 & $-$0.057 & $-$0.061 & ~~0.012 & 0.015 & 6.306 & 6.120 & 6.232 & 6.167 & 1967.408 \\ $^{272}$Hs & 0.245 & 0.258 & 0.250 & $-$0.060 & $-$0.049 & ~~0.013 & 0.010 & 6.330 & 6.131 & 6.252 & 6.178 & 1979.303 \\ $^{274}$Hs & 0.216 & 0.228 & 0.221 & $-$0.053 & $-$0.038 & ~~0.009 & 0.006 & 6.344 & 6.135 & 6.263 & 6.182 & 1990.951 \\ $^{276}$Hs & 0.188 & 0.198 & 0.192 & $-$0.049 & $-$0.027 & ~~0.007 & 0.003 & 6.357 & 6.139 & 6.273 & 6.185 & 2002.778 \\ PK1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \\ $^{264}$Hs & 0.253 & 0.258 & 0.255 & ~~0.006 & $-$0.058 & $-$0.016 & 0.011 & 6.228 & 6.058 & 6.159 & 6.105 & 1934.074 \\ $^{266}$Hs & 0.253 & 0.258 & 0.255 & $-$0.014 & $-$0.065 & $-$0.006 & 0.016 & 6.253 & 6.070 & 6.179 & 6.118 & 1947.952 \\ $^{268}$Hs & 0.256 & 0.261 & 0.258 & $-$0.034 & $-$0.070 & ~~0.005 & 0.019 & 6.278 & 6.084 & 6.201 & 6.131 & 1961.285 \\ $^{270}$Hs & 0245 & 0.251 & 0.248 & $-$0.044 & $-$0.062 & ~~0.010 & 0.016 & 6.297 & 6.091 & 6.216 & 6.138 & 1973.766 \\ $^{272}$Hs & 0.211 & 0.216 & 0.213 & $-$0.029 & $-$0.053 & ~~0.005 & 0.010 & 6.305 & 6.090 & 6.221 & 6.137 & 1985.924 \\ $^{274}$Hs & 0.194 & 0.198 & 0.195 & $-$0.038 & $-$0.040 & ~~0.006 & 0.010 & 6.322 & 6.097 & 6.234 & 6.144 & 1997.412 \\ $^{276}$Hs & 0.178 & 0.182 & 0.180 & $-$0.047 & $-$0.028 & ~~0.007 & 0.007 & 6.342 & 6.105 & 6.250 & 6.151 & 2008.356 \\ PKDD & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \\ $^{264}$Hs & 0.250 & 0.255 & 0.252 & ~~0.001 & $-$0.060 & $-$0.015 & 0.011 & 6.207 & 6.053 & 6.145 & 6.101 & 1932.544 \\ $^{266}$Hs & 0.253 & 0.258 & 0.255 & $-$0.020 & $-$0.066 & $-$0.004 & 0.016 & 6.233 & 6.067 & 6.166 & 6.115 & 1946.294 \\ $^{268}$Hs & 0.258 & 0.264 & 0.260 & $-$0.041 & $-$0.072 & ~~0.009 & 0.021 & 6.259 & 6.082 & 6.188 & 6.129 & 1959.686 \\ $^{270}$Hs & 0.252 & 0.261 & 0.256 & $-$0.059 & $-$0.062 & ~~0.017 & 0.017 & 6.278 & 6.091 & 6.204 & 6.138 & 1972.399 \\ $^{272}$Hs & 0.211 & 0.217 & 0.213 & $-$0.030 & $-$0.056 & ~~0.006 & 0.019 & 6.282 & 6.087 & 6.205 & 6.134 & 1983.376 \\ $^{274}$Hs & 0.190 & 0.194 & 0.191 & $-$0.039 & $-$0.041 & ~~0.006 & 0.010 & 6.296 & 6.093 & 6.216 & 6.139 & 1994.504 \\ $^{276}$Hs & 0.174 & 0.179 & 0.176 & $-$0.048 & $-$0.028 & ~~0.007 & 0.007 & 6.316 & 6.100 & 6.233 & 6.147 & 2004.934 \\ DD-ME2 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \\ $^{264}$Hs & 0.260 & 0.267 & 0.263 & $-$0.001 & $-$0.061 & $-$0.012 & 0.014 & 6.178 & 6.073 & 6.136 & 6.121 & 1928.426 \\ $^{266}$Hs & 0.261 & 0.269 & 
0.264 & $-$0.023 & $-$0.066 & $-$0.001 & 0.019 & 6.200 & 6.086 & 6.154 & 6.133 & 1942.991 \\ $^{268}$Hs & 0.259 & 0.269 & 0.263 & $-$0.042 & $-$0.068 & ~~0.011 & 0.021 & 6.220 & 6.097 & 6.171 & 6.144 & 1957.270 \\ $^{270}$Hs & 0.252 & 0.264 & 0.257 & $-$0.058 & $-$0.060 & ~~0.017 & 0.017 & 6.236 & 6.105 & 6.184 & 6.152 & 1971.027 \\ $^{272}$Hs & 0.213 & 0.222 & 0.216 & $-$0.032 & $-$0.054 & ~~0.007 & 0.012 & 6.240 & 6.101 & 6.185 & 6.148 & 1981.994 \\ $^{274}$Hs & 0.196 & 0.204 & 0.199 & $-$0.039 & $-$0.041 & ~~0.007 & 0.010 & 6.256 & 6.107 & 6.198 & 6.154 & 1993.391 \\ $^{276}$Hs & 0.178 & 0.186 & 0.181 & $-$0.048 & $-$0.027 & ~~0.008 & 0.007 & 6.271 & 6.113 & 6.210 & 6.160 & 2004.657 \\ NL3$^*$ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \\ $^{264}$Hs & 0.265 & 0.271 & 0.267 & ~~0.004 & $-$0.060 & $-$0.014 & 0.013 & 6.260 & 6.079 & 6.186 & 6.126 & 1931.827 \\ $^{266}$Hs & 0.263 & 0.270 & 0.266 & $-$0.017 & $-$0.065 & $-$0.004 & 0.017 & 6.284 & 6.090 & 6.206 & 6.137 & 1945.933 \\ $^{268}$Hs & 0.262 & 0.269 & 0.265 & $-$0.036 & $-$0.067 & ~~0.007 & 0.018 & 6.307 & 6.101 & 6.225 & 6.148 & 1959.582 \\ $^{270}$Hs & 0.256 & 0.265 & 0.260 & $-$0.054 & $-$0.061 & ~~0.015 & 0.017 & 6.326 & 6.109 & 6.240 & 6.156 & 1972.574 \\ $^{272}$Hs & 0.233 & 0.241 & 0.236 & $-$0.045 & $-$0.051 & ~~0.010 & 0.012 & 6.344 & 6.115 & 6.254 & 6.162 & 1984.027 \\ $^{274}$Hs & 0.204 & 0.211 & 0.207 & $-$0.039 & $-$0.040 & ~~0.007 & 0.009 & 6.357 & 6.118 & 6.264 & 6.164 & 1995.465 \\ $^{276}$Hs & 0.185 & 0.193 & 0.188 & $-$0.047 & $-$0.028 & ~~0.008 & 0.006 & 6.376 & 6.125 & 6.279 & 6.171 & 2006.648 \\ \end{longtable*} \setlength{\tabcolsep}{2.4mm} \begin{longtable*}{ccccccccccccc} \caption{Same as Table \ref{tab:GS-1}, but for isotones with $N=162$.} \label{tab:GS-2} \\ \hline \hline ~ & $\beta_{2,n}$ & $\beta_{2,p}$ & $\beta_{2}$ & $\beta_{4}$ & $\beta_{6}$ & $\beta_{8}$ & $\beta_{10}$ & $R_{n}$ & $R_{p}$ & $R_{\mathrm{t}}$ & $R_{\mathrm{c}}$ & $E_{\mathrm{B}}$ \\ ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & (fm) & (fm) & (fm) & (fm) & (MeV) \\ \hline \endfirsthead \hline \hline ~ & $\beta_{2,n}$ & $\beta_{2,p}$ & $\beta_{2}$ & $\beta_{4}$ & $\beta_{6}$ & $\beta_{8}$ & $\beta_{10}$ & $R_{n}$ & $R_{p}$ & $R_{\mathrm{t}}$ & $R_{\mathrm{c}}$ & $E_{\mathrm{B}}$ \\ ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & (fm) & (fm) & (fm) & (fm) & (MeV) \\ \hline \endhead \hline \hline \endfoot PC-PK1 & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & \\ $^{266}$Rf & 0.261 & 0.274 & 0.266 & $-$0.039 & $-$0.060 & 0.005 & 0.013 & 6.303 & 6.083 & 6.218 & 6.129 & 1953.531 \\ $^{268}$Sg & 0.260 & 0.274 & 0.266 & $-$0.048 & $-$0.063 & 0.009 & 0.015 &
#!/usr/bin/env python3 import datetime import shutil import uuid from io import BytesIO from pathlib import Path from dateutil.parser import parse from ukis_pysat.stacapi import StacApi try: import landsatxplore.api import numpy as np import pystac import requests import sentinelsat from PIL import Image from pylandsat import Product from pystac.extensions import sat from shapely import geometry, wkt, ops except ImportError as e: msg = ( "ukis_pysat.data dependencies are not installed.\n\n" "Please pip install as follows:\n\n" " python -m pip install ukis-pysat[data] --upgrade" ) raise ImportError(str(e) + "\n\n" + msg) from ukis_pysat.file import env_get from ukis_pysat.members import Datahub class Source: """ Provides methods to query data sources for metadata and download images and quicklooks (APIs only). Remote APIs and local data directories that hold metadata files are supported. """ def __init__(self, datahub, catalog=None, url=None): """ :param datahub: Data source (<enum 'Datahub'>). :param catalog: Only applicable if datahub is 'STAC_local'. Can be one of the following types: Path to STAC Catalog file catalog.json (String, Path). Pystac Catalog or Collection object (pystac.catalog.Catalog, pystac.collection.Collection). None initializes an empty catalog. (default: None) :param url: Only applicable if datahub is 'STAC_API'. STAC Server endpoint, reads from STAC_API_URL environment variable by default (default: None) """ self.src = datahub if self.src == Datahub.STAC_local: # connect to STAC Catalog if isinstance(catalog, (pystac.catalog.Catalog, pystac.collection.Collection)): self.api = catalog elif isinstance(catalog, (str, Path)): href = Path(catalog).resolve().as_uri() self.api = pystac.catalog.Catalog.from_file(href) elif catalog is None: self.api = self._init_catalog() else: raise AttributeError( f"{catalog} is not a valid STAC Catalog [catalog.json, pystac.catalog.Catalog, " f"pystac.collection.Collection, None] " ) elif self.src == Datahub.STAC_API: if url: self.api = StacApi(url=url) else: self.api = StacApi() elif self.src == Datahub.EarthExplorer: # connect to Earthexplorer self.user = env_get("EARTHEXPLORER_USER") self.pw = env_get("EARTHEXPLORER_PW") self.api = landsatxplore.api.API(self.user, self.pw) elif self.src == Datahub.Scihub: # connect to Scihub self.user = env_get("SCIHUB_USER") self.pw = env_get("SCIHUB_PW") self.api = sentinelsat.SentinelAPI( self.user, self.pw, "https://scihub.copernicus.eu/dhus", show_progressbars=False, ) else: raise NotImplementedError(f"{datahub} is not supported [STAC_local, STAC_API, EarthExplorer, Scihub]") def __enter__(self): return self def _init_catalog(self): """Initializes an empty STAC Catalog.""" return pystac.catalog.Catalog( id=str(uuid.uuid4()), description=f"Creation Date: {datetime.datetime.now()}, Datahub: {self.src.value}", catalog_type=pystac.catalog.CatalogType.SELF_CONTAINED, ) def add_items_from_directory(self, item_dir, item_glob="*.json"): """Adds STAC items from a directory to a STAC Catalog. :param item_dir: Path to directory that holds the STAC items (String). :param item_glob: Optional glob pattern to identify STAC items in directory (String), (default: '*.json'). 
""" if self.src == Datahub.STAC_local: # get all json files in item_dir that match item_substr item_files = sorted(Path(item_dir).rglob(item_glob)) # load items from file and add to STAC Catalog for item_file in item_files: item = pystac.read_file(str(item_file)) self.api.add_item(item) else: raise TypeError(f"add_items_from_directory only works for Datahub.STAC_local and not with {self.src}.") def query_metadata(self, platform, date, aoi, cloud_cover=None): """Queries metadata from data source. :param platform: Image platform (<enum 'Platform'>). :param date: Date from - to in format yyyyMMdd (String or Datetime tuple). :param aoi: Area of interest as GeoJson file or bounding box tuple with lat lon coordinates (String, Tuple). :param cloud_cover: Percent cloud cover scene from - to (Integer tuple). :returns: Metadata catalog of products that match query criteria (PySTAC Catalog). """ if self.src == Datahub.STAC_local: # query STAC Catalog for metadata catalog = self._init_catalog() geom = self.prep_aoi(aoi) for item in self.api.get_all_items(): if item.ext.eo.cloud_cover and cloud_cover: if not cloud_cover[0] <= item.ext.eo.cloud_cover < cloud_cover[1]: continue if ( platform.value == item.common_metadata.platform and sentinelsat.format_query_date(date[0]) <= sentinelsat.format_query_date(parse(item.properties["acquisitiondate"]).strftime("%Y%m%d")) < sentinelsat.format_query_date(date[1]) and geometry.shape(item.geometry).intersects(geom) ): catalog.add_item(item) return catalog elif self.src == Datahub.STAC_API: raise NotImplementedError( f"Do this directly with our StacApi functionalities, see " f"https://ukis-pysat.readthedocs.io/en/latest/api/stacapi.html." ) elif self.src == Datahub.EarthExplorer: # query Earthexplorer for metadata bbox = self.prep_aoi(aoi).bounds kwargs = {} if cloud_cover: kwargs["max_cloud_cover"] = cloud_cover[1] products = self.api.search( dataset=platform.value, bbox=[bbox[1], bbox[0], bbox[3], bbox[2]], start_date=sentinelsat.format_query_date(date[0]), end_date=sentinelsat.format_query_date(date[1]), max_results=10000, **kwargs, ) else: # query Scihub for metadata kwargs = {} if cloud_cover and platform != platform.Sentinel1: kwargs["cloudcoverpercentage"] = cloud_cover products = self.api.query( area=self.prep_aoi(aoi).wkt, date=date, platformname=platform.value, **kwargs, ) products = self.api.to_geojson(products)["features"] # initialize empty catalog and add metadata items catalog = self._init_catalog() for meta in products: catalog.add_item(self.construct_metadata(meta=meta, platform=platform)) return catalog def query_metadata_srcid(self, platform, srcid): """Queries metadata from data source by srcid. :param platform: Image platform (<enum 'Platform'>). :param srcid: Srcid of a specific product (String). :returns: Metadata of product that matches srcid (PySTAC Catalog). """ if self.src == Datahub.STAC_local: # query Spatio Temporal Asset Catalog for metadata by srcid catalog = self._init_catalog() for item in self.api.get_all_items(): if item.id == srcid: catalog.add_item(item) continue return catalog elif self.src == Datahub.STAC_API: raise NotImplementedError( f"Do this directly with our StacApi functionalities, see " f"https://ukis-pysat.readthedocs.io/en/latest/api/stacapi.html." 
) elif self.src == Datahub.EarthExplorer: # query EarthExplorer for metadata by srcid # TODO: could not figure out how to directly query detailed metadata by srcid, therefore here we # query first for scene acquisitiondate and footprint and use these to query detailed metadata. meta_src = self.api.request( "metadata", **{"datasetName": platform.value, "entityIds": self.api.lookup(platform.value, srcid, inverse=True)}, ) date_from = meta_src[0]["acquisitionDate"].replace("-", "") date_to = (datetime.datetime.strptime(date_from, "%Y%m%d") + datetime.timedelta(days=1)).strftime("%Y%m%d") aoi = geometry.shape(meta_src[0]["spatialFootprint"]).bounds # initialize empty catalog and add metadata items catalog = self._init_catalog() for item in self.query_metadata(platform=platform, date=(date_from, date_to), aoi=aoi).get_all_items(): if item.id == srcid: catalog.add_item(item) return catalog else: # query Scihub for metadata by srcid catalog = self._init_catalog() # initialize empty catalog and add metadata items for meta in self.api.to_geojson(self.api.query(identifier=srcid))["features"]: catalog.add_item(self.construct_metadata(meta=meta, platform=platform)) return catalog def construct_metadata(self, meta, platform): """Constructs a STAC item that is harmonized across the different satellite image sources. :param meta: Source metadata (GeoJSON-like mapping) :param platform: Image platform (<enum 'Platform'>). :returns: PySTAC item """ if self.src == Datahub.STAC_local or self.src == Datahub.STAC_API: raise NotImplementedError(f"construct_metadata not supported for {self.src}.") elif self.src == Datahub.EarthExplorer: item = pystac.Item( id=meta["displayId"], datetime=datetime.datetime.now(), geometry=meta["spatialFootprint"], bbox=_get_bbox_from_geometry_string(meta["spatialFootprint"]), properties={ "producttype": "L1TP", "srcurl": meta["dataAccessUrl"], "srcuuid": meta["entityId"], "acquisitiondate": parse(meta["acquisitionDate"], ignoretz=True, fuzzy=True).strftime("%Y-%m-%d"), "ingestiondate": parse(meta["modifiedDate"], ignoretz=True, fuzzy=True).strftime("%Y-%m-%d"), }, stac_extensions=[pystac.Extensions.EO, pystac.Extensions.SAT], ) if "cloudCover" in meta: item.ext.eo.cloud_cover = round(float(meta["cloudCover"]), 2) item.common_metadata.platform = platform.value relative_orbit = int( meta["summary"][meta["summary"].find("Path: ") + len("Path: ") : meta["summary"].rfind(", Row: ")] ) item.ext.sat.apply(orbit_state=sat.OrbitState.DESCENDING, relative_orbit=relative_orbit) else: # Scihub item = pystac.Item( id=meta["properties"]["identifier"], datetime=datetime.datetime.now(), geometry=meta["geometry"], bbox=_get_bbox_from_geometry_string(meta["geometry"]), properties={ "producttype": meta["properties"]["producttype"], "size": meta["properties"]["size"], "srcurl": meta["properties"]["link"], "srcuuid": meta["properties"]["uuid"], "acquisitiondate": parse(meta["properties"]["beginposition"], ignoretz=True, fuzzy=True).strftime( "%Y-%m-%d" ), "ingestiondate": parse(meta["properties"]["ingestiondate"], ignoretz=True, fuzzy=True).strftime( "%Y-%m-%d" ), }, stac_extensions=[pystac.Extensions.EO, pystac.Extensions.SAT], ) if "cloudcoverpercentage" in meta["properties"]: item.ext.eo.cloud_cover = round(float(meta["properties"]["cloudcoverpercentage"]), 2) item.common_metadata.platform = platform.value item.ext.sat.apply( orbit_state=sat.OrbitState[meta["properties"]["orbitdirection"].upper()], # for enum key to work relative_orbit=int(meta["properties"]["orbitnumber"]), ) return item def 
download_image(self, platform, product_uuid, target_dir): """Downloads satellite image data to a target directory for a specific product_id. Incomplete downloads are continued and complete files are skipped. :param platform: Image platform (<enum 'Platform'>). :param product_uuid: UUID of the satellite image product (String). :param target_dir: Target directory that holds the downloaded images (String, Path) """ if isinstance(target_dir, str): target_dir = Path(target_dir) if self.src == Datahub.STAC_local or self.src == Datahub.STAC_API: raise NotImplementedError( f"download_image not supported for {self.src}. It is much easier to get the asset yourself now." ) elif self.src == Datahub.EarthExplorer: # query EarthExplorer for srcid of product meta_src = self.api.request( "metadata", **{ "datasetName": platform.value, "entityIds": [product_uuid], }, ) product_srcid = meta_src[0]["displayId"] if not Path(target_dir.joinpath(product_srcid + ".zip")).is_file(): # download data from AWS if file does not already exist product = Product(product_srcid) product.download(out_dir=target_dir, progressbar=False) # compress download directory and remove original files shutil.make_archive( target_dir.joinpath(product_srcid), "zip", root_dir=target_dir.joinpath(product_srcid), ) shutil.rmtree(target_dir.joinpath(product_srcid)) else: self.api.download(product_uuid, target_dir, checksum=True) def download_quicklook(self, platform, product_uuid, target_dir): """Downloads a quicklook of the satellite image to a target directory for a specific product_id. It performs a very rough geocoding of the quicklooks by shifting the image to the location of the footprint. :param platform: Image platform (<enum 'Platform'>). :param product_uuid: UUID of the satellite image product (String). :param target_dir: Target directory that holds the downloaded images (String, Path) """ if isinstance(target_dir, str): target_dir = Path(target_dir) if self.src == Datahub.STAC_local or self.src == Datahub.STAC_API: raise NotImplementedError( f"download_quicklook not supported for {self.src}. It is much easier to get the asset yourself now, " f"when it is a COG you can read in an overview." ) elif self.src == Datahub.EarthExplorer: # query EarthExplorer for url, srcid and bounds of product meta_src = self.api.request( "metadata", **{ "datasetName": platform.value, "entityIds": [product_uuid], }, ) url = meta_src[0]["browseUrl"] bounds = geometry.shape(meta_src[0]["spatialFootprint"]).bounds product_srcid = meta_src[0]["displayId"] else: #
voters preferred candidate A and we started with a flat $$\operatorname{Beta}(\theta | 1, 1)$$ prior. We can check that our posterior is indeed $$\operatorname{Beta}(7, 5)$$ by working through the algebra. z <- 6 n <- 10 # posterior alpha z + 1 ## [1] 7 # posterior beta n - z + 1 ## [1] 5 Here’s how we compute the 95% HDIs. ( h <- hdi_of_icdf(name = qbeta, shape1 = 7, shape2 = 5) ) ## [1] 0.3182322 0.8414276 The $$\operatorname{Beta}(7, 5)$$ distribution looks like this. tibble(theta = seq(from = 0, to = 1, by = .01)) %>% mutate(density = dbeta(theta, shape1 = 7, shape2 = 5)) %>% ggplot(aes(x = theta, y = density)) + geom_area(fill = ce[3]) + geom_segment(x = h[1], xend = h[2], y = 0.01, yend = 0.01, size = 1.2, color = ce[9]) + annotate(geom = "text", x = .6, y = 1/3, label = "95% HDI", color = "white") + scale_x_continuous(NULL, breaks = c(0, h[1], z / n, h[2], 1) %>% round(2)) + scale_y_continuous(NULL, breaks = NULL, expand = expansion(mult = c(0, 0.05))) + ggtitle(expression("Beta"*(7*", "*5))) “It turns out, in this case, that we can never have a sample size large enough to achieve the goal of $$80\%$$ of the HDIs falling above $$\theta = 0.5$$. To see why,” keep reading in the text (p. 371). Happily, there is a more useful goal, however. Instead of trying to reject a particular value of $$\theta$$, we set as our goal a desired degree of precision in the posterior estimate. For example, our goal might be that the $$95\%$$ HDI has width less than $$0.2$$, at least $$80\%$$ of the time. (p. 371) If you look back up at our min_n_for_hdi_power() defining code, above, you’ll see that “One and only one of hdi_max_width and null_value must be specified.” So if we want to determine the necessary $$N$$ for an 95% HDI width of less than .2, we need to set hdi_max_width = .2 and null_value = NULL. min_n_for_hdi_power(gen_prior_mode = .75, gen_prior_n = 10, hdi_max_width = .2, # look here null_value = NULL, rope = NULL, desired_power = .8, aud_prior_mode = .5, aud_prior_n = 2, hdi_mass = .95, init_samp_size = 75, verbose = TRUE) ## For sample size = 75, power = 0.5089359 ## For sample size = 76, power = 0.5337822 ## For sample size = 77, power = 0.5235513 ## For sample size = 78, power = 0.5474934 ## For sample size = 79, power = 0.5706373 ## For sample size = 80, power = 0.5929882 ## For sample size = 81, power = 0.6145578 ## For sample size = 82, power = 0.6353626 ## For sample size = 83, power = 0.6554231 ## For sample size = 84, power = 0.6747629 ## For sample size = 85, power = 0.6934076 ## For sample size = 86, power = 0.7113842 ## For sample size = 87, power = 0.7287209 ## For sample size = 88, power = 0.7716517 ## For sample size = 89, power = 0.787177 ## For sample size = 90, power = 0.8266938 ## [1] 90 Just like in the last section, here I set init_samp_size to a higher value than in the text in order to keep the output reasonably short. To reproduce the results in Table 13.2, we’ll need to adjust the min_n_for_hdi_power() parameters within our sim_power() function. sim_power <- function(mode, power) { min_n_for_hdi_power(gen_prior_mode = mode, gen_prior_n = 10, hdi_max_width = .2, null_value = NULL, rope = NULL, desired_power = power, aud_prior_mode = .5, aud_prior_n = 2, hdi_mass = .95, init_samp_size = 50, verbose = TRUE) } sim <- crossing(mode = seq(from = .6, to = .85, by = .05), power = c(.7, .8, .9)) %>% mutate(results = map2_dbl(mode, power, sim_power)) Let’s make that table. 
sim %>% pivot_wider(names_from = mode, values_from = results) %>% knitr::kable() power 0.6 0.65 0.7 0.75 0.8 0.85 0.7 91 90 88 86 81 75 0.8 92 92 91 90 87 82 0.9 93 93 93 92 91 89 What did that audience prior look like? kappa <- 2 omega <- .5 tibble(theta = seq(from = 0, to = 1, by = .1)) %>% mutate(density = dbeta(theta, shape1 = omega * (kappa - 2) + 1, shape2 = (1 - omega) * (kappa - 2) + 1)) %>% ggplot(aes(x = theta, y = density)) + geom_area(fill = ce[3]) + scale_y_continuous(NULL, breaks = NULL, expand = expansion(mult = c(0, 0.05))) + labs(title = "Behold the uniform audience prior.", x = expression(theta)) Here are what the beta distributions based on the sim look like. sim %>% rename(n = results) %>% expand(nesting(mode, power, n), theta = seq(from = 0, to = 1, by = .01)) %>% mutate(density = dbeta(theta, shape1 = mode * (n - 2) + 1, shape2 = (1 - mode) * (n - 2) + 1), mode = str_c("omega == ", mode)) %>% ggplot(aes(x = theta, y = density)) + geom_vline(xintercept = .5, color = ce[8]) + geom_area(fill = ce[3]) + scale_x_continuous(expression(theta), labels = c("0", "", ".5", "", "1")) + scale_y_continuous(NULL, breaks = NULL, expand = expansion(mult = c(0, 0.05))) + ggtitle("The power and mode values are in the rows and columns, respectively.") + facet_grid(power ~ mode, labeller = label_parsed) Toward the end of the section, Kruschke mentioned the required sample size shoots up if our desired HDI width is 0.1. Here’s the simulation. sim_power <- function(mode, power) { min_n_for_hdi_power(gen_prior_mode = mode, gen_prior_n = 10, hdi_max_width = .1, null_value = NULL, rope = NULL, desired_power = power, aud_prior_mode = .5, aud_prior_n = 2, hdi_mass = .95, init_samp_size = 300, # save some time and up this parameter verbose = TRUE) } sim <- crossing(mode = seq(from = .6, to = .85, by = .05), power = c(.7, .8, .9)) %>% mutate(results = map2_dbl(mode, power, sim_power)) Display the results in a table like before. sim %>% pivot_wider(names_from = mode, values_from = results) %>% knitr::kable() power 0.6 0.65 0.7 0.75 0.8 0.85 0.7 373 370 364 352 332 303 0.8 378 376 373 367 354 334 0.9 380 380 379 378 373 363 ### 13.2.4 Monte Carlo approximation of power. The previous sections illustrated the ideas of power and sample size for a simple case in which the power could be computed by mathematical derivation. [If your field is like mine, this will not be the norm for your research projects.] In this section, we approximate the power by Monte Carlo simulation. The R script for this simple case serves as a template for more realistic applications. The R script is named Jags-Ydich-Xnom1subj-MbernBeta-Power.R, which is the name for the JAGS program for dichotomous data from a single “subject” suffixed with the word “Power.” As you read through the script, presented below, remember that you can find information about any general R command by using the help function in R, as explained in Section 3.3.1 (p. 39). (p. 372) The code in Kruschke’s Jags-Ydich-Xnom1subj-MbernBeta-Power.R file also makes use of the content in his Jags-Ydich-Xnom1subj-MbernBeta.R file. As is often the case, the code in both is JAGS and base-R centric. We’ll be taking a different approach. I’ll walk you through. First, let’s fire up brms. library(brms) This won’t be of much concern for some of the complex models we’ll be fitting in later chapters. But for simple models like this, a lot of the time you spend waiting for brms::brm() to return your posterior and its summary has to do with compilation time. 
The issue of compilation goes into technical details I just don’t have the will to go through right now. But if we can avoid or minimize compilation time, it’ll be a boon for our power simulations. As it turns out, we can. The first time we fit
# Bugs 88 posts, 8 voices distler Moderator 90 posts If you’ve found a mistake in the rake task, please send me a patch. jl345 6 posts edited almost 3 years ago --- instiki-0.19.4/lib/tasks/fixtures.rake.orig Sat Jun 30 19:40:02 2012 +++ instiki-0.19.4/lib/tasks/fixtures.rake Wed Jul 18 01:33:06 2012 @@ -74,7 +74,7 @@ task :import_all => :environment do ActiveRecord::Base.establish_connection Dir.glob(Rails.root.join('dump','fixtures',"*.yml")).each do |f| - table_name = f.gsub( Regexp.escape(Rails.root.join('dump','fixtures').to_s + File::SEPARATOR), '').gsub('.yml', '') + table_name = f.gsub( Rails.root.join('dump','fixtures').to_s + File::Separator, '').gsub('.yml', '') puts "Importing #{table_name}" import_table_fixture(table_name) end That’s what I have anyways, but be careful, because somehow I got File::SEPARATOR in the wrong case in my bumbling around. I guess I got lucky: File::Separator is defined too, so no harm done. distler Moderator 90 posts Hmmm. Both look wrong. What I think we want is: table_name = f.gsub( Regexp.new(Regexp.escape(Rails.root.join('dump','fixtures').to_s + File::SEPARATOR)), '').gsub('.yml', '') The point being that the output of Regexp.escape is a (properly-escaped) string which is suitable as input to Regexp.new. jl345 6 posts I tried a little test case… $cat subtest.rb #!/usr/bin/env ruby puts '/some/other/string'.gsub( '/some/other' + File::SEPARATOR, '' ) puts '/some/other/string'.gsub( Regexp.escape( '/some/other' + File::SEPARATOR), '' ) puts '/some/other/string'.gsub( Regexp.new( Regexp.escape( '/some/other' + File::SEPARATOR ) ), '' )$ ./subtest.rb string string string $ruby --version ruby 1.8.7 (2012-06-29 patchlevel 370) [x86_64-openbsd] I believe that my first and third examples are both correct and equivalent, going by official docs. The second example works, too, in this case, because a forward slash is apparently not one of the characters escaped by Regexp.escape(), which is curious, because now I am unable to reproduce the error that occurred in the rake task before I altered the code. So I’m more and more confused. Andrew Stacey 118 posts I’m unable to run the inbuilt SVG editor on my computer (running Mac OS X, Lion). The window launches but none of the icons are present and although the buttons highlight when I hover over them, nothing happens when I click on one. This is with Firefox, Chrome, and Safari. Not sure what additional information you would like on this. Andrew Stacey 118 posts And another one, this time in how maruku parses its meta-data. It would seem that not leaving a space at the end of {: #identifier} means that the } gets into the identifier. Presumably this is the case here as well: What’s the id of this element? I get: • What’s the id of this element? in the source. admin Administator 58 posts edited 2 years ago And another one, this time in how maruku parses its meta-data. This was actually only a problem for IALs attached to • elements. Fixed in the latest Maruku. (This commit gives the complete solution. Its predecessor was only a partial fix.) (N.b.: Maruku is now unvendored, so a ruby bundle update will pull in the latest version from Github.) I’m unable to run the inbuilt SVG editor on my computer (running Mac OS X, Lion). Works fine for me under Lion. (I haven’t updated to Mountain Lion, so I can’t make any promises about that. But I’d be surprised if there were any OS dependence of this; it ought to be a function of the Javascript engine in your browser. 
Or perhaps I misunderstood: were you running the server on Lion, or just the client?) Andrew Stacey 118 posts I know you don’t believe in the Cache Bug … I had a page on the nLab which I renamed. Looking at the logs, then I see the save command. It ends with: "alter_title"=>"1", "new_name"=>"adequate subcategory > history", "author"=>"Andrew Stacey", "web"=>"nlab", "id"=>"adequate subcategory"} The next log items are: 24808: 2012-10-18 14:22:26 +0400: Reading page 'adequate subcategory' from web ' nlab' 24808: 2012-10-18 14:22:26 +0400: Page 'adequate subcategory' found 24808: 2012-10-18 14:22:26 +0400: Checking DNSBL 214.7.76.192.bl.spamcop.net 24808: 2012-10-18 14:22:26 +0400: Checking DNSBL 214.7.76.192.sbl-xbl.spamhaus.org 24808: 2012-10-18 14:22:27 +0400: 192.76.7.214 added to DNSBL passed cache 24808: 2012-10-18 14:22:27 +0400: 192.76.7.214 24808: 2012-10-18 14:22:27 +0400: Reading page 'adequate subcategory' from web 'nlab' 24808: 2012-10-18 14:22:27 +0400: Page 'adequate subcategory' found 24808: 2012-10-18 14:22:27 +0400: Maruku took 0.148016386 seconds. 24808: 2012-10-18 14:22:27 +0400: Maruku took 0.165203678 seconds. 24808: 2012-10-18 14:22:28 +0400: Expired fragment: views/nlab/show/adequate+subcategory+>+history (0.3ms) There are then a slew of more expirations, the first ones being adequate+subcategory+>+history. The last ones are also adequate+subcategory+>+history. I just tried on my course wiki. Here are the steps I took: Edit a page and change it’s name. Remove the automatically-inserted redirect. Save the page. Result: the expiration sweep does not include the old name and includes the new name twice. I’m wondering if this could be the culprit: app/controllers/revision_sweeper.rb: def before_save(record) if record.is_a?(Revision) expire_cached_page(record.page.web, record.page.name) expire_cached_revisions(record.page) end end I notice that in the save post data then the name is the new name and the id is the old name. Could it be that the record object is populated with the new name before this action takes place? I’m not very good at tracing through ruby code, but if my guesses about what happens are right then the before_save action takes place just before the save action is executed in page.revise. If so, then by this time the name is the new name and the old name has been forgotten: in wiki.rb then the call is page.revise(content, new_name, revised_at, author, renderer) and very early on in the revise method then I see self.name = name. Perhaps the revise should save its name in old_name and then the before_save can use that if it differs from the current name? distler Moderator 90 posts Sorry, I follow exactly the steps you outlined, and it’s adequate+subcategory (and its variants) that get cleared from the cache, not adequate+subcategory+>+history. The scenario you outline (involving the old name being forgotten before the cache gets swept) is exactly what the before_save action is supposed to avoid. Then the cache gets swept again in an after_save action. Now, the only thing I can think of is that I tested this under SQLite3, rather than MySQL. Perhaps the driver for the latter does something funky. But that seems unlikely… Andrew Stacey 118 posts I just tried on a completely fresh install and found that it happened just as you described: the name used was the old name (and it gets swept twice). That was a sqlite3 database. I’ve tried to install mysql on my mac to test this there, but get to a crash when I run instiki. 
Not sure why, seems to be related to the gem not finding my mysql lib files but it’s taken too much of my time already to try to fix it to test further. I can try it on my linux machine later. However, the evidence certainly suggests that there is a difference between mysql and sqlite3 on this one. Andrew Stacey 118 posts There would appear to be a bug on pages with apostrophes in their names. See http://nforum.mathforge.org/discussion/4757/apostrophes-in-page-titles-lead-to-weird-behaviour for details. distler Moderator 90 posts I’m sorry. Could you please distill that long and rambling discussion in to a set of steps by which one might reproduce the bug? Francesco Ta... 6 posts In the process of migrating my wiki from Textile to Markdown I think I found a bug: The following snippet of text: punto!). was converted by pandoc as punto![]().. The latter text blows the editing page with the error “NameError in WikiController#save”. Step to reproduce create a new empty page write punto![](). (without apexes) save hitting the “Submit” button distler Moderator 90 posts Pandoc is clearly weird. But you have uncovered a regression in Maruku. I’ve fixed that bug in the latest version in my repository on Github. ruby bundle update will fix the problem. Francesco Ta... 6 posts Fixed. Thanks! Andrew Stacey 118 posts Distillation for
The reports from the MNHGs tell that efforts were made to improve communication between public sector care providers and mothers, and maybe with other household members [3], which may have motivated more mothers to attend antenatal care. The study province in northern Vietnam has a medium-level neonatal mortality and a comprehensive health system for maternal and neonatal services. Overall there were relatively few home deliveries but a great geographic and social inequity in coverage of perinatal services and level of neonatal survival [9],[10],[21]. The facilitation intervention with local stakeholder groups composed of primary care staff and local politicians working for 3 y with a perinatal problem-solving approach reduced neonatal mortality after a latent period. Incremental costs for this type of intervention are judged to be low, and mainly related to cost for the laywoman facilitator and the indirect costs of MNHGs monthly meetings (total incremental cost US$77,000 or US$6.5 per birthing woman). Quang Ninh province is relatively typical for many Vietnamese provinces, with the dominating ethnic majority in towns and most rural areas, and ethnic minority groups in the remote, mountainous areas [9]. We have shown that community-based participatory approaches to reduce neonatal mortality is not only effective in South Asian societies [3] but also in a South-East Asian society in rapid transition [40]. ### Supporting Information Alternative Language Abstract S1. Vietnamese translation of the abstract by NTN and DPH. doi:10.1371/journal.pmed.1001445.s004 (DOCX) Table S1. Neonatal mortality outcome analysed by nested case-referent approach. doi:10.1371/journal.pmed.1001445.s001 (DOCX) Text S1. Trial protocol. doi:10.1371/journal.pmed.1001445.s002 (PDF) Text S2. CONSORT statement. doi:10.1371/journal.pmed.1001445.s003 (DOC) ### Author Contributions Conceived and designed the experiments: LÅP NTN MM DTPH LE LW UE. Performed the experiments: LÅP NTN MM DTPH LE LW KS TQH DMD TVT VTTT UE. Analyzed the data: LÅP MM KS. Wrote the first draft of the manuscript: LÅP. Contributed to the writing of the manuscript: LÅP NTN MM DTPH LE LW KS TQH DMD TVT VTTT UE. ICMJE criteria for authorship read and met: LÅP NTN MM DTPH LE LW KS TQH DMD TVT VTTT UE. Agree with manuscript results and conclusions: LÅP NTN MM DTPH LE LW KS TQH DMD TVT VTTT UE. ### References 1. 1. Bhutta ZA, Chopra M, Axelson H, Berman P, Boerma T, et al. (2010) Countdown to 2015 decade report (2000-10): taking stock of maternal, newborn, and child survival. Lancet 375: 2032–2044. doi: 10.1016/s0140-6736(10)60678-2 2. 2. Lawn JE, Osrin D, Adler A, Cousens S (2008) Four million neonatal deaths: counting and attribution of cause of death. Paediatr Perinat Epidemiol 22: 410–416. doi: 10.1111/j.1365-3016.2008.00960.x 3. 3. Osrin D, Prost A (2010) Perinatal interventions and survival in resource-poor settings: which work, which don't, which have the jury out? Arch Dis Child 95: 1039–1046. doi: 10.1136/adc.2009.179366 4. 4. Jokhio AH, Winter HR, Cheng KK (2005) An intervention involving traditional birth attendants and perinatal and maternal mortality in Pakistan. N Engl J Med 352: 2091–2099. doi: 10.1056/nejmsa042830 5. 5. Carlo WA, Goudar SS, Jehan I, Chomba E, Tshefu A, et al. (2010) Newborn-care training and perinatal mortality in developing countries. N Engl J Med 362: 614–623. doi: 10.1056/nejmsa0806033 6. 6. Victora CG, Wagstaff A, Schellenberg JA, Gwatkin D, Claeson M, et al. 
(2003) Applying an equity lens to child health and mortality: more of the same is not enough. Lancet 362: 233–241. doi: 10.1016/s0140-6736(03)13917-7 7. 7. Lawn JE, Kerber K, Enweronu-Laryea C, Cousens S (2010) 3.6 million neonatal deaths–what is progressing and what is not? Semin Perinatol 34: 371–386. doi: 10.1053/j.semperi.2010.09.011 8. 8. Hoa DP, Nga NT, Målqvist M, Persson L-Å (2008) Persistent neonatal mortality despite improved under-five survival: a retrospective cohort study in northern Vietnam. Acta Paediatr 97: 166–170. doi: 10.1111/j.1651-2227.2007.00604.x 9. 9. Målqvist M, Nga NT, Eriksson L, Wallin L, Hoa DP, et al. (2011) Ethnic inequity in neonatal survival: a case-referent study in northern Vietnam. Acta Paediatr 100: 340–346. doi: 10.1111/j.1651-2227.2010.02065.x 10. 10. Målqvist M, Sohel N, Do TT, Eriksson L, Persson L-Å (2010) Distance decay in delivery care utilisation associated with neonatal mortality. A case referent study in northern Vietnam. BMC Public Health 10: 762. doi: 10.1186/1471-2458-10-762 11. 11. Manandhar DS, Osrin D, Shrestha BP, Mesko N, Morrison J, et al. (2004) Effect of a participatory intervention with women's groups on birth outcomes in Nepal: cluster-randomised controlled trial. Lancet 364: 970–979. doi: 10.1016/s0140-6736(04)17021-9 12. 12. Tripathy P, Nair N, Barnett S, Mahapatra R, Borghi J, et al. (2010) Effect of a participatory intervention with women's groups on birth outcomes and maternal depression in Jharkhand and Orissa, India: a cluster-randomised controlled trial. Lancet 375: 1182–1192. doi: 10.1016/s0140-6736(09)62042-0 13. 13. Borghi J, Thapa B, Osrin D, Jan S, Morrison J, et al. (2005) Economic assessment of a women's group intervention to improve birth outcomes in rural Nepal. Lancet 366: 1882–1884. doi: 10.1016/s0140-6736(05)67758-6 14. 14. Björkman M, Svensson J (2009) Power to the people: evidence from a randomized field experiment on community-based monitoring in Uganda. The Quarterly Journal of Economics 124: 735–769. doi: 10.1162/qjec.2009.124.2.735 15. 15. More NS, Bapat U, Das S, Alcock G, Patil S, et al. (2012) Community mobilization in Mumbai slums to improve perinatal care and outcomes: a cluster randomized controlled trial. PLoS Med 9: e1001257 doi:10.1371/journal.pmed.1001257. 16. 16. Azad K, Barnett S, Banerjee B, Shaha S, Khan K, et al. (2010) Effect of scaling up women's groups on birth outcomes in three rural districts in Bangladesh: a cluster-randomised controlled trial. Lancet 375: 1193–1202. doi: 10.1016/s0140-6736(10)60142-0 17. 17. Lewycka S, Mwansambo C, Kazembe P, Phiri T, Mganga A, et al. (2010) A cluster randomised controlled trial of the community effectiveness of two interventions in rural Malawi to improve health care and to reduce maternal, newborn and infant mortality. Trials 11: 88. doi: 10.1186/1745-6215-11-88 18. 18. Nair N, Tripathy P, Prost A, Costello A, Osrin D (2010) Improving newborn survival in low-income countries: community-based approaches and lessons from South Asia. PLoS Med 7: e1000246 doi:10.1371/journal.pmed.1000246. 19. 19. Wallin L, Målqvist M, Nga NT, Eriksson L, Persson L-Å, et al. (2011) Implementing knowledge into practice for improved neonatal survival; a cluster-randomised, community-based trial in Quang Ninh province, Vietnam. BMC Health Serv Res 11: 239. doi: 10.1186/1472-6963-11-239 20. 20. UNICEF (2012) State of the world's children 2012: children in an urban world. New York: UNICEF. 21. 21. Nga NT, Målqvist M, Eriksson L, Hoa DP, Johansson A, et al. 
(2010) Perinatal services and outcomes in Quang Ninh province, Vietnam. Acta Paediatr 99: 1478–1483. doi: 10.1111/j.1651-2227.2010.01866.x 22. 22. Målqvist M, Nga NT, Eriksson L, Wallin L, Ewald U, et al. (2008) Delivery care utilisation and care-seeking in the neonatal period: a population-based study in Vietnam. Ann Trop Paediatr 28: 191–198. doi: 10.1179/146532808x335633 23. 23. Målqvist M, Eriksson L, Nguyen TN, Fagerland LI, Dinh PH, et al. (2008) Unreported births and deaths, a severe obstacle for improved neonatal survival in low-income countries; a population based study. BMC Int Health Hum Rights 8: 4. doi: 10.1186/1472-698x-8-4 24. 24. Thatte N, Kalter HD, Baqui AH, Williams EM, Darmstadt GL (2009) Ascertaining causes of neonatal deaths using verbal autopsy: current methods and challenges. J Perinatol 29: 187–194. doi: 10.1038/jp.2008.138 25. 25. Nga NT, Hoa DTP, Målqvist M, Persson L-Å, Ewald U (2012) Causes of neonatal death: results from NeoKIP community-based trial in Quang Ninh province, Vietnam. Acta Paediatr 101: 368–373. doi: 10.1111/j.1651-2227.2011.02513.x 26. 26. R Development Core Team (2008) R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. 27. 27. Lesnoff M, Lancelot R (2012) aod: analysis of overdispersed data. R package version 1.3. Available: http://cran.r-project.org/web/packages/a​od/index.html. 28. 28. Bates AD, Maechler M (2012) lme4: linear mixed-effects model using S4 classes. R package version 0.999999-0. Available: http://cran.r-project.org/web/packages/l​me4/lme4.pdf. 29. 29. Rycroft-Malone J, Kitson A, Harvey G, McCormach B, Seers K, et al. (2002) Ingredients for change: revisiting a conceptual framework. Qual Saf Health Care 11: 174–80. doi: 10.1136/qhc.11.2.174 30. 30. Bhutta ZA, Darmstadt GL, Hasan BS, Haws RA (2005) Community-based interventions for improving perinatal and neonatal health outcomes in developing countries: a review of the evidence. Pediatrics 115: 519–617. 31. 31. Lassi ZS, Haider BA, Bhutta ZA (2010) Community-based
in the AnimeCeleb. \end{itemize} For the implementations of existing baselines, we follow the hyper-parameters given in the original papers and codes. \section{Additional Head Reenactment Results of \textit{AniMo}}~\label{animo_additional} \vspace{-0.4cm} This section contains additional head reenactment results with the \emph{AniMo}\xspace and the baselines as follows: \begin{itemize} \item Qualitative results on self-identity (VoxCeleb and AnimeCeleb), cross-identity (VoxCeleb and AnimeCeleb), and cross-domain head reenactment (Vox. $\rightarrow$ Anime. and Anime. $\rightarrow$ Vox.) tasks. \item Intuitive pose editing of an animation and human head images. \item Qualitative results on cross-domain head reenactment using various unseen head images. \item A user study to compare the characteristics with iCartoon and head angle distribution comparison between VoxCeleb. \end{itemize} \noindent\textbf{Additional Qualitative Head Reenactment Results of \textit{AniMo}.} In the experiments, we utilize two different training source: single dataset (VoxCeleb) and joint datasets (AnimeCeleb and VoxCeleb). We use the single dataset (VoxCeleb) to compare the original experimental setup of the previous studies~\cite{latentpose,firstorder,ren2021pirenderer}. For qualitative comparisons, we show the results of three tasks: (1) \textbf{self-identity head reenactment} where the identical being provides both a source and a driving image, (2) \textbf{cross-identity head reenactment} where the identities of a source and driving image are different within the same dataset, and (3) \textbf{cross-domain head reenactment} where two frames of different identities sampled from the AnimeCeleb and the VoxCeleb alternatively for the sake of serving as a source and a driving image; for example, Vox. $\rightarrow$ Anime. denotes a source and driving image are sampled from the AnimeCeleb and the VoxCeleb, respectively. Note the warping and the editing network for each domain: $W_A, G_A$ and $W_V, G_V$ are responsible for producing an animation and a real human head image, respectively. Fig.~\ref{supp-fig:baselines-self-identity-vox} shows qualitative comparisons on self-identity head reenactment using the VoxCeleb. As seen in Fig.~\ref{supp-fig:baselines-self-identity-vox}, our model produces the outputs that are perceptually realistic, as good as the baselines. Although the baselines show similar results on the task, there is a performance gap between the models when it comes to handling cross-identity inputs. As shown in Fig.~\ref{supp-fig:baselines-cross-identity-vox}, the FOMM~\cite{firstorder} often fails to produce photo-realistic results because a head structure of a driving image is involved to generate results (the \textit{3rd} and the \textit{5th} columns). Compared to these results, the models which rely on the 3DMM parameters successfully handle cross-identity inputs (the \textit{4th}, the \textit{6th} and the \textit{last} columns in Fig.~\ref{supp-fig:baselines-cross-identity-vox}). Meanwhile, when performing on self-identity head reenactment using the AnimeCeleb, it is obvious that the models trained only with the VoxCeleb do not work well (the \textit{3rd} and the \textit{4th} columns in Fig.~\ref{supp-fig:baselines-self-identity-anime}). 
In contrast, the models trained with the VoxCeleb and the AnimeCeleb show a promising performance (the \textit{6th} and the \textit{last} columns in Fig.~\ref{supp-fig:baselines-self-identity-anime}), yet the FOMM still has difficulty in synthesizing vivid textures of a source image (the \textit{5th} column in Fig.~\ref{supp-fig:baselines-self-identity-anime}). In addition, Fig.~\ref{supp-fig:baselines-cross-identity-anime} shows similar results on cross-identity head reenactment, where the models trained with the VoxCeleb have performed poorly (the \textit{3rd} and the \textit{4th} columns). In contrast, the others trained with the AnimeCeleb successfully synthesize the outputs (the \textit{6th} and the \textit{last} columns\footnote{Note that both models use our pose mapping method.}) except for the FOMM (the \textit{5th} column). Furthermore, Fig.~\ref{supp-fig:baselines-cross-domain-anime-vox} and \ref{supp-fig:baselines-cross-domain-vox-anime} demonstrate that our model generates photo-realistic results compared to the baselines for cross-domain head reenactment. \noindent\textbf{Intuitive Image Editing.} One of the important applications of our model is the explicit control of facial expression and head rotation in both the animation and human domains. As shown in Fig.~\ref{supp-fig:control_animo}, the \emph{AniMo}\xspace is capable of generating high-quality images steered by diverse semantics. For example, an animation and human head can be controlled along the roll, pitch and yaw axes (the \textit{1st} row in Fig.~\ref{supp-fig:control_animo}), and manipulating the facial expressions (\textit{i.e.}, eyes and a mouth) is achievable (the \textit{2nd} row in Fig.~\ref{supp-fig:control_animo}). \noindent\textbf{Head Reenactment of Other Animation Images.} In this experiment, we evaluate our model on multiple head image samples collected from different sources, including Waifu Labs, Naver Webtoon~\footnote{https://comic.naver.com/}, Face Sketches~\cite{ojha2021few}, and 2D Disney~\footnote{https://toonify.photos/}, as seen in Fig.~\ref{supp-fig:out-of-domain_animo}. Given the trained $W_A$ and $G_A$ of the \emph{AniMo}\xspace, the poses of other animation images can be controlled with the guidance of driving poses. However, we also find that there exist problems such as background distortion and a lack of detailed expressions. We discuss such problems in Section~\ref{discussions}. \begin{figure}[t!] \centering \vspace{-0.3cm} \includegraphics[width=0.9\linewidth]{supp-figure/supp-figure-userstudy.pdf} \vspace{-0.5cm} \caption{(A) Comparison of head pose statistics between VoxCeleb and AnimeCeleb. (B) User study results for comparison between iCartoon and AnimeCeleb. The higher score is better.} \vspace{-0.75cm} \label{supp-fig:userstudy} \end{figure} \noindent\textbf{Head Angle Comparison and User Study.} Fig.~\ref{supp-fig:userstudy} (A) shows the ranges of head angles of 10K samples from each dataset. As can be seen, we determine the ranges of head poses so that they cover most samples of VoxCeleb. For the purpose of quantitative comparison with iCartoon, we conduct a user study to compare the properties of the datasets after seeing 100 samples from each dataset. As shown in Fig.~\ref{supp-fig:userstudy} (B), users positively evaluate the style consistency, quality\footnote{A low-resolution or defocused image is considered a low-quality one.} and cleanness\footnote{If a face is occluded with an object or incompletely cropped, then it is considered a noisy image.} of AnimeCeleb.
Also, the users respond that AnimeCeleb has a comparable diversity of head pose and expression. \clearpage \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-self-identity-vox.pdf} \caption{Qualitative comparison between our model and the baselines on self-identity head reenactment given the images of the Voxceleb.} \label{supp-fig:baselines-self-identity-vox} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-cross-identity-vox.pdf} \caption{Qualitative comparison between our model and the baselines on cross-identity head reenactment given the images of the Voxceleb.} \label{supp-fig:baselines-cross-identity-vox} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-self-identity-anime.pdf} \caption{Qualitative comparison between our model and the baselines on self-identity head reenactment given the images of the AnimeCeleb.} \label{supp-fig:baselines-self-identity-anime} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-cross-identity-anime.pdf} \caption{Qualitative comparison between our model and the baselines on cross-identity head reenactment given the images of the AnimeCeleb.} \label{supp-fig:baselines-cross-identity-anime} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-cross-domain-anime_pose-vox.pdf} \caption{Qualitative comparison between our model and the baselines on cross-domain head reenactment given the source image from the VoxCeleb and the driving image from the AnimeCeleb (Anime. $\rightarrow$ Vox.).} \label{supp-fig:baselines-cross-domain-anime-vox} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-cross-domain-vox_pose-anime.pdf} \caption{Qualitative comparison between our model and the baselines on cross-domain head reenactment given the source image from of the AnimeCeleb and the driving image from the VoxCeleb (Vox. $\rightarrow$ Anime.).} \label{supp-fig:baselines-cross-domain-vox-anime} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{supp-figure/supp-figure-control.pdf} \vspace{-0.5cm} \caption{Intuitive image editing results on animation and human heads via controlling the semantics and the head angles.} \vspace{-0.5cm} \label{supp-fig:control_animo} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{supp-figure/supp-figure-out-domain_animo.pdf} \vspace{-0.5cm} \caption{Additional head reenactment results on head images from various animation head samples.} \vspace{-0.5cm} \label{supp-fig:out-of-domain_animo} \end{figure} \section{Discussions}~\label{discussions} In this section, we discuss potential issues and directions for improvement of the AnimeCeleb and the \emph{AniMo}\xspace in further research. \noindent\textbf{Extension of Creation Protocol.} Due to the limited budget, the proposed pipeline is designed to generate a group of multi-pose yet single-view animation head images with the limited poses. However, we believe that the AnimeCeleb has room for improvement in three aspects: (1) constructing high-quality images higher than $256\times256$, (2) obtaining multi-view animation head images by rotating the camera, and (3) building a various light-conditioned animation head dataset from changing the light source position. 
To demonstrate these concepts, we present such samples in Fig.~\ref{supp-fig:megapixel}. As seen in Fig.~\ref{supp-fig:megapixel} (A), our data creation pipeline is able to render at a higher resolution than $256\times256$ (\textit{e.g.}, $1024\times1024$). This would allow us to construct a high-quality dataset in future research. Next, the images of AnimeCeleb are created from a frontal viewpoint, and thus do not span the full range of appearances that could be captured at various camera angles. This is mainly because the goal of the AnimeCeleb lies in constructing a public animation dataset suitable for head reenactment. A straightforward way to improve our creation process is to render an animation head at different camera angles in Blender, as shown in Fig.~\ref{supp-fig:megapixel} (B). Also, as can be seen in Fig.~\ref{supp-fig:megapixel} (C), we can control the illumination to generate animation head images under different light conditions. \noindent\textbf{Diversity of the AnimeCeleb.} One of the AnimeCeleb's strengths lies in its wide coverage of animation characters. However, we fixed the camera position with the aim of capturing frontal faces of animation characters during the AnimeCeleb generation process. Although this enables us to
for $2\leq k\leq \lfloor \frac{r+1}{2} \rfloor$, to calculate $V_k={\rm soc}_{(j_k,j_{k-1},\cdots,j_1)}(\hat{I}_{j_k})$, we consider the chain \[ 0=X_0\subset X_1\subset X_2\subset\cdots\subset X_k \subset\hat{I}_{j_k} \] such that $X_1={\rm soc}_{j_k}(\hat{I}_{j_k})=S_{j_k}$, $X_2/X_1={\rm soc}_{j_{k-1}}(\hat{I}_{j_k}/X_1)$, $X_3/X_2={\rm soc}_{j_{k-2}}(\hat{I}_{j_k}/X_2),\cdots$, $X_{k}/X_{k-1}={\rm soc}_{j_1}(\hat{I}_{j_k}/X_{k-1})$. By $(\ref{inj-mod})$, the module $\hat{I}_{j_k}/S_{j_k}$ has simple submodules isomorphic to $S_{j_k-1}$ and $S_{j_k+1}$. Since $\hat{I}_{j_k}/S_{j_k}$ has no simple submodules isomorphic to $S_{j_{k-1}},\ S_{j_{k-2}}, \cdots,\ S_{j_1}$, we have $X_1=X_2=\cdots=X_k$ and then \begin{equation}\label{iniex1-1} V_k=X_k=S_{j_k}. \end{equation} Next, for $\lfloor \frac{r+1}{2} \rfloor +1\leq k\leq r$, we consider the chain \[ 0=X_0\subset X_1\subset X_2\subset\cdots\subset X_k \subset\hat{I}_{j_k} \] such that $X_1={\rm soc}_{j_k}(\hat{I}_{j_k})=S_{j_k}$, $X_2/X_1={\rm soc}_{j_{k-1}}(\hat{I}_{j_k}/X_1),\cdots$. In the same way as in $(\ref{iniex1-1})$, we get $X_1=X_2=\cdots=X_{\lfloor \frac{r}{2} \rfloor}=S_{j_k}$. And $X_{\lfloor \frac{r}{2} \rfloor+1}/X_{\lfloor \frac{r}{2} \rfloor}={\rm soc}_{j_{k-\lfloor \frac{r}{2} \rfloor}}(\hat{I}_{j_k}/X_{\lfloor \frac{r}{2} \rfloor})=S_{j_k-1}$. So the module $X_{\lfloor \frac{r}{2} \rfloor+1}$ is described as \[ \begin{xy} (65,103) *{j_k-1}="2r-2k+1", (75,97)*{j_k}="2r-2k+2", (55,103) *{}="12", \ar@{->} "2r-2k+1";"2r-2k+2" \end{xy} \] Similarly, we obtain $X_{\lfloor \frac{r}{2} \rfloor+2}/X_{\lfloor \frac{r}{2} \rfloor+1}={\rm soc}_{j_{k-\lfloor \frac{r}{2} \rfloor-1}}(\hat{I}_{j_k}/X_{\lfloor \frac{r}{2} \rfloor+1})=S_{j_k+1}$. In the same way as in $(\ref{iniex1-1})$, we have $V_k=X_{k}=X_{k-1}=\cdots=X_{\lfloor \frac{r}{2} \rfloor+2}$. Thus, the module $V_k$ is described as \begin{equation}\label{iniex1-2} \begin{xy} (65,103)*{j_k-1}="2r-2k+1",(85,103)*{j_k+1}="2r-2k+3", (75,97)*{j_k}="2r-2k+2", (55,103) *{}="12", \ar@{->} "2r-2k+1";"2r-2k+2" \ar@{->} "2r-2k+3";"2r-2k+2" \end{xy} \end{equation} Next, for $r+1\leq k\leq \lfloor \frac{r+1}{2} \rfloor +r$, we consider the chain \[ 0=X_0\subset X_1\subset X_2\subset\cdots\subset X_k \subset\hat{I}_{j_k} \] such that $X_1={\rm soc}_{j_k}(\hat{I}_{j_k})=S_{j_k}$, $X_2/X_1={\rm soc}_{j_{k-1}}(\hat{I}_{j_k}/X_1),\cdots$. In the same way as in $(\ref{iniex1-1})$, we get $X_1=X_2=\cdots=X_{\lfloor \frac{r+1}{2} \rfloor-1}=S_{j_k}$. And $X_{\lfloor \frac{r+1}{2} \rfloor}/X_{\lfloor \frac{r+1}{2} \rfloor-1}={\rm soc}_{j_{k-\lfloor \frac{r+1}{2} \rfloor+1}}(\hat{I}_{j_k}/X_{\lfloor \frac{r+1}{2} \rfloor-1})=S_{j_k-1}$, where we set $S_j:=0$ for $j\leq 0$. We also get $X_{\lfloor \frac{r+1}{2} \rfloor+1}/X_{\lfloor \frac{r+1}{2} \rfloor}={\rm soc}_{j_{k-\lfloor \frac{r+1}{2} \rfloor}}(\hat{I}_{j_k}/X_{\lfloor \frac{r+1}{2} \rfloor})=S_{j_k+1}$, and \[ X_{\lfloor \frac{r+1}{2} \rfloor+1}=X_{\lfloor \frac{r+1}{2} \rfloor+2}=\cdots =X_{r-1}. \] We also obtain $X_r/X_{r-1}={\rm soc}_{j_{k-r+1}}(\hat{I}_{j_k}/X_{r-1})=S_{j_k-2}$, $X_{r+1}/X_{r}=S_{j_k}$, $X_{r+2}/X_{r+1}=S_{j_k+2}$ and $X_{r+2}=X_{r+3}=\cdots=X_{k}$. 
Therefore, the module $X_k=V_k$ is described as \begin{equation}\label{iniex1-3} \begin{xy} (50,113)*{j_k-2}="3r-2k-1",(70,113)*{j_k}="3r-2k+1a",(90,113)*{j_k+2}="3r-2k+3", (60,107)*{j_k-1}="3r-2k",(80,107)*{j_k+1}="3r-2k+2", (70,101)*{j_k}="3r-2k+1", (35,107)*{}="12", \ar@{->} "3r-2k";"3r-2k+1" \ar@{->} "3r-2k+2";"3r-2k+1" \ar@{->} "3r-2k-1";"3r-2k" \ar@{->} "3r-2k+1a";"3r-2k" \ar@{->} "3r-2k+1a";"3r-2k+2" \ar@{->} "3r-2k+3";"3r-2k+2" \end{xy} \end{equation} Finally, for $\lfloor \frac{r+1}{2} \rfloor +r+1\leq k\leq 2r$, we can verify that the module $V_{k}$ is described as: \begin{equation}\label{iniex1-4} \begin{xy} (35,123)*{_{j_k-3}}="4r-2k-1",(55,123)*{_{j_k-1}}="4r-2k+1b",(75,123)*{_{j_k+1}}="4r-2k+3b", (95,123)*{_{j_k+3}}="4r-2k+5", (45,118)*{_{j_k-2}}="4r-2k",(65,118)*{_{j_k}}="4r-2k+2a",(85,118)*{_{j_k+2}}="4r-2k+4", (55,113)*{_{j_k-1}}="4r-2k+1",(75,113)*{_{j_k+1}}="4r-2k+3", (65,108)*{_{j_k}}="4r-2k+2", (35,108)*{}="12", \ar@{->} "4r-2k+1";"4r-2k+2" \ar@{->} "4r-2k+3";"4r-2k+2" \ar@{->} "4r-2k";"4r-2k+1" \ar@{->} "4r-2k+2a";"4r-2k+1" \ar@{->} "4r-2k+2a";"4r-2k+3" \ar@{->} "4r-2k+4";"4r-2k+3" \ar@{->} "4r-2k-1";"4r-2k" \ar@{->} "4r-2k+1b";"4r-2k" \ar@{->} "4r-2k+1b";"4r-2k+2a" \ar@{->} "4r-2k+3b";"4r-2k+2a" \ar@{->} "4r-2k+3b";"4r-2k+4" \ar@{->} "4r-2k+5";"4r-2k+4" \end{xy} \end{equation} by the same argument as in $(\ref{iniex1-1})$, $(\ref{iniex1-2})$ and $(\ref{iniex1-3})$. In this case, we have $I_{c^2}=I_{\bf{{\rm i}}}=V_{r+1}\oplus\cdots\oplus V_{2r}$. \end{ex} \begin{rem}\label{diarem} When we see the quiver \[ \begin{xy} (35,123)*{_{j-3}}="4r-2k-1",(55,123)*{_{j-1}}="4r-2k+1b",(75,123)*{_{j+1}}="4r-2k+3b", (95,123)*{_{j+3}}="4r-2k+5", (45,118)*{_{j-2}}="4r-2k",(65,118)*{_{j}}="4r-2k+2a",(85,118)*{_{j+2}}="4r-2k+4", (55,113)*{_{j-1}}="4r-2k+1",(75,113)*{_{j+1}}="4r-2k+3", (65,108)*{_{j}}="4r-2k+2", (35,108)*{}="12", \ar@{->} "4r-2k+1";"4r-2k+2" \ar@{->} "4r-2k+3";"4r-2k+2" \ar@{->} "4r-2k";"4r-2k+1" \ar@{->} "4r-2k+2a";"4r-2k+1" \ar@{->} "4r-2k+2a";"4r-2k+3" \ar@{->} "4r-2k+4";"4r-2k+3" \ar@{->} "4r-2k-1";"4r-2k" \ar@{->} "4r-2k+1b";"4r-2k" \ar@{->} "4r-2k+1b";"4r-2k+2a" \ar@{->} "4r-2k+3b";"4r-2k+2a" \ar@{->} "4r-2k+3b";"4r-2k+4" \ar@{->} "4r-2k+5";"4r-2k+4" \end{xy} \] or its subquiver, if $j=1$, $2$ or $3$, we understand it means \[ \begin{xy} (15,95)*{_{4}}="4r-2k+5", (10,90)*{_{3}}="4r-2k+4", (5,85)*{_{2}}="4r-2k+3", (0,80)*{_{1}}="4r-2k+2", (35,95)*{_{3}}="3a",(45,95)*{_{5}}="5", (30,90)*{_{2}}="2a",(40,90)*{_{4}}="4", (25,85)*{_{1}}="1",(35,85)*{_{3}}="3", (30,80)*{_{2}}="2", (60,95)*{_{2}}="2bb",(70,95)*{_{4}}="4bb",(80,95)*{_{6}}="6b", (55,90)*{_{1}}="1b",(65,90)*{_{3}}="3bb",(75,90)*{_{5}}="5bb", (60,85)*{_{2}}="2b",(70,85)*{_{4}}="4b", (65,80)*{_{3}}="3b", \ar@{->} "4r-2k+3";"4r-2k+2" \ar@{->} "4r-2k+4";"4r-2k+3" \ar@{->} "4r-2k+5";"4r-2k+4" \ar@{->} "3a";"4" \ar@{->} "3a";"2a" \ar@{->} "5";"4" \ar@{->} "2a";"1" \ar@{->} "2a";"3" \ar@{->} "4";"3" \ar@{->} "1";"2" \ar@{->} "3";"2" \ar@{->} "2bb";"1b" \ar@{->} "2bb";"3bb" \ar@{->} "4bb";"3bb" \ar@{->} "4bb";"5bb" \ar@{->} "6b";"5bb" \ar@{->} "1b";"2b" \ar@{->} "3bb";"2b" \ar@{->} "3bb";"4b" \ar@{->} "5bb";"4b" \ar@{->} "2b";"3b" \ar@{->} "4b";"3b" \end{xy} \] respectively. 
Similarly, if $j=r$, $r-1$ or $r-2$, we understand it means \[ \begin{xy} (0,95)*{_{r-3}}="4r-2k+5", (5,90)*{_{r-2}}="4r-2k+4", (10,85)*{_{r-1}}="4r-2k+3", (15,80)*{_{r}}="4r-2k+2", (25,95)*{_{r-4}}="3a",(35,95)*{_{r-2}}="5", (30,90)*{_{r-3}}="2a",(40,90)*{_{r-1}}="4", (35,85)*{_{r-2}}="1",(45,85)*{_{r}}="3", (40,80)*{_{r-1}}="2", (50,95)*{_{r-5}}="2bb",(60,95)*{_{r-3}}="4bb",(70,95)*{_{r-1}}="6b", (55,90)*{_{r-4}}="1b",(65,90)*{_{r-2}}="3bb",(75,90)*{_{r}}="5bb", (60,85)*{_{r-3}}="2b",(70,85)*{_{r-1}}="4b", (65,80)*{_{r-2}}="3b", \ar@{->} "4r-2k+3";"4r-2k+2" \ar@{->} "4r-2k+4";"4r-2k+3" \ar@{->} "4r-2k+5";"4r-2k+4" \ar@{->} "5";"2a" \ar@{->} "3a";"2a" \ar@{->} "5";"4" \ar@{->} "2a";"1" \ar@{->} "4";"1" \ar@{->} "4";"3" \ar@{->} "1";"2" \ar@{->} "3";"2" \ar@{->} "2bb";"1b" \ar@{->} "4bb";"1b" \ar@{->} "4bb";"3bb" \ar@{->} "6b";"3bb" \ar@{->} "6b";"5bb" \ar@{->} "1b";"2b" \ar@{->} "3bb";"2b" \ar@{->} "3bb";"4b" \ar@{->} "5bb";"4b" \ar@{->} "2b";"3b" \ar@{->} "4b";"3b" \end{xy} \] \end{rem} \subsection{Mutation} For a $\Lambda$-module $T$ in mod$(\Lambda)$, let add$(T)$ denote the subcategory of mod$(\Lambda)$ whose objects are all $\Lambda$-modules which are isomorphic to finite direct sums of direct summands of $T$. \begin{defn}$\cite{ASS, LD, GLS}$ \begin{enumerate} \item A $\Lambda$-module $T$ is {\it rigid} if ${\rm Ext}^1_{\Lambda}(T,T)=0$. \item For a rigid module $T$ in $\mathcal{C}_v$, we say $T$ is a $\mathcal{C}_v$-{\it cluster-tilting} module if ${\rm Ext}^1_{\Lambda}(T,X)=0$ with $X\in \mathcal{C}_v$ implies $X\in {\rm add}(T)$. \item A $\Lambda$-module $T$ is said to be {\it basic}, if it is decomposed to a direct sum of pairwise non-isomorphic indecomposable modules. \item Let $T$, $X$ and $Y\in {\rm mod}(\Lambda)$. A morphism $f\in{\rm Hom}_{\Lambda}(X,Y)$ (resp. $f\in{\rm Hom}_{\Lambda}(Y,X)$) is said to be a {\it left} (resp. {\it right}) {\rm add}(T)-{\it approximation} of $X$ if $Y\in{\rm add}(T)$ and for an arbitrary $Y'\in {\rm add}(T)$ and $f'\in{\rm Hom}_{\Lambda}(X,Y')$ (resp. $f'\in{\rm Hom}_{\Lambda}(Y',X)$), there exists $g\in {\rm Hom}_{\Lambda}(Y,Y')$ (resp. $g\in {\rm Hom}_{\Lambda}(Y',Y)$) and $f'=g\circ f$ (resp. $f'=f\circ g$). \item For $V$, $W\in {\rm mod}(\Lambda)$, a morphism $f\in{\rm Hom}_{\Lambda}(V,W)$ is said to be {\it left} (resp. {\it right}) {\it minimal} if every endomorphism $g\in{\rm End}_{\Lambda}(W)$ (resp. $g\in{\rm End}_{\Lambda}(V)$) such that $g\circ f=f$ (resp. $f\circ g=f$) is an isomorphism. \end{enumerate} \end{defn} \begin{prop}\label{exseqprop}$\cite{LD,GLS,GLS2}$ Let $T=T_1\oplus T_2\oplus \cdots\oplus T_n$ be a basic $\mathcal{C}_{v}$-cluster-tilting object. We suppose that the $\{T_i\}_{i=1,2,\cdots,n}$ are indecomposable summands of $T$ and $T_{n-r+1}, \cdots, T_n$ are the $\mathcal{C}_{v}$-projective-injective modules. Then for $k \in\{1,2,\cdots, n-r\}$, there exists a short exact sequence \begin{equation}\label{exseqdef} 0\rightarrow T_k\overset{f}{\rightarrow} \ovl{T_k}\overset{g}{\rightarrow} T^*_k\rightarrow 0 \end{equation} such that \begin{enumerate} \item $f$ is a left minimal left {\rm add}$(T/T_k)$-approximation, \item $g$ is a right minimal right {\rm add}$(T/T_k)$-approximation, \item $T^*_k$ is indecomposable, \item $T^*_k\notin {\rm add}(T)$, \item $T/T_k\oplus T^*_k$ is basic $\mathcal{C}_{v}$-cluster-tilting. 
\end{enumerate} \end{prop} \begin{defn}$\cite{LD, GLS}$ In the setting of the previous proposition, the mutation $\mu_{T_k}(T)$ of $T$ in direction $T_k$ is defined as \begin{equation}\label{mudiretk} \mu_{T_k}(T):=T/T_k\oplus T^*_k. \end{equation} We call the short exact sequence (\ref{exseqdef}) in Proposition \ref{exseqprop} the {\it exchange sequence} associated to the direct summand $T_k$ of $T$. \end{defn} For a basic module $T=T_1\oplus\cdots\oplus T_n$ in $\mathcal{C}_{v}$, let $\Gamma_{T}$ be the quiver of ${\rm End}_{\Lambda}(T)^{{\rm op}}$, that is, ${\rm End}_{\Lambda}(T)^{{\rm op}}\cong\mathbb{C}\Gamma_{T}/(R)$ with an admissible ideal $(R)$ \cite{ASS}. Setting \[ {\rm Rad}(T_i,T_j)= \begin{cases} {\rm Hom}_{\Lambda}(T_i,T_j) & {\rm if}\ i\neq j,\\ \{{\rm nilpotent\ elements\ of}\ {\rm End}_{\Lambda}(T_i)\} & {\rm if}\ i= j, \end{cases} \] we have the following: \begin{lem}\label{titj}$\cite{ASS, LD}$ The quiver $\Gamma_{T}$ has $n$ vertices indexed by $\{1,2,\cdots,n\}$, and for $1\leq i, j\leq n$, the number of arrows $j\rightarrow i$ is equal to the dimension of the space \[ \frac{{\rm Rad}(T_i,T_j)}{\sum^{n}_{k=1}{\rm Rad}(T_k,T_j)\circ {\rm Rad}(T_i,T_k)}. \] \end{lem} \begin{defn} Let $T=T_1\oplus\cdots\oplus T_n$ be a basic module in $\mathcal{C}_{v}$. For $i,j\in[1,n]$ and a non-zero homomorphism $f\in {\rm Hom}_{\Lambda}(T_i,T_j)$, it is said that $f$ is {\it factorizable} in the direct summands of $T$ if it belongs to $\sum^{n}_{k=1}{\rm Rad}(T_k,T_j)\circ {\rm Rad}(T_i,T_k)$. \end{defn} Let $B(\Gamma_{T})=(b_{i,j})$ denote $n\times (n-r)$-matrix defined by \[ b_{i,j}=({\rm number\ of\ arrows}\ j\rightarrow i\ {\rm in}\ \Gamma_{T})-({\rm number\ of\ arrows}\ i\rightarrow j\ {\rm in}\ \Gamma_{T}). \] For $\textbf{i}=(j_n,\cdots,j_1)\in Q^n_0$, we define a quiver $\overline{\Gamma}_{\textbf{i}}$ as follows: We use the notation $k^-$ in \ref{cAi}. We also denote $k^+$ the smallest index $l$ such that $k<l$ and $|j_l|=|j_k|$ if it exists. If it does not exist, we set $k^+=n+1$. The vertices of $\overline{\Gamma}_{\textbf{i}}$ are $1,2,\cdots,n$. For two vertices $k,l\in[1,n]$ with $l<k$, there exists an arrow $k\rightarrow l$ (resp. $l\rightarrow k$) if and only if $l=k^-$ (resp. $k<l^+\leq k^+$ and $a_{i_k,i_l}<0$). \begin{thm}\label{GLSthm}$\cite{BIRS, GLS, GLS2}$ Let $n=l(v)$ and ${\rm \bf{i}}=(j_n,\cdots,j_1)$ be a reduced word of $v$. \begin{enumerate} \item The module $V_{{\rm \bf{i}}}$ defined in \ref{ppalg-cat-sub} is a basic $\mathcal{C}_{v}$-cluster-tilting object and $\Gamma_{V_{{\rm \bf{i}}}}=\overline{\Gamma}_{{\rm \bf{i}}}$. \item Let $T=T_1\oplus T_2\oplus \cdots\oplus T_n$ be a basic $\mathcal{C}_{v}$-cluster-tilting object. For $1\leq k \leq n-r$, we have $B(\Gamma_{\mu_{T_k}(T)})=\mu_k(B(\Gamma_{T}))$. \item For a basic $\mathcal{C}_{v}$-cluster-tilting object $T=T_1\oplus T_2\oplus \cdots\oplus T_n$ and $1\leq k \leq n-r$, the exchange sequence associated to the direct summand $T_k$ of $T$ is \[ 0\rightarrow T_k \rightarrow \bigoplus_{i\rightarrow k\ {\rm in}\ \Gamma_T}T_i\rightarrow T_k^*\rightarrow 0. \] \end{enumerate} \end{thm} \begin{ex} Let ${\rm \bf{i}}$ be the reduced word in $(\ref{redwords2})$. 
By Theorem \ref{GLSthm} (i), for $1\leq k\leq\lfloor \frac{r+1}{2} \rfloor$, the quiver $\Gamma_{V_{{\rm \bf{i}}}}$ is described as \begin{equation}\label{gammavex-1} \begin{xy} (100,86)*{\cdots}="emp1", (85,90) *{r+\lfloor \frac{r}{2} \rfloor + k}="3", (85,82)*{\lfloor \frac{r}{2} \rfloor +k}="-4", (60,90) *{r+k}="k", (60,82)*{k}="-j_k", (35,90) *{r+\lfloor \frac{r}{2} \rfloor +k+1}="4", (35,82)*{\lfloor \frac{r}{2} \rfloor +k+1}="-2", (10,90) *{r+k+1}="2", (10,82)*{k+1}="-1", (0,86)*{\cdots}="emp", \ar@{->} "3";"-4" \ar@{->} "k";"-j_k" \ar@{->} "-j_k";"-4" \ar@{->} "-j_k";"-2" \ar@{->} "-1";"-2" \ar@{->} "4";"-2" \ar@{->} "2";"-1" \ar@{->} "k";"3" \ar@{->} "k";"4" \ar@{->} "2";"4" \ar@{->} "-4";"k" \ar@{->} "-2";"k" \ar@{->} "-2";"2" \end{xy} \end{equation} \end{ex} \begin{ex}\label{initialex2} In the setting of Example \ref{initialex1}, let us consider the mutation of $V_{\rm \bf{i}}$ in direction $V_k$ $(1\leq k\leq r)$. Let us constitute the exchange sequence \[ 0\rightarrow V_k \rightarrow \ovl{V_k}\rightarrow V^*_k\rightarrow 0 \] associated to the direct summand $V_k$ of $V_{{\rm \bf{i}}}$. For $1\leq k\leq \lfloor \frac{r+1}{2} \rfloor$, recall that $V_k=S_{j_k}$. In $\{V_i|\ 1\leq i\leq 2r,\ i\neq k\}$, the module $V_{r+k}$ has the simple socle isomorphic to $S_{j_k}$ and the others do not so since their simple socles are $S_l$ $(l\neq j_k)$ by $(\ref{iniex1-2})$, $(\ref{iniex1-3})$, $(\ref{iniex1-4})$. The module $V_{r+k}$ is described as \[ \begin{xy} (50,113)*{j_k-2}="3r-2k-1",(70,113)*{j_k}="3r-2k+1a",(90,113)*{j_k+2}="3r-2k+3", (60,107)*{j_k-1}="3r-2k",(80,107)*{j_k+1}="3r-2k+2", (70,101)*{j_k}="3r-2k+1", (35,107)*{}="12", \ar@{->} "3r-2k";"3r-2k+1" \ar@{->} "3r-2k+2";"3r-2k+1" \ar@{->} "3r-2k-1";"3r-2k" \ar@{->} "3r-2k+1a";"3r-2k" \ar@{->} "3r-2k+1a";"3r-2k+2" \ar@{->} "3r-2k+3";"3r-2k+2" \end{xy} \] and bottom $j_k$ means a basis generating the simple socle isomorphic to $S_{j_k}$ $((\ref{inj-mod}),\ (\ref{iniex1-3}))$. Hence, there exists an injective homomorphism $V_k\rightarrow V_{r+k}$, and its image is the simple socle. By the above argument, we have {\rm Hom}$(V_k,V_{r+k})\cong\mathbb{C}$ and {\rm Rad}$(V_k,V_{t})=\{0\}$ for $t\neq r+k$. We get $\ovl{V_k}=V_{r+k}$ by Lemma \ref{titj} and Theorem \ref{GLSthm} (iii), which yields that $V^*_k$ $(1\leq k< \lfloor \frac{r+1}{2} \rfloor)$ and $V^*_{\lfloor \frac{r+1}{2} \rfloor}$ are described as \begin{equation}\label{mutex1} \begin{xy} (60,113)*{_{j_k-2}}="3r-2k-1",(70,113)*{_{j_k}}="3r-2k+1a",(80,113)*{_{j_k+2}}="3r-2k+3", (65,106)*{_{j_k-1}}="3r-2k",(75,106)*{_{j_k+1}}="3r-2k+2", (87,106)*{,}=",", (100,113)*{_3}="3", (95,106)*{_2}="2", \ar@{->} "3r-2k-1";"3r-2k" \ar@{->} "3r-2k+1a";"3r-2k" \ar@{->} "3r-2k+1a";"3r-2k+2" \ar@{->} "3r-2k+3";"3r-2k+2" \ar@{->} "3";"2" \end{xy} \end{equation} respectively. Next, for $\lfloor \frac{r+1}{2} \rfloor+1\leq k \leq r$, the module $V_k$ is given as $(\ref{iniex1-2})$. The module $V_{r+k}$ is described as $(\ref{iniex1-4})$ and it has the submodule isomorphic to $V_k$, which is generated by the basis $j_k-1$, $j_k$ and $j_k+1$ lower one in $(\ref{iniex1-4})$. Let $c_{j_k-1}$, $c_{j_k}$ and $c_{j_k+1}$ denote these three bases. Thus, there exists an injective homomorphism $V_k\rightarrow V_{r+k}$. Since $V_k$ has the simple quotients isomorphic to $S_{j_k-1}$, $S_{j_k+1}$, there exist surjective homomorphisms $V_k\rightarrow V_{k-\lfloor \frac{r}{2} \rfloor}=S_{j_k-1}$ and $V_k\rightarrow V_{k-\lfloor \frac{r}{2} \rfloor-1}=S_{j_k+1}$ $($note that $j_{k-\lfloor
# conx - a neural network library # # Copyright (c) Douglas S. Blank <[email protected]> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, # Boston, MA 02110-1301 USA """ The network module contains the code for the Network class. """ import collections import operator from functools import reduce import signal import string import numbers import random import pickle import base64 import json import html import copy import sys import io import os import re from typing import Any import PIL import numpy as np import matplotlib.pyplot as plt import keras from keras.callbacks import Callback, History import keras.backend as K from .utils import * from .layers import Layer from .dataset import Dataset try: from IPython import get_ipython except: get_ipython = lambda: None #------------------------------------------------------------------------ class ReportCallback(Callback): def __init__(self, network, verbose, report_rate, mpl_backend, record): # mpl_backend is matplotlib backend super().__init__() self.network = network self.verbose = verbose self.report_rate = report_rate self.mpl_backend = mpl_backend self.in_console = self.network.in_console(mpl_backend) self.record = record def on_epoch_end(self, epoch, logs=None): self.network.history.append(logs) self.network.epoch_count += 1 if (self.verbose > 0 and self.in_console and (epoch+1) % self.report_rate == 0): self.network.report_epoch(self.network.epoch_count, logs) if self.record != 0 and (epoch+1) % self.record == 0: self.network.weight_history[self.network.epoch_count] = self.network.get_weights() class PlotCallback(Callback): def __init__(self, network, report_rate, mpl_backend): # mpl_backend te matplotlib backend string code # super().__init__() self.network = network self.report_rate = report_rate self.mpl_backend = mpl_backend self.in_console = self.network.in_console(mpl_backend) self.figure = None def on_epoch_end(self, epoch, logs=None): if epoch == -1: # training loop finished, so make a final update to plot # in case the number of loop cycles wasn't a multiple of # report_rate self.network.plot_results(self) if not self.in_console: plt.close(self.figure[0]) elif (epoch+1) % self.report_rate == 0: self.network.plot_results(self) class FunctionCallback(Callback): """ 'on_batch_begin', 'on_batch_end', 'on_epoch_begin', 'on_epoch_end', 'on_train_begin', 'on_train_end', """ def __init__(self, network, on_method, function): super().__init__() self.network = network self.on_method = on_method self.function = function def on_batch_begin(self, batch, logs=None): if self.on_method == "on_batch_begin": self.function(self.network, batch, logs) def on_batch_end(self, batch, logs=None): if self.on_method == "on_batch_end": self.function(self.network, batch, logs) def on_epoch_begin(self, epoch, logs=None): if self.on_method == "on_epoch_begin": self.function(self.network, epoch, logs) def on_epoch_end(self, epoch, 
logs=None): if self.on_method == "on_epoch_end": self.function(self.network, epoch, logs) def on_train_begin(self, logs=None): if self.on_method == "on_train_begin": self.function(self.network, logs) def on_train_end(self, logs=None): if self.on_method == "on_train_end": self.function(self.network, logs) class StoppingCriteria(Callback): def __init__(self, item, op, value, use_validation_to_stop): super().__init__() self.item = item self.op = op self.value = value self.use_validation_to_stop = use_validation_to_stop def on_epoch_end(self, epoch, logs=None): key = ("val_" + self.item) if self.use_validation_to_stop else self.item if key in logs: # we get what we need directly: if self.compare(logs[key], self.op, self.value): self.model.stop_training = True else: ## ok, then let's sum/average anything that matches total = 0 count = 0 for item in logs: if self.use_validation_to_stop: if item.startswith("val_") and item.endswith("_" + self.item): count += 1 total += logs[item] else: if item.endswith("_" + self.item) and not item.startswith("val_"): count += 1 total += logs[item] if count > 0 and self.compare(total/count, self.op, self.value): self.model.stop_training = True def compare(self, v1, op, v2): if v2 is None: return False if op == "<": return v1 < v2 elif op == ">": return v1 > v2 elif op == "==": return v1 == v2 elif op == "<=": return v1 <= v2 elif op == ">=": return v1 >= v2 class Network(): """ The main class for the conx neural network package. Arguments: name: Required. The name of the network. Should not contain special HTML characters. sizes: Optional numbers. Defines the sizes of layers of a sequential network. These will be created, added, and connected automatically. config: Configuration overrides for the network. Note: To create a complete, operating network, you must do the following items: 1. create a network 2. add layers 3. connect the layers 4. compile the network 5. set the dataset 6. train the network See also :any:`Layer`, :any:`Network.add`, :any:`Network.connect`, and :any:`Network.compile`. 
Examples: >>> net = Network("XOR1", 2, 5, 2) >>> len(net.layers) 3 >>> net = Network("XOR2") >>> net.add(Layer("input", 2)) 'input' >>> net.add(Layer("hidden", 5)) 'hidden' >>> net.add(Layer("output", 2)) 'output' >>> net.connect() >>> len(net.layers) 3 >>> net = Network("XOR3") >>> net.add(Layer("input", 2)) 'input' >>> net.add(Layer("hidden", 5)) 'hidden' >>> net.add(Layer("output", 2)) 'output' >>> net.connect("input", "hidden") >>> net.connect("hidden", "output") >>> len(net.layers) 3 >>> net = Network("NMIST") >>> net.name 'NMIST' >>> len(net.layers) 0 >>> net = Network("NMIST", 10, 5, 1) >>> len(net.layers) 3 >>> net = Network("NMIST", 10, 5, 5, 1, activation="sigmoid") >>> net.config["activation"] 'sigmoid' >>> net["output"].activation == "sigmoid" True >>> net["hidden1"].activation == "sigmoid" True >>> net["hidden2"].activation == "sigmoid" True >>> net["input"].activation is None True >>> net.layers[0].name == "input" True """ OPTIMIZERS = ("sgd", "rmsprop", "adagrad", "adadelta", "adam", "adamax", "nadam", "tfoptimizer") ERROR_FUNCTIONS = ['binary_crossentropy', 'categorical_crossentropy', 'categorical_hinge', 'cosine', 'cosine_proximity', 'hinge', 'kld', 'kullback_leibler_divergence', 'logcosh', 'mae', 'mape', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_error', 'mean_squared_logarithmic_error', 'mse', 'msle', 'poisson', 'sparse_categorical_crossentropy', 'squared_hinge'] def __init__(self, name: str, *sizes: int, load_config=True, debug=False, build_propagate_from_models=True, **config: Any): if not isinstance(name, str): raise Exception("first argument should be a name for the network") self.debug = debug self.build_propagate_from_models = build_propagate_from_models ## Pick a place in the random stream, and remember it: ## (can override randomness with a particular seed): if not isinstance(name, str): raise Exception("conx layers need a name as a first parameter") self._check_network_name(name) self.name = name if "seed" in config: seed = config["seed"] del config["seed"] else: seed = np.random.randint(2 ** 31 - 1) self.seed = seed np.random.seed(self.seed) self.reset_config() ## Next, load a config if available, and override defaults: self.layers = [] if load_config: self.load_config() ## Override those with args: self.config.update(config) ## Set initial values: self.num_input_layers = 0 self.num_target_layers = 0 self.input_bank_order = [] self.output_bank_order = [] self.dataset = Dataset(self) self.compile_options = {} self.train_options = {} self._tolerance = K.variable(0.1, dtype='float32', name='tolerance') self.layer_dict = {} self.epoch_count = 0 self.history = [] self.weight_history = {} self.update_pictures = get_ipython() is not None self._comm = None self.model = None self.prop_from_dict = {} ## FIXME: can be multiple paths self.keras_functions = {} self._svg_counter = 1 self._need_to_show_headings = True self._initialized_javascript = False # If simple feed-forward network: for i in range(len(sizes)): if i > 0: self.add(Layer(autoname(i, len(sizes)), shape=sizes[i], activation=self.config["activation"])) else: self.add(Layer(autoname(i, len(sizes)), shape=sizes[i])) # Connect them together: for i in range(len(sizes) - 1): self.connect(autoname(i, len(sizes)), autoname(i+1, len(sizes))) def reset_config(self): """ Reset the config back to factor defaults. 
""" self.config = { "font_size": 12, # for svg "font_family": "monospace", # for svg "border_top": 25, # for svg "border_bottom": 25, # for svg "hspace": 150, # for svg "vspace": 30, # for svg, arrows "image_maxdim": 200, # for svg "image_pixels_per_unit": 50, # for svg "activation": "linear", # Dense default, if none specified "arrow_color": "black", "arrow_width": "2", "border_width": "2", "border_color": "black", "show_targets": False, "show_errors": False, "pixels_per_unit": 1, "precision": 2, "svg_scale": None, # for svg, 0 - 1, or None for optimal "svg_rotate": False, # for rotating SVG "svg_preferred_size": 400, # in pixels "svg_max_width": 800, # in pixels "dashboard.dataset": "Train", "dashboard.features.bank": "", "dashboard.features.columns": 3, "dashboard.features.scale": 1.0, "config_layers": {}, } def _check_network_name(self, name): """ Check to see if a network name is appropriate. Raises exception if invalid name. """ valid_chars = string.ascii_letters + string.digits + " _-" if len(name) == 0: raise Exception("network name must not be length 0: '%s'" % name) if not all(char in valid_chars for char in name): raise Exception("network name must only contain letters, numbers, '-', ' ', and '_': '%s'" % name) def __getstate__(self): return { "name": self.name, "layers": [layer.__getstate__() for layer in self.layers], "outgoing_connections": {layer.name: [layer2.name for layer2 in layer.outgoing_connections] for layer in self.layers}, "config": self.config, } def __setstate__(self, state): from .layers import make_layer Network.__init__(self, state["name"]) self.config = state["config"] for layer_state in state["layers"]: self.add(make_layer(layer_state)) for layer_from in self.layers: for layer_to in state["outgoing_connections"][layer_from.name]: self.connect(layer_from.name, layer_to) def _get_tolerance(self): return
marks. Later in 2004, TIPO introduced a whole new administrative mechanism for “the registration of geographical indications as certification marks.”Footnote 60 It incorporated the TRIPS Agreement’s definition of GIs and established procedures to ensure the existence of a required link between the product and the place of origin.Footnote 61 It also introduced a decision-making process through which the decision to grant GI protection was a joint decision by TIPO and the relevant government authorities in charge of the products identified by the GI, such as Ministry of Agriculture and Ministry of Treasury.Footnote 62 However, this mechanism was abolished in 2007 when TIPO introduced the registration of “geographical certification marks” and “geographical collective trademarks.” At the heart of this new mechanism was the requirement of distinctiveness of the geographical term – a link between the product and place was no longer required. The TIPO was now the sole authority for granting GI protection. Further change came in mid-2011 with the enactment of the Trademark Act 2012 (TMA 2012),Footnote 63 which codified the terms “geographical certification mark” and “geographical collective mark.” In doing so, the TRIPS definition of GIs was formally incorporated into the definition of “geographical certification marks” and “geographical collective marks.” Surprisingly, the requirement of distinctiveness was abolished and a joint decision-making process was reintroduced – not to qualify the product but to qualify the applicant. The Trademark Act 2003, the first official response to GI obligation, simply added “place of origin” to the categories certifiable by certification marks. Under Taiwanese trademark law, a “certification mark is used to certify the characteristics, quality, precision, place of origin or other matters of another person’s goods or services shall apply for certification mark registration.”Footnote 64 This means that, unlike general trademarks, a certification mark is not used to indicate a single business source. Instead, it is “used by multiple people who comply with the labelling requirements in connection with their respective goods or services.”Footnote 65 Only “a juristic person, an organization or a government agency capable of certifying another person’s goods or services” is eligible to apply for a certification mark.Footnote 66 However, the owner of a certification mark is not allowed to use the mark. 
Rather, he is obliged to “control the use of the mark, supervise the authorized users’ use, and ensure that the certified goods or services meet the articles governing use.”Footnote 67 Also, the owner of a certification mark must allow any person who complies with the requirements to apply to use the certification mark.Footnote 68 The year 2003 saw the registration of what the TIPO claims to be the first geographical certification mark:“池上米” (Chinese characters for “Chi-Shang rice”).Footnote 69 This certification mark was registered by the Chi-Shang Township Office of Taitung County to certify rice originating from the Chi-Shang Township of Taitung County, and that its quality met the “Criteria Governing Chi-Shang Rice quality rice logo” that was established by the owner of the mark.Footnote 70 The main effect of registration of a geographical certification mark is that, after such registration, any application to register the same “geographical name” as a trademark would be rejected pursuant to Article 23–1(11) of the Trademark Act 2003 because the latter application might mislead the public with respect to the quality, nature, or place of origin of the goods that the second mark would identify, if registered. In other words, after “池上米” is registered, another person’s application to register the same geographical name as part of a trademark, which is likely to mislead the public with respect to the place of origin, shall be rejected. However, any registered trademark acquired prior to the registration of the corresponding geographical certification mark is not affected. Furthermore, the owner of the geographical certification mark would not have the right to prohibit the owner of the trademark from using that geographical name in good faith and in a reasonable manner.Footnote 71 However, it is noteworthy that TIPO’s narrative does not entirely align with reality. Certification marks were first included under TMA 1993. Under the TMA 1993, certification marks are used to certify characteristics, quality, precision, or other matters of goods or services.Footnote 72 It has been pointed out that this provision is broad enough to cover even the “place of origin.”Footnote 73 Moreover, a survey of TIPO’s trademark register also confirms that there were certification marks registered before the TMA 2003 came into force on November 28, 2003, which may certify the place of origin of products. Some examples include the following: the mark “CALIFORNIA” with a device to certify that the cling peach products it identifies originated from California, US, and that they comply with the quality standards set by the proprietor of the mark (the certifier);Footnote 74 the mark “QUALITY USA” with a device that certifies “the certified peanut products are absolutely originated in the USA and comply the relevant US Federal standards and regulations”;Footnote 75 the mark “IQF EDAMAME OF TAIWAN” with a map of Taiwan to certify that their edamames originate from Taiwan and that their quality and sanitation methods comply with the standards set by the certifier;Footnote 76 and the mark “JAMAICA BLUE MOUNTAIN” that certifies that the coffee beans identified by the mark originate from the Jamaican Blue Mountain area and that their storage, processing, and packaging comply with the requirements of the certifier.Footnote 77 Thus, it is argued that listing the words “place of origin” in the TMA 2003 does not create a new legal right. 
It is only a declaratory gesture used to express Taiwan’s determination to implement its TRIPS obligations.Footnote 78 ###### 3.2Main Points for the Registration of Geographical Indications of 2004 In September 2004, TIPO adopted the “Main Points for the Registration of Geographical Indications as Certification Marks” (GI Registration Points 2004).Footnote 79 The GI Registration Points 2004 established a whole new administrative mechanism for the registration of GIs as certification marks and has three main features. First, the GI Registration Point 2004 incorporated the TRIPS Agreement’s definition of GIs.Footnote 80 TIPO further refined this definition into three elements: (i) the indication must be a geographical name, a picture, or word related to that geographical term which identifies the nexus between a particular good and that geographical area; (ii) the geographical area in question may encompass a WTO Member’s entire territory, or a single administrative unit, a combination of several administrative units, or a specific area where the raw materials grow or processing takes place; and (iii) there must be a nexus between a given quality, reputation, or other characteristic of the good and that geographical area.Footnote 81 Second, it established procedures to verify the existence of a link between the product and the place of origin. TIPO set out three alternative criteria to determine the existence of the product–place nexus. First, all stages of production (growth of raw materials, processing, and packaging) must take place within the designated area. Second, the main raw materials (tea leaves, for example) must originate from the designated area and only a small portion of raw materials may be supplied from other areas; or, third, the production stage which gives the product its distinctive feature must take place within the designated area.Footnote 82 The applicant must also submit a product specification with the following information: (i) definition of the geographical area; (ii) raw materials and their place of origin; (iii) description of the raw materials, including physical, chemical, microbiological, sensual characters and evidence of such characters; (iv) description of methods of production, including the local conventional or unvarying methods; and (v) description and evidence of the specific facts or factors in relation to the geographical environment, such as the soil, climate, wind, water quality, altitudes, humidity, and their connection to the product.Footnote 83 Third, it also introduced a
# Automatic threshold determination for anomaly detection I am working with a time series of anomaly scores (the background is anomaly detection in computer networks). Every minute, I get an anomaly score $x_t \in [0, 5]$ which tells me how "unexpected" or abnormal the current state of the network is. The higher the score, the more abnormal the current state. Scores close to 5 are theoretically possible but occur almost never. Now I want to come up with an algorithm or a formula which automatically determines a threshold for this anomaly time series. As soon as an anomaly score exceeds this threshold, an alarm is triggered. The frequency distribution below is an example for an anomaly time series over 1 day. However, it is not safe to assume that every anomaly time series is going to look like that. In this special example, an anomaly threshold such as the .99-quantile would make sense since the few scores on the very right can be regarded as anomalies. And the same frequency distribution as time series (it only ranges from 0 to 1 since there are no higher anomaly scores in the time series): Unfortunately, the frequency distribution might have shapes, where the .99-quantile is not useful. An example is below. The right tail is very low, so if the .99-quantile is used as threshold, this might result in many false positives. This frequency distribution does not seem to contain anomalies so the threshold should lie outside the distribution at around 0.25. Summing up, the difference between these two examples is that the first one seems to exhibit anomalies whereas the second one does not. From my naive point of view, the algorithm should consider these two cases: • If the frequency distribution has a large right tail (i.e. a couple abnormal scores), then the .99-quantile can be a good threshold. • If the frequency distribution has a very short right tail (i.e. no abnormal scores), then the threshold should lie outside the distribution. /edit: There is also no ground truth, i.e. labeled data sets available. So the algorithm is "blind" against the nature of the anomaly scores. Now I am not sure how these observations can be expressed in terms of an algorithm or a formula. Does anyone have a suggestion how this problem could be solved? I hope that my explanations are sufficient since my statistical background is very limited. • Just a note, the first graph does not look like anything normal for me. – mpiktas May 4 '11 at 8:06 • @cryptron, the key question is what is a sound threshold. For example if each raised alarm and non-raised alarm incurs certain costs, the threshold can be chosen such that minimises total costs. For that we need cost data. Without the exact definition of sound it is impossible to measure how to evaluate the method chosen for picking the threshold. – mpiktas May 4 '11 at 8:16 • @mpiktas: I have to admit, the word "sound" was unfortunate in this context because I have no way of rigorously evaluating the threshold (hence, I edited it away). Basically, the threshold is supposed to minimize false positives because they are way more costly than false negatives in network anomaly detection. – cryptron May 4 '11 at 8:57 • @cryptron, do you have any data on what is a false positive? – mpiktas May 4 '11 at 9:05 • I'm confused by your plots. This is a univariate time series $\{x_t\}$ taking values in $0<x_t \leq 5$? Or should it be $0 <x_t \leq 0.5$? (from looking at the x axis in your first plot). A traceplot would be more helpful too. 
For example, do you get high scores for a sustained period of time or in short bursts (or both)? If both, is one more troubling than the other? If you can put down a reasonable model for the data you can use theoretical quantiles from the fitted distribution, which would solve the problem you've identified with the sample quantiles. – JMS May 4 '11 at 17:49

You might find this paper of interest. See also the more detailed presentation of similar models in West & Harrison. There are other examples of this sort of monitoring as well, many of which are more recent, but this isn't exactly my wheelhouse :). Undoubtedly there are suitable implementations of these models, but I don't know what they might be offhand... The basic idea is that you have a switching model where some observations/sequences of observations are attributed to abnormal network states while the rest are considered normal. A mixture like this could account for the long right tail in your first plot. A dynamic model could also alert you to abnormal jumps like those at 8:00 and 4:00 in real time by assigning high probability to new observations belonging to a problem state. It could also be easily extended to include things like predictors, periodic components (perhaps your score rises/falls a bit with activity) and that sort of thing.

Edit: I should also add that this kind of model is "unsupervised" in the sense that anomalies are caught either by showing a large mean shift or an increase in variance. As you gather data you can improve the model with more informative prior distributions. But perhaps once you have enough data (and hard-won training examples from dealing with network problems!) you could devise some simple monitoring rules (thresholds, etc.).

Do you have any 'labeled' examples of what constitutes an anomaly? i.e. values associated with a network failure, or something like that? One idea you might consider applying is a ROC curve, which is useful for picking thresholds that meet specific criteria, like maximizing true positives or minimizing false negatives. Of course, to use a ROC curve, you need to label your data in some way.

• Unfortunately, I have no labeled data sets. There is only the assumption that long tails or outliers indicate anomalies in the data set. – cryptron May 3 '11 at 19:15
• @cryptron I see. So what you need is a way to dynamically identify outliers. – Zach May 3 '11 at 20:15
• That would solve a part of the problem, yes. – cryptron May 3 '11 at 21:39

The graph of the "original series" does not have to exhibit any pre-defined structure. What is critical is that the graph of the "residuals from a suitable model series" needs to exhibit a Gaussian structure. This "Gaussian structure" can usually be obtained by incorporating one or more of the following "transformations":

1. an ARIMA model
2. adjustments for Local Level Shifts or Local Time Trends or Seasonal Pulses or Ordinary Pulses
3. a weighted analysis exploiting proven variance heterogeneity
4. a possible power transformation (logs etc.) to deal with a specific variance heterogeneity
5. the detection of points in time where the model/parameters may have changed.

Intervention detection will yield a statement about the statistical significance of the most recent event, suggesting either normalcy or an anomaly.

In the OP's response to my prior answer he has posted his data to the web: 60 readings per hour for 24 hours for 6 days. Since this is a time series, cross-sectional tools like DBSCAN have limited relevance as the data has temporal dependence.
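A minimal sketch of this residual-based recipe (a rolling median here stands in for a properly identified ARIMA/intervention model, and the one-hour window and 5·MAD band are arbitrary choices):

```python
import numpy as np
import pandas as pd

def flag_anomalies(scores, window=60, k=5.0):
    """Flag points whose residual from a simple fit exceeds a robust k*MAD band."""
    s = pd.Series(scores, dtype=float)
    fitted = s.rolling(window, center=True, min_periods=1).median()  # crude stand-in "model"
    resid = s - fitted
    mad = max(float(np.median(np.abs(resid - resid.median()))), 1e-9)
    threshold = fitted + k * 1.4826 * mad   # 1.4826 scales MAD to ~1 sigma for Gaussian residuals
    return s > threshold, threshold

# usage: one anomaly score per minute, e.g. a full day of data
# alarms, thr = flag_anomalies(day_of_scores)
```

Isolated exceedances correspond to pulses; a run of consecutive exceedances is more suggestive of a level shift.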
With data like this one normally looks for intra-hour and intra-day structure. In addition to these kinds of structure one can pursue the detection of anomalies, which can be either one-time only (pulse) or systematic in nature (level shift), using methods that are well documented (see the literature of Tsay, Tiao, Chen et al.). These procedures yielded the following "anomalies" [table of detected anomalies by HOUR/MINUTE and TIME]. Note that a level shift is essentially suggestive of separate "clusters".

After a friend of mine pointed me in the direction of clustering algorithms, I stumbled across DBSCAN, which builds clusters in n-dimensional space according to two predefined parameters. The basic idea is density-based clustering,
which gives important information about the sample[3]. It is desirable to use LDOS to express the current because this value does not change as the volume changes, while probability density does[3]. Thus the tunneling current is given by

$I \propto V \rho_s(0, E_f)\, e^{-2 \kappa W},$

where $\rho_s(0, E_f)$ is the LDOS near the Fermi level of the sample at the sample surface[3]. By using equation (6), this current can also be expressed in terms of the LDOS near the Fermi level of the sample at the tip surface,

$I \propto V \rho_s(W, E_f).$

The exponential term in (9) is very significant in that small variations in W greatly influence the tunnel current. If the separation is decreased by 1 Å, the current increases by an order of magnitude, and vice versa[4].

This approach fails to account for the rate at which electrons can pass the barrier. This rate should affect the tunnel current, so it can be accounted for by using Fermi’s Golden Rule with the appropriate tunneling matrix element. John Bardeen solved this problem in his study of the metal-insulator-metal (MIM) junction[5]. He found that if he solved Schrödinger’s equation for each side of the junction separately to obtain the wave functions ψ and χ for each electrode, he could obtain the tunnel matrix, M, from the overlap of these two wave functions[3]. This can be applied to STM by making the electrodes the tip and sample, assigning ψ and χ as the sample and tip wave functions, respectively, and evaluating M at some surface S between the metal electrodes at $z=z_0$, where z=0 at the sample surface and z=W at the tip surface[3].

Now, Fermi’s Golden Rule gives the rate for electron transfer across the barrier, and is written

$w = \frac{2\pi}{\hbar} |M|^2 \delta(E_\psi - E_\chi),$

where $\delta(E_\psi - E_\chi)$ restricts tunneling to occur only between electron levels with the same energy[3]. The tunnel matrix element, given by

$M = \frac{\hbar^2}{2m} \int_{z=z_0} \left( \chi^* \frac{\partial \psi}{\partial z} - \psi \frac{\partial \chi^*}{\partial z} \right) dS,$

is a description of the lower energy associated with the interaction of wave functions at the overlap, also called the resonance energy[3]. Summing over all the states gives the tunneling current as

$I = \frac{4\pi e}{\hbar} \int_{-\infty}^{+\infty} \left[ f(E_f - eV + \epsilon) - f(E_f + \epsilon) \right] \rho_s(E_f - eV + \epsilon)\, \rho_T(E_f + \epsilon)\, |M|^2 \, d\epsilon,$

where f is the Fermi function and $\rho_s$ and $\rho_T$ are the density of states in the sample and tip, respectively[3]. The Fermi distribution function describes the filling of electron levels at a given temperature T.

## Procedure

First the tip is brought into close proximity of the sample by some coarse sample-to-tip control. The values for a common sample-to-tip distance, W, range from about 4–7 Å, which is the equilibrium position between attractive (3 Å < W < 10 Å) and repulsive (W < 3 Å) interactions[3]. Once tunneling is established, piezoelectric transducers are used to move the tip in three directions. As the tip is rastered across the sample in the x-y plane, the density of states and therefore the tunnel current changes.
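To get a feel for the numbers behind this exponential sensitivity, here is a rough sketch (our own illustration, taking the standard square-barrier decay constant κ = √(2mφ)/ħ and assuming a typical work function φ ≈ 4.5 eV):

```python
import numpy as np

# Decay constant for a barrier of height phi: kappa = sqrt(2 * m_e * phi) / hbar
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J

phi = 4.5 * eV                            # assumed work function, typical for a metal
kappa = np.sqrt(2.0 * m_e * phi) / hbar   # in 1/m
kappa_per_angstrom = kappa * 1e-10        # ~1.09 per angstrom

# I ~ exp(-2*kappa*W): current ratio when the gap W shrinks by 1 angstrom
ratio = np.exp(2.0 * kappa_per_angstrom * 1.0)
print(f"kappa ~ {kappa_per_angstrom:.2f} / angstrom, current grows ~{ratio:.0f}x per angstrom")
```

This reproduces the roughly order-of-magnitude change in current per ångström quoted above, and it is this sensitivity that the feedback described next exploits.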
This change in current with respect to position can be measured itself, or the height, z, of the tip corresponding to a constant current can be measured[3]. These two modes are called constant height mode and constant current mode, respectively.

In constant current mode, feedback electronics adjust the height by applying a voltage to the piezoelectric height-control mechanism[6]. This leads to a height variation, so the image comes from the tip topography across the sample and gives a constant-charge-density surface; this means contrast in the image is due to variations in charge density[4]. In constant height mode, the voltage and height are both held constant while the current changes; this leads to an image made of current changes over the surface, which can be related to charge density[4]. The benefit of using constant height mode is that it is faster, as the piezoelectric movements require more time to register the change in constant current mode than the current measurement does in constant height mode[4].

In addition to scanning across the sample, information on the electronic structure of the sample can be obtained by sweeping the voltage and measuring the current at a specific location[2]. This type of measurement is called scanning tunneling spectroscopy (STS). Frame rates of at least 1 Hz enable so-called video-STM (up to 50 Hz is possible), which can be used to study surface diffusion.

## Instrumentation

Schematic view of an STM. The components of an STM include the scanning tip, a piezoelectric-controlled height and x,y scanner, coarse sample-to-tip control, a vibration isolation system, and a computer[6]. The resolution of an image is limited by the radius of curvature of the scanning tip of the STM. Additionally, image artifacts can occur if the tip has two apexes at the end rather than a single atom; this leads to “double-tip imaging,” a situation in which both apexes contribute to the tunneling[2]. Therefore it has been essential to develop processes for consistently obtaining sharp, usable tips; recently, carbon nanotubes have been used for this purpose. A close-up of a simple scanning tunneling microscope head at the University of St Andrews scanning MoS2 using a platinum-iridium stylus. The tip is often made of tungsten or platinum-iridium, though gold is also used[2].
Tungsten tips are usually made by electrochemical etching, and platinum-iridium tips by mechanical shearing[2]. Both processes are outlined in C. Bai’s book, reference [2] below. Due to the extreme sensitivity of tunnel current to height, proper vibration isolation is imperative for obtaining usable results. In the first STM by Binnig and Rohrer, magnetic levitation was used to keep the STM free
a novel and capable approach for learning the semantics of video and tracking in particular. It does this by employing the cycle-consistency of sequential patches in a video. We used the model distributed by the authors, which was trained on VLOG \cite{Fouhey18}. The strength of this model depends on the size of the image crop, however the representations can be very large. For half of our models - those with a nonlinear conversion (see Sec.~\ref{sec:tal-procedure}) - we use a crop of $(256, 256)$. This produces a representation of size $933888$, which was too large for the other half of our models that do not convert the representation. In these cases, we reduced the crop to $(128, 128)$ with an associated representation of size $237568$. For the CIFAR experiments, we further reduced this to the size of the image, $(32, 32)$, which produced an $8192$ sized representation. Video Correspondence Flow \cite{Lai19} (\textbf{CorrFlow}) was chosen because it is a self-supervised model for video that attempts to solve a similar problem to TimeCycle. It achieves stronger numbers by employing a common technique in the literature - colorization \cite{DBLP:journals/corr/abs-1806-09594}. Comparing this and TimeCycle on an unrelated task would potentially shed more light on the differences between their representations. We used the model distributed by the authors, which was trained on Kinetics \cite{DBLP:journals/corr/KayCSZHVVGBNSZ17} and has representation size $225280$ for our localization tasks and $4096$ for CIFAR. \subsection{The Datasets} We use Thumos14 and Gymnastics to compare the models on the temporal localization task. For the more common classification task, we use CIFAR-10 and CIFAR-100. {\bf CIFAR} \cite{Krizhevsky09learningmultiple} is split into CIFAR-10 and CIFAR-100. The former has 6000 examples of each of 10 classes and the latter has 600 examples of each of 100 classes, where each example is a $(32, 32)$ image. {\bf Thumos14} is the most popular dataset for temporal activity localization. While Thumos14 is quite large, a smaller subset of temporally annotated and untrimmed videos are commonly used for this task. In total, it has 200 training videos (taken from the validation set) and 213 test videos (taken from the test set) covering twenty action classes. \subsubsection{Gymnastics} Thumos14 has historically been considered the most appropriate dataset for temporal activity localization. Compared to previous datasets such as ActivityNet \cite{conf/cvpr/HeilbronEGN15}, Thumos14 has more action instances per video and each video contains a larger portion of background activity. However, it itself is rather small with only 11.6 hours of training data spanning twenty classes (detailed in Table \ref{table:thumos-tiou5}). Further, the videos are taken from YouTube and sourced in a largely automated fashion. While there are extensive steps taken to ensure that the videos are on topic, they are still guided by engineering intuition rather than intentional inclusion and thus allow for errors. The clean-up and validation steps are then performed by laymen and thus also do not satisfy the ideal levels of data cleanliness. This is further exacerbated in that it is also laymen who provide the temporal annotations. These can be quite dirty, especially on the boundaries where actions begin and end, which are the most difficult part of the task at hand. In addition, the videos often include moving cameras and hard cuts among scenes. 
While data cleanliness is a high priority in any machine learning pipeline, this would not necessarily be the most pressing problem if the dataset was very large. However, while Thumos14 is a large dataset, the subset that is useful for temporal localization is not. In comparison, the Gymnastics dataset started from a very different hypothesis than Thumos14 or ActivityNet. We aimed to solve real-world problems around human motion, starting with Gymnastics. This includes temporal localization but also other sought-after tasks such as error analysis and deduction awareness. Consequently, the dataset includes both full training sessions and competition performances. It is realistic in that it was filmed by gymnasts or coaches for their own library with cameras fixed and positioned to capture the entire session. The results were then expertly annotated and otherwise left untrimmed. Currently, it contains 412 videos of men's gymnastics split into train (322), validation (44), and test (46), with train having 36.8 hours of data on five apparatuses: floor exercise, pommel horse, still rings, parallel bars, and horizontal bar\footnote{We did not include vault in this version because most of the data we have of that apparatus was non-stationary.}. Within that time, there are 14.1 hours of action split amongst 1940 threads. Each thread is annotated with the following information: the athlete ID, the start and end time within the larger video, the region of interest in the view, and the skills performed within that thread. An example thread would be an athlete performing a routine on horizontal bar. In total, there are 17618 annotated skills over 317 named classes for an average of 55 skills per class, each of which are also annotated with start and end within the thread. Examples of skills include Giants, Forward uprises, and Haradas. The mean duration of a skill is 2.27 seconds. Gymnastics also contains videos with concurrent threads, which consequently results in multi-class prediction tasks. This is a common feature of real videos of motion and which has historically been missing from academic datasets. ActivityNet has a very small number of these; Thumos14's annotations suggest that it has this feature but those are actually just mislabeled annotations of cut scenes. Please see Table \ref{table:dataset-comparison} for a comparison of Gymnastics with both Thumos14 and ActivityNet. \begin{table} \begin{center} { \small \begin{tabular}{|c|c|c|c|c|c|} \hline & Thumos14 & ActivityNet & Gymnastics \\ \hline Total Videos & 200 & 9649 & 322 \\ Duration & 209.0 & 117.5 & 412.1 \\ Instances & 15.4 & 1.5 & 11.1 \\ Background & 64.0\% & 33.4\% & 27.3\% \\ Concurrent Labels & 0 & 0.018 & .957 \\ Source & YouTube & YouTube & Original Footage \\ \hline \end{tabular} } \end{center} \caption{Thumos14, ActivityNet, and Gymnastics training statistics. Besides total videos, all other columns are per-video. Duration is in seconds. For concurrent instances, the annotations in Thumos14 suggests that these exist, however the videos do not show them as such. ActivityNet lists its sources as `Online video sharing sites'. We interpret this to mean YouTube given the videos in the dataset.} \label{table:dataset-comparison} \end{table} \section{Image Classification} We show classification performance on CIFAR-10 and CIFAR-100 for three reasons. The first is that neither CorrFlow nor TimeCycle have public results on this task. 
It would be illuminating to ask how they compare to the rest of the literature given that they have only been trained on a self-supervised task over video. The second is that it strengthens our other results in that it verifies that the chosen representations for AMDIM and ResNet are satisfactory for approximately matching the known performance in the literature. The third reason is that we found a reproducible discrepancy for TimeCycle that suggests that linearly separating images is an insufficient test for adjudicating the success of self-supervised models. \begin{table}[ht] \begin{center} { \small \begin{tabular}{|c|c|c|c|c|c|} \hline Pretrained? & Model & CIFAR-10 & CIFAR-100 \\ \hline \multirow{4}*{Yes} & AMDIM & 87.7\% & 66.7\% \\ & CorrFlow & 59.7\% & 31.4\% \\ & ResNet & 90.5\% & 72.9\%\\ & TimeCycle & \textbf{67.2\%} & \textbf{38.9\%} \\ \hline \multirow{4}*{No} & AMDIM & 55.4\% & 32.6\% \\ & CorrFlow & 52.0\% & 25.2\% \\ & ResNet & 37.6\% & 14.2\% \\ & TimeCycle & \textbf{81.3\%} & \textbf{61.8\%} \\ \hline \end{tabular} } \end{center} \caption{Classification performance on CIFAR-10 and CIFAR-100. Note the large boost in performance between pretrained and random TimeCycle. We do not use TSN because there is no notion of optical flow in this dataset and so consequently the results would be hard to meaningfully interpret.} \label{table:cifar-classification} \end{table} \subsection{Procedure} We train our models in two ways. The first way is by loading a pre-trained checkpoint (as specified in Sec.~\ref{sec:models}), freezing that model, and then training a linear classifier on top of the representations that model outputs. The second is to do the same but randomly initialize the network instead of loading the checkpoint. Table \ref{table:cifar-classification} delineates these two as,
\{k\} = B_j} b_j$. Since a core vector also has to fulfill $p(N) = v(N)$, we get \begin{align*} \sum_{k\in N} \left( \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j \right) &\leq \sum_{k\in N} p_k \\ & = p(N)\\ & = v(N)\\ & = \sum_{k\in N} \left( \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j \right). \end{align*} Thus, the vector~$p$ with $p_k = \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j$ for all $k\in N$ is the only core vector. \end{proof} A famous core payment is the \emph{Shapley value} (cf. \cite{shapley1953value}). Formally, for a cooperative game~$(N,v)$ the Shapley value of a player~$k$ is defined as \begin{align*} \phi_v(k) \coloneqq \sum_{S\subseteq N\backslash\{k\}} \frac{|S|!(n - |S| - 1)!}{n!}(v(S\cup\{k\}) - v(S)). \end{align*} This can be interpreted as the marginal profit that player~$k$ adds to an existing coalition~$S$ when joining it, averaged over all possible orders in which the grand coalition can form. The next theorem states that the Shapley value can be computed efficiently for RPS games. \begin{theorem}\label{thm: characterization of RPS games} Given an RPS game, the Shapley value~$\phi_v$ for a player~$k$ is given by \begin{align} \phi_v(k) = \sum_{i:\, k\in A_i} \frac{a_i}{|A_i|} - \sum_{j:\, k\in B_j} \frac{b_j}{|B_j|} \label{eq: shapley value} \end{align} for $k=1,\dots, n$ and, thus, can be computed efficiently. \end{theorem} \begin{proof} For each reward set~$A_i\in\mathcal{A}$, a player~$k\in A_i$ adds the value $a_i$ to the coalition value if and only if all other players of~$A_i$ are already contained in the coalition, i.e. in $\frac{1}{|A_i|}$ of all cases. Hence, on average each player in~$A_i$ contributes $\frac{a_i}{|A_i|}$ to the value of the coalition. On the other hand, for each penalty set~$B_j\in\mathcal{B}$, a player~$k\in B_j$ incurs the cost~$b_j$ if and only if she enters the coalition first among the members of~$B_j$, so on average each player in~$B_j$ contributes a cost of~$\frac{b_j}{|B_j|}$. By summing over all reward and penalty sets containing~$k$ we obtain~\eqref{eq: shapley value}. To compute the Shapley values, the contribution of each player is initialized with zero; we then iterate over all reward and penalty sets and update the contributions of their members according to~\eqref{eq: shapley value}. \end{proof} We now investigate the relation of RPS games to the set of convex games. \begin{lemma}\label{lem: modeling with RPS games} The following statements are true: \begin{enumerate}[(i)] \item Every convex cooperative game with three players can be modeled as an RPS game. \item The set of all convex cooperative games is a strict superset of the set of RPS games, meaning that there exist convex cooperative games with four players that cannot be modeled as an RPS game. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)] \item Let $(N,v)$ be a convex cooperative game with player set~$N=\{1,2,3\}$. W.l.o.g. we can assume that the value of a singleton is always zero since we can add singleton reward and penalty sets while keeping convexity. Let $d\coloneqq\min\{v(\{1,2\}), v(\{1,3\}), v(\{2,3\})\}$ be the minimal value of a coalition consisting of two players. By convexity, $d$ is non-negative. If $d$ is greater than $0$, we add a penalty set $B = \{1,2,3\}$ with penalty $b = d$.
In order to obtain the amount given by the characteristic function~$v$, we add reward sets \begin{align*} A_1 = \{1\} &\text{ with } a_1 = d \\ A_2 = \{2\} &\text{ with } a_2 = d \\ A_3 = \{3\} &\text{ with } a_3 = d \\ A_4 = \{1,2\} &\text{ with } a_4 = v(\{1,2\}) - d \\ A_5 = \{2,3\} &\text{ with } a_5 = v(\{2,3\}) - d \\ A_6 = \{1,3\} &\text{ with } a_6 = v(\{1,3\}) - d \end{align*} Note that all rewards are non-negative by the definition of~$d$. Finally, we add $A_7 = \{1,2,3\} \text{ with } a_7 = v(\{1,2,3\}) - \sum_{i=1}^{6} a_i + b$, where the penalty~$b$ is added back because the grand coalition incurs it exactly once. With this, an arbitrary convex cooperative game with three players can be modeled as an RPS game. \item Let $(N,v)$ be the convex cooperative game with player set~$N=\{1,2,3,4\}$ and characteristic function \begin{align*} v(S) = \begin{cases} 0, \qquad \quad \text{ if } |S|\leq 2, \\ 1, \qquad \quad \text{ if } |S| = 3, \\ 2, \qquad \quad \text{ if } S = N. \\ \end{cases} \end{align*} Suppose $(N,v)$ can be modeled by an RPS game. Let $i,j\in N$ be two different players. It holds that $v(\{i,j\}) = 0 = v(i) + v(j)$. Since $v(\{i,j\}) \geq v(i) + v(j)$ is always true due to convexity, there cannot be a penalty set containing both $i$ and $j$, because otherwise the penalty would be incurred once by the coalition~$\{i,j\}$ but once by each of the two singletons~$\{i\}$ and $\{j\}$. Adding further sets only increases this gap, which does not help in order to obtain $v(\{i,j\}) = v(i) + v(j)$. Hence, no penalty set of size two or larger can be used to model the game. As every two-player coalition receives value zero, reward sets of size exactly two cannot exist in said game either. In order to generate a profit of one for all three-player coalitions, we have to add a reward set of value one for each of those coalitions. But then, since there are four such coalitions, the RPS game must grant a profit of at least four to the grand coalition --- a contradiction. \end{enumerate} \end{proof} \section{Characterization of Core Elements}\label{sec: core characterization} In this section we give a characterization of core elements of an instance of an RPS game. In order to do this, we define a profit sharing graph and prove that any feasible flow in this graph of a certain flow value induces a core vector and vice versa. As a byproduct of this characterization we obtain an alternative proof of the polynomial-time computability of a core element stated in Corollary~\ref{cor:poly}. We assume that the reader is familiar with the basics of network flows~\cite{ahuja1988network}. Our approach of characterizing core elements as feasible flows is based on results of Ackermann et al.~(cf. \cite{ackermann2014modeling}). First, we define the \emph{profit sharing graph} for RPS games. \begin{definition}[Profit Sharing Graph for RPS Games] Let $(N,v)$ be an RPS game with player set~$N=\{1,2,\dots,n\}$, a collection of reward sets~$\mathcal{A} = \{A_1,\dots,A_k\}$ and a collection of penalty sets~$\mathcal{B} = \{B_1,\dots,B_l\}$. The \emph{profit sharing graph} for $(N,v)$ is given by the directed graph~$G=(V,E)$ with nodes \begin{align*} V\coloneqq \{s,t,\overline{s},\overline{t}\}\cup N \cup \mathcal{A} \cup \mathcal{B}, \end{align*} and edges \begin{alignat*}{2} E \coloneqq& \{(s,A): A\in\mathcal{A}\} \cup \{(B,t): B\in\mathcal{B}\} \cup \{(s,\overline{s}), (\overline{t}, t)\} \cup \\ & \{(\overline{s},n): n\in N\} \cup \{(n,\overline{t}): n\in N\} \, \cup \\ & \{(A,i): i\in A, A\in\mathcal{A}\} \cup \{(i,B): i\in B, B\in\mathcal{B}\} \cup \{(\overline{s}, \overline{t})\}.
\end{alignat*} We set the edge capacities to be given by the function~$c\colon E\to \Z$ defined by \begin{align*} c(e) \coloneqq \begin{cases} a_i,& \text{for $e=(s,A_i)$ and $A_i\in\mathcal{A}$}\\ b_j, & \text{for $e=(B_j,t)$ and $B_j\in\mathcal{B}$}\\ \sum_{j=1}^l b_j,& \text{for $e=(s,\overline{s})$} \\ \sum_{i=1}^k a_i, & \text{for $e=(\overline{t},t)$} \\ \infty,& \text{otherwise.} \end{cases} \end{align*} \end{definition} \begin{figure} \centering \begin{tikzpicture}[every node/.style={fill=white,rectangle}, every edge/.style={draw=black,very thick}] \begin{scope}[every node/.style={circle,thick,draw}] \node (s) at (0,3) {$s$}; \node (A1) at (2.5,5) {$A_1$}; \node (A2) at (2.5,3) {$A_2$}; \node (A3) at (2.5,1) {$A_3$}; \node (B1) at (7.5,5) {$B_1$}; \node (B2) at (7.5,1) {$B_2$}; \node (t) at (10, 3) {$t$}; \end{scope} \begin{scope} [every node/.style={circle, draw}] \node (n1) at (5,5) {$1$}; \node (n2) at (5,4) {$2$}; \node (n3) at (5,3) {$3$}; \node (n4) at (5,2) {$4$}; \node (n5) at (5,1) {$5$}; \node (s1) at (3.75, -0.5) {$\overline{s}$}; \node (t1) at (6.25, -0.5) {$\overline{t}$}; \end{scope} \begin{scope}[ every node/.style={fill=white,rectangle,sloped}, every edge/.style={draw=black,thick}] \path [->] (s) edge node {$[0,a_1]$} (A1); \path [->] (s) edge node{$[0,a_2]$} (A2); \path [->] (s) edge node{$[0,a_3]$} (A3); \path [->] (A1) edge (n1); \path [->] (A2) edge (n1); \path [->] (A2) edge (n3); \path [->] (A3) edge (n2); \path [->] (A1) edge (n3); \path [->] (A2) edge (n5); \path [->] (A2) edge (n4); \path [->] (A3) edge (n4); \path [->] (n2) edge (B1); \path [->] (n2) edge (B2); \path [->] (n3) edge (B2); \path [->] (n1) edge (B1); \path [->] (n2) edge (B2); \path [->] (n3) edge (B2); \path [->] (n4) edge (B1); \path [->] (n5) edge (B1); \path [->] (n3) edge (B2); \path [->] (B1) edge node{$[0,b_1]$} (t); \path [->] (B2) edge node{$[0,b_2]$} (t); \path [->] (t1) edge[bend left=-40] node{$[0,\sum a_i]$} (t); \path [->] (s) edge[bend left=-40] node{$[0,\sum b_j]$} (s1); \path [->] (s1) edge (t1); \path [->] (n1) edge[out=0, in=90] (t1); \path [->] (n2) edge[out=0, in=90] (t1); \path [->] (n3) edge[out=0, in=90] (t1); \path [->] (n4) edge[out=0, in=90] (t1); \path [->] (n5) edge[out=0, in=90] (t1); \path [<-] (n1) edge[out=180, in=90] (s1); \path [<-] (n2) edge[out=180, in=90] (s1); \path [<-] (n3) edge[out=180, in=90] (s1); \path [<-] (n4) edge[out=180, in=90] (s1); \path [<-] (n5) edge[out=180, in=90] (s1); \end{scope} \end{tikzpicture} \caption{Example of a \emph{profit sharing
much smaller. ## Can we use a large number of genes? {#lnum} Yes. In fact, in OncoSimulR there is no pre-set limit on genome size. However, large numbers of genes can lead to unacceptably large returned object sizes and/or running time. We discuss several examples next that illustrate some of the major issues to consider. Another example with 50,000 genes is shown in section \@ref(mcf50070). We have seen in \@ref(bench1) and \@ref(common1) that for the Exp model, benchmark results using detectionProb require a lot of care and can be misleading. Here, we will fix initial population sizes (to 500) and all final population sizes will be set to$\geq 10^6$. In addition, to avoid the confounding factor of the onlyCancer = TRUE argument, we will set it to FALSE, so we measure directly the time of individual runs. ### Exponential model with 10,000 and 50,000 genes {#exp50000} #### Exponential, 10,000 genes, example 1 {#exp100001} We will start with 10000 genes and an exponential model, where we stop when the population grows over$10^6$individuals: {r exp10000, echo = TRUE, eval = FALSE} ng <- 10000 u <- allFitnessEffects(noIntGenes = c(rep(0.1, ng/2), rep(-0.1, ng/2))) t_e_10000 <- system.time( e_10000 <- oncoSimulPop(5, u, model = "Exp", mu = 1e-7, detectionSize = 1e6, detectionDrivers = NA, detectionProb = NA, keepPhylog = TRUE, onlyCancer = FALSE, mutationPropGrowth = TRUE, mc.cores = 1)) {r exp10000-out, echo = TRUE, eval = FALSE} t_e_10000 ## user system elapsed ## 4.368 0.196 4.566 summary(e_10000)[, c(1:3, 8, 9)] ## NumClones TotalPopSize LargestClone FinalTime NumIter ## 1 5017 1180528 415116 143 7547 ## 2 3726 1052061 603612 131 5746 ## 3 4532 1100721 259510 132 6674 ## 4 4150 1283115 829728 99 6646 ## 5 4430 1139185 545958 146 6748 print(object.size(e_10000), units = "MB") ## 863.9 Mb Each simulation takes about 1 second but note that the number of clones for most simulations is already over 4000 and that the size of the returned object is close to 1 GB (a more detailed explanation of where this 1 GB comes from is deferred until section \@ref(wheresizefrom)). #### Exponential, 10,000 genes, example 2 {#exp10000_2} We can decrease the size of the returned object if we use the keepEvery = NA argument (this setting was explained in detail in section \@ref(bench1)): {r exp10000b, eval = FALSE, echo = TRUE} t_e_10000b <- system.time( e_10000b <- oncoSimulPop(5, u, model = "Exp", mu = 1e-7, detectionSize = 1e6, detectionDrivers = NA, detectionProb = NA, keepPhylog = TRUE, onlyCancer = FALSE, keepEvery = NA, mutationPropGrowth = TRUE, mc.cores = 1 )) {r exp10000b-out, echo = TRUE, eval = FALSE} t_e_10000b ## user system elapsed ## 5.484 0.100 5.585 summary(e_10000b)[, c(1:3, 8, 9)] ## NumClones TotalPopSize LargestClone FinalTime NumIter ## 1 2465 1305094 727989 91 6447 ## 2 2362 1070225 400329 204 8345 ## 3 2530 1121164 436721 135 8697 ## 4 2593 1206293 664494 125 8149 ## 5 2655 1186994 327835 191 8572 print(object.size(e_10000b), units = "MB") ## 488.3 Mb #### Exponential, 50,000 genes, example 1 {#exp500001} Let's use 50,000 genes. To keep object sizes reasonable we use keepEvery = NA. For now, we also set mutationPropGrowth = FALSE so that the mutation rate does not become really large in clones with many mutations but, of course, whether or not this is a reasonable decision depends on the problem; see also below. 
{r exp50000, echo = TRUE, eval = FALSE} ng <- 50000 u <- allFitnessEffects(noIntGenes = c(rep(0.1, ng/2), rep(-0.1, ng/2))) t_e_50000 <- system.time( e_50000 <- oncoSimulPop(5, u, model = "Exp", mu = 1e-7, detectionSize = 1e6, detectionDrivers = NA, detectionProb = NA, keepPhylog = TRUE, onlyCancer = FALSE, keepEvery = NA, mutationPropGrowth = FALSE, mc.cores = 1 )) t_e_50000 ## user system elapsed ## 44.192 1.684 45.891 summary(e_50000)[, c(1:3, 8, 9)] ## NumClones TotalPopSize LargestClone FinalTime NumIter ## 1 7367 1009949 335455 75.00 18214 ## 2 8123 1302324 488469 63.65 17379 ## 3 8408 1127261 270690 72.57 21144 ## 4 8274 1138513 318152 80.59 20994 ## 5 7520 1073131 690814 70.00 18569 print(object.size(e_50000), units = "MB") ## 7598.6 Mb Of course, simulations now take longer and the size of the returned object is over 7 GB (we are keeping more than 7,000 clones, even if when we prune all those that went extinct). #### Exponential, 50,000 genes, example 2 {#exp50000_2} What if we had not pruned? {r exp50000np, echo = TRUE, eval = FALSE} ng <- 50000 u <- allFitnessEffects(noIntGenes = c(rep(0.1, ng/2), rep(-0.1, ng/2))) t_e_50000np <- system.time( e_50000np <- oncoSimulPop(5, u, model = "Exp", mu = 1e-7, detectionSize = 1e6, detectionDrivers = NA, detectionProb = NA, keepPhylog = TRUE, onlyCancer = FALSE, keepEvery = 1, mutationPropGrowth = FALSE, mc.cores = 1 )) t_e_50000np ## user system elapsed ## 42.316 2.764 45.079 summary(e_50000np)[, c(1:3, 8, 9)] ## NumClones TotalPopSize LargestClone FinalTime NumIter ## 1 13406 1027949 410074 71.97 19469 ## 2 12469 1071325 291852 66.00 17834 ## 3 11821 1089834 245720 90.00 16711 ## 4 14008 1165168 505607 77.61 19675 ## 5 14759 1074621 205954 87.68 20597 print(object.size(e_50000np), units = "MB") ## 12748.4 Mb The main effect is not on execution time but on object size (it has grown by 5 GB). We are tracking more than 10,000 clones. #### Exponential, 50,000 genes, example 3 {#exp50000_3} What about the mutationPropGrowth setting? We will rerun the example in \@ref(exp500001) leaving keepEvery = NA but with the default mutationPropGrowth: {r exp50000mpg, echo = TRUE, eval = FALSE} ng <- 50000 u <- allFitnessEffects(noIntGenes = c(rep(0.1, ng/2), rep(-0.1, ng/2))) t_e_50000c <- system.time( e_50000c <- oncoSimulPop(5, u, model = "Exp", mu = 1e-7, detectionSize = 1e6, detectionDrivers = NA, detectionProb = NA, keepPhylog = TRUE, onlyCancer = FALSE, keepEvery = NA, mutationPropGrowth = TRUE, mc.cores = 1 )) t_e_50000c ## user system elapsed ## 84.228 2.416 86.665 summary(e_50000c)[, c(1:3, 8, 9)] ## NumClones TotalPopSize LargestClone FinalTime NumIter ## 1 11178 1241970 344479 84.74 27137 ## 2 12820 1307086 203544 91.94 33448 ## 3 10592 1126091 161057 83.81 26064 ## 4 11883 1351114 148986 65.68 25396 ## 5 10518 1101392 253523 99.79 26082 print(object.size(e_50000c), units = "MB") ## 10904.9 Mb As expected (because the mutation rate per unit time is increasing in the fastest growing clones), we have many more clones, larger objects, and longer times of execution here: we almost double the time and the size of the object increases by almost 3 GB. What about larger population sizes or larger mutation rates? The number of clones starts growing fast, which means much slower execution times and much larger returned objects (see also the examples below). #### Interlude: where is that 1 GB coming from? {#wheresizefrom} In section \@ref(exp100001) we have seen an apparently innocuous simulation producing a returned object of almost 1 GB. 
Where is that coming from? It means that each simulation produced almost 200 MB of output. Let us look at one simulation in more detail: {r sizedetail, eval = FALSE, echo = TRUE} r1 <- oncoSimulIndiv(u, model = "Exp", mu = 1e-7, detectionSize = 1e6, detectionDrivers = NA, detectionProb = NA, keepPhylog = TRUE, onlyCancer = FALSE, mutationPropGrowth = TRUE ) summary(r1)[c(1, 8)] ## NumClones FinalTime ## 1 3887 345 print(object.size(r1), units = "MB") ## 160 Mb ## Size of the two largest objects inside: sizes <- lapply(r1, function(x) object.size(x)/(1024^2)) sort(unlist(sizes), decreasing = TRUE)[1:2] ## Genotypes pops.by.time ## 148.28 10.26 dim(r1$Genotypes) ## [1] 10000 3887 The above shows the reason: the Genotypes matrix is a 10,000 by 3,887 integer matrix (with a 0 and 1 indicating not-mutated/mutated for each gene in each genotype) and in R integers use 4 bytes each. The pops.by.time matrix is 346 by 3,888 (the 1 in $346 = 345 + 1$ comes from starting at 0 and going up to the final time, both included; the 1 in $3888 = 3887
solution, but once again it was the wrong one!  I stumbled upon some example BASIC code that used assembly language subroutines (encoded as DATA lines in the BASIC program), as well as INTERRUPT routines that took advantage of the underlying DOS and BIOS services. This led me down the path of learning Intel 286 assembly language (another few months of studying), and encoding it into my BASIC programs!  This solved the issue of responsiveness, but there was still the issue of graphics, or lack thereof.  Fortunately, I found a book at the local public library about VGA graphics programming. Even more fortunately, the book contained sample source code, using a language they called “C“…. And my eyes were open! It hit me like a freight train. I was lucky that I didn’t have a seizure right there at the library.  I realized that I had been learning the wrong things all along!  (Of course learning assembly language was sort of right, but my application of it was still misguided.) Learning C and C++ from that point forward wasn’t particularly difficult, but I still feel like it would have been a lot easier if my mind hadn’t been polluted by the programming style and structure that I learned from BASIC.  It makes me wonder how things might have been different, had I accidentally picked up a book on C++ instead of a book on BASIC during my earliest exploits with computers. In all fairness, I’m sure I learned some rudimentary programming principles from BASIC, but I’m not sure that this redeems BASIC as a learning tool. There were just too many moments where, while learning C++, I thought, “So that’s the way it really works!”  And I’m sure it’s also my fault for trying to learn everything on my own, instead of seeking guidance from someone else who might have told me, “You’re doing it wrong.” All of this makes me wonder what programming language would be appropriate for teaching today’s generation of young programmers.  Based on my comically tragic experience with BASIC, my gut instinct is to advise aspiring developers to stay away from interpreted languages (such as Python), or at the very least understand that the interpreted language they’re learning is useless for developing actual software. I don’t think there’s any harm in diving right into a compiled language (such as C++), and learning how it hugs the underlying hardware in a way that no interpreted language ever could. That being said, I don’t wish any of this to reflect negatively on Dwyer and Critchfield’s BASIC and the Personal Computer.  It’s a solid book, and I still own the original copy.  There’s no denying that it was one of the first books that got me interested in programming, and for that I’m thankful.  However, sometimes I regret that I didn’t find Stroustrup’s The C++ Programming Language at the same garage sale as where I found BASIC and the Personal Computer.  Or, alternatively, perhaps Dwyer and Critchfield could have included the following disclaimer in large bold letters: This is not the way actual software is written!  But perhaps it’s time to let it go. I didn’t turn out so bad, right? DiskDigger now available for Android! I’m happy to announce that DiskDigger is now available for Android devices (phones and tablets running rooted Android 2.2 and above)! You can get the app by searching for it on the Google Play Store from your Android device.  Please note that the app only works on rooted devices. 
At the moment, the app is in an early Beta stage, meaning that it’s not meant to be as powerful or complete as the original DiskDigger for Windows, and is still in active development.  Nevertheless, it uses the same powerful carving techniques to recover .JPG and .PNG images (the only file types supported so far; more will follow) from your device’s memory card or internal memory. So, if you’ve taken a photo with your phone or tablet and then deleted it, or even reformatted your memory card, DiskDigger can recover it! I’ve written a quick guide that has more information and a brief introduction to using the app!  If you have questions, comments, or suggestions about the app, don’t hesitate to share them! Update: thanks to Lifehacker for writing a nice article! I seem to be very minimal in my strategy of organizing my digital photo collection. I have a single folder on my computer called “Pictures,” and subfolders that correspond to every year (2011, 2010, …) since the year I was born. Some of the years contain subfolders that correspond to noteworthy trips that I’ve taken. This method makes it extremely easy to back up my entire photo collection by dragging the “Pictures” folder to a different drive. It also makes it easy to reference and review the photos in rough chronological order. This is why I’ve never understood the purpose of third-party “photo management” software, since most such software inevitably reorganizes the underlying directories in its own crazy way, or builds a proprietary index of photos that takes the user away from the actual directory structure. If you’re aware of the organization of your photos on your disk, then any additional management software becomes superfluous. At any rate, there is one slight issue with this style of organizing photos: all of the various sources of photos (different cameras, scanners, cell phones, etc) give different file names to the photos! So, when all the photos are combined into a single directory, they often conflict with each other, or at the very least become a disjointed mess. For example, the file names can be in the form DSC_xxxx, IMG_xxxx, or something similar, which isn’t very meaningful. Photos taken will cell phones are a little better; they’re usually composed of the date and time the photo was taken, but the naming format is still not uniform across all cell phone manufacturers. Thus, the optimal naming scheme for photos would be based on the date/time, but in a way that is common between all sources of photos. This would organize the photos in natural chronological order. The vast majority of cameras and cell phones encode the date and time into the EXIF block of each photo. If only there was a utility that would read each photo, and rename it based on the date/time stored within it. Well, now there is: This is a very minimal utility that takes a folder full of photos and renames each one based on its date/time EXIF tag. As long as you set the time on your camera(s) correctly, this will ensure that all your photos will be named in a natural and uniform way. The tool lets you select the “pattern” of the date and time that you’d like to apply as the file name. The default pattern will give you file names similar to “20111028201345.jpg” (for a photo taken on Oct 28 2011, 20:13:45), which means that you’ll be able to sort the photos chronologically just by sorting them by name! Pi is wrong! Long live Tau! At one point or another, we’ve all had a feeling that something is not quite right in the world. 
It’s a huge relief, therefore, to discover someone else who shares your suspicion. (I’m also surprised that it’s taken me this long to stumble on this!) It has always baffled me why we define $$\pi$$ to be the ratio of the circumference of a circle to its diameter, when it should clearly be the ratio of the circumference to its radius. This would make $$\pi$$ become the constant 6.2831853…, or 2 times the current definition of $$\pi$$. Why should we do this? And what effect would this have? Well, for starters, this would remove an unnecessary factor of 2 from a vast number of equations in modern
import os import numpy as np import shutil import pickle class Visualizer: def __init__(self, idx2entity, idx2entity_type, idx2relation, save_dir, mid2name_filename=None): self.idx2entity = idx2entity self.idx2entity_type = idx2entity_type self.idx2relation = idx2relation self.save_dir = save_dir if not os.path.exists(self.save_dir): os.mkdir(self.save_dir) self.mid2name = None if mid2name_filename is not None: self.mid2name = pickle.load(open(mid2name_filename, "rb")) # this is a dictionary from query relation to another dictionary mapping from relation paths to contradictions self.rel_path2contradictions = {} def visualize_paths(self, inputs, labels, type_weights, path_weights, rel, split, epoch, filter_negative_example=False, filter_false_prediction=False, probs=None, top_k_path=None, minimal_path_weight=None): """ This method is used to visualize paths with details. Specifically, entity hierarchy for each entity will be printed. :param inputs: :param labels: :param type_weights: :param path_weights: :param rel: :param split: :param epoch: :param filter_negative_example: :param filter_false_prediction: :param probs: :param top_k_path: :param minimal_path_weight: :return: """ num_ent_pairs, num_paths, num_steps, num_types = type_weights.shape highest_weighted_type_indices = np.argmax(type_weights, axis=3) rel_dir = os.path.join(self.save_dir, rel) if not os.path.exists(rel_dir): os.mkdir(rel_dir) rel_split_dir = os.path.join(rel_dir, split) if not os.path.exists(rel_split_dir): os.mkdir(rel_split_dir) file_name = os.path.join(rel_split_dir, str(epoch) + ".detailed.tsv") with open(file_name, "a") as fh: for ent_pairs_idx in range(num_ent_pairs): paths = [] subj = None obj = None label = labels[ent_pairs_idx] # filter out negative examples if filter_negative_example: if label == 0: continue # filter out wrong predictions if filter_false_prediction: if probs is not None: prob = probs[ent_pairs_idx] if abs(prob - label) > 0.5: continue for path_idx in range(num_paths): # Each path string should be: ent1[type1:weight1,...,typeC:weightC] - rel1 - ent2[type1:weight1,...,typeC:weightC] # filter by path weight if minimal_path_weight is not None and 0 < minimal_path_weight < 1: if path_weights[ent_pairs_idx, path_idx] < minimal_path_weight: continue # processing a path path = [] start = False for stp in range(num_steps): feats = inputs[ent_pairs_idx, path_idx, stp] entity = feats[-2] entity_name = self.idx2entity[entity] # use dict to map freebase mid to name if self.mid2name is not None: if entity_name != "#PAD_TOKEN": entity_name = entity_name.split(":")[1] if entity_name in self.mid2name: entity_name = self.mid2name[entity_name] # ignore pre-paddings if not start: if entity_name != "#PAD_TOKEN": start = True if subj is None: subj = entity_name else: assert subj == entity_name if start: rel = feats[-1] types = feats[0:-2] weights = type_weights[ent_pairs_idx, path_idx, stp] types_str = [] for i in range(len(types)): type_name = self.idx2entity_type[types[i]] weight = weights[i] type_str = type_name + ":" + "%.3f" % weight types_str.append(type_str) types_str = "[" + ",".join(types_str) + "]" rel_name = self.idx2relation[rel] path += [entity_name + types_str] if rel_name != "#END_RELATION": path += [rel_name] if stp == num_steps - 1: if obj is None: obj = entity_name else: assert obj == entity_name path_str = "-".join(path) paths.append((path_str, path_weights[ent_pairs_idx, path_idx])) if not paths: continue paths = sorted(paths, key=lambda x: x[1], reverse=True) # keep only top K paths if 
top_k_path is not None and top_k_path > 0: paths = paths[0:min(len(paths), top_k_path)-1] weighted_paths = [p[0] + "," + str(p[1]) for p in paths] paths_str = " -#- ".join(weighted_paths) fh.write(subj + "," + obj + "\t" + str(label) + "\t" + paths_str + "\n") def visualize_paths_with_relation_and_type(self, inputs, labels, type_weights, path_weights, rel, split, epoch, filter_negative_example=False, filter_false_prediction=False, probs=None, top_k_path=None, minimal_path_weight=None): """ This method is used to visualize paths in a compact way. Specifically, only the highest weighted entity type for each entity will be printed. :param inputs: :param labels: :param type_weights: :param path_weights: :param rel: :param split: :param epoch: :param filter_negative_example: :param filter_false_prediction: :param probs: :param top_k_path: :param minimal_path_weight: :return: """ num_ent_pairs, num_paths, num_steps, num_types = type_weights.shape highest_weighted_type_indices = np.argmax(type_weights, axis=3) rel_dir = os.path.join(self.save_dir, rel) if not os.path.exists(rel_dir): os.mkdir(rel_dir) rel_split_dir = os.path.join(rel_dir, split) if not os.path.exists(rel_split_dir): os.mkdir(rel_split_dir) file_name = os.path.join(rel_split_dir, str(epoch) + ".tsv") with open(file_name, "a") as fh: for ent_pairs_idx in range(num_ent_pairs): paths = [] subj = None obj = None label = labels[ent_pairs_idx] # filter out negative examples if filter_negative_example: if label == 0: continue # filter out wrong predictions if filter_false_prediction: if probs is not None: prob = probs[ent_pairs_idx] if abs(prob - label) > 0.5: continue for path_idx in range(num_paths): # Each path string should be: type1 - rel1 - type2 # filter by path weight if minimal_path_weight is not None and 0 < minimal_path_weight < 1: if path_weights[ent_pairs_idx, path_idx] < minimal_path_weight: continue # processing a path path = [] start = False for stp in range(num_steps): feats = inputs[ent_pairs_idx, path_idx, stp] entity = feats[-2] entity_name = self.idx2entity[entity] # use dict to map freebase mid to name if self.mid2name is not None: if entity_name != "#PAD_TOKEN": entity_name = entity_name.split(":")[1] if entity_name in self.mid2name: entity_name = self.mid2name[entity_name] # ignore pre-paddings if not start: if entity_name != "#PAD_TOKEN": start = True if subj is None: subj = entity_name else: assert subj == entity_name if start: rel = feats[-1] types = feats[0:-2] rel_name = self.idx2relation[rel] highest_weighted_type = types[highest_weighted_type_indices[ent_pairs_idx, path_idx, stp]] type_name = self.idx2entity_type[highest_weighted_type] path += [type_name] if rel_name != "#END_RELATION": path += [rel_name] if stp == num_steps - 1: if obj is None: obj = entity_name else: assert obj == entity_name path_str = "-".join(path) paths.append((path_str, path_weights[ent_pairs_idx, path_idx])) if not paths: continue paths = sorted(paths, key=lambda x: x[1], reverse=True) # keep only top K paths if top_k_path is not None and top_k_path > 0: paths = paths[0:min(len(paths), top_k_path)-1] weighted_paths = [p[0] + "," + str(p[1]) for p in paths] paths_str = " -#- ".join(weighted_paths) fh.write(subj + "," + obj + "\t" + str(label) + "\t" + paths_str + "\n") def visualize_contradictions(self, inputs, labels, type_weights, path_weights, relation, split, filter_false_prediction=False, probs=None, minimal_path_weight=None): """ This method is used to extract contradiction examples. 
Another method needs to be called to print these examples :param inputs: :param labels: :param type_weights: :param path_weights: :param relation: :param split: :param filter_false_prediction: :param probs: :param minimal_path_weight: :return: """ num_ent_pairs, num_paths, num_steps, num_types = type_weights.shape highest_weighted_type_indices = np.argmax(type_weights, axis=3) if split != "test": print("Skip generation of contradictions for split other than test") return if relation not in self.rel_path2contradictions: self.rel_path2contradictions[relation] = {} for ent_pairs_idx in range(num_ent_pairs): subj = None obj = None label = labels[ent_pairs_idx] # filter out wrong predictions if filter_false_prediction: if probs is not None: prob = probs[ent_pairs_idx] if abs(prob - label) > 0.5: continue for path_idx in range(num_paths): # filter by path weight if minimal_path_weight is not None and 0 < minimal_path_weight < 1: if path_weights[ent_pairs_idx, path_idx] < minimal_path_weight: continue # processing a path path = [] rel_path = [] start = False for stp in range(num_steps): feats = inputs[ent_pairs_idx, path_idx, stp] entity = feats[-2] entity_name = self.idx2entity[entity] # use dict to map freebase mid to name if self.mid2name is not None: if entity_name != "#PAD_TOKEN": entity_name = entity_name.split(":")[1] if entity_name in self.mid2name: entity_name = self.mid2name[entity_name] # ignore pre-paddings if not start: if entity_name != "#PAD_TOKEN": start = True if subj is None: subj = entity_name else: assert subj == entity_name if start: rel = feats[-1] types = feats[0:-2] rel_name = self.idx2relation[rel] highest_weighted_type = types[highest_weighted_type_indices[ent_pairs_idx, path_idx, stp]] type_name = self.idx2entity_type[highest_weighted_type] path += [entity_name + "[" + type_name + "]"] if rel_name != "#END_RELATION": path += [rel_name] rel_path += [rel_name] if stp == num_steps - 1: if obj is None: obj = entity_name else: assert obj == entity_name path_str = "-".join(path) rel_path_str = "-".join(rel_path) if rel_path_str not in self.rel_path2contradictions[relation]: self.rel_path2contradictions[relation][rel_path_str] = [] # each example will be (subj, obj, label): weight, subj[type1]-ent2[type2]-obj[type3] example_str = "(" + subj + ", " + obj + ", " + str(label) + "): " + str(path_weights[ent_pairs_idx, path_idx]) + ", " + path_str if label == 0: self.rel_path2contradictions[relation][rel_path_str].append(example_str) else: self.rel_path2contradictions[relation][rel_path_str].insert(0, example_str) def print_contradictions(self, rel): """ This method is used to write contradiction examples. :param rel: :return: """ if rel not in self.rel_path2contradictions: print("Relation {} does not have any contradictory examples".format(rel)) return rel_dir = os.path.join(self.save_dir, rel) if not os.path.exists(rel_dir): os.mkdir(rel_dir) rel_split_dir = os.path.join(rel_dir, "test") if not os.path.exists(rel_split_dir): os.mkdir(rel_split_dir) file_name = os.path.join(rel_split_dir, "contradictions.tsv") with open(file_name, "a") as fh: for idx, rel_path in enumerate(self.rel_path2contradictions[rel]): for example in self.rel_path2contradictions[rel][rel_path]: fh.write(str(idx) + "\t" + rel_path + "\t" + example + "\n") def save_space(self, rel, best_epoch): """ This method is used to delete visualizations that are not from the best models
of different metrics to capture observed high-flow behaviors across a diverse range of US basins. We also discuss the implications of the choice of different calibration metrics based on stochastic ensemble simulations generated from the remaining model residuals. The remainder of this paper is organized as follows. Section 2 shows how the use of NSE for model calibration is counter-intuitively problematic when focusing on high-flow estimation. This part of the study is motivated by our experience with CONUS-wide annual peak flow estimates and establishes the need for our large-sample study. Section 3 describes the data, models, and calibration strategy adopted. Section 4 then presents the results followed by discussion in Sect. 5. Concluding remarks are provided in Sect. 6.

2 Motivation

One of the earliest developments of a metric used for model development was by Nash and Sutcliffe (1970), who proposed assessing MSE relative to the observation mean: NSE. A key motivation was to quantify how well the updated model outputs performed when compared against a simple benchmark (the observation mean). Since then, such squared error metrics have been predominantly used for model evaluation as well as for model calibration. Furthermore, MSE-based metrics have been thought to be useful in model calibration to reduce simulation errors associated with high-flow values, because these metrics typically magnify the errors in higher flows more than in the lower flows due to the fact that the errors tend to be heteroscedastic. Although Gupta et al. (2009) showed theoretically how and why the use of NSE and other MSE-based metrics for calibration results in the underestimation of peak flow events, our experience indicates that this notion continues to persist almost a decade later. Via an algebraic decomposition of the NSE into “mean error”, “variability error”, and “correlation” terms, Gupta et al. (2009) demonstrate that use of NSE for calibration will underestimate the response variability by a proportion equal to the achievable correlation between the simulated and observed responses; i.e., the only situation in which variability is not underestimated is the ideal but unachievable one when the correlation is 1.0. They further show that the consequence is a tendency to underestimate high flows while overestimating low flows (see Fig. 3 in Gupta et al., 2009).

Figure 1: (a) Spatial distribution of Hydro-Climate Data Network (HCDN) basins; (b) cumulative distribution of the percent bias of annual peak flow (APFB) over 1989–2008, simulated with three different sets of VIC parameters at HCDN basins; (c) relationship between variability error (α: simulation-to-observation ratio of daily flow variability) and APFB; (d) relationship between mean error (β: simulation-to-observation ratio of mean flow) and APFB.

Our recent large-sample calibration study made us strongly aware of the practical implications of this problem associated with the use of NSE for model calibration. Figure 1 illustrates the bias in the model's ability to reproduce high flows when calibrated with NSE. The plot shows distributions of annual peak flow bias at 492 Hydro-Climate Data Network (HCDN) basins across the CONUS for the VIC model using three different parameter sets determined in earlier work. Note that the collated parameter set is a patchwork quilt of partially calibrated parameter sets, while the other two sets were obtained via calibration with NSE using the observed data at each basin.
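For reference, the decomposition invoked above can be restated as follows (our notation: $r$ is the linear correlation, $\alpha$ the variability ratio as in the KGE definition below, and $\beta_n$ the mean error normalized by the observed standard deviation):

$$\mathrm{NSE} = 2\alpha r - \alpha^{2} - \beta_{n}^{2}, \qquad \alpha = \frac{\sigma_\mathrm{s}}{\sigma_\mathrm{o}}, \quad \beta_{n} = \frac{\mu_\mathrm{s} - \mu_\mathrm{o}}{\sigma_\mathrm{o}}.$$

For fixed $r$ and $\beta_n$, NSE is maximized at $\alpha = r \le 1$, so an NSE-optimal simulation underestimates flow variability, and hence high flows, whenever the achievable correlation is below 1.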
The results clearly demonstrate the strong tendency to underestimate annual peak flows at the vast majority of the basins (although calibration at individual basins results in less severe underestimation than the other cases). Figure 1b–d clearly show that annual peak bias is strongly related to variability error but not to mean error (i.e., water balance error). Even though the calibrations resulted in statistically unbiased results over the sample of basins, there is a strong tendency to severely underestimate annual peak flow due to the fact that NSE results in poor statistical simulation of variability. Clearly, the use of NSE-like metrics for model calibration is problematic for the estimation of high flows and extremes. However, improving only simulated flow variability may not improve high-flow estimates in time. It likely also requires improvement of the mean state and daily correlation. In general, it is impossible to improve the simulation of flow variability (to improve high-flow estimates) without simultaneously affecting the mean and correlation properties of the simulation. To provide a way to achieve balanced improvement of simulated mean flow, flow variability, and daily correlation, proposed the KGE as a weighted combination of the three components that appear in the theoretical NSE decomposition formula and showed that this formulation improves flow variability estimates. KGE is expressed as $\begin{array}{ll}\mathrm{KGE}& =\mathrm{1}-\sqrt{\left[{S}_{r}\left(r-\mathrm{1}\right){\right]}^{\mathrm{2}}+\left[{S}_{\mathit{\alpha }}\left(\mathit{\alpha }-\mathrm{1}\right){\right]}^{\mathrm{2}}+\left[{S}_{\mathit{\beta }}\left(\mathit{\beta }-\mathrm{1}\right){\right]}^{\mathrm{2}}},\\ \text{(1)}& \mathit{\alpha }& =\frac{{\mathit{\sigma }}_{\mathrm{s}}}{{\mathit{\sigma }}_{\mathrm{o}}},\mathit{\beta }=\frac{{\mathit{\mu }}_{\mathrm{s}}}{{\mathit{\mu }}_{\mathrm{o}}},\end{array}$ where Sr, Sα, and Sβ are user-specified scaling factors for the correlation (r), variability ratio (α), and mean ratio (β) terms; σs and σo are the standard deviation values for the simulated and observed responses, respectively, and μs and μo are the corresponding mean values. In a balanced formulation, Sr, Sα, and Sβ are all set to 1.0. By changing the relative sizes of the Sr, Sα, or Sβ weights, the calibration can be altered to more strongly emphasize the reproduction of flow timing, statistical variability, or long-term water balance. The results of the large-sample study motivated us to carry out further experiments to investigate how the choice of performance metric affects the estimation of peak and high flow. Here, we examine the extent to which altering the scale factors in KGE can result in improved high-flow simulations compared to NSE. We also examine the results provided by use of an application-specific metric, here taken as the percent bias in annual peak flows. 3 Models, datasets, and methods We use two hydrologic models: VIC (version 4.1.2h) and mHM (version 5.8). The VIC model, which includes explicit soil–vegetation–snow processes, has been used for a wide range of hydrologic applications, and has recently been evaluated in a large-sample predictability benchmark study . The mHM has been shown to provide robust hydrologic simulations over both Europe and the US and is currently being used in application studies (e.g., ). 
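Returning to Eq. (1), the metric is simple to implement, which is convenient when experimenting with the scale factors; the following is a minimal sketch (illustrative code, not the calibration software used in this study).

```python
import numpy as np

# Minimal sketch of Eq. (1): KGE with user-specified scale factors S_r, S_alpha, S_beta.
def kge(sim, obs, s_r=1.0, s_alpha=1.0, s_beta=1.0):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()    # variability ratio
    beta = sim.mean() / obs.mean()   # mean ratio
    return 1.0 - np.sqrt((s_r * (r - 1.0)) ** 2
                         + (s_alpha * (alpha - 1.0)) ** 2
                         + (s_beta * (beta - 1.0)) ** 2)

# kge(sim, obs) is the balanced formulation; kge(sim, obs, s_alpha=5.0), for example,
# emphasizes the reproduction of flow variability in the calibration objective.
```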
We use observed streamflow data at the HCDN basins and daily basin meteorological data from for the period from 1980 through 2008, as compiled by the CONUS large-sample basin dataset over a wide range of climate regimes . The use of the large-sample dataset is recommended to obtain general and statistically robust conclusions . In the context of flood mechanisms across the CONUS, large flood events are due to precipitation excess in conjunction with antecedent soil moisture states at the majority of the catchments, except that rapid snowmelt events are primarily responsible for floods over the mountainous west . Both models are run at a daily time step, and each model is calibrated separately for each of the 492 study basins (see Fig. 1a for the basin locations) using several different performance metrics. Although sub-daily simulation is preferable for some flood events, such as flash floods, the effects of the performance metrics on the calibrated high-flow estimates are independent of the simulation time step. Furthermore, instantaneous peak flow (at sub-daily scale) is strongly correlated with daily mean flows , justifying daily simulations still providing useful information for instantaneous peak flow estimates. We use a split-sample approach (Klemes1986) for the model evaluation. The hydrometeorological data are split into a calibration period (1 October 1999–30 September 2008) and an evaluation period (1 October 1989–30 September 1999), with a prior 10-year warm-up when computing the statistics for each period. The model parameters calibrated for each model are the same as previously discussed: VIC and mHM . Although alternative calibration parameter sets have also been used by others, particularly for VIC , the purpose of this study is purely to examine the effects of performance metrics used for calibration, and not to obtain “optimal” parameter sets. Each model is identically configured for each of the 492 basins. Both models use the same set of underlying physiographical and meteorological datasets, so that performance differences can be attributed mainly to
$U^t$: \[ U^t = \textbf{E}\left[\begin{pmatrix} V \\ t \end{pmatrix} \begin{pmatrix} k \\ t \end{pmatrix}^{-1}\right]. \] This result can be immediately applied to yield an unbiased estimator of $p_t$, when $t \leq k$: \begin{equation}\label{eq:ustat} \hat{p}_t^{UN} = \frac{1}{km}\sum_{i=1}^k\sum_{j=1}^{m} \begin{pmatrix} V_{i, j} \\ t-1 \end{pmatrix} \begin{pmatrix} k \\ t-1 \end{pmatrix}^{-1}. \end{equation} However, since $\hat{p}_t^{UN}$ is undefined for $k \geq t$, we can use exponential extrapolation to define an extended estimator $\hat{p}_t^{EXP}$ for $k > t$. Let $\hat{\alpha}$ be a measure defined by solving the optimization problem \[ \text{minimize}_{\alpha} \sum_{t=2}^{k} \left(\hat{p}_t^{UN} - \int_0^\infty \exp[-t\kappa] d\alpha(\kappa)\right)^2. \] After discretizing the measure $\hat{\alpha}$, we obtain a convex optimization problem which can be solved using non-negative least squares [9]. Then define \[ \hat{p}_t^{EXP} = \begin{cases} \hat{p}_t^{UN}&\text{ for }t \leq k,\\ \int_0^\infty \exp[-t\kappa] d\hat{\alpha}(\kappa))&\text{ for }t > k. \end{cases} \] \subsection{Maximum pseudolikelihood} \begin{figure} \centering \begin{tabular}{ccrl} Estimated density $\hat{\eta}$ &Estimated moment $\textbf{E}[U^t]$ & \\ \multirow{5}{*}{\includegraphics[scale = 0.5, clip=true, trim=0.2in 0.6in 0 0.7in]{gu_est.pdf}} & \multirow{5}{*}{\includegraphics[scale = 0.5, clip=true, trim=0.2in 0.6in 0 0.7in]{gu_est_moments.pdf}} & & \\ & & \crule[black]{0.2cm}{0.2cm} & Truth\\ & & & \\ & & \crule[blue]{0.2cm}{0.2cm} & MPLE \\ & & & \\ & & \crule[red]{0.2cm}{0.2cm} & CONS \\ & & & \\ & & & \\ & & & \\ $u$& $t$& & \\ \end{tabular} \caption{Maximum pseudolikelihood (MPLE) versus constrained pseudolikelihood (CONS). Adding constraints improves the estimation of the density $\eta(u)$, as well as moment estimation.} \end{figure} The (log) pseudolikelihood is defined as \begin{equation}\label{eq:psuedo} \ell(\eta) = \sum_{i=1}^k \sum_{j=1}^{m} \log\left(\int u^{V_{i, j}} (1-u)^{k - V_{i, j}} \eta(u) du\right), \end{equation} and a maximum pseudolikelihood estimator (MPLE) is defined as any density $\hat{\eta}$ such that \[ \ell(\hat{\eta}_{MPLE}) = \sup_{\eta} \ell(\eta). \] The motivation for $\hat{\eta}_{MPLE}$ is that it consistently estimates $\eta$ in the limit where $k \to \infty$. However, in finite samples, $\hat{\eta}_{MPLE}$ is not uniquely defined, and if we define the plug-in estimator \[ \hat{p}_t^{MPLE} = \int u^{t-1} \hat{\eta}_{MPLE}(u) du, \] $\hat{p}_t^{MPLE}$ can vary over a large range, depending on which $\hat{\eta} \in \text{argmax}_{\eta} \ell_t(\eta)$ is selected. These shortcomings motivate the adoption of additional constraints on the estimator $\hat{\eta}$. Theorem 3.4. motivates the \emph{monotonicity constraint} that $\frac{d\hat{\eta}}{du} > 0$. A second constraint is to restrict the $k$-th moment of $\hat{\eta}$ to match the unbiased estimate. The addition of these constraints yields the constrained PMLE $\hat{\eta}_{CON}$, which is obtained by solving \[ \text{maximize }\ell(\eta) \text{ subject to }\int u^{k-1} \eta(u) du = \hat{p}_k^{UN}\text{ and }\frac{d\hat{\eta}}{du} > 0. \] By discretizing $\eta$, all of the above maximization problems can be solved using a general-purpose convex solver\footnote{ We found that the disciplined convex programming language CVX, using the ECOS second-order cone programming solver, succeeds in optimizing the problems where the dimension of the discretized $\eta$ is as large as 10,000 [10, 11].}. 
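A minimal implementation sketch of the unbiased estimator \eqref{eq:ustat} and of the exponential extrapolation described above is given below; the $\kappa$ grid and the variable names are our illustrative choices rather than prescriptions of the text.

```python
import numpy as np
from scipy.special import comb
from scipy.optimize import nnls

# Minimal sketch of the unbiased estimator (eq:ustat) and the exponential extrapolation.
# V is an array holding the counts V_{i,j}; the kappa grid is an illustrative choice.
def p_unbiased(V, k, t):
    return np.mean(comb(V, t - 1) / comb(k, t - 1))      # valid for t <= k

def p_exp(V, k, t_new, kappas=np.linspace(1e-3, 2.0, 200)):
    ts = np.arange(2, k + 1)
    targets = np.array([p_unbiased(V, k, t) for t in ts])
    design = np.exp(-np.outer(ts, kappas))               # discretized exp(-t * kappa)
    alpha_hat, _ = nnls(design, targets)                 # discrete measure alpha-hat (NNLS)
    return float(np.exp(-t_new * kappas) @ alpha_hat)
```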
While the added constraints do not guarantee a unique solution, they improve estimation of $\eta$ and thus improve moment estimation (Figure 1.) \subsection{High-dimensional asymptotics} Under a number of conditions on the distribution $\mathbb{F}$, including (but not limited to) having a large dimension $p$, Anon [4] relate the accuracy $p_t$ of the Bayes classifier to the mutual information between the label $z$ and the response $y$: \[ p_t = \bar{\pi}_t(\sqrt{2I(Z; Y)}). \] where \[ \bar{\pi}_k(c) = \int_{\mathbb{R}} \phi(z - c) \Phi(z)^{k-1} dz. \] While our goal is not to estimate the mutual information, we note that the results of Anon 2016 imply a relationship between $p_k$ and $p_K$ for the Bayes accuracy under the high-dimensional regime: \[ p_K = \bar{\pi}_K\left(\bar{\pi}_k^{-1}(p_k)\right). \] Therefore, under the high-dimensional conditions of [4] and assuming that the classifier approximates the Bayes classifier, we naturally obtain the following estimator \[ \hat{p}_t^{HD} = \bar{\pi}_K\left(\bar{\pi}_k^{-1}(\hat{p}_k^{UN})\right). \] \section{Results} We applied the methods described in Section 4 on a simulated gaussian mixture (Figure 2) and on a Telugu character classification task [12] (Table 1.) For the simulated gaussian mixture, we vary the size of the initial subset from $k=3$ classes to $k=K=50$ classes, and extrapolate the performance for gaussian mixture model, multinomial logistic, and one-layer neural network (with 10 sigmoidal units.) Figure 3 shows how the predicted $K$-class accuracy changes as $k$ is varied. We see that the predicted accuracy curves for QDA and Logistic have similar behavior, even though QDA is generative and multinomial logistic is not. All three methods perform better on QDA and logistic classifiers than on the neural network: in fact, for the neural network, the test accuracy of the initial set, $\text{acc}^{(k)}$, becomes a better estimator of $\text{acc}^{(K)}$ than the three proposed methods for most of the curve. We also see that the exponential extrapolation method, $\hat{p}^{EXP}$, is more variable than constrained pseudolikelihood $\hat{p}^{CONS}$ and high-dimensional estimator $\hat{p}^{HD}$. Additional simulation results can be found in the supplement. In the character classification task, we predict the 400-class accuracy of naive Bayes, multinomial logistic regression, SVM [6], $\epsilon$-nearest neighbors\footnote{$k$-nearest neighbors with $k = \epsilon n$ for fixed $\epsilon > 0$}, and deep neural networks\footnote{The network architecture is as follows: {\tt 48x48-4C3-MP2-6C3-8C3-MP2-32C3-50C3-MP2-200C3-SM.} 48x48 binary input image, $m$C3 is a 3x3 convolutional layer with $m$ output maps, MP2 is a 2x2 max-pooling layer, and SM is a softmax output layer on 20 or 400 classes.} using 20-class data with 103 training examples per class (Table 1). Taking the test accuracy on 400 classes (using 50 test examples per class) as a proxy for $\text{acc}^{(400)}$, we compare the performance of the three extrapolation methods; as a benchmark, also consider using the test accuracy on 20 classes as an estimate. The exponential extrapolation method performs well only for the deep neural network. Meanwhile, constrained PMLE achieves accurate extrapolation for two out of four classifiers: logistic and SVM but failed to converge for the the deep neural network (due to the high test accuracy). The high-dimensional estimator $\hat{p}^{HD}$ performs well on the multinomial logistic, SVM, and deep neural network classifiers. 
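For completeness, the high-dimensional estimator $\hat{p}^{HD}_t$ introduced above can be evaluated by one-dimensional quadrature and root finding; the sketch below is illustrative (in particular, the bracketing interval for the inversion is our own choice).

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

# Minimal sketch of p_hat^{HD}: evaluate pi_bar_k(c) = int phi(z - c) Phi(z)^{k-1} dz
# by quadrature and invert it in c by root finding; the bracket [-10, 10] is illustrative.
def pi_bar(k, c):
    integrand = lambda z: norm.pdf(z - c) * norm.cdf(z) ** (k - 1)
    return quad(integrand, -np.inf, np.inf)[0]

def p_hd(p_k_hat, k, K):
    c_hat = brentq(lambda c: pi_bar(k, c) - p_k_hat, -10.0, 10.0)   # pi_bar_k^{-1}(p_k_hat)
    return pi_bar(K, c_hat)
```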
All three methods beat the benchmark (taking the test accuracy at 20) for the first four classifiers; however, the benchmark is the best estimator for the deep neural network, similarly to what we observe in the simulation (albeit with a shallow network rather than a deep network.) \begin{figure} \centering \begin{tabular}{cccrl} QDA & Logistic & Neural Net & \\ \multirow{5}{*}{\includegraphics[scale = 0.5, clip=true, trim=0.2in 0.6in 0.2in 0.7in]{sim_qda.pdf}} & \multirow{5}{*}{\includegraphics[scale = 0.5, clip=true, trim=0.75in 0.6in 0.2in 0.7in]{sim_glmnet.pdf}} & \multirow{5}{*}{\includegraphics[scale = 0.5, clip=true, trim=0.75in 0.6in 0.2in 0.7in]{sim_nnet.pdf}} & & \\ && & &\\ && & \crule[black]{0.2cm}{0.2cm} & $\text{acc}^{(k)}$ \\ && & \crule[green]{0.2cm}{0.2cm} & $\hat{p}^{EXP}$ \\ && & \crule[red]{0.2cm}{0.2cm} & $\hat{p}^{CONS}$ \\ && & \crule[blue]{0.2cm}{0.2cm} & $\hat{p}^{HD}$ \\ && & & \\ && & & \\ && & & \\ $k$ &$k$& $k$& & \\ \end{tabular} \caption{Predictions for $\text{acc}^{(50)}$ as $k$, the size of the subset, is varied. Our methods work better for QDA and Logistic than Neural Net; overall, $\hat{p}^{EXP}$ has higher variability than $\hat{p}^{CONS}$ and $\hat{p}^{HD}$.} \end{figure} \begin{table} \centering \begin{tabular}{|c||c|c|c|c|c|}\hline Classifier & Test $\text{acc}^{(20)}$ & Test $\text{acc}^{(400)}$ & $\hat{p}^{EXP}_{400}$ & $\hat{p}^{CONS}_{400}$ & $\hat{p}^{HD}_{400}$\\ \hline Naive Bayes & 0.947 & 0.601 & 0.884 & \textbf{0.659} & 0.769 \\ \hline Logistic & 0.922 & 0.711 & 0.844 & \textbf{0.682} & 0.686 \\ \hline SVM & 0.860 & 0.545 & 0.737 & 0.473 & \textbf{0.546} \\ \hline $\epsilon$-NN & 0.964 & 0.591 & 0.895 & \textbf{0.395} & 0.839\\ \hline Deep neural net & \textbf{0.995} & 0.986 & 0.973 & (*) & 0.957\\ \hline \end{tabular} \caption{Performance extrapolation: predicting the accuracy on 400 classes using data from 20 classes on a Telugu character dataset. (*) indicates failure to converge. $\epsilon = 0.002$ for $\epsilon$-nearest neighbors.} \end{table} \section{Discussion} Empirical results indicate that our methods generalize beyond generative classifiers. A possible explanation is that since the Bayes classifier is generative, any classifier which approximates the Bayes classifier is also `approximately generative.' However, an important caveat is that the classifier must already attain close to the Bayes accuracy on the smaller subset of classes. If the classifier is initially far from the Bayes classifier, and then becomes more accurate as more classes are added, our theory could underestimate the accuracy on the larger subset. This is a non-issue for generative classifiers when the training data per class is fixed, since a generative classifier approximates the Bayes rule if and only if the single-class classification function approximates the Bayes optimal single-class classification function. On the other hand, for classifiers with built-in \emph{model selection} or \emph{representation learning}, it is expected that the classification functions become more accurate, in the sense that they better approximate a monotonic function of the Bayes classification functions, as data from more classes is added. Our results are still too inconclusive for us to recommend the use of any of these estimators in practice. Theoretically, it still remains to derive confidence bounds for the generative case; practically, additional experiments are needed to establish the reliability of these estimators in specific applications. 
There also remains plenty of room for
of functions defined by \begin{equation} \label{con4} \widehat{\psi^P_j}(k):=\lambda^j_{k} \widehat{\varphi^P_{j+1}}(k), \end{equation} where $\lambda^j_{k}= e^{2 \pi i k/2^{-j}} \overline{\mu^j_{k+2^{j-1}}}. $ Then the family $\Psi^P =\{\varphi^{P}_{0}, \, \psi^{P}_{j,k} : \ j\in \mathbb{Z}_+, \ k=0,\dots,2^j-1 \}$ forms a Parseval wavelet frame for $L_{2}(\mathbb{T}).$ \end{teo} The functions $\varphi^P_j,$ $\psi^P_j$ and sequences $(\mu^j_k)_{k\in\z},$ $(\lambda^j_k)_{k\in\z}$ are called \texttt{a scaling function, a wavelet function, a scaling mask, and a wave\-let mask} respectively. It is convenient to construct both systems starting with scaling masks. Namely, let $\nu^{j}_{k}$ be a sequence such that $\nu^{j}_{k}=\nu^{j}_{k+2^j}$. We define $\widehat{\xi}_{j}(k):=\prod_{r=j+1}^{\infty}\nu^{r}_{k}.$ If the above infinite product converges, then the scaling function, scaling mask, wavelet mask, and wavelet function in a periodic setting are defined respectively as \begin{gather} \widehat{\varphi_{j}}(k):=2^{-j/2}\widehat{\xi}_{j}(k),\qquad\quad \mu^{j}_k:=\sqrt{2} \nu^{j}_k,\ \notag\\ \lambda^{j}_k:=e^{2\pi i 2^{-j}k}\mu^{j}_{k+2^{j-1}},\qquad \quad {\widehat{\psi}}_{j}(k):=\lambda^{j+1}_{k} \widehat{\varphi}_{j+1}(k). \notag \end{gather} Analogously, suppose $m_j$ is a $1$-periodic function, $\widehat{\varphi_j}(\xi):=\prod_{r=1}^{\infty}m_{j+r}(\xi/2^r)$ is well-defined, then the scaling function, scaling mask, and wavelet function in a nonstationary setting are defined respectively as \begin{gather} \label{aux_n} \widehat{\varphi^N_j}(\xi) = 2^{-j/2}\widehat{\varphi_j}(\xi/2^j),\qquad\quad m_j(\xi)=2^{-1/2} m^N_{0,j}(\xi),\ \\ \widehat{\psi^N_j} (\xi) = 2^{-j/2} m_{1,j+1}(\xi/2^{j+1}) \widehat{\varphi_{j+1}}(\xi/2^{j+1})=2^{-j/2} \widehat{\psi_j}(\xi/2^j) \notag \end{gather} Auxiliary scaling and wavelet functions $\varphi_j$ and $\psi_j$ are connected with nonstationary scaling and wavelet functions as $\psi^N_j(x)=2^{j/2}\psi_j(2^j x)$ and $\varphi^N_j(x)=2^{j/2}\varphi_j(2^j x).$ In stationary case $\varphi_j$ and $\psi_j$ are coincided with scaling and wavelet functions $\varphi$ and $\psi$. Comparing these two UEPs we get the following \begin{prop} \label{nst_to_per} If $\Psi^N = \{\varphi^{N}_{0,k}, \, \psi^{N}_{j,k} \}_{j\in \mathbb{Z}_+,\, k \in \mathbb{Z}}$ is a Parseval wavelet frame generated by UEP, $\varphi^{N}_{0}, \psi^{N}_{j} \in L_1(\mathbb{R}),$ and $$ \psi^P_j(x):=\sum_{n\in\mathbb{Z}} \psi^N_j (x-n), $$ then $\Psi^P =\{\varphi^{P}_{0}, \, \psi^{P}_{j,k} \}_{j\in \mathbb{Z}_+,\, k=0,\dots,2^j-1}$ is a Parseval wavelet frame. \end{prop} \textbf{Proof.} Since $\varphi^{N}_{0}, \psi^{N}_{j} \in L_1(\mathbb{R}),$ periodization $ \psi^P_j(x)=\sum_{n\in\mathbb{Z}} \psi^N_j (x-n) $ is well-defined. In the Fourier domain the last equality is rewritten as $\widehat{\psi^N_j}(k)=\widehat{\psi^P_j}(k).$ Therefore, substituting $k\in \z$ or $k 2^{-j},$ where $k \in \z,$ $j\in\z_+$ instead of $\xi\in \r$ in conditions (\ref{ncon1})-(\ref{ncon4}) we immediately get (\ref{con1})-(\ref{con4}). Proposition \ref{nst_to_per} is proved. \hfill $\Diamond$ We recall the definitions of the UCs and the uncertainty principles. 
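Before turning to these definitions, we note that the periodization in Proposition \ref{nst_to_per} is easy to illustrate numerically; in the sketch below a Gaussian stands in for $\psi^N_j$, purely as an integrable example.

```python
import numpy as np

# Minimal numerical sketch of the periodization psi^P_j(x) = sum_n psi^N_j(x - n);
# a Gaussian plays the role of psi^N_j here, purely as an integrable stand-in.
psi_N = lambda x: np.exp(-40.0 * x ** 2)

x = np.linspace(0.0, 1.0, 512, endpoint=False)
psi_P = sum(psi_N(x - n) for n in range(-30, 31))    # truncated periodization

# In the Fourier domain periodization amounts to sampling: the Fourier coefficients
# of psi_P approximate the Fourier transform of psi_N evaluated at the integers.
fourier_coeffs = np.fft.fft(psi_P) / x.size
```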
\texttt{The Heisenberg UC} of $f \in L_2(\mathbb{R})$ is the functional $UC_H(f):=\Delta(f)\Delta(\widehat{f})$ such that $$ \Delta^2(f):= \frac{1}{\|f\|^{2}} \int_{\mathbb{R}} \bigl(x-c(f)\bigr)^2 \bigl|f(x)\bigr|^2 \, dx , \quad c(f):= \frac{1}{\|f\|^{2}} \int_{\mathbb{R}} x \bigl|f(x)\bigr|^2 \, dx, $$ where $\Delta(f),$ $\Delta(\widehat{f}),$ $c(f),$ and $c(\widehat{f})$ are called \texttt{time variance, frequency variance, time centre,} and \texttt{frequency centre} respectively. The Heisenberg uncertainty principle says that $UC_H(f)\geq 1/2$ for $f \in L_2(\mathbb{R})$, and the equality is attained iff $f$ is the Gaussian function. In \cite[p. 137]{Bat} the following refinement of the Heisenberg uncertainty principle is proved. If $f \in L_2(\r),$ $\ c(\widehat{f})$ $=0$, and $\int_{\r} f = 0,$ then $UC_H(f)\geq 3/2.$ The UC for periodic functions is introduced in \cite{B}. Let $f =\sum_{k \in \mathbb{Z}} c_k \mathrm{e}^{ \mathrm{i} k \cdot}\in L_{2,2\pi}.$ \texttt{ The first trigonometric moment} is defined as $$ \tau(f):=\frac{1}{2 \pi} \int_{-\pi}^{\pi} \mathrm{e}^{ \mathrm{i} x} |f(x)|^2\, \mathrm{d}x = \sum_{k \in \mathbb{Z}} c_{k-1} \overline{c_{k}}. $$ \texttt{The angular variance} of the function $f$ is defined by $$ {\rm var_A }(f):= \frac{\left(\sum_{k \in \mathbb{Z}}|c_k|^2\right)^2}{ \left|\sum_{k \in \mathbb{Z}}c_{k-1} \bar{c}_{k}\right|^2}-1 = \frac{\|f\|^4}{|\tau(f)|^2}-1. $$ \texttt{The frequency variance} of the function $f$ is defined by $$ {\rm var_F }(f):= \frac{\sum_{k \in \mathbb{Z}}k^2 |c_k|^2}{\sum_{k \in \mathbb{Z}}|c_k|^2}- \frac{\left(\sum_{k \in \mathbb{Z}}k|c_k|^2\right)^2}{\left(\sum_{k \in \mathbb{Z}}|c_k|^2\right)^2} = \frac{\|f'\|^2}{\|f\|^2}-\frac{(i f',\, f)^2}{\|f\|^4}. $$ The quantity $ UC_B(\{c_k\}):=UC_B(f):=\sqrt{\mathrm{var_A}(f)\mathrm{var_F}(f)} $ is called \texttt{the Breitenberger (periodic) UC}. The corresponding uncertainty principle \cite{B, PQ} says that if $f \in L_{2,2\pi}$, $f(x)\neq C \mathrm{e}^{ \mathrm{i} k x},$ $C \in \mathbb{R},$ $k \in \mathbb{Z}$, then $UC_B(f) > 1/2$ and there is no function such that $UC_B(f) = 1/2$ \section{Results} \subsection{Nonstationary wavelets generated by periodic wavelets} \begin{lem} \label{def_mask} Let $\nu^j_k\in\mathbb{R},$ $\nu^j_k \geq 0,$ be a two-parametric sequence such that $\nu^j_k=\nu^j_{k+2^j},$ $|\nu^j_k|^2 + |\nu^j_{k+2^{j-1}}|^2=1,$ $\nu^j_k = \nu^j_{-k}.$ By definition, put $\theta^j_k:=\arccos \nu^j_k.$ Let a function $z^K_j$ defined on the interval $[0,\,1/4]$ be a uniform spline of order $K+1$ of minimal defect such that $z^K_j(k 2^{-j}) = \theta^j_k,$ $k=0,\dots,2^{j-2},$ $(z^K_j)^{(l)}(0)=0,$ $l=1,\dots,K-1.$ Finally, let $m^K_j$ be an even $1$-periodic function defined as \begin{equation} \label{mask} m^K_j(\xi)= \left\{ \begin{array}{ll} \cos(z^K_j(\xi)), & \xi \in [0,\,1/4], \\ \sin(z^K_j(1/2-\xi)), & \xi \in (1/4,\,1/2]. \end{array} \right. \end{equation} Then $|m^K_j(\xi)|^2+|m^K_j(\xi+1/2)|^2=1,$ $m^K_j \in C^{K-1}(\mathbb{T}),$ $(m^K_j)^{(l)}(1/2) = 0,$ $l=1,\dots,K-1.$ \end{lem} \textbf{Proof.} The equality $|m^K_j(\xi)|^2+|m^K_j(\xi+1/2)|^2=1$ follows immediately from the detailed definition of $m^K_j$ $$ m^K_j(\xi)= \left\{ \begin{array}{ll} \sin(z^K_j(1/2+\xi-k)), & \xi \in [-1/2+k,\, -1/4+k), \\ \cos(z^K_j(-\xi+k)), & \xi \in [-1/4+k,\, k), \\ \cos(z^K_j(\xi-k)), & \xi \in [k,\, 1/4+k), \\ \sin(z^K_j(1/2-\xi+k)), & \xi \in [1/4+k,\, 1/2+k), k \in \z. \end{array} \right. 
$$ For instance, for $\xi \in [-1/2+k,\, -1/4+k)$ we get $$ |m^K_j(\xi)|^2+|m^K_j(\xi+1/2)|^2= \sin^2(z^K_j(1/2+\xi-k)) + \cos^2(z^K_j(1/2+\xi-k)) =1. $$ Since $z^K_j$ is a spline of order $K+1$ of minimal defect, we obtain $z^K_j \in C^{K-1}(\mathbb{T}).$ Therefore, $m^K_j$ is $K-1$ times continuously differentiable at all points except $n/4,$ where $n\in\z.$ The smoothness of $m^K_j$ at the origin follows from the fact that $\cos$ is smooth and even. Since $m^K_j$ is periodic and even, it remains to check smoothness at points $\xi = 1/4,$ $\xi=1/2.$ At the point $\xi=1/4$, we get for $n = 0, \dots, K-1$ $$ \left.(m^K_j)^{(n)}(\xi)\right|_{\xi =1/4-0} = \left. \left(\cos \left( z^K_j(\xi) \right)\right)^{(n)} \right|_{\xi =1/4-0} $$ $$ \left.(m^K_j)^{(n)}(\xi)\right|_{\xi =1/4+0} = \left. \left(\cos \left(\pi/2 - z^K_j(1/2 - \xi) \right)\right)^{(n)} \right|_{\xi =1/4+0}. $$ Thus, using $z^K_j(1/4)=\theta^j_{2^{j-2}}=\pi/4$, we get $(m^K_j)^{(n)}(1/4-0) = (m^K_j)^{(n)}(1/4+0)$ for $n = 0, \dots, K-1.$ At the point $\xi=1/2$ we have $$ \left.(m^K_j)^{(n)}(\xi)\right|_{\xi =1/2-0} = \left.(\sin(z^K_j(1/2-\xi)))^{(n)} \right|_{\xi =1/2-0} $$ $$ \left.(m^K_j)^{(n)}(\xi)\right|_{\xi =1/2+0} = \left.(\sin(z^K_j(-1/2+\xi)))^{(n)} \right|_{\xi =1/2+0} $$ So, by the condition $(z^K_j)^{(l)}(0)=0,$ $l=1,\dots,K-1$ we immediately obtain $$ (m^K_j)^{(n)}(1/2-0) = (m^K_j)^{(n)}(1/2+0) = 0. $$ This concludes the proof of Lemma \ref{def_mask}. \hfill $\Diamond$ \begin{lem} \label{prodL2} If $a_j\in L_2(\mathbb{T}),$ $a_j(0)=1$, and $\sum_{j}\|a''_j\|_2/2^j<\infty$, then $\widehat{\varphi_j}(\xi)=\prod_{r=1}^{\infty} a_{j+r}(\xi/2^r)$ uniformly and absolutely converges on any $[a,\,b]\subset \mathbb{R}.$ If additionally $|a_j(\xi)|^2+|a_j(\xi+1/2)|^2=1,$ then $\widehat{\varphi_j} \in L_2(\mathbb{R})$ and $\|\widehat{\varphi_j}\|_2\leq 1.$ \end{lem} \textbf{Proof.} This is a slight modification of the corresponding stationary result (see \cite[Proposition 2.4.1]{NPS}). Suppose $a_j(\xi) = \sum_{k\in \z}c_{j,k} e^{2 \pi i k \xi}$ and $a''_j \in L_2(\mathbb{T})$. Using $a_j(0)=1$, we get $$ |a_j(\xi) - 1|= \left|\sum_{k\in \z} c_{j,k} \left( e^{2 \pi i k \xi} - 1\right)\right| \leq 2\pi \sum_{k\in \z} |c_{j,k}| |k| |\xi| \leq C |\xi| \left(\sum_{k\in \z} | k^2 c_{j,k}|^2 \right)^{1/2} = C_1 |\xi| \|a''_j\|_2. $$ Therefore, $$ \sum_{r=1}^{\infty}|a_{j+r}(\xi/2^r) - 1| \leq C_1 |\xi| \sum_{r=1}^{\infty} \frac{\|a''_{j+r}\|_2}{2^r} = C_1 2^j |\xi| \sum_{n=1}^{\infty} \frac{\|a''_{n}\|_2}{2^n}. $$ Hence, the infinite product $\prod_{r=1}^{\infty} a_{j+r}(\xi/2^r)$ uniformly with respect to $\xi$ and absolutely converges on any $[a,\,b]\subset \mathbb{R}.$ The proof of the facts $\widehat{\varphi_j} \in L_2(\mathbb{R})$ and $\|\widehat{\varphi_j}\|_2\leq 1$ can be rewritten from stationary case \cite[Lemma 4.1.3]{NPS}. Lemma \ref{prodL2} is proved. \hfill $\Diamond$ \begin{lem} \label{my_prodL2} If $m^K_j$ is defined by (\ref{mask}), $\nu^j_0=1,$ and $$ \sum_{j}\frac{1}{2^j}\left(\int_{0}^{1/4}((z^K_j)'(\xi))^4+((z^K_j)''(\xi))^2\,d\xi\right)^{1/2}<\infty, $$ then $\widehat{\varphi_j}(\xi)=\prod_{r=1}^{\infty} m^K_{j+r}(\xi/2^r)$ uniformly and absolutely converges on any $[a,\,b]\subset \mathbb{R},$ $\widehat{\varphi_j} \in L_2(\mathbb{R})$ and $\|\widehat{\varphi_j}\|_2\leq 1.$ \end{lem} \textbf{Proof.} Assumptions of Lemma \ref{my_prodL2} are specifications of conditions of Lemma \ref{prodL2} for the masks $m^K_j$. 
Indeed, $$ \|(m^K_j)''\|^2_2 = \int_0^1 |(m^K_j)''(\xi)|^2 \, d\xi = 2 \left(\int_0^{1/4} (\cos''(z^K_j(\xi)))^2 \, d\xi + \int_{1/4}^{1/2} (\sin''(z^K_j(1/2 - \xi)))^2 \, d\xi \right) $$ $$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = 2\left(\int_0^{1/4} ((z^K_j)'(\xi))^4 + ((z^K_j)''(\xi))^2 \, d\xi \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Diamond $$ \subsection{Adjustment of localization} The following theorem describes a connection between $UC_H$ and $UC_B$ in nonstationary case. For stationary setup, this theorem is proved in \cite{prqurase03}. \begin{teo} \label{nstPQRS} Let $\Psi^P =\{\varphi^{P}_{0}, \, \psi^{P}_{j,k} \}_{j\in \mathbb{Z}_+,\, k=0,\dots,2^j-1}$ and $\Psi^N = \{\varphi^{N}_{0,k}, \, \psi^{N}_{j,k} \}_{j\in \mathbb{Z}_+,\, k \in \mathbb{Z}}$ be periodic and nonstationary Parseval wavelet frames, and $$ \psi^P_j(x):=\sum_{n\in\mathbb{Z}} \psi^N_j (x-n). $$ If there exist functions $f,\, f_1\in L_2(\mathbb{R})$ such that $|2^{-j/2}\psi^{N}_j(2^{-j} x)|\leq f(x),$ $|(2^{-j/2}\psi^{N}_j(2^{-j} x))'(x)|\leq f_1(x),$ and $f\in AC_{loc}(\mathbb{R})$, $f(x)=O(|x|^{-3/2-\varepsilon}),$ $f_1(x)=O(|x|^{-1-\varepsilon})$ as $x\to \infty,$ $\varepsilon > 0$, then $$ \lim_{j\to \infty} UC_B(\psi^P_j) = \lim_{j\to \infty}UC_H(\psi^N_j). $$ \end{teo} We omit the proof of the theorem since it can be straightforwardly checked that all the steps of the proof of Theorem 3 \cite{prqurase03} holds true for nonstationary case under the assumptions of Theorem \ref{nstPQRS}. It is not convenient to apply Theorem \ref{nstPQRS}. Indeed, our stating point is a periodic wavelet system, however the main condition (existence of majorants $f$, $f_1$) concerns the resulting nonstationary system. The next theorem is free of this drawback and provides sufficient conditions for an adjustment of localization in terms of initial periodic masks. \begin{teo} \label{adjust_loc} Let $\Psi^P =\{\varphi^{P}_{0}, \, \psi^{P}_{j,k} \}_{j\in \mathbb{Z}_+,\, k=0,\dots,2^j-1}$ be a periodic Parseval wavelet frame, $(\mu^j_k)_k=(2^{1/2} \nu^j_k)_k$ be its scaling masks. Let $(\nu^j_k)_k$ satisfy the conditions of Lemma \ref{def_mask}. By definition, put $\theta^j_k:=\arccos \nu^j_k,$ $\overline{\nu}^j_k:=\max\{\nu^j_k,\,\nu^j_{k+1}\}$, $\underline{\nu}^j_k:=\min\{\nu^j_k,\,\nu^j_{k+1}\}$. If \begin{enumerate} \item the series $\sum_{k\in\z}|k b^j_k|$ uniformly converges and uniformly bounded with respect
HOMFLY polynomial of certain cablings of the knots). J.~Murakami \cite{Mu} showed that any invariant or order at most 10 does not distinguish mutants. So order 11 is the smallest where Vassiliev invariants detect mutants. \subsection{Mutation of the Kontsevich integral} \label{ki_mutation} Let us describe the behaviour of the Kontsevich integral with respect to knot mutation. First, recall the definition of a {\em share} from Section~\ref{cd-w-ig}: it is a part of the Wilson loop of a chord diagram, consisting of two arcs, such that each chord has either both or no endpoints on it. A mutation of a chord diagram is an operation of flipping the share with all the chords on it. In the construction of the Kontsevich integral of a knot $K$ the Wilson loop of the chord diagrams is parametrized by the same circle as $K$. For each chord diagram participating in $Z(K)$, the mutation of $K$ with respect to a subtangle $T$ gives rise to a flip of two arcs on the Wilson loop. \begin{xtheorem}[\cite{Le}]\label{th:Le} Let $M_T(K)$ be the mutant of a knot $K$ with respect to a subtangle $T$. Then $Z(K)$ consists only of diagrams for which the part of the Wilson loop that corresponds to $T$ is a share. Moreover, if $M_T(Z(K))$ is obtained from $Z(K)$ by flipping the $T$-share of each diagram, we have $$ Z(M_T(K))=M_T(Z(K)). $$ \end{xtheorem} \begin{proof} The proof is a straightforward application of the combinatorial construction of the Kontsevich integral. Write $K$ as a product $K=A\cdot (T\otimes B) \cdot C$ where $A,B,C$ are some tangles. Then the mutation operation consists in replacing $T$ in this expression by its flip $T'$ about a vertical axis. First, observe that rotating a parenthesized tangle with two points at the top and two points at the bottom by $180^{\circ}$ about a vertical axis results in the same operation on its combinatorial Kontsevich integral. Moreover, since there is only one choice of parentheses for a product of two factors, the non-associative monomials on the boundary of $T$ are the same as those of $T'$ (all are equal to $(xx)$). Choose the non-associative monomials for $B$ to be $u$ at the top and $v$ at the bottom. Then $$Z(K)=Z(A,1,(xx)u)\cdot \Bigl( Z(T,(xx),(xx)) \otimes Z(B,u,v)\Bigr) \cdot Z(C, (xx)v, 1),$$ where we write simply $Z$ for $Z_{comb}$, and $$Z(M_T(K))=Z(A,1,(xx)u)\cdot \Bigl( Z(T',(xx),(xx)) \otimes Z(B,u,v)\Bigr) \cdot Z(C, (xx)v, 1).$$ Both expressions only involve diagrams for which the part of the Wilson loop that corresponds to $T$ is a share; they differ exactly by the mutation of all the $T$-shares of the diagrams. \end{proof} \subsection{Counterexample to the Intersection Graph Conjecture} \label{IGCwrong} \index{Intersection graph!conjecture} It is easy to see that the mutation of chord diagrams does not change the intersection graph. Thus, if the intersection graph conjecture (see \ref{IGC}) were true, the Kontsevich integrals of mutant knots would coincide, and all Vassiliev invariants would take the same value on mutant knots. But this contradicts Theorem \ref{MCthm}. \subsection{} Now we can prove the theorem announced on page~\pageref{IGTM}: \begin{xtheorem}[\cite{CL}] The symbol of a Vassiliev invariant that does not distinguish mutant knots depends on the intersection graph only. \end{xtheorem} \begin{proof} The idea of the proof can be summarized in one sentence: a mutation of a chord diagram is always induced by a mutation of a singular knot. Let $D_1$ and $D_2$ be chord diagrams of degree $n$ with the same intersection graph. 
We must prove that if a Vassiliev knot invariant $v$, of order at most $n$, does not distinguish mutants, then the symbol of $v$ takes the same value on $D_1$ and $D_2$. According to the theorem of Section~\ref{cd-w-ig} (page~\pageref{cd-mut-theorem}), $D_2$ can be obtained from $D_1$ by a sequence of mutations. It is sufficient to consider the case when $D_1$ and $D_2$ differ by a single mutation in a share $S$. Let $K_1$ be a singular knot with $n$ double points whose chord diagram is $D_1$. The share $S$ corresponds to two arcs on $K_1$; the double points on these two arcs correspond to the chords with endpoints on $S$. Now, shrinking and deforming the two arcs, if necessary, we can find a ball in $\R^3$ whose intersection with $K_1$ consists of these two arcs and a finite number of other arcs. These other arcs can be pushed out of the ball, though not necessarily by an isotopy, that is, passing through self-intersections. The result is a new singular knot $K_1'$ with the same chord diagram $D_1$, whose double points corresponding to $S$ are collected in a tangle $T_S$. Performing an appropriate rotation of $T_S$ we obtain a singular knot $K_2$ with the chord diagram $D_2$. Since $v$ does not distinguish mutants, its values on $K_1$ and $K_2$ are equal. The theorem is proved. \end{proof} \medskip To illustrate the proof, consider the chord diagram $D_1$ below. Pick a singular knot $K_1$ representing $D_1$. $$D_1 =\ \risS{-25}{esche-cd-st-ex}{}{55}{30}{30}\hspace{3cm} K_1 =\ \risS{-25}{esche-kn-sin1}{}{80}{15}{8} $$ By deforming $K_1$ we achieve that the two arcs of the share form a tangle (placed on its side in the pictures below), and then push all other segments of the knot out of this subtangle: $$\risS{-25}{esche-kn-sin2}{ \put(-5,-10){\mbox{\scriptsize deforming the knot to form the subtangle}} }{100}{40}{50}\hspace{2.4cm} \risS{-25}{esche-kn-sin3}{ \put(5,-10){\mbox{\scriptsize pushing out other segments}} }{100}{40}{50} $$ Combining the last theorem with \ref{th:Le} we get the following corollary. \begin{xcorollary} Let $w$ be a weight system on chord diagrams with $n$ chords. Consider a Vassiliev invariant $v(K):=w\circ I(K)$. Then $v$ does not distinguish mutants if and only if the weight system $w$ depends only on the intersection graph. \end{xcorollary} \section{Canonical Vassiliev invariants}\label{canvi} Theorem~\ref{Ko_part_fund_th}) on the universality of the Kontsevich integral and its framed version in Section~\ref{frKo_part_fund_th} provide a means to recover a Vassiliev invariant of order $\leqslant n$ from its symbol, up to invariants of smaller order. It is natural to consider those remarkable Vassiliev invariants whose recovery gives precisely the original invariant. \begin{definition}\label{def_cani} (\cite{BNG}) \index{Vassiliev!invariant!canonical} \index{Canonical!invariant} A (framed) Vassiliev invariant $v$ of order $\leqslant n$ is called {\it canonical} if for every (framed) knot $K$, $$v(K) = \symb(v)(I(K))\,.$$ In the case of framed invariants one should write $I^{fr}(K)$ instead of $I(K)$. A power series invariant $f=\sum_{n=0}^\infty f_n h^n$, with $f_n$ of order $\leq n$ is called {\em canonical} \index{Canonical!series} if $$f(K) = \sum_{n=0}^\infty \bigl(w_n(I(K))\bigr) h^n\,$$ for every knot $K$, where $w=\sum_{n=0}^\infty w_n$ is the symbol of $f$. And, again, in the framed case one should use $I^{fr}(K)$ instead of $I(K)$. 
\end{definition} Recall that the power series invariants were defined on page~\pageref{pol-Vas-inv} and their symbols --- in the remark after Proposition~\ref{sym-hom}. Canonical invariants define a grading in the filtered space of Vassiliev invariants which is consistent with the filtration. \begin{xexample} The trivial invariant of order $0$ which is identically equal to 1 on all knots is a canonical invariant. Its weight system is equal to $\bo_0$ in the notation of Section \ref{bialgWS}. \end{xexample} \begin{xexample} The Casson invariant $c_2$ is canonical. This follows from the explicit formula \ref{v2_th} that defines it in terms of the knot diagram. \end{xexample} \noindent{\bf Exercise.} Prove that the invariant $j_3$ (see \ref{jones_vi}) is canonical. \medskip Surprisingly many of the classical knot invariants discussed in Chapters \ref{kn_inv} and \ref{FT_inv} turn out to be canonical. The notion of a canonical invariant allows one to reduce various relations between Vassiliev knot invariants to some combinatorial relations between their symbols, which gives a powerful tool to study knot invariants. This approach will be used in Section~\ref{melmor} to prove the Melvin--Morton conjecture. Now we shall give examples of canonical invariants following \cite{BNG}. \subsection{Quantum invariants}\label{can_qi} Building on the work of Drinfeld \cite{Dr1,Dr2} and Kohno \cite{Koh2}, T.~Le and J.~Murakami \cite[Th 10]{LM3}, and C.~Kassel \cite[Th XX.8.3]{Kas} \index{Theorem!Le--Murakami--Kassel} (see also \cite[Th 6.14]{Oht1}) proved that the quantum knot invariants $\theta^{fr}(K)$ and $\theta(K)$ introduced in Section \ref{qi} become canonical series after substitution $q=e^h$ and expansion into a power series in $h$. The initial data for these invariants is a semi-simple Lie algebra $\g$ and its finite dimensional irreducible representation $V_\lambda$, where $\lambda$ is its highest weight. To emphasize this data, we shall write $\theta^{V_\lambda}_{\g}(K)$ for $\theta(K)$ and $\theta^{fr,V_\lambda}_{\g}(K)$ for $\theta^{fr}(K)$ . \index{Quantum invariant} The quadratic Casimir element $c$ (see Section~\ref{LAWS_A}) acts on $V_\lambda$ as multiplication by a constant, call it $c_\lambda$. The relation between the framed and unframed quantum invariants is $$\theta^{fr,V_\lambda}_{\g}(K) = q^{\frac{c_\lambda\cdot w(K)}{2}} \theta^{V_\lambda}_{\g}(K)\,,$$ where $w(K)$ is the writhe of $K$. Set $q=e^h$. Write $\theta^{fr,V_\lambda}_{\g}$ and $\theta^{V_\lambda}_{\g}$ as power series in $h$: $$\theta^{fr,V_\lambda}_{\g} = \sum_{n=0}^\infty \theta^{fr,\lambda}_{\g,n}h^n \qquad \theta^{V_\lambda}_{\g} = \sum_{n=0}^\infty \theta^{\lambda}_{\g,n}h^n. $$ According to the Birman--Lin theorem (\ref{qift}), the coefficients $\theta^{fr,\lambda}_{\g,n}$ and $\theta^{\lambda}_{\g,n}$ are Vassiliev invariants of order $n$. The Le--Murakami---Kassel Theorem states that they both are canonical series. It is important that the symbol of $\theta^{fr,V_\lambda}_{\g}$ is precisely the weight system $\varphi_\g^{V_\lambda}$ described in Chapter \ref{LAWS}. The symbol of $\theta^{V_\lambda}_{\g}$ equals $\varphi'^{V_\lambda}_\g$. In other words, it is obtained from $\varphi_\g^{V_\lambda}$ by the deframing procedure of Section~\ref{defram_ws}. Hence, knowing the Kontsevich integral allows us to restore the quantum invariants $\theta^{fr,V_\lambda}_{\g}$ and $\theta^{V_\lambda}_{\g}$ from these weight systems without the
With the help of nonlinear optics processes, the central wavelength of the probe pulses can be tuned into the mid-infrared regime, which will allow the direct observation of LWFAs in the low-density regime. Speaker: Yu Zhao (Helmholtz Institute Jena) • 7:15 PM Phase manipulation through plasma density modulation 1h A Laser-Wakefield Accelerator can produce electrons in the MeV range just over a few millimetres. However, due to their finite energy spread and divergence the applications of these electrons become limited. By tailoring the plasma density, the phase can be manipulated and hence gaining control of the bunch energy spread and divergence. Here, the properties of 100 MeV shock-assisted ionisation injected electrons after propagation through two supersonic gas jets is presented. Speaker: Cornelia Gustafsson (Lund University) • 7:15 PM Precise intensity tagging for ultrashort high-power lasers 1h The LUXE (Laser Und XFEL Experiment) project at DESY Hamburg aims to measure processes in the strong-field quantum electrodynamics regime with high precision by colliding electrons or a high-energy photon beam with a high-power, tightly focused laser beam at a repetition rate of 1Hz. Simulations [LUXE CDR, arXiv:2102.02032 [hep-ex]] predict that the probability of pair production responds highly non-linearly to the laser strength parameter. Consequently, small variations in the laser intensity lead to significant variations in the experimental observables. The required precision will be achieved by intensity tagging through precise measurements on the relative variation of intensity on a shot-by-shot basis, with an ultimate aim to monitor the shot-to-shot fluctuations with a precision below 1%. We present the results of a non-linear intensity tagging method, which provides a measure of the laser intensity by comparing the fundamental to a non-linear copy of the laser focal spot from a thin non-linear crystal. This method provides a reference to crosscheck the intensity fluctuations derived from independent measurements of energy, duration and fluence. ACKNOWLEDGEMENT - This poster presentation has received support from the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement No 101004730. Speaker: Ms Xinhe Huang (DESY, Helmholtz-Institute Jena) • 7:15 PM Resonant wakefield excitation observed in long plasma channels 1h The multi-pulse laser wakefield acceleration (MP-LWFA) scheme provides a route for GeV-scale accelerators operating at kilohertz-repetition-rates driven by picosecond-duration laser pulses, such as those available from thin-disk lasers, which are modulated to be resonant with the plasma wave. We recently published theoretical work proposing a new scheme of GeV accelerator based on MP-LWFA. In this scheme, trains of pulses are generated from a long, high-energy drive pulse via the spectral modulation caused by a low amplitude wakefield driven by a leading short, low-energy seed pulse. Our simulations show that temporal compression of the modulated drive pulse yields a pulse train that can resonantly drive a wakefield, allowing for acceleration of a test electron bunch to 0.65 GeV in a 100 mm-long plasma channel. In this study, we present the preliminary results of recent experiments with the Astra-Gemini TA3 laser at the Central Laser Facility which are relevant to the accelerator stage of this novel scheme. We demonstrate, for the first time, guiding of 2.5 J pulse trains in a 100 mm long all-optical plasma channel. 
Measurements of the spectrum of the transmitted laser pulse train suggest that a wakefield was resonantly excited in the plasma channel. Speaker: Aimee Ross • 7:15 PM Simultaneous space-time focusing with radially-chirped laser pulses for ionization injection in LWFAs 1h Simultaneous space-time focusing occurs when a transversely-chirped ultrashort laser pulse is focused using a conventional lens. Before the lens different frequencies are separated radially so that at any point on the transverse plane the local bandwidth is relatively low. These frequencies are brought together downstream of the lens as they approach the focus. As the spatial overlap between different frequency components increases, so does the local bandwidth, thereby reducing the pulse duration to its minimum at the focus. This reduces the space-time volume of the region in which the intensity is high. This may have potential advantages towards reducing the phase-space volume of electrons injected in a wakefield accelerator by optical ionisation injection. The focusing of a radially-chirped laser pulse is studied both analytically and numerically. Decomposing any arbitrarily chirped input laser pulse as a superposition of Laguerre-Gaussian modes allows for an exact expression of the electric field at any longitudinal position of the focusing beam. A numerical investigation is also performed using Collins' method: a diffraction integral based on ray-matrices. It is investigated whether the radial chirp gives enhanced intensity roll-off in space and time over conventional focusing. Speaker: Emily Archer • 8:30 PM 9:30 PM Dinner 1h Fuoco di Bosco - Hotel Hermitage ### Fuoco di Bosco - Hotel Hermitage • Tuesday, September 20 • 9:00 AM 10:30 AM Special Topic: Beam-driven Plasma Accelerators with focus on proton-driven (AWAKE, ...) Sala Maria Luisa - Hotel Hermitage ### Sala Maria Luisa - Hotel Hermitage Conveners: Edda Gschwendtner (CERN) , Patric Muggli (Max-Planck-Institut für Physik) • 9:00 AM Particle Physics Applications for Proton-Driven PWA 25m Proton-driven PWA is particularly well-suited for producing bunches of high-energy electrons. A number of particle physics applications for these types of beams will be presented. These include beam-dump, fixed target, and collider applications covering a wide range of fundamental physics research. Speaker: Allen Caldwell (Max Planck Institute for Physics) • 9:25 AM Operational Aspects of Beam-Driven Facilities 20m Beam-driven plasma wakefield acceleration experiments require state-of-the-art facilities in order to operate. In this talk, we review challenges associated with operating these facilities, including: beam quality and stability, machine-experiment interface, controls challenges, diagnostic challenges, and communication and scheduling issues. We hope that by clarifying these issues, we can develop better tools, methods, and processes that enable efficient experimental operation. Speaker: Spencer Gessner (SLAC) • 9:45 AM Effects of plasma ramp measured in AWAKE 15m We study the propagation of an electron bunch travelling within a proton bunch through a plasma density ramp. Because the proton bunch density in the ramp is higher than the plasma density, the bunch generates a high density, on-axis plasma electron filament. This filament is defocusing for the electron witness bunch that can therefore be lost along the ramp. At AWAKE we have measured this effect by changing the relative timing of the electron bunch with respect to the proton bunch. 
When the electron bunch propagates in front of the proton bunch (i.e. seeding the self-modulation), the electrons travel until the electron spectrometer, downstream of the plasma column. A position of the electrons within the proton bunch, where seeding stops, exists. Beyond this position electrons are lost and do not reach the spectrometer. We will present latest experimental data obtained during Run 2. Speaker: Jan Pucek (Max-Planck Institute for Physics) • 10:00 AM Numerical simulation study of the propagation of a short electron bunch and a long proton bunch in a plasma ramp 15m A particle bunch propagating through plasma will induce a non-linear response when $n_b \gg n_{e0}$ [1]. A negatively charged bunch will drive a blow-out, in which all plasma electrons are expelled from the propagation axis. A positively charged bunch will attract plasma electrons, which will flow-in to the propagation axis, creating a filament [2]. This will sustain focusing fields for the positively charged bunch and defocusing for negatively charged particles. In the Advanced Wakefield Experiment (AWAKE) [3], in which a long proton bunch drives high-amplitude wakefields for electron acceleration through self-modulation [4], there is a plasma density ramp at the entrance and exit of the plasma [5]. The density in the plasma ramp can be up to five orders of magnitude lower than that of the long plasma. We present a numerical study performed with the particle-in-cell code LCODE [6] using parameters similar to those of the experiments. We show that the plasma ramp would have a detrimental effect on the propagation of both a seed electron bunch placed inside of the proton bunch and for an electron bunch injected in a second plasma for acceleration [7], if that plasma
\alpha$ and $B_{k\ell} = \beta$ for all $k, \ell \in [K]$ such that $k \neq \ell$. We define a variant of SBM with respect to a representation graph $\mathcal{R}$ below and refer to it as Representation-Aware SBM or $\mathcal{R}$-SBM. \begin{definition}[$\mathcal{R}$-SBM] \label{def:conditioned_sbm} A $\mathcal{R}$-SBM is defined by the tuple $(\pi, \mathcal{R}, p, q, r, s)$, where $\pi: \mathcal{V} \rightarrow [K]$ maps nodes in $\mathcal{V}$ to clusters, $\mathcal{R}$ is a representation graph, and $1 \geq p \geq q \geq r \geq s \geq 0$ are probabilities used for sampling edges. Under this model, for all $i > j$, \begin{equation} \label{eq:sbm_specification} \mathrm{P}(A_{ij} = 1) = \begin{cases} p & \text{if } \pi(v_i) = \pi(v_j) \text{ and } R_{ij} = 1, \\ q & \text{if } \pi(v_i) \neq \pi(v_j) \text{ and } R_{ij} = 1, \\ r & \text{if } \pi(v_i) = \pi(v_j) \text{ and } R_{ij} = 0, \\ s & \text{if } \pi(v_i) \neq \pi(v_j) \text{ and } R_{ij} = 0. \end{cases} \end{equation} \end{definition} The similarity graphs sampled from a $\mathcal{R}$-SBM have two interesting properties. First, everything else being equal, nodes have a higher tendency to connect with other nodes in the same cluster as $p \geq q$ and $r \geq s$. Thus, $\mathcal{R}$-SBM plants the clusters specified by $\pi$ in the sampled graph $\mathcal{G}$. Second, and more importantly, $\mathcal{R}$-SBM also plants the properties of the given representation graph $\mathcal{R}$ in the sampled graphs $\mathcal{G}$. To see this, note that nodes that are connected in $\mathcal{R}$ have a higher probability of being connected in $\mathcal{G}$ as well ($p \geq r$ and $q \geq s$). Recall that our algorithms must discover clusters in $\mathcal{G}$ in which the connected nodes in $\mathcal{R}$ are proportionally distributed. However, $\mathcal{R}$-SBM makes two nodes connected in $\mathcal{R}$ more likely to connect in $\mathcal{G}$, even if they do not belong to the same cluster ($q \geq r$). In this sense, graphs sampled from $\mathcal{R}$-SBM are ``hard'' instances from the perspective of our algorithms. When $\mathcal{R}$ itself has a community structure, there are two natural ways to cluster the nodes: \textbf{(i)} based on the ground-truth clusters $\mathcal{C}_1$, $\mathcal{C}_2$, \dots, $\mathcal{C}_K$ specified by $\pi$; and \textbf{(ii)} based on the communities in $\mathcal{R}$. The clusters based on communities in $\mathcal{R}$ are likely to not satisfy the representation constraint in Definition \ref{def:representation_constraint} as tightly connected nodes in $\mathcal{R}$ will be assigned to the same cluster in this case rather than being distributed across clusters. We show in Section \ref{section:consistency_results} that, under certain assumptions on $\mathcal{R}$, the ground-truth clusters can be constructed so that they satisfy the representation constraint \eqref{eq:representation_constraint}. Assuming that the ground-truth clusters indeed satisfy \eqref{eq:representation_constraint}, the goal is to show that Algorithms \ref{alg:urepsc} and \ref{alg:nrepsc} recover the ground-truth clusters with high probability rather than returning any other natural but ``representation-unaware'' clusters. \subsection{Consistency results} \label{section:consistency_results} As noted in Section \ref{section:constraint}, some representation graphs lead to constraints that cannot be satisfied. For our theoretical analysis, we restrict our focus to cases where the constraint in \eqref{eq:representation_constraint} is feasible. 
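As a concrete illustration of Definition \ref{def:conditioned_sbm}, the sketch below samples an adjacency matrix from $\mathcal{R}$-SBM; it is illustrative code with our own variable names, not the implementation used in the experiments.

```python
import numpy as np

# Minimal sketch of sampling a similarity graph from R-SBM (Definition above).
# pi: array of cluster labels; R: adjacency matrix of the representation graph;
# 1 >= p >= q >= r >= s >= 0 as in (eq:sbm_specification).
def sample_rsbm(pi, R, p, q, r, s, seed=0):
    rng = np.random.default_rng(seed)
    same = pi[:, None] == pi[None, :]
    prob = np.where(R == 1, np.where(same, p, q), np.where(same, r, s))
    upper = np.triu((rng.random(prob.shape) < prob).astype(int), 1)  # sample each pair i > j once
    return upper + upper.T                                           # symmetric, no self-loops
```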
Towards this end, an additional assumption on $\mathcal{R}$ is required. \begin{assumption} \label{assumption:R_is_d_regular} $\mathcal{R}$ is a $d$-regular graph for some $K \leq d \leq N$. Moreover, $R_{ii} = 1$ for all $i \in [N]$, and each node in $\mathcal{R}$ is connected to $d / K$ nodes from cluster $\mathcal{C}_i$, for all $i \in [K]$ (including the self-loop). \end{assumption} Assumption~\ref{assumption:R_is_d_regular} ensures the existence of a $\pi$ for which the corresponding ground-truth clusters satisfy the representation constraint in \eqref{eq:representation_constraint}. Namely, assuming equal-sized clusters, let $\pi(v_i) = k$, if $(k - 1) \frac{N}{K} \leq i \leq k \frac{N}{K}$ for all $i \in [N]$, and $k \in [K]$. It can be easily verified that the resulting clusters $\mathcal{C}_k = \{v_i : \pi(v_i) = k \}$, $k \in [K]$ satisfy \eqref{eq:representation_constraint}. Before presenting our main results, we need to set up additional notation. Let $\bm{\Theta} \in \{0, 1\}^{N \times K}$ indicate the ground-truth cluster memberships, i.e., $\Theta_{ij} = 1 \Leftrightarrow v_i \in \mathcal{C}_j$. Similarly, $\hat{\bm{\Theta}} \in \{0, 1\}^{N \times K}$ indicates the clusters returned by the algorithm, i.e., $\hat{\Theta}_{ij} = 1 \Leftrightarrow v_i \in \hat{\mathcal{C}}_j$. Further, let $\mathcal{J}$ be the set of all $K \times K$ permutation matrices. The fraction of misclustered nodes \citep{LeiEtAl:2015:ConsistencyOfSpectralClusteringInSBM} is defined as \begin{equation*} M(\bm{\Theta}, \hat{\bm{\Theta}}) = \min_{\mathbf{J} \in \mathcal{J}} \frac{1}{N} \norm{\bm{\Theta} - \hat{\bm{\Theta}} \mathbf{J}}[0]. \end{equation*} As the ground truth clusters $\mathcal{C}_1, \dots, \mathcal{C}_K$ satisfy \eqref{eq:representation_constraint} by construction, a low $M(\bm{\Theta}, \hat{\bm{\Theta}})$ indicates that the clusters returned by the algorithm approximately satisfy \eqref{eq:representation_constraint}. Theorems \ref{theorem:consistency_result_unnormalized} and \ref{theorem:consistency_result_normalized} also use the eigenvalues of the Laplacian matrix in the expected case. We use $\mathcal{L}$ to denote this matrix, and define it as $\mathcal{L} = \mathcal{D} - \mathcal{A}$, where $\mathcal{A} = \mathrm{E}[\mathbf{A}]$ is the expected adjacency matrix of a graph sampled from $\mathcal{R}$-SBM and $\mathcal{D} \in \mathbb{R}^{N \times N}$ is a diagonal matrix such that $\mathcal{D}_{ii} = \sum_{j = 1}^N \mathcal{A}_{ij}$, for all $i \in [N]$. The next two results establish high-probability upper bounds on the fraction of misclustered nodes for \textsc{URepSC} and \textsc{NRepSC} for similarity graphs $\mathcal{G}$ sampled from $\mathcal{R}$-SBM. \begin{theorem}[Error bound for \textsc{URepSC}] \label{theorem:consistency_result_unnormalized} Let $\rank{\mathbf{R}} \leq N - K$ and assume that all clusters have equal sizes. Let $\mu_1 \leq \mu_2 \leq \dots \leq \mu_{N - r}$ denote the eigenvalues of $\mathbf{Y}^\intercal \mathcal{L} \mathbf{Y}$, where $\mathbf{Y}$ was defined in Section \ref{section:unnormalized_repsc}. Define $\gamma = \mu_{K + 1} - \mu_{K}$. 
Under Assumption \ref{assumption:R_is_d_regular}, there exists a universal constant $\mathrm{const}(C, \alpha)$, such that if $\gamma$ satisfies $$\gamma^2 \geq \mathrm{const}(C, \alpha) (2 + \epsilon) p N K \ln N,$$ and $p \geq C \ln N / N$ for some $C > 0$, then, $$M(\bm{\Theta}, \hat{\bm{\Theta}}) \leq \mathrm{const}(C, \alpha) \frac{(2 + \epsilon)}{\gamma^2} p N \ln N,$$ for every $\epsilon > 0$ with probability at least $1 - 2 N^{-\alpha}$ when a $(1 + \epsilon)$-approximate algorithm for $k$-means clustering is used in Step 5 of Algorithm \ref{alg:urepsc}. \end{theorem} \begin{theorem}[Error bound for \textsc{NRepSC}] \label{theorem:consistency_result_normalized} Let $\rank{\mathbf{R}} \leq N - K$ and assume that all clusters have equal sizes. Let $\mu_1 \leq \mu_2 \leq \dots \leq \mu_{N - r}$ denote the eigenvalues of $\mathcal{Q}^{-1} \mathbf{Y}^\intercal \mathcal{L} \mathbf{Y} \mathcal{Q}^{-1}$, where $\mathcal{Q} = \sqrt{\mathbf{Y}^\intercal \mathcal{D} \mathbf{Y}}$ and $\mathbf{Y}$ was defined in Section \ref{section:unnormalized_repsc}. Define $\gamma = \mu_{K + 1} - \mu_{K}$ and $\lambda_1 = qd + s(N - d) + (p - q) \frac{d}{K} + (r - s) \frac{N - d}{K}$. Under Assumption \ref{assumption:R_is_d_regular}, there are universal constants $\mathrm{const}_1(C, \alpha)$, $\mathrm{const}_4(C, \alpha)$, and $\mathrm{const}_5(C, \alpha)$ such that if: \begin{enumerate} \item $\left(\frac{\sqrt{p N \ln N}}{\lambda_1 - p}\right) \left(\frac{\sqrt{p N \ln N}}{\lambda_1 - p} + \frac{1}{6\sqrt{C}}\right) \leq \frac{1}{16(\alpha + 1)}$, \item $\frac{\sqrt{p N \ln N}}{\lambda_1 - p} \leq \mathrm{const}_4(C, \alpha)$, and \item $16(2 + \epsilon)\left[ \frac{8 \mathrm{const}_5(C, \alpha) \sqrt{K}}{\gamma} + \mathrm{const}_1(C, \alpha)\right]^2 \frac{p N^2 \ln N}{(\lambda_1 - p)^2} < \frac{N}{K}$, \end{enumerate} and $p \geq C \ln N / N$ for some $C > 0$, then, $$M(\bm{\Theta}, \hat{\bm{\Theta}}) \leq 32(2 + \epsilon)\left[ \frac{8 \mathrm{const}_5(C, \alpha) \sqrt{K}}{\gamma} + \mathrm{const}_1(C, \alpha)\right]^2 \frac{p N \ln N}{(\lambda_1 - p)^2},$$ for every $\epsilon > 0$ with probability at least $1 - 2 N^{-\alpha}$ when a $(1 + \epsilon)$-approximate algorithm for $k$-means clustering is used in Step 6 of Algorithm \ref{alg:nrepsc}. \end{theorem} Next, we discuss our assumptions and use the error bounds above to establish the weak consistency of our algorithms. \subsection{Discussion} \label{section:discussion} Note that $\mathbf{I} - \bm{1} \bm{1}^\intercal / N$ is a projection matrix and $\bm{1}$ is its eigenvector with eigenvalue $0$. Any vector orthogonal to $\bm{1}$ is also an eigenvector with eigenvalue $1$. Thus, $\rank{\mathbf{I} - \bm{1} \bm{1}^\intercal / N} = N - 1$. Because $\rank{\mathbf{R} (\mathbf{I} - \bm{1} \bm{1}^\intercal / N)} \leq \min(\rank{\mathbf{R}}, \rank{\mathbf{I} - \bm{1} \bm{1}^\intercal / N})$, requiring $\rank{\mathbf{R}} \leq N - K$ ensures that $\rank{\mathbf{R}(\mathbf{I} - \bm{1} \bm{1}^\intercal / N)} \leq N - K$, which is necessary for \eqref{eq:optimization_problem} and \eqref{eq:optimization_problem_normalized} to have a solution. The assumption on the size of the clusters, together with the $d$-regularity assumption on $\mathcal{R}$, allows us
Pioneer anomaly. \subsection{Independent verifications} \label{sec:indep-verify} By now several studies of the Pioneer~10 and 11 radiometric Doppler data have demonstrated that the anomaly is unambiguously present in the trajectory solutions for both spacecraft. These studies were performed with six independent (and different!) navigational computer programs (see~\cite{pioprl,pioprd,levy-2008, 2002gr.qc.....8046M, 2007AA...463..393O, Toth2008}), namely: \begin{itemize} \item The most detailed analysis of the Pioneer anomaly to date, the 2002 study by JPL~\cite{pioprd} (which is discussed in depth in Section~\ref{sec:anomaly}), used various versions of JPL's Orbit Determination Program (ODP), developed between 1980 and 2005~\cite{pioprl,pioprd}, \item The Aerospace Corporation's Compact High Accuracy Satellite Motion Program (CHASMP) code, extended for deep space navigation~\cite{pioprl,pioprd}, \item Code written at the Goddard Space Flight Center (GSFC) by C.B.~Markwardt~\cite{2002gr.qc.....8046M} that was used to analyze Pioneer~10 data for the period 1987\,--\,1994 obtained from the National Space Science Data Center (NSSDC)\epubtkFootnote{The National Space Science Data Center (NSSDC), see details at \url{http://nssdc.gsfc.nasa.gov/}}, \item The HELIOSAT orbit determination program that was co-developed at the Institute for Theoretical Astrophysics, University of Oslo, Norway, and was recently used by one of the code's authors, \O.~Olsen~\cite{2007AA...463..393O}, to analyze a Pioneer~10 data set identical to the one used in JPL's 2002 study, \item An orbit determination code that was independently developed by V.T.~Toth~\cite{Toth2008} for the purposes of studying the Pioneer anomaly, and finally \item A dedicated software package called ODYSSEY that has been developed by Groupe Anomalie Pioneer (GAP)\epubtkFootnote{Groupe Anomalie Pioneer (GAP), a French collaboration on Pioneer Anomaly supported by The Centre National d'Etudes Spatiales (CNES), France, which includes researchers from LKB, ONERA, OCA, IOTA and SYRTE laboratories, see details at \url{http://www.lkb.ens.fr/-GAP-?lang=fr}} for the purposes of investigating the Pioneer anomaly~\cite{levy-2008}. \end{itemize} These recent independent analyses of the Pioneer~10 and 11 radiometric Doppler data confirmed the existence of the Pioneer anomaly at the level reported by JPL's 2002 study, and they also provided new knowledge about the effect. Below we review these analyses in some detail. \subsubsection{Independent verification by Markwardt} \label{sec:markwardt} Shortly after publication of the 2002 JPL result, Markwardt~\cite{2002gr.qc.....8046M} published an independent analysis that was unique in the sense that it utilized a separately obtained data set. Rather than using data in the form of JPL-supplied Orbit Determination Files, Markwardt obtained Pioneer~10 tracking data from the National Space Science Data Center (NSSDC) archive. This data was in the Archival Tracking Data File (ATDF) format, which Markwardt processed using tools developed for the purposes of this specific study~\cite{MARKWARDTIDL, MARKWARDTATDF}. The Pioneer~10 data used in Markwardt's investigation spanned the years 1987 through 1994, and his result, $a_{P10}=(7.70\pm 0.02)\times 10^{-10}\mathrm{\ m/s}^2$ (see Figure~\ref{fig:markwardt}), is consistent with the JPL result.
Markwardt was also the first to investigate explicitly the possible presence of a jerk (i.e., the rate of change of acceleration\epubtkFootnote{See definition of the jerk term at \url{http://en.wikipedia.org/wiki/Jerk_(physics)}.}, defined as $j\equiv\dot a=da/dt$) term, and found that a term $|j_{P10}|<0.18\times 10^{-10}\mathrm{\ m/s}^2/\mathrm{year}$ is consistent with the data. Based on the studied Pioneer~10 data set, Markwardt found that the anomaly is nearly constant with time, with a characteristic variation time scale of over 70~yr, which is still too short to rule out on-board thermal radiation effects. \epubtkImage{}{% \begin{figure}[t!] \centerline{\includegraphics[width=0.70\textwidth]{craig-anomaly-plot}} \caption{Results of Markwardt's analysis~\cite{2002gr.qc.....8046M}, showing Doppler residuals as a function of time for the best-fit model. The top panel shows the residuals after setting $a_P = 0$ and demonstrates their linear increase with time; this panel includes all of the data, even segments that were filtered out because of interference due to the solar corona (designated by a horizontal bar with ``C'') or due to general noise (designated ``N''). The bottom panel shows the filtered residuals, including the best-fit value of the anomalous acceleration. The equivalent spacecraft velocity is also shown.} \label{fig:markwardt} \end{figure}} \subsubsection{Analysis by Olsen using HELIOSAT} Olsen~\cite{2007AA...463..393O} focused on the constancy of the anomalous acceleration using the HELIOSAT orbit determination program that was independently developed by him at the University of Oslo, Norway. His analysis confirmed the acceleration at the levels reported in~\cite{pioprd} for the same segments of Pioneer~10 and 11 data that were used by JPL (see Table~\ref{tb:olsen}). The study found that systematic variations in the anomalous acceleration are consistent with solar coronal mass ejections and that the Doppler data alone cannot distinguish between a constant acceleration and a slowly decreasing one. Specifically, the study concluded that heat dissipation cannot be excluded as a source of the anomaly. \begin{table}[h!] \caption[The Pioneer anomalous acceleration in units of 10\super{-10}~m/s\super{2}. This table compares the results from JPL's ODP and the Aerospace Corporation's CHASMP codes to results obtained using the HELIOSAT program developed by Olsen.]{The Pioneer anomalous acceleration in units of 10\super{-10}~m/s\super{2}. This table compares the results from JPL's ODP and the Aerospace Corporation's CHASMP codes from~\cite{pioprd} (see Table~\ref{tb:2002results}) to results obtained using the HELIOSAT program developed by Olsen~\cite{2007AA...463..393O}.} \label{tb:olsen} \centering \begin{tabular}{ccccc} \toprule Software & Pioneer~10 (I) & Pioneer~10 (II) & Pioneer~10 (III) & Pioneer~11\\ \midrule ODP/Sigma & 8.00~\textpm~0.01 & 8.66~\textpm~0.01 & 7.84~\textpm~0.01 & 8.44~\textpm~0.04\\ CHASMP & 8.22~\textpm~0.02 & 8.89~\textpm~0.01 & 7.92~\textpm~0.01 & 8.69~\textpm~0.03\\ HELIOSAT & 7.85~\textpm~0.02 & 8.78~\textpm~0.01 & 7.75~\textpm~0.01 & 8.10~\textpm~0.01\\ \bottomrule \end{tabular} \end{table} \subsubsection{Independent analysis by Toth} \label{sec:toth} \epubtkImage{}{% \begin{figure}[t!]
\begin{raggedright} \vskip -6pt \begin{minipage}{.45\linewidth}a)\\\vskip -12pt\includegraphics[width=\linewidth]{p10_1}\end{minipage} \hskip 0.05\linewidth \begin{minipage}{.45\linewidth}c)\\\vskip -12pt\includegraphics[width=\linewidth]{p11_1}\end{minipage} \vskip -6pt \begin{minipage}{.45\linewidth}b)\\\vskip -12pt\includegraphics[width=\linewidth]{p10_3}\end{minipage} \hskip 0.05\linewidth \begin{minipage}{.45\linewidth}d)\\\vskip -12pt\includegraphics[width=\linewidth]{p11_3}\end{minipage} \caption{Results of Toth's analysis~\cite{Toth2008}: a) best-fit residuals for Pioneer~10; b) best-fit residuals for Pioneer~10 with no anomalous acceleration term; c--d) same as a--b, for Pioneer~11.} \label{fig:Toth} \end{raggedright} \end{figure}} Toth~\cite{Toth2008} also studied the anomalous acceleration using independently developed orbit determination software, and confirmed that the introduction of a constant acceleration term significantly improves the post-fit residuals (Figure~\ref{fig:Toth}). Toth determined the anomalous accelerations of Pioneers 10 and 11 as $a_{P10}=(9.03\pm 0.86)\times 10^{-10}\mathrm{\ m/s}^2$ and $a_{P11}=(8.21\pm 1.07) \times 10^{-10}\mathrm{\ m/s}^2$, respectively, where the error terms were taken from~\cite{pioprd} (excluding terms related to thermal modeling, which is the subject of an ongoing effort). Studying the temporal behavior of the anomalous acceleration, he was able to find a best fit for the acceleration and jerk terms of both spacecraft: $a_\mathrm{P10}=(10.96\pm 0.89)\times 10^{-10}\mathrm{\ m/s}^2$ and $j_\mathrm{P10}=(-0.21\pm 0.04)\times 10^{-10}\mathrm{\ m/s}^2/\mathrm{year}$ (Pioneer~10) and $a_\mathrm{P11}=(9.40\pm 1.12)\times 10^{-10}\mathrm{\ m/s}^2$ and $j_\mathrm{P11}=(-0.34\pm 0.12)\times 10^{-10}\mathrm{\ m/s}^2/\mathrm{year}$ (Pioneer~11). Toth's study demonstrated that a moderate jerk term is consistent with the Doppler data and, therefore, an anomalous acceleration that is a slowly changing function of time cannot be excluded at present. Toth's orbit determination software also has the capability to utilize telemetry data. In particular, the code can be used to estimate the thermal recoil force as a function of the heat generated on-board, or conversely, to fit thermal recoil force coefficients to radiometric Doppler measurements, as discussed in Section~\ref{sec:thermal_force}. \subsubsection{Analysis by Levy et al. using ODYSSEY} \epubtkImage{}{% \begin{figure}[t!] \hskip -6pt \begin{minipage}[b]{.5\linewidth} \centering \includegraphics[width=\linewidth]{levy-figure1} \end{minipage} \hskip 0.001\linewidth \begin{minipage}[b]{.5\linewidth} \centering \includegraphics[width=\linewidth]{levy-figure2} \end{minipage} \caption{Best-fit Pioneer~10 residuals using the ODYSSEY orbit determination program~\cite{levy-2008}. Left: residuals after a best-fit constant acceleration of $a_P=(8.40\pm 0.01)\times 10^{-10}\mathrm{\ m/s}^2$. Right: reconstruction of the anomalous acceleration contribution.} \label{fig:levy} \vskip -5pt \end{figure}} Levy et al.~\cite{levy-2008} also performed an analysis of the Pioneer data using the independently developed orbit determination program ODYSSEY. The team confirmed the presence of an acceleration signal consistent with that found in other studies: for Pioneer~10, they obtained an anomalous acceleration of $a_P=(8.40\pm 0.01)\times 10^{-10}\mathrm{\ m/s}^2$ (see Figure~\ref{fig:levy}).
Their study showed the presence of periodic terms in the residuals, with periods consistent with half a sidereal day, one sidereal day, and half a year, and they investigated the possibility that these variations may be due to perturbations of unknown origin that modify the propagation of the signal. In view of all these studies, the existence of the Pioneer anomaly in the Pioneer~10 and 11 radiometric Doppler data is established beyond doubt. Furthermore, the analyses~\cite{levy-2008,2002gr.qc.....8046M,2007AA...463..393O, Toth2008} brought new knowledge about the effect, especially insofar as the temporal behavior of the anomaly is concerned. As a result, the anomalous acceleration can no longer be characterized as having a constant magnitude. Instead, the effect clearly shows a temporal decrease -- perhaps consistent with the decay of the radioactive fuel on board -- a conjecture that needs further investigation. This recently gained knowledge serves as a guide for new studies of the effect (discussed in Section~\ref{sec:strategy}); it also points to the unresolved questions that we summarize below. \subsection{Unresolved questions} \label{sec:unresolved-questions} Although JPL's 2002 study~\cite{pioprd} offered a very comprehensive review of the Pioneer anomaly, in some cases it left questions unanswered. In other cases, the report's conclusions were put into question, either by independent verifications or by the analysis of subsequently recovered data and spacecraft documentation. Specifically, the following open questions are important for understanding the physical nature of the Pioneer anomaly and are a subject of ongoing investigation: \begin{itemize} \item \textit{Direction}: The true direction of the anomalous acceleration is still unclear. Within the typical angular
=&\; \lV \bmt a\bar{g}(s_0)h_{21}^TH_{22}^{-1}h_{21} &-ah_{21}^TH_{22}^{-1}\\ -aH_{22}^{-1}h_{21}& H_{22}^{-1}+aH_{22}^{-1}h_{21}h_{21}^TH_{22}^{-1} \emt\rV\nonumber\\ \leq &\; |a\bar{g}(s_0)h_{21}^TH_{22}^{-1}h_{21}|+2\|aH_{22}^{-1}h_{21}\|\nonumber\\ &\; +\|H_{22}^{-1}+aH_{22}^{-1}h_{21}h_{21}^TH_{22}^{-1}\|\nonumber\\ \leq &\; |a|\|H_{22}^{-1}\|(|\bar{g}(s_0)|\|h_{21}\|^2+2\|h_{21}\|+\|h_{21}\|^2\|H_{22}^{-1}\|)\nonumber\\ &\;\quad\quad +\|H_{22}^{-1}\|\,,\label{eq_Hinv_norm_bd1} \end{align} Using \eqref{eq_h12_norm_bd}, \eqref{eq_H22_norm_bd}, and \eqref{eq_a_norm_bd}, we can further upper bound \eqref{eq_Hinv_norm_bd1} as \begin{align} &\;\|H^{-1}-\bar{g}(s_0)e_1e_1^T\|\nonumber\\ \leq &\; \frac{M_1^2M_2^2+2M_1M_2+\frac{M_1M_2^2}{|f(s_0)|\lambda_2(L)-M_2}}{|f(s_0)|\lambda_2(L)-M_2-M_1M_2^2}\nonumber\\ &\;\qquad+\frac{1}{|f(s_0)|\lambda_2(L)-M_2}\nonumber\\ =&\;\frac{\lp M_1M_2+1\rp^2}{|f(s_0)|\lambda_2(L)-M_2-M_1M_2^2}\,.\label{eq_Hinv_norm_bd2} \end{align} This bound holds as long as $|f(s_0)|\lambda_2(L)>M_2+M_2^2M_1$. Combining \eqref{eq_T_H_norm_equiv} and \eqref{eq_Hinv_norm_bd2} gives the desired inequality. \end{proof} Lemma \ref{lem_reg_norm_bd} provides an upper bound for the incoherence measure we are interested in, namely how far the system transfer matrix is, at a particular point in the frequency domain, from being rank-one with the coherent direction $\frac{1}{n}\one\one^T$. This result can be understood as follows. First of all, a large value of $|f(s_0)|\lambda_2(L)$ is sufficient to make the incoherence measure small; we term this quantity the \emph{effective algebraic connectivity}. There are two possible ways to achieve such ``point-wise coherence'': either we increase the network algebraic connectivity $\lambda_2(L)$, e.g., by adding edges to the network or increasing edge weights, or we move our point of interest $s_0$ towards a pole of $f(s)$. Secondly, the upper bound is frequency-dependent since it is provided at a single point $s_0$ in the $s$-domain. To see this dependence, notice that $s_0$ near a pole of $f(s)$ has a large effective algebraic connectivity, hence the system is more coherent around poles of $f(s)$. On the contrary, $s_0$ near a pole of $\bar{g}(s)$ requires a large $M_1$ for the condition of Lemma \ref{lem_reg_norm_bd} to hold, and readers can check that $s_0$ near a zero of $\bar{g}(s)$ requires a large $M_2$; therefore, at such points, it is more difficult to upper bound the incoherence measure by Lemma \ref{lem_reg_norm_bd}. Such dependence makes it challenging to understand the network coherence uniformly over the entire frequency domain. Last but not least, although Lemma \ref{lem_reg_norm_bd} provides a sufficient condition for network coherence to emerge, namely an increasing effective algebraic connectivity, it is still unknown whether such a condition is necessary. In other words, we do not know whether a low effective algebraic connectivity implies some kind of incoherence. The question is trivial in the extreme case: if $|f(s_0)|=0$ or $L=0$, the feedback loop vanishes and every node responds independently, but it is certainly not trivial otherwise. When the condition in Lemma \ref{lem_reg_norm_bd} is satisfied, the system is asymptotically coherent, i.e., $T(s_0)$ converges to $\frac{1}{n}\bar{g}(s_0)\one\one^T$ as the effective algebraic connectivity $|f(s_0)|\lambda_2(L)$ increases. As mentioned above, we can achieve this by increasing either $\lambda_2(L)$ or $|f(s_0)|$, provided that the other quantity is fixed and non-zero.
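The point-wise convergence just described is easy to observe numerically. The following minimal sketch (ours, for illustration only) evaluates $T(s_0)=(I_n+G(s_0)f(s_0)L)^{-1}G(s_0)$ as in \eqref{eq_T_explict} and takes $\bar{g}(s_0)=\big(\frac{1}{n}\sum_{i}g_i^{-1}(s_0)\big)^{-1}$, consistent with the $(1,1)$ block $\frac{\one^T}{\sqrt{n}}\dg\{g_i^{-1}(s_0)\}\frac{\one}{\sqrt{n}}$ appearing in the proofs; the node dynamics, coupling filter, evaluation point and cycle graph are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Illustrative sketch: heterogeneous first-order nodes g_i(s) = 1/(s + a_i),
# coupling filter f(s) = 1/(s + 1), and a cycle-graph Laplacian scaled by a
# gain k, so that lambda_2(L) grows proportionally to k.
n = 8
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=n)          # node pole locations (arbitrary)

def g(s):                                  # vector of g_i(s)
    return 1.0 / (s + a)

def f(s):                                  # coupling filter
    return 1.0 / (s + 1.0)

L0 = 2.0 * np.eye(n)                       # unweighted cycle-graph Laplacian
for i in range(n):
    L0[i, (i + 1) % n] = L0[i, (i - 1) % n] = -1.0

s0 = 1.0j                                  # a generic point of f and of every g_i
ones = np.ones((n, 1))
gbar = 1.0 / np.mean(1.0 / g(s0))          # (1/n sum_i g_i^{-1}(s0))^{-1}

for k in [1, 10, 100, 1000]:
    L = k * L0
    lam2 = np.sort(np.linalg.eigvalsh(L))[1]            # lambda_2(L)
    G = np.diag(g(s0))
    T = np.linalg.solve(np.eye(n) + G * f(s0) @ L, G)   # T(s0)
    incoh = np.linalg.norm(T - (gbar / n) * (ones @ ones.T), 2)
    print(f"lambda_2 = {lam2:9.2f}   incoherence = {incoh:.3e}")
\end{verbatim}
The printed incoherence measure decreases roughly like $1/\big(|f(s_0)|\lambda_2(L)\big)$, in line with the bound of Lemma \ref{lem_reg_norm_bd}.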
Subsection \ref{ssec:gen_p_of_f} considers the former option (increasing $\lambda_2(L)$) and Subsection \ref{ssec:poles_of_f} the latter (increasing $|f(s_0)|$). Before presenting the results, we define \begin{dfn} For a transfer function $g(s)$ and $s_0\in \compl$, we say that $s_0$ is a generic point of $g(s)$ if $s_0$ is neither a pole nor a zero of $g(s)$. \end{dfn} As seen in the discussion above, we always require some generic-point assumption on $\bar{g}(s)$, $f(s)$, or both. Such points are of primary interest in this paper, but we also provide some results for the cases where the generic assumption fails. \subsection{Convergence at Generic Points of $f(s)$}\label{ssec:gen_p_of_f} In this subsection we keep $s_0$ fixed and present the point-wise convergence of $T(s_0)$ as $\lambda_2(L)$ increases. This requires $s_0$ to be a generic point of $f(s)$. Notice that for any such $s_0$ that is also a generic point of $\bar{g}(s)$, we can always find $M_1,M_2>0$ and a large enough $\lambda_2(L)$ for the upper bound in \eqref{eq_T_norm_bd} to hold. Furthermore, for fixed $M_1$ and $M_2$, one can make the upper bound arbitrarily small by increasing $\lambda_2(L)$, which leads to the point-wise convergence of $T(s_0)$ stated in the following theorem. \begin{thm}\label{thm_ptw_conv_reg} Let $T(s)$ and $\bar{g}(s)$ be defined as in \eqref{eq_T_explict} and \eqref{eq_g_bar}, respectively. If $s_0\in\compl$ is a generic point of both $\bar{g}(s)$ and $f(s)$, then \ben \lim_{\lambda_2(L)\ra +\infty} \lV T(s_0)-\frac{1}{n}\bar{g}(s_0)\one\one^T\rV=0\,. \een \end{thm} \begin{proof} Since $s_0$ is not a pole of $\bar{g}(s)$, $|\bar{g}(s_0)|$ is trivially upper bounded by some $M_1>0$. Also, it is easy to see that $s_0$ is not a zero of $\bar{g}(s)$ if and only if $s_0$ is not a zero of any $g_i(s)$. Then $\max_{1\leq i\leq n}|g_i^{-1}(s_0)|$ is upper bounded by some $M_2>0$. Therefore the conditions of Lemma \ref{lem_reg_norm_bd} are satisfied. We finish the proof by taking $\lambda_2(L)\ra +\infty$ on both sides of \eqref{eq_T_norm_bd}. \end{proof} Theorem \ref{thm_ptw_conv_reg} establishes the emergence of coherence at generic points of $\bar{g}(s)$. This forms the basis of our analysis, yet it requires $s_0$ to satisfy the generic conditions. A more careful analysis shows that, as $\lambda_2(L)\ra +\infty$, a pole of $\bar{g}(s)$ is asymptotically a pole of $T(s)$ and a zero of $\bar{g}(s)$ is asymptotically a zero of $T(s)$, as stated in the following two theorems. \begin{thm}\label{thm_ptw_conv_pole} Let $T(s)$ and $\bar{g}(s)$ be defined as in \eqref{eq_T_explict} and \eqref{eq_g_bar}, respectively. If $s_0\in\compl$ is a pole of $\bar{g}(s)$ and a generic point of $f(s)$, then \ben \lim_{\lambda_2(L)\ra +\infty} \lV T(s_0)\rV=+\infty\,. \een \end{thm} \begin{proof} Similarly to the proof of Lemma \ref{lem_reg_norm_bd}, we define $H=V^T\dg\{g_i^{-1}(s_0)\}V+f(s_0)\Lambda$, and we now need to show that $\|T(s_0)\|=\|H^{-1}\|$ grows unbounded as $\lambda_2(L)\ra +\infty$. Write $H$ in block matrix form: \begin{align*} H&= {\small\bmt \bar{g}^{-1}(s_0)& \frac{\one^T}{\sqrt{n}}\dg\{g_i^{-1}(s_0)\}V_\perp\\ V_\perp^T\dg\{g_i^{-1}(s_0)\}\frac{\one}{\sqrt{n}} &V_\perp^T\dg\{g_i^{-1}(s_0)\}V_\perp+f(s_0)\Tilde{\Lambda} \emt}\\ &:= \bmt 0& h^T_{21}\\ h_{21} & H_{22} \emt\,, \end{align*} by noticing that $\bar{g}^{-1}(s_0)=0$ because $s_0$ is a pole of $\bar{g}(s)$.
Inverting $H$ in its block form gives \begin{align} H^{-1} &=\; \bmt a &-ah_{21}^TH_{22}^{-1}\\ -aH_{22}^{-1}h_{21}& H_{22}^{-1}+aH_{22}^{-1}h_{21}h_{21}^TH_{22}^{-1} \emt\nonumber\\ &=\; a\bmt 1\\ -H_{22}^{-1}h_{21}\emt\bmt1 &-h^T_{21}H_{22}^{-1}\emt+\bmt 0&0\\ 0 & H_{22}^{-1}\emt\,, \label{eq_H_inv_pole} \end{align} where $a$ is now given by $a = -\frac{1}{h_{21}^TH_{22}^{-1}h_{21}}$. Then from \eqref{eq_H_inv_pole}, when $\lambda_2(L)$ is large enough, we can lower bound $\|H^{-1}\|$ by \begin{align} \|H^{-1}\|&\geq \lV a\bmt 1\\ -H_{22}^{-1}h_{21}\emt\bmt 1\\ -H_{22}^{-1}h_{21}\emt^T\rV - \lV\bmt 0&0\\ 0 & H_{22}^{-1}\emt \rV\nonumber\\ &= \frac{1}{|h_{21}^TH_{22}^{-1}h_{21}|} \lV \bmt 1\\ -H_{22}^{-1}h_{21}\emt\rV^2-\|H_{22}^{-1}\|\nonumber\\ &\geq \frac{1}{|h_{21}^TH_{22}^{-1}h_{21}|}-\|H_{22}^{-1}\|\nonumber\\ &\geq \frac{1}{\|h_{21}\|^2\|H_{22}^{-1}\|}-\|H_{22}^{-1}\|\,,\label{eq_H_lower_bd_pole} \end{align} where in the second inequality we use the fact that the norm of a vector is lower bounded by the absolute value of its first entry, which here equals $1$. Because $s_0$ is a pole of $\bar{g}(s)$, it cannot be a zero of any $g_i(s)$; otherwise this would lead to the contradiction $\bar{g}(s_0)=0$. Therefore, $\max_{1\leq i\leq n}|g_i^{-1}(s_0)|$ is upper bounded by some $M>0$. Similarly to \eqref{eq_h12_norm_bd} and \eqref{eq_H22_norm_bd}, we have $\|h_{21}\|\leq M$ and $\|H_{22}^{-1}\|\leq \frac{1}{|f(s_0)|\lambda_2(L)-M}$. Then \eqref{eq_H_lower_bd_pole} can be lower bounded by \begin{align*} \|H^{-1}\|&\geq \frac{1}{\|h_{21}\|^2\|H_{22}^{-1}\|}-\|H_{22}^{-1}\|\\ &\geq \frac{1}{\frac{M^2}{|f(s_0)|\lambda_2(L)-M}}-\frac{1}{|f(s_0)|\lambda_2(L)-M}\\ &= \frac{(|f(s_0)|\lambda_2(L)-M)^2-M^2}{M^2(|f(s_0)|\lambda_2(L)-M)}\,. \end{align*} This lower bound holds when $|f(s_0)|\lambda_2(L)> M$, and it grows unbounded as $\lambda_2(L)\ra +\infty$, which finishes the proof. \end{proof} { \begin{rem}\label{rem_pole} Theorem \ref{thm_ptw_conv_pole} does not suggest whether the network is asymptotically coherent at poles of $\bar{g}(s)$. Our incoherence measure $\lV T(s_0)-\frac{1}{n}\bar{g}(s_0)\one\one^T\rV$ is undefined at such poles. Alternatively, for $s_0$ a pole of $\bar{g}(s)$, one can prove that when $\tilde{\Lambda}/\lambda_2(L)\ra \Lambda_{\mathrm{lim}}$ as $\lambda_2(L)\ra +\infty$, we have the limit $\lV \frac{T(s_0)}{\|T(s_0)\|}-\frac{1}{n}\gamma(\Lambda_\mathrm{lim})\one\one^T\rV\ra 0$, for some $\gamma(\Lambda_\mathrm{lim})\in\compl$ determined by $\Lambda_\mathrm{lim}$ with $|\gamma(\Lambda_\mathrm{lim})|=1$. We leave the formal statement to Appendix~\ref{app_lim_dir_pole}. This suggests that $T(s_0)$ has the desired rank-one structure for coherence. While the normalized transfer matrix is not discussed in this paper due to space constraints, such a formulation is better suited for understanding the network coherence at the poles of $\bar{g}(s)$. \end{rem}} Next, the convergence result regarding the zeros of $\bar{g}(s)$ is stated as follows. \begin{thm}\label{thm_ptw_conv_zero} Let $T(s)$ and $\bar{g}(s)$ be defined as in \eqref{eq_T_explict} and \eqref{eq_g_bar}, respectively. If $s_0\in\compl$ is a zero of $\bar{g}(s)$ and a generic point of $f(s)$, then \ben \lim_{\lambda_2(L)\ra +\infty} \lV T(s_0)\rV=0\,. \een \end{thm} \begin{proof} Since $s_0$ is a zero of $\bar{g}(s)$, it is a zero of at least one $g_i(s)$. Without loss of generality, suppose $g_i(s_0)=0$ for $1\leq i\leq m$ and $g_i(s_0)\neq 0$ for $m+1\leq i\leq n$. If $m=n$, then $T(s_0)=0$. We only consider the non-trivial case when $m<n$.
The transfer matrix is now given by \begin{align} T(s_0)&=\;(I_n+G(s_0)f(s_0)L)^{-1}G(s_0)\nonumber\\ &=\;\bmt I_{m}&0_{m\by (n-m)}\\ f(s_0)\Tilde{G}(s_0)L_{21}& I_{n-m}+\Tilde{G}(s_0)f(s_0)\Tilde{L}\emt^{-1}G(s_0)\nonumber\\ &=\;\bmt 0_{m\by m}& 0_{m\by (n-m)}\\ 0_{(n-m)\by m} & (I_{n-m}+\Tilde{G}(s_0)f(s_0)\Tilde{L})^{-1}\Tilde{G}(s_0) \emt\,,\label{eq_T_zero} \end{align} where $\Tilde{G}(s)=\dg\{g_{m+1}(s),\cdots,g_{n}(s)\}$, $L_{21}$ denotes the lower-left block of $L$, and $\Tilde{L}$ is the \emph{grounded Laplacian} obtained from $L$ by removing its first $m$ rows and columns; the last equality holds because the first $m$ columns of $G(s_0)$ are zero, so the lower-left block of the inverse does not contribute. By Lemma \ref{lem_grd_Lap_eig_bd}, when $\lambda_1(\Tilde{L})$ is large enough, we have \begin{align*} \|T(s_0)\|&=\;\|(I_{n-m}+\Tilde{G}(s_0)f(s_0)\Tilde{L})^{-1}\Tilde{G}(s_0)\|\\ &=\; \|(\Tilde{G}^{-1}(s_0)+f(s_0)\Tilde{L})^{-1}\|\\ &\leq\; \frac{1}{\sigma_1(f(s_0)\Tilde{L})-\|\Tilde{G}^{-1}(s_0)\|}\\
+ \mu \operatorname{dev}(\bvarepsilon^\text{el}):\operatorname{dev}(\bvarepsilon^\text{el}) \label{eq:isotropic-elasticity} \end{equation} where $\kappa$ is the compressibility modulus, $\mu$ the shear modulus, and $\operatorname{dev}(\bvarepsilon^\text{el})$ the deviatoric elastic strain. The hardening potential is assumed to be of exponential type as follows: \begin{equation} \psi_\text{h}(p) = (\sigma_u-\sigma_0)\left(p+\frac{1}{\omega}\exp(-\omega p)\right) \end{equation} where $\sigma_0$ (resp. $\sigma_u$) is the initial (resp. ultimate) yield strength and $\omega$ a saturation parameter. This potential defines the following hardening thermodynamic force: \begin{equation} R(p) = \dfrac{\partial \psi_\text{h}}{\partial p} = (\sigma_u-\sigma_0)(1-\exp(-\omega p)) \end{equation} so that the total yield stress $\sigma_0 + R(p)$ increases from $\sigma_0$ at $p=0$ to $\sigma_u$ as $p\to\infty$. Finally, we assume a $J_2$-plasticity dissipation pseudo-potential: \begin{equation} \phi(\dot{\bvarepsilon^\text{p}}, \dot{p}) = \begin{cases} \sqrt{\frac{2}{3}}\sigma_0\|\dot{\bvarepsilon^\text{p}}\| & \text{if } \tr(\dot{\bvarepsilon^\text{p}})=0 \\ +\infty & \text{otherwise} \end{cases}\label{eq:J2-dissipation} \end{equation} which involves a plastic incompressibility constraint. However, as such, the formulation is still missing the link between the plastic strain and the cumulative equivalent plastic strain. Classically, one defines the equivalent plastic strain as follows: \begin{equation} p = \int_0^t \sqrt{\frac{2}{3}}\|\dot{\bvarepsilon^\text{p}}\|\text{dt} \end{equation} or, equivalently in rate form: \begin{equation} \dot{p} = \sqrt{\frac{2}{3}}\|\dot{\bvarepsilon^\text{p}}\| \label{eq:definition-dot-p} \end{equation} \subsection{Conic reformulations} We now discuss the conic representation of the various convex functions $\psi_\text{el},\psi_\text{h}$ and $\phi$ involved in the definition of $J(\bvarepsilon,\bvarepsilon^\text{p},p)$. Starting with the plastic dissipation \eqref{eq:J2-dissipation}, let us first mention that the equality constraint \eqref{eq:definition-dot-p} is non-convex. The proper way of linking both internal state variable rates in a convex manner is to relax this equality into an inequality and redefine $\phi$ using $\dot{p}$ as follows: \begin{equation} \phi(\dot{\bvarepsilon^\text{p}}, \dot{p}) = \begin{cases} \sigma_0 \dot{p} & \text{if } \tr(\dot{\bvarepsilon^\text{p}})=0 \text{ and } \sqrt{\frac{2}{3}}\|\dot{\bvarepsilon^\text{p}}\| \leq \dot{p} \\ +\infty & \text{otherwise} \end{cases}\label{eq:J2-dissipation-conic} \end{equation} Formulation \eqref{eq:J2-dissipation-conic} corresponds to an epigraph form of \eqref{eq:J2-dissipation}, which is readily expressed using a second-order cone constraint. Considering now the elastic potential, the quadratic form \eqref{eq:isotropic-elasticity} must be expressed using a Cholesky factorization.
Accounting for the fact that the spherical and deviatoric parts of a second-rank tensor define orthogonal subspaces, we can readily show that: \begin{equation} \psi_\text{el}(\bvarepsilon^\text{el}) = \dfrac{1}{2}\|\QQ:\bvarepsilon^\text{el}\|^2 \label{eq:isotropic-elasticity-factorized} \end{equation} where: \begin{equation} \QQ:\bvarepsilon^\text{el} = \sqrt{3\kappa}\dfrac{1}{3}\tr(\bvarepsilon^\text{el})\bI + \sqrt{2\mu} \operatorname{dev}(\bvarepsilon^\text{el}) \end{equation} which yields the following conic epigraph formulation: \begin{equation} \begin{array}{rl} \displaystyle{\psi_\text{el}(\bvarepsilon^\text{el}) = \inf_{t, s, \by}} & t \\ \text{s.t.} & \by = \QQ:\bvarepsilon^\text{el} \\ & \|\by\|^2 \leq 2t s \\ & s = 1 \end{array} \label{eq:isotropic-elasticity-epigraph} \end{equation} Finally, the exponential term of the hardening potential can be handled in epigraph form: \begin{equation} \exp(-\omega p) = \min \:r_0 \quad \text{s.t. } \exp(-\omega p) \leq r_0 \end{equation} The corresponding non-linear constraint can then be expressed using an exponential cone $\Kk_\text{exp}$ as follows: \begin{equation} \exp(-\omega p) \leq r_0 \quad \Leftrightarrow \quad r_1=1, \: r_2=-\omega p, \: (r_0, r_1, r_2)\in \Kk_\text{exp} \end{equation} \subsection{Numerical illustration} We consider a 2D rectangular domain of length $L=5$ and height $H=0.5$, fixed on both lateral extremities and subjected to a uniform downwards vertical body force $\bf=-f\be_y$ as illustrated in Figure \ref{fig:beam-mesh}. The domain is meshed with $50\times 20$ elements along its length and height, respectively. The displacement field is discretized using a continuous quadratic Lagrange function space, whereas internal state variables are represented using a discontinuous piecewise affine space. We consider a plane strain setting and use the following material properties: $E=\SI{210}{GPa}$, $\nu=0.3$, $\sigma_0=\SI{450}{MPa}$, $\sigma_u=\SI{700}{MPa}$ and $\omega=50$. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{beam_mesh} \end{center} \caption{A 2D beam clamped at both ends and subject to a vertical body force $f$.}\label{fig:beam-mesh} \end{figure} The beam is first loaded by progressively increasing the body force from 0 to $f^+=\dfrac{2}{\sqrt{3}}\dfrac{4\sigma_u H}{L^2}$, which is the theoretical limit load predicted by a beam theory model. Then, we perform a full unloading down to $f=0$. The loading stage is imposed using 10 load increments, while the unloading stage, being elastic, uses only one increment. Figure \ref{fig:elastoplastic-load-displ-10steps} shows the evolution of the downwards vertical displacement of the beam at its mid-span center point $(L/2,H/2)$ as a function of the imposed loading. One can observe an initial elastic phase up to $f\approx 0.4f^+$, followed by a strongly non-linear hardening phase. Note that the load $f=f^+$ can still be supported by the structure owing to the difference between the present 2D model and a 1D beam theory solution; for the present 2D structure with the considered mesh, the ultimate load is found to be around $f\approx 1.1f^+$. As expected for plasticity problems, the unloading stage is indeed elastic and exhibits a permanent residual deflection. The distribution of the equivalent plastic strain $p$ at $f=f^+$ is shown in Figure \ref{fig:elastoplastic-p-distribution}. One can clearly observe the formation of plastic hinges near the clamped supports and a more diffuse plastic field at the beam mid-span.
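Before reporting resolution statistics, we illustrate how the above conic reformulations fit together on a single material point. The following sketch (ours, written with CVXPY for illustration; it is not the FEniCS/MOSEK implementation used for the beam computations) solves one incremental problem by minimizing $\psi_\text{el}+\psi_\text{h}+\phi$ over the plastic strain increment and the increment of $p$; the prescribed total strain and the previous internal state are arbitrary choices.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Material parameters (MPa) matching the beam example
E, nu = 210e3, 0.3
kappa, mu = E / (3 * (1 - 2 * nu)), E / (2 * (1 + nu))
sig0, sigu, omega = 450.0, 700.0, 50.0

eps = np.diag([5e-3, -1.5e-3, -1.5e-3])    # prescribed total strain (arbitrary)
eps_p_old = np.zeros((3, 3))               # previous plastic strain
p_old = 0.0                                # previous cumulative plastic strain

dEp = cp.Variable((3, 3), symmetric=True)  # plastic strain increment
dp = cp.Variable(nonneg=True)              # increment of p

e_el = eps - eps_p_old - dEp
dev = e_el - cp.trace(e_el) / 3 * np.eye(3)
psi_el = 0.5 * kappa * cp.square(cp.trace(e_el)) + mu * cp.sum_squares(dev)
p_new = p_old + dp
psi_h = (sigu - sig0) * (p_new + cp.exp(-omega * p_new) / omega)  # exponential cone
dissipation = sig0 * dp                                           # plastic dissipation

constraints = [cp.trace(dEp) == 0,                                # incompressibility
               np.sqrt(2.0 / 3.0) * cp.norm(dEp, "fro") <= dp]    # second-order cone
prob = cp.Problem(cp.Minimize(psi_el + psi_h + dissipation), constraints)
prob.solve()
print("plastic strain increment norm:", np.linalg.norm(dEp.value))
print("updated cumulative plastic strain:", p_new.value)
\end{verbatim}
CVXPY passes the quadratic, second-order-cone and exponential-cone pieces to a conic interior-point solver, which is the same structure exploited at the global level in the computations reported below.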
As regards numerical resolution statistics, let us point out that each instance of a single-step elasto-plastic problem consists of roughly 250,000 optimization variables, 170,000 linear constraints and 36,000 quadratic or exponential cones. Each resolution with Mosek v.9.0 took between 2.5 and 3.5 seconds (10 to 18 IP iterations) depending on the loading step (fully plastic steps near $f=f^+$ took a larger number of iterations than initial elastic steps). Overall, we find that the IP solver exhibits a very robust behaviour in terms of number of iterations with respect to the load step level or to the problem size (after mesh refinement for instance). \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{elastoplastic_exp_hardening_10_steps} \end{center} \caption{Load vs. mid-span deflection evolution during plastic loading and elastic unloading.}\label{fig:elastoplastic-load-displ-10steps} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{elastoplasticity_final_state} \end{center} \caption{Equivalent plastic strain distribution and deformed configuration at $f=f^+$.}\label{fig:elastoplastic-p-distribution} \end{figure} Although not fully competitive with standard Newton methods in multi-step plasticity, the conic programming approach becomes interesting when much larger load steps are considered. For instance, Figure \ref{fig:elastoplasticity-increment-size} shows the same load-displacement curve as before using different numbers of load steps. Interestingly, the computed displacement at the ultimate state near collapse ($f=f^+$) is already very accurate using a single load step, see also \cite{krabbenhoft2007interior,el2020elastoplastic}. Again, solver robustness does not seem to be affected by the load step amplitude since the number of IP iterations remains very similar. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{elastoplasticity_increment_size_comparison} \end{center} \caption{Load-deflection evolution depending on the total number of load increments (only one used in the unloading phase).}\label{fig:elastoplasticity-increment-size} \end{figure} \section{Minimal crack surfaces}\label{sec:minimal-crack} In this section, we consider the problem of computing the effective crack resistance of a heterogeneous medium with locally varying fracture energy $G_\text{c}(\bx)$. We consider $\Omega$ to be some representative volume element (RVE) of the heterogeneous material. In \cite{braides1996homogenization}, a periodic homogenization result regarding the variational approach to fracture \cite{francfort1998revisiting} was established. In particular, the effective fracture energy $G_\text{c}^\text{eff}(\bn)$ associated with a crack of mean normal $\bn$ was explicitly characterized from the computation of minimal surfaces inside $\Omega$, weighted by the local fracture energy $G_\text{c}(\bx)$. \cite{schneider2020fft} proposed a convex optimization formulation inspired by min-cut/max-flow problems in a periodic setting. More precisely, given a prescribed crack plane normal $\bn$, they consider the following variational problem: \begin{equation} G_\text{c}^\text{eff}(\bn) = \inf_{\phi \in V} \dfrac{1}{|\Omega|}\int_\Omega G_\text{c}(\bx)\|\nabla \phi + \bn\|_2 \,\text{d}\Omega \label{eq:mincut-crack-surface-Gceff} \end{equation} where $V$ denotes the space of smooth scalar functions which are periodic over $\Omega$.
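For illustration, problem \eqref{eq:mincut-crack-surface-Gceff} can be discretized in a few lines. The rough finite-difference sketch below (ours; it is not the FFT-based scheme of \cite{schneider2020fft}) uses CVXPY on a periodic $N\times N$ pixel grid with forward differences for the gradient and the crack normal $\bn=\be_x$; the microstructure is the centre-plus-corner-inclusions unit cell considered in the illustrating example below, with an inclusion radius and contrast chosen arbitrarily.
\begin{verbatim}
import numpy as np
import cvxpy as cp

N = 32
h = 1.0 / N
xc = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(xc, xc, indexing="ij")
R = 0.3                                     # inclusion radius (illustrative)

def dist(cx, cy):
    return np.sqrt((X - cx) ** 2 + (Y - cy) ** 2)

inclusion = dist(0.5, 0.5) < R              # centre inclusion
for cx in (0.0, 1.0):
    for cy in (0.0, 1.0):
        inclusion |= dist(cx, cy) < R       # corner inclusions
Gc = np.where(inclusion, 100.0, 1.0)        # matrix Gc = 1, inclusions nearly unbreakable

n_vec = np.array([1.0, 0.0])                # prescribed mean crack normal e_x
phi = cp.Variable((N, N))
gx = (cp.vstack([phi[1:, :], phi[:1, :]]) - phi) / h + n_vec[0]  # periodic forward diff.
gy = (cp.hstack([phi[:, 1:], phi[:, :1]]) - phi) / h + n_vec[1]

energy = 0
for i in range(N):                          # Gc-weighted total variation, row by row
    row = cp.vstack([gx[i, :], gy[i, :]])
    energy += h * h * cp.sum(cp.multiply(Gc[i, :], cp.norm(row, 2, axis=0)))

prob = cp.Problem(cp.Minimize(energy), [phi[0, 0] == 0])  # pin the additive constant
prob.solve()
print("approximate Gc_eff(e_x):", prob.value)
\end{verbatim}
For $R>1/4$, the analytical discussion below predicts a value between $G_\text{c}$ and $\frac{\pi}{2\sqrt{2}}G_\text{c}\approx 1.11\,G_\text{c}$; the crude grid and finite contrast reproduce this only approximately.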
This min-cut problem is known to have minimizers corresponding to $G_\text{c}$-weighted periodic minimal surfaces. \subsection{Illustrating example} We consider a simple microstructure consisting of a periodic square unit cell and circular inclusions of radius $R \leq R_0=1/(2\sqrt{2})$ located at the cell center and at its four corners, see Figure \ref{fig:minimal-crack-surface-regular-inclusions}. $R_0$ denotes the maximum radius, at which neighbouring inclusions touch each other. The inclusion material possesses a fracture energy which is much larger than that of the matrix. The inclusions can thus be considered as infinitely resistant, so that minimal crack surfaces will always pass through the matrix material only, with fracture energy $G_\text{c}$. Let us first consider the effective fracture energy for cracks of normal $\be_x$ (or $\be_y$ for symmetry reasons). For small enough inclusions, i.e., $R\leq 1/4$, there always exists a straight crack plane passing inside the matrix (Fig. \ref{fig:minimal-crack-surface-regular-inclusions}a). Thus, $G_\text{c}^\text{eff}(\be_x)=G_\text{c}$ for any $R\leq 1/4$. For larger $R$ up to $R=R_0$, the optimal path consists of parts of the circular inclusion boundaries connected by straight segments between neighbouring inclusions. In the limit case $R=R_0$, the straight segments vanish and the total path length is $\pi R_0$ (Fig. \ref{fig:minimal-crack-surface-regular-inclusions}b). As a result, the effective fracture energy is $G_\text{c}^\text{eff}(\be_x)=\dfrac{\pi}{2\sqrt{2}}G_\text{c} \approx 1.11G_\text{c}$. Regarding crack planes oriented at $\pm 45^\circ$, for any $R$ up to $R_0$ there exists a straight $\pm 45^\circ$ line connecting the mid-points of the unit cell edges, as indicated in Figure \ref{fig:minimal-crack-surface-regular-inclusions}. As a result, the corresponding fracture energy will always be $G_\text{c}^\text{eff} = G_\text{c}$ for this orientation. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{minimal_crack_surfaces} \end{center} \caption{Minimal crack surfaces for $0^\circ$ (red) and $45^\circ$ (blue) imposed crack plane orientation.}\label{fig:minimal-crack-surface-regular-inclusions} \end{figure} \subsection{Numerical computation} The convex problem \eqref{eq:mincut-crack-surface-Gceff} is inherently non-smooth and