A mode of 1 is much like plain v u with no prefix argument, except that in addition to the components of the input object, a suitable packing mode to re-pack the object is also pushed. Thus, C-u 1 v u followed by v p will re-build the original object. A mode of 2 unpacks two levels of the object; the resulting re-packing mode will be a vector of length 2. This might be used to unpack a matrix, say, or a vector of error forms. Higher unpacking modes unpack the input even more deeply.

There are two algebraic functions analogous to v u. The `unpack(mode, item)' function unpacks the item using the given mode, returning the result as a vector of components. Here the mode must be an integer, not a vector. For example, `unpack(-4, a +/- b)' returns `[a, b]', as does `unpack(1, a +/- b)'.

The `unpackt' function is like `unpack' but instead of returning a simple vector of items, it returns a vector of two things: the mode, and the vector of items. For example, `unpackt(1, 2:3 +/- 1:4)' returns `[-4, [2:3, 1:4]]', and `unpackt(2, 2:3 +/- 1:4)' returns `[[-4, -10], [2, 3, 1, 4]]'. The identity for re-building the original object is `apply(pack, unpackt(n, x)) = x'. (The `apply' function builds a function call given the function name and a vector of arguments.)

Subscript notation is a useful way to extract a particular part of an object. For example, to get the numerator of a rational number, you can use `unpack(-10, x)_1'.

## Building Vectors

Vectors and matrices can be added, subtracted, multiplied, and divided; see section Basic Arithmetic.

The | (calc-concat) command "concatenates" two vectors into one. For example, after [ 1 , 2 ] [ 3 , 4 ] |, the stack will contain the single vector `[1, 2, 3, 4]'. If the arguments are matrices, the rows of the first matrix are concatenated with the rows of the second. (In other words, two matrices are just two vectors of row-vectors as far as | is concerned.)

If either argument to | is a scalar (a non-vector), it is treated like a one-element vector for purposes of concatenation: 1 [ 2 , 3 ] | produces the vector `[1, 2, 3]'. Likewise, if one argument is a matrix and the other is a plain vector, the vector is treated as a one-row matrix.

The H | (calc-append) [append] command concatenates two vectors without any special cases. Both inputs must be vectors. Whether or not they are matrices is not taken into account. If either argument is a scalar, the append function is left in symbolic form. See also cons and rcons below.

The I | and H I | commands are similar, but they use their two stack arguments in the opposite order. Thus I | is equivalent to TAB |, but possibly more convenient and also a bit faster.

The v d (calc-diag) [diag] function builds a diagonal square matrix. The optional numeric prefix gives the number of rows and columns in the matrix. If the value at the top of the stack is a vector, the elements of the vector are used as the diagonal elements; the prefix, if specified, must match the size of the vector. If the value on the stack is a scalar, it is used for each element on the diagonal, and the prefix argument is required.

To build a constant square matrix, e.g., a 3x3 matrix filled with ones, use 0 M-3 v d 1 +, i.e., build a zero matrix first and then add a constant value to that matrix. (Another alternative would be to use v b and v a; see below.)

The v i (calc-ident) [idn] function builds an identity matrix of the specified size. It is a convenient form of v d where the diagonal element is always one.
If no prefix argument is given, this command prompts for one. In algebraic notation, `idn(a,n)' acts much like `diag(a,n)', except that a is required to be a scalar (non-vector) quantity. If n is omitted, `idn(a)' represents a times an identity matrix of unknown size. Calc can operate algebraically on such generic identity matrices, and if one is combined with a matrix whose size is known, it is converted automatically to an identity matrix of a suitable matching size. The v i command with an argument of zero creates a generic identity matrix, `idn(1)'. Note that in dimensioned matrix mode (see section Matrix and Scalar Modes), generic identity matrices are immediately expanded to the current default dimensions.

The v x (calc-index) [index] function builds a vector of consecutive integers from 1 to n, where n is the numeric prefix argument. If you do not provide a prefix argument, you will be prompted to enter a suitable number. If n is negative, the result is a vector of negative integers from n to -1.

With a prefix argument of just C-u, the v x command takes three values from the stack: n, start, and incr (with incr at top-of-stack). Counting starts at start and increases by incr for successive vector elements. If start or n is in floating-point format, the resulting vector elements will also be floats. Note that start and incr may in fact be any kind of numbers or formulas.

When start and incr are specified, a negative n has a different interpretation: it causes a geometric instead of arithmetic sequence to be generated. For example, `index(-3, a, b)' produces `[a, a b, a b^2]'. If you omit incr in the algebraic form `index(n, start)', the default value for incr is one for positive n or two for negative n.

The v b (calc-build-vector) [cvec] function builds a vector of n copies of the value on the top of the stack, where n is the numeric prefix argument. In algebraic formulas, `cvec(x,n,m)' can also be used to build an n-by-m matrix of copies of x. (Interactively, just use v b twice: once to build a row, then again to build a matrix of copies of that row.)

The v h (calc-head) [head] function returns the first element of a vector. The I v h (calc-tail) [tail] function returns the vector with its first element removed. In both cases, the argument must be a non-empty vector.

The v k (calc-cons) [cons] function takes a value h and a vector t from the stack, and produces the vector whose head is h and whose tail is t. This is similar to |, except if h is itself a vector, | will concatenate the two vectors whereas cons will insert h at the front of the vector t.

Each of these three functions also accepts the Hyperbolic flag [rhead, rtail, rcons], in which case t instead represents the last single element of the vector, with h representing the remainder of the vector. Thus the vector `[a, b, c, d] = cons(a, [b, c, d]) = rcons([a, b, c], d)'. Also, `head([a, b, c, d]) = a', `tail([a, b, c, d]) = [b, c, d]', `rhead([a, b, c, d]) = [a, b, c]', and `rtail([a, b, c, d]) = d'.

## Extracting Vector Elements

The v r (calc-mrow) [mrow] command extracts one row of the matrix on the top of the stack, or one element of the plain vector on the top of the stack. The row or element is specified by the numeric prefix argument; the default is to prompt for the row or element number. The matrix or vector is replaced by the specified row or element in the form of a vector or scalar, respectively.
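The row-versus-element behaviour can be pictured with a small sketch. This is not Calc's own implementation, only a hypothetical Python illustration of the mrow semantics, using 1-based indexing as Calc does:

```python
def mrow(obj, n):
    # Calc's v r with a numeric prefix: 1-based selection.
    # For a matrix (a list of row vectors) this yields a whole row;
    # for a plain vector it yields a scalar element.
    return obj[n - 1]

print(mrow([[1, 2], [3, 4]], 2))   # -> [3, 4]  (row of a matrix)
print(mrow([10, 20, 30], 2))       # -> 20      (element of a vector)
```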
With a prefix argument of C-u only, v r takes the index of the element or row from the top of the stack, and the vector or matrix from the second-to-top position. If the index is
= "X.X.X.X" ordinary IP address from one of the network interfaces PORTI = 0 to 65535 (but you need to choose a free one) number_of_neurons = is the size of the spike train pipe_out = multiprocessing.Pipe used to send the information received through UDP to the format_received_spks pipe_in = the other end of the pipe, used only to check if the pipe was emptied """ buffer_size = 8 + number_of_neurons # Each element of the numpy.array with the uint8 occupies 1 byte. # So, the brian_address has 8 elements, therefore 8 bytes. # number_of_neurons: because each neuron occupies 1 byte (numpy.uint8) sockI = socket.socket(socket.AF_INET, # IP socket.SOCK_DGRAM) # UDP sockI.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # Tells the OS that if someone else is using the PORT, it # can use the same PORT without any error/warning msg. # Actually this is useful because if you restart the simulation # the OS is not going to release the socket so fast and an error # could occur. sockI.bind((IPI, PORTI)) # Bind the socket to the IPI/PORTI # Add here a test to check if the user asked to clean the buffer before start. clean_loop = 1 while clean_loop: print self.brian_address, "- Cleaning receiving buffer...", "IP/PORT:", IPI, "/", PORTI try: data = sockI.recv(1, socket.MSG_DONTWAIT) # buffer size is 1 byte, NON blocking. print data except IOError: # The try and except are necessary because the recv raises a error when no data is received clean_loop = 0 print self.brian_address, "- Cleaning receiving buffer...", "IP/PORT:", IPI, "/", PORTI, "...Done!" sockI.setblocking(1) # Tells the system that the socket recv() method will DO block until a packet is received # Until this point, the code is going to be executed only once each time the system runs. while True: try: data = sockI.recv(buffer_size) # This is a blocking command, therefore the while loop is not going # to eat up all the processor time. # It is possible to change the code here and insert a test to be able to turn ON and OFF the packet dropping. if not pipe_in.poll(): # Verify if the simulator has already read the last packet pipe_out.send(data) # and only sends new data if the pipe in empty! except IOError: # Without the IOError even the keyboard "control+C" is caught here! print "UDP read error?" pass except ValueError: print "ValueError:", data # DEBUG! pass #the data is corrupted, a wrong package appeared at the port, etc... except KeyboardInterrupt: print "User keyboard interruption...finishing!" break # kills the while def format_received_spks_multiple(self, udp_data, process_out, addresses_input, spike_flag): """ This function processes the information received by UDP using the receive_UDP creating a proper spike train to be fed to Brian. udp_data: list with all the pipes connections from the input (read_UDP processes) process_out: pipe to send the final spikes to Brian addresses_input: list with the information about the inputs [("IP", PORT_NUMBER, Number_of_Neurons),...] spike_flag: flag Brian is going to use to signalize the spikes were processed. This is important because makes possible to drop packets. Actually this is the consumer point of the process_out's pipe. """ individual_num_of_neurons = [i[2] for i in addresses_input] # just filters from the address_input list to make it easier to read. formatted_train = numpy.array([0]*sum(individual_num_of_neurons)) # generates the empty numpy.array to receive the spikes in the right position/offset. 
    offset_num_of_neuron = [(sum(individual_num_of_neurons[0:i]), individual_num_of_neurons[i])
                            for i in range(len(individual_num_of_neurons))]
    # this one is a list of tuples with the initial offset and the total number of neurons;
    # it is used to position the spikes received within the formatted_train numpy.array.
    pipe_and_offset = zip(udp_data, offset_num_of_neuron)
    # I'm doing this here to save processing cycles inside the while loop.

    def handler_clean_spike_train(signum, frame):
        """
        Function used to react according to the signal.alarm, send (or not) the spikes
        and clean the spike train.
        """
        if not spike_flag.poll() and any(formatted_train):
            # checks if Brian already emptied the consumer side of the pipe
            # and if there is at least one spike to be processed.
            process_out.send(formatted_train)
            formatted_train.fill(0)  # .fill(0) because it is faster than assignment!

    # Sets the signal handler, so every time the alarm reaches the final time it calls handler_clean_spike_train.
    signal.signal(signal.SIGALRM, handler_clean_spike_train)

    while True:
        try:
            # I need to check the best value for this alarm!!!!
            signal.setitimer(signal.ITIMER_REAL, self.inputclock_dt/1000.0)
            # Sets the alarm to signal according to the Brian clock used to read the inputs.
            # If the alarm runs out, the spikes are trashed.
            # This is the only way to interrupt the select() call.
            if select.select(udp_data, [], []):
                # It blocks until at least one of the pipes in the udp_data list gets data in!
                # select() should work in most operating systems, but if only Linux is used
                # this function could be swapped for poll and then waste less time with this test.
                signal.setitimer(signal.ITIMER_REAL, 0)  # disables the alarm used to clean the formatted_train
                for pipe_udp, offset in pipe_and_offset:
                    if pipe_udp.poll():
                        # Verifies if there is any spike waiting in this pipe.
                        # This test is important because recv() is a blocking command.
                        received_raw_data = pipe_udp.recv()  # reads from the pipe the data received by UDP
                        received_spikes = numpy.fromstring(received_raw_data[8:], dtype=numpy.uint8)
                        # The dtype=numpy.uint8 is very important because the
                        # conversion to/from string depends on it!
                        # Takes out the first 8 bytes related to the brian_address.
                        try:
                            formatted_train[offset[0]:offset[0]+offset[1]] = received_spikes
                        except ValueError:
                            print "Received packet Error!"  # in case the system receives a wrong packet at this address.
        except select.error:
            # Catches the exception raised by the alarm interruption inside the select command.
            pass  # In other words: makes it possible to quit the select command!
        except KeyboardInterrupt:
            print "User keyboard interruption...finishing!"
            break  # Kills the while...

def send_UDP(self, IP, PORT, pipe_out):
    """
    This is a very lightweight function to send the final msg (local_brian_address + spikes_train) using UDP.
    The spikes_train is going to be a numpy.array converted to string using the numpy.tostring method, and the
    local_brian_address has the same format, but with the original numpy.array containing the address (8 bits).
    Each IP/PORT is going to have a process running exclusively with this function, and because this is an IO
    function it is not a big deal to have more processes than real processors.
    """
    sockO = socket.socket(socket.AF_INET,    # IP
                          socket.SOCK_DGRAM)  # UDP
    while True:
        try:
            sockO.sendto(pipe_out.recv(), (IP, PORT))
            # here it is assumed the pipe carries the final string;
            # the method .recv() is a blocking command
        except IOError:  # Without the IOError even the keyboard "control+C" is caught here!
print self.brian_address, "-send_UDP IO error?" except KeyboardInterrupt: print "User keyboard interruption...finishing!" break # Kills the while... def send_UDP_output_pipes(self, local_brian_address, addresses_output, pipe_out, send_UDP_pipes): """ This function receives a Pipe (pipe_out) where the spikes are going to arrive from the Brian simulation, sets up the output spike train (because Brian sends only the indexes of the neurons who spiked), converts to the numpy.tostring() format, adds the brian_address and then redirects to the processes who send the UDP packet to each address/machine. In order to make it a little bit faster, the local_brian_address could be converted to string before sending it to this function. """ UDP_spike_array = numpy.array([0]*self.NumOfNeuronsOutput, dtype=numpy.uint8) # This command is outside the setup_send_UDP because # it slows down the function. # The dtype=numpy.uint8 is very important because the # convertion to/from string depends on it! random_indexes = range(len(addresses_output)) # Creates the index to be shuffled next. # in my experiments, the shuffle runs 10x faster with a list # instead of using a numpy.array ?!?!?!? while True: try: numpy.random.shuffle(random_indexes) # Randomize (shuffle) the indexes, so on average they all receive # packets with the same delay. UDP_spike_array.fill(0) # in my experiments, this is 10x faster than using [0]*NumOfNeuronsOutput for an array with 1k uint8s UDP_spike_array[pipe_out.recv()]=1 # now the
[R-sig-Geo] Memory problems with dnearneigh in spdep -- related data questions
Empty Empty phytophthorasb at yahoo.com
Fri Apr 12 23:29:36 CEST 2013

Thanks again, Roger. I will work on the rasters and learning the focal method for this. In the interest of learning for beyond this particular analysis, I'm a bit confused about what I'm seeing in the data summaries and how to understand it. I realize understanding one's data is extremely fundamental, sorry.

The data I was using was from a points file generated in ArcGIS. I have another older Arc-generated points file of the same data that is not projected and proj4string NA, but everything else is the same. When I took an older raster of the data that I have and converted it to points in R (package: raster), I get the following summary:

Object of class SpatialPointsDataFrame
Coordinates:
        min       max
x -17.52090  51.40407
y -34.82918  37.53746
Is projected: NA
proj4string : [NA]
Number of points: 36820887
Data attributes:
   Min.   1st Qu.    Median      Mean   3rd Qu.       Max.
   0.00      0.28      2.57     26.43     10.51  103100.00

So it is the same, except it isn't projected and it doesn't have that entry, NA's. Where the ones that come from ArcGIS have a very high number of NA's (all but 1 million of the points - 35m out of 36m). Again it's all from a 1k raster.

1) Any ideas why the difference in NA's and if this might be part of my problem? And in general, what would one check for in the data to avoid this problem (if it is a problem)?

2) In the projected data set, it clearly says units=m (inside the proj4string entry), so if I do a distance threshold can I assume it is calculating in meters? If it isn't projected, is the default then lat lon and kilometers? In general, if I'm looking at a summary of data, how do I know in what units it is going to be interpreting a distance threshold (or something similar for any other function)?

Some weeks ago, I successfully ran the G* with dnearneigh on a subset of the data - just Ethiopia (the points file was generated from ArcGIS). The distance threshold I used was 30 (honestly I was just trying to see if it would run, there was no theoretical reason for 30). The output made sense and was similar to what my colleague got in ArcGIS for Ethiopia. Summary of data:

Object of class SpatialPointsDataFrame
Coordinates:
                min       max
coords.x1 -162966.9   1492490
coords.x2  377203.2   1642482
Is projected: TRUE
proj4string : [+proj=utm +zone=37 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0]
Number of points: 1335221
Data attributes:
    OBJECTID          POINTID     GRID_CODE
 Min.   :      1   Min.   :0   Min.   :    0.00
 1st Qu.: 333806   1st Qu.:0   1st Qu.:    5.32
 Median : 667611   Median :0   Median :   15.25
 Mean   : 667611   Mean   :0   Mean   :   59.38
 3rd Qu.:1001416   3rd Qu.:0   3rd Qu.:   69.10
 Max.   :1335221   Max.   :0   Max.   :72526.11

3) This data set doesn't have the NA's and it worked. Hmmm. But the units are in meters and I used a distance threshold of 30. For 1km data, shouldn't there have been no neighbors at 30m? Why did this work? or was it calculating kilometers instead?

Thanks for any insight,
Juliann

----- Original Message -----
From: Roger Bivand <Roger.Bivand at nhh.no>
To: Empty Empty <phytophthorasb at yahoo.com>
Cc: "r-sig-geo at r-project.org" <r-sig-geo at r-project.org>
Sent: Wednesday, April 10, 2013 12:10 AM
Subject: Re: [R-sig-Geo] Memory problems with dnearneigh in spdep

On Tue, 9 Apr 2013, Empty Empty wrote:

> Thanks so much, Roger, for your suggestions.
> I should have realized it was taking kilometers rather than meters!
>
> My summary(r) is:
> Object of class SpatialPointsDataFrame
> Coordinates:
>                 min       max
> coords.x1 -17.52090  51.40407
> coords.x2 -34.82918  37.53746
> Is projected: TRUE
> proj4string :
> [+proj=aea +lat_1=20 +lat_2=-23 +lat_0=0 +lon_0=25 +x_0=0 +y_0=0 +ellps=WGS84
> +datum=WGS84 +units=m +no_defs +towgs84=0,0,0]

Given your object summary, you need to check where your data came from. The bounding box is for geographical coordinates, but the declared coordinate reference system is projected in units of metres. So your 9.333 is 9.333m, and no neighbours will be found.

The inclusion of NAs in gridded data represented as points is unnecessary, and all points not on land should be dropped before analysis begins.

I do suggest moving to a raster representation, and using focal methods in the raster package to generate the separate components of equation 14.3, p. 263-264 in Getis & Ord (1996). Using focal methods defines the moving window as a matrix, here 10x10, moved over the raster and circumventing the creation of a weights object - create a new raster layer with \sum_j w_{ij}(d) x_j values. W_i^* will be a constant, as will \bar{x}^*, and possibly the other starred terms too.

Hope this helps,

Roger

> Number of points: 36820887
> Data attributes:
>     POINTID           GRID_CODE
>  Min.   :      1    Min.   :     0.00
>  1st Qu.: 250000    1st Qu.:     0.28
>  Median : 500000    Median :     2.57
>  Mean   : 500000    Mean   :    26.43
>  3rd Qu.: 750000    3rd Qu.:    10.51
>  Max.   : 999999    Max.   :103104.70
>  NA's   :35820888
>
> It seems that there are a lot of NA's. (Everything in the square outside the continent's borders should be an NA, so maybe that's a reasonable number). It seemed right when I plotted it earlier.
>
> I ran it with 9.333 as the distance band (then I was going to try 1.5 next).
> nb<- dnearneigh(r, 0, 9.333)
>
> It used a constant amount of memory for the first ~90 minutes and then it increased at a constant rate for the next 20 hours
> until it was using virtually all the memory on the server and was stopped.
> Because the administrator had to kill my job, I wasn't able to get session info while the session was open. I'm not sure this is at all helpful, but when I re-started R this is the sessionInfo():
> R version 2.15.3 (2013-03-01)
> Platform: x86_64-pc-linux-gnu (64-bit)
>
> locale:
>  [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C         LC_TIME=C            LC_COLLATE=C
>  [5] LC_MONETARY=C        LC_MESSAGES=C        LC_PAPER=C           LC_NAME=C
>  [9] LC_ADDRESS=C         LC_TELEPHONE=C       LC_MEASUREMENT=C     LC_IDENTIFICATION=C
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods   base
>
> loaded via a namespace (and not attached):
> [1] tools_2.15.3
>
> Thanks,
> Juliann
>
>
> ________________________________
> From: Roger Bivand <Roger.Bivand at nhh.no>
> To: Empty Empty <phytophthorasb at yahoo.com>
> Cc: "r-sig-geo at r-project.org" <r-sig-geo at r-project.org>
> Sent: Sunday, April 7, 2013 12:15 PM
> Subject: Re: [R-sig-Geo] Memory problems with dnearneigh in spdep
>
> I do prefer an affiliation, a real name, and a motivation; a
> yahoo or gmail address tells me nothing about what you might be expected
> to know. If I (or a list helper) cannot "place" you, it is much harder to
> know why you are having problems.
>
> You are also in breach of list rules because you are posting in HTML rather than just plain text.
> Removing the HTML will shorten your posting radically.
>
> On Sun, 7 Apr 2013, Empty Empty wrote:
>
>> Hi,
>>
>> I'm running into memory problems calculating Gi* using dnearneigh and wonder if someone has any suggestions for either my script or another work around.
>>
>> I have 1km data for Africa. My colleague calculated the appropriate distance band (9333m) using incremental spatial autocorrelation in ArcGIS but she was only able to run a subset of the data or with the data aggregated to 5km due to memory issues (fixed distance, Euclidean). I tried it in both ArcGIS (memory error after 45 hours) and R (within 2 hours it was using more than 245GB on the server and then crashed). This is what I used
>
> Please always include the output of sessionInfo(). Here also include summary(r) output.
>
>> library(spdep)
>>
>> coords<-coordinates(r)
>
> This is not needed.
>
>> nb9333<- dnearneigh(coords, 0,9333,longlat = F)
>
> nb9333<- dnearneigh(r, 0, 9333)
>
> would work, and would not override longlat=. I assume in fact that r is in longlat, and that you would then mean to use a km threshold of 9.333. If I'm right, then the memory usage is being complicated by your including all observations as neighbours of each observation.
>
> For a 1km grid, a threshold of 9333m means that each of the about 30M gridcells has 100 neighbours (rather strong smoothing). Is the threshold
registers defined in the FPGA using the Unix operating system as a pipeline. Unlike BORPH, FUSE \cite{os_fuse} leverages a modified kernel module to support FPGA accelerators in the form of tasks instead of processes. In order to decrease data transmission latency between HW and SW operations, it also makes use of a shared kernel memory paradigm. ReconOS \cite{os_recon} introduces the multi-threading abstraction of software programs and a standardized interface for integrating custom hardware accelerators. Like FUSE and BORPH, ReconOS uses a modified Linux kernel to build this framework. At a high level, Feniks \cite{os_fenik} abstracts the FPGA into two distinct regions, an OS region and a multi-application region, both of which are exposed to software applications. Software stacks and modules are present in the OS region to connect efficiently with the local DRAM of the FPGA, the host CPU and memory, servers, and cloud services. In addition, Feniks provides resource management and FPGA allocation using centralized cloud controllers that run on host CPUs. Like an OS, LEAP \cite{os_leap} provides a uniform, abstract interface to underlying hardware resources and is similar to other proposed architectures. AmorphOS \cite{os_amorph} divides the FPGA region into small fixed-size zones, a.k.a. Morphlets, which provide the virtualization of the FPGA fabric. AmorphOS shares hardware tasks using either spatial sharing or a time-sharing scheduling method.

\section{Cloud FPGA Infrastructure and Deployment Models}
\label{Section:Deployment}

Over the previous ten years, FPGAs were mostly used in the verification phase of ASIC designs, where the ASIC design was implemented for validation and verification before it was actually produced. Additionally, specialized markets and research programs had some other applications. However, FPGAs are gaining popularity as an alternative to CPUs and GPUs due to their high-performance processing and parallelism. FPGA boards are available on the market today both as discrete devices that may be connected through PCIe ports and as part of the same System-on-Chip (SoC) as the CPU. Recent trends indicate that the integration of FPGAs in cloud platforms is growing exponentially, providing tenants with the ability to design and implement their own custom hardware accelerators. There are typically four basic methods for deploying FPGA boards in cloud data centers. Fig.~\ref{fig:deploy} shows the different FPGA deployment models in the cloud.

\begin{figure}[h]
\centering
\includegraphics[width=11 cm]{images/deploy.pdf}
\caption{FPGA deployment models in the cloud. a) In the co-processor model, PCIe connections are used to connect FPGA boards to CPUs in data centers. b) In the SoC model, the FPGA and CPU are mounted on the same chip die. c) The bump-in-the-wire concept uses FPGAs in the data centers, which tenants can access via NIC protocols.}
\Description{FPGA deployment models in the cloud.}
\label{fig:deploy}
\end{figure}

\subsection{Co-processor}
\label{subsection:deploy_co-processor}

In the first approach, the FPGA is considered a co-processor. The FPGA and CPU are located in the same node in a data center, and the FPGA can be accessed over the PCIe network. However, the total number of FPGA boards in the data center is proportional to the total number of CPUs, and the FPGA board cannot run independently. Xilinx introduced the first CPU+FPGA integration in 2013 for embedded devices, named the Zynq SoC \cite{zynq}. This SoC platform integrates an ARM Cortex processor and an FPGA programmable block on the same die.
Removing external communication ports between the CPU and the FPGA reduces latency and increases overall performance. An AXI-based communication protocol handles the communication between the FPGA and the CPU. In 2014, a project led by Microsoft, named Catapult \cite{catapult}, implemented the idea of integrating FPGAs and CPUs in a data center. In this project, Microsoft aimed at augmenting CPUs with an interconnected and configurable programmable FPGA block. The project was first used as a prototype in the Bing search engine to identify and accelerate computationally expensive operations in Bing's IndexServe engine. The experimental project was very successful and generated outstanding results, dramatically improving search latency and running Bing search algorithms 40 times faster than CPUs alone \cite{catapult2}. These results pushed Microsoft to extend the approach to its other web services. In 2015, Intel acquired Altera and introduced its Xeon+FPGA deployment model, which integrates FPGAs alongside Xeon CPUs. In the backend, Intel introduced partial reconfiguration of the bitstream, where a static region of the FPGA is allocated. This static configuration is referred to as the blue bitstream, as it can load the user bitstream (named the green bitstream) into the reconfigurable region. This interaction is handled by an interface protocol called Core Cache Interface (CCI-P). Amazon announced its partnership with Xilinx for accelerated cloud technology using FPGAs in 2016. This project is controlled under an AWS shell module, where the user logic is configured dynamically in the FPGA hardware. Tenants are provided with an Amazon-designed software API shell to avoid any security damage. In recent years, Baidu \cite{baidu}, Huawei \cite{huawei}, Tencent \cite{tencant}, and Alibaba have also started integrating FPGAs and CPUs.

\subsection{Discrete}
\label{subsection:deploy_discrete}

FPGA boards can also be deployed independently as individual separate components, which eliminates the necessity of deploying a CPU along with each FPGA board. This discrete approach deploys the FPGA board as a separate standalone component: the setup is independent of the CPU, and the FPGA board is directly connected to the network. For example, NARC \cite{narc} is a standalone FPGA board that is connected through the network and capable of performing computationally heavy workloads and network tasks. Using the OpenStack platform, the Smart Applications on Virtualized Infrastructure (SAVI) project \cite{deploy_toronto} deployed a cluster of discrete FPGA boards that can communicate privately in a closed network. In 2017, IBM announced the cloudFPGA project, accommodating a total of 1024 FPGA boards in a data-centre rack \cite{ibm}. IBM deploys these FPGA racks as standalone hardware resources and avoids the common way of coupling FPGA boards with CPUs in data centres. IBM's standalone setup has shown 40x and 5x improvements in latency and throughput, respectively \cite{ibm}.

\subsection{Bump-in-the-wire}
\label{subsection:deploy_wire}

The bump-in-the-wire model refers to a setup where FPGAs are placed in a server between the Network Interface Card (NIC) and the network. This allows FPGAs to communicate directly over the network and process data packets received from different users through the internet. Bump-in-the-wire architectures achieve a dramatic reduction in latency, as the number of layers in the communication path is reduced.
Exposing the FPGA resources over the network has the unique benefit of providing offloaded computation without interacting with CPUs and GPUs. Users can directly send packets to the servers, which are processed by the routers and later forwarded to the destined FPGA board using software-defined networking (SDN) protocols. The famous Microsoft Catapult followed the bump-in-the-wire deployment model to connect its FPGA boards to the top-of-rack (ToR) switch.

\subsection{System-on-chip (SoC)}
\label{subsection:deploy_soc}

SoC FPGA devices integrate microprocessors with FPGA fabric in a single device. Consequently, they provide higher reliability and reduced latency and power consumption between the processor and the FPGA. They also include a rich set of I/O devices, on-chip memory blocks, logic arrays, DSP blocks, and high-speed transceivers. Currently, there are three families of SoC FPGAs available on the market, from Intel, Xilinx, and Microsemi \cite{Jin2020}. The processors used in SoC FPGAs are fully dedicated “hardened” processor blocks. All three FPGA vendors integrate a full-featured ARM® processor and a memory hierarchy internally connected with the FPGA region. Integrating ARM processors and FPGA blocks on the same silicon die significantly reduces the board space and fabrication cost. Communication between the two regions consumes substantially less power than using two isolated devices.

\section{Industrial Evolution of Public Cloud FPGA Accelerators}
\label{Section:Industry}

In recent years, public cloud providers have been offering FPGA boards to users/tenants in their data centers. Users or tenants normally pay per use to access FPGA resources, and control is assigned to them for a specific time slot. Tenants can speed up their application performance by designing custom hardware accelerators and implementing them in their assigned region. Amazon AWS \cite{amazon}, Huawei Cloud \cite{huawei}, and Alibaba Cloud \cite{alibaba} have started offering the Xilinx Virtex UltraScale+ architecture for rent in their cloud platforms. Xilinx Kintex UltraScale is offered by Baidu \cite{baidu}
\section{Introduction}

The intriguing role of duality in different contexts is being progressively understood and clarified \cite{O, GH, JS, AG}. Much effort has gone into sorting out several technical aspects of duality symmetric actions. In this context the old idea \cite{O, Z, DT} of electromagnetic duality has been revived with considerable attention and emphasis \cite{SS, GR, NB, DGHT}. Recent directions \cite{SS, KP, PST, G} also include an abstraction of manifestly covariant forms for such actions or an explicit proof of their equivalence with the nonduality symmetric actions, which they are supposed to represent. There are also different suggestions on the possible analogies between duality symmetric actions in different dimensions. In particular it has been claimed \cite{PST} that the two dimensional self dual action given in \cite{FJ} is the analogue of the four dimensional electromagnetic duality symmetric action \cite{SS}. In spite of the recent spate of papers on this subject, there does not seem to be a simple, clear-cut way of arriving at duality symmetric actions. Consequently the fundamental nature of duality remains clouded by technicalities. Additionally, the dimensionality of spacetime appears to be extremely crucial. For instance, while the duality symmetry in $D=4k$ dimensions is characterised by the one-parameter continuous group $SO(2)$, that in $D=4k+2$ dimensions is described by a discrete group with just two elements \cite{DGHT}. Likewise, it has also been argued from general notions that a symmetry generator exists only in the former case. From an algebraic point of view the distinction between the dimensionalities is manifested by the following identities,
\begin{eqnarray}
\label{i1}
\mbox{}^{**}F &=& F\,\,\,;\,\,\,D=4k+2\nonumber\\
&=&- F\,\,\,;\,\,\,D=4k
\end{eqnarray}
where the $*$ denotes the usual Hodge dual operation and $F$ is the $\frac{D}{2}$-form. Thus there is a self dual operation in the former which is missing in the latter dimensions. This apparently leads to separate consequences for duality in these cases.

The object of this paper is to develop a method for systematically obtaining and investigating different aspects of duality symmetric actions that embrace all dimensions. A deep unifying structure is illuminated which also leads to new symmetries. Indeed we show that duality is not limited to field or string theories, but is present even in the simplest of quantum mechanical examples: the harmonic oscillator. It is precisely this duality which pervades all field theoretical examples, as will be explicitly shown.

The basic idea of our approach is deceptively simple. We start from the second order action for any theory and convert it to the first order form by introducing an auxiliary variable. Next, a suitable relabelling of variables is done which induces an internal index in the theory. It is crucial to note that there are two distinct classes of relabelling characterised by the opposite signatures of the determinant of the $2\times 2$ orthogonal matrix defined in the internal space. Correspondingly, in this space there are two actions that are manifestly duality symmetric. Interestingly, their equations of motion are just the self and anti-self dual solutions, where the dual field in the internal space is defined below in (\ref{i2}). It is also found that in all cases there is one (conventional duality) symmetry transformation which preserves the invariance of these actions, but there is another transformation which swaps the actions.
We refer to this property as swapping duality. This indicates the possibility, in any dimensions, of combining the two actions to a master action that would contain all the duality symmetries. Indeed this construction is explicitly done by exploiting the ideas of soldering introduced in \cite{S} and developed by us \cite{ABW, BW}. The soldered master action also has manifest Lorentz or general coordinate invariance. The generators of the symmetry transformations are also obtained. It is easy to visualise how the internal space effectively unifies the results in the different $4k+2$ and $4k$ dimensions. The dual field is now defined to include the internal index $(\alpha, \beta)$ in the fashion,
\begin{eqnarray}
\label{i2}
\tilde F^\alpha &=&\epsilon^{\alpha\beta}\mbox{}^{*}F^\beta \,\,\,;\,\,\,D=4k\nonumber\\
\tilde F^\alpha &=&\sigma_1^{\alpha\beta}\mbox{}^{*}F^\beta \,\,\,;\,\,\,D=4k+2
\end{eqnarray}
where $\sigma_1$ is the usual Pauli matrix and $\epsilon_{\alpha\beta}$ is the fully antisymmetric $2\times 2$ matrix with $\epsilon_{12} =1$. Now, irrespective of the dimensionality, the repetition of the dual operation yields,
\begin{equation}
\label{i3}
\tilde{\tilde F} = F
\end{equation}
which generalises the relation (\ref{i1}). An immediate consequence of this is the possibility to obtain self and anti-self dual solutions in all even $D=2k+2$ dimensions. Their explicit realisation is one of the central results of the paper.

The paper is organised into five sections. In section 2 the above ideas are exposed by considering the example of the simple harmonic oscillator. A close parallel with the electromagnetic notation is also developed to illuminate the connection between this exercise and those given for the field theoretical models in the next two sections. The duality of scalar field theory in two dimensions is considered in section 3. The occurrence of a pair of actions is shown which exhibit duality and swapping symmetries. These are the analogues of the four dimensional electromagnetic duality symmetric actions. Indeed, from these expressions, it is a trivial matter to reproduce both the self and anti-self dual actions given in \cite{FJ}. Our analysis clarifies several issues regarding the intertwining roles of chirality and duality in two dimensions. The soldering of the pair of duality symmetric actions is also performed, leading to fresh insights. The analysis is completed by including the effects of gravity. In section 4, the Maxwell theory is treated in great detail. Following our prescription, the duality symmetric action given in \cite{SS} is obtained. However, there is also a new action which is duality symmetric. Once again the soldering of these actions leads to a master action which contains a much richer structure of symmetries. Incidentally, it also manifests the original symmetry that interchanges the Maxwell equations with the Bianchi identity, but reverses the signature of the action. As usual, the effects of gravity are straightforwardly included. Section 5 contains the concluding comments.

\section{Duality in $0+1$ dimension}

The basic features of duality symmetric actions are already present in the quantum mechanical examples as the present analysis on the harmonic oscillator will clearly demonstrate. Indeed, this simple example is worked out in some detail to illustrate the key concepts of our approach and set the general tone of the paper. An extension to field theory is more a matter of technique than of introducing truly new concepts.
The Lagrangean for the one-dimensional oscillator is given by,
\begin{equation}
L=\frac{1}{2}\Big ({\dot q}^2-q^2\Big)
\label{10}
\end{equation}
leading to an equation of motion,
\begin{equation}
\ddot q+q=0
\label{20}
\end{equation}
Introducing a change of variables,
\begin{equation}
E=\dot q\,\,\,\,\,; \,\,\,\,\, B=q
\label{30}
\end{equation}
so that,
\begin{equation}
\dot B-E=0
\label{40}
\end{equation}
is identically satisfied, the above equations (\ref{10}) and (\ref{20}) are, respectively, expressed as follows:
\begin{equation}
L=\frac{1}{2}\Big(E^2-B^2\Big)
\label{50}
\end{equation}
and,
\begin{equation}
\dot E+B=0
\label{60}
\end{equation}
It is simple to observe that the transformations,\footnote{Note that these are just the discrete cases ($\alpha=\pm\frac{\pi}{2}$) for a general $SO(2)$ rotation matrix parametrised by the angle $\alpha$}
\begin{equation}
E\rightarrow \pm B\,\,\,;\,\,\, B\rightarrow \mp E
\label{70}
\end{equation}
swap the equation of motion (\ref{60}) with the identity (\ref{40}), although the Lagrangean (\ref{50}) is not invariant. The similarity with the corresponding analysis in the Maxwell theory is quite striking, with $q$ and $\dot q$ simulating the roles of the magnetic and electric fields, respectively. There is a duality between the equation of motion and the `Bianchi' identity (\ref{40}), which is not manifested in the Lagrangean. In order to elevate the duality to the Lagrangean, the basic step is to rewrite (\ref{10}) in the first order form by introducing an additional variable,
\begin{eqnarray}
L&=&p\dot q-\frac{1}{2}(p^2+q^2)\nonumber\\
&=&\frac{1}{2}\Big(p\dot q-q\dot p -p^2-q^2\Big)
\label{80}
\end{eqnarray}
where a symmetrisation has been performed. There are now two possible classes for relabelling these variables corresponding to proper and improper rotations generated by the matrices $R^+(\theta)$ and $R^-(\varphi)$ with determinant $+1$ and $-1$, respectively,
\begin{eqnarray}
\left(\begin{array}{c} q\\ p \end{array}\right) =
\left(\begin{array}{cc} {\cos\theta} & {\sin\theta} \\ {-\sin\theta} &{\cos\theta} \end{array}\right)
\left(\begin{array}{c} x_1\\ x_2 \end{array}\right)
\label{matrix1}
\end{eqnarray}
\begin{eqnarray}
\left(\begin{array}{c} q\\ p \end{array}\right) =
\left(\begin{array}{cc} {\sin\varphi} & {\cos\varphi} \\ {\cos\varphi} &{-\sin\varphi} \end{array}\right)
\left(\begin{array}{c} x_1\\ x_2 \end{array}\right)
\label{matrix}
\end{eqnarray}
leading to the distinct Lagrangeans,
\begin{eqnarray}
L_\pm&=&\frac{1}{2}\Big(\pm x_\alpha\epsilon_{\alpha\beta}\dot x_\beta -x_\alpha^2\Big)\nonumber\\
&=&{1\over 2}\left( \pm B_\alpha\epsilon_{\alpha\beta}E_\beta-B_\alpha^2 \right)
\label{100}
\end{eqnarray}
where we have reverted back to the `electromagnetic' notation introduced in (\ref{30}). By this change of variables, an index $\alpha=(1, 2)$ has been introduced that characterises a symmetry in this internal space, the complete details of which will progressively become clear. It is useful to remark that the above changes of variables are succinctly expressed as,
\begin{eqnarray}
q&=&x_1\,\,\,\,;\,\,\,\, p=x_2\nonumber\\
q&=&x_2\,\,\,\,;\,\,\,\, p=x_1
\label{90}
\end{eqnarray}
by setting the angle $\theta=0$ or $\varphi =0$ in the rotation matrices (\ref{matrix1}) and (\ref{matrix}). Correspondingly, the Lagrangean (\ref{80}) goes over to (\ref{100}).
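As a short explicit check of the last statement (spelled out here for convenience, using the convention $\epsilon_{12}=1$ introduced above), take the first relabelling in (\ref{90}), $q=x_1$, $p=x_2$, in the first order form (\ref{80}):
\begin{eqnarray}
L &=& \frac{1}{2}\Big(p\dot q-q\dot p -p^2-q^2\Big)\nonumber\\
&=& \frac{1}{2}\Big(x_2\dot x_1 - x_1\dot x_2 - x_\alpha x_\alpha\Big)
= \frac{1}{2}\Big(-x_\alpha\epsilon_{\alpha\beta}\dot x_\beta - x_\alpha^2\Big)\nonumber
\end{eqnarray}
which is one member of the pair (\ref{100}); the second relabelling, $q=x_2$, $p=x_1$, simply reverses the sign of the first term and yields the other member.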
Now observe that the above Lagrangeans (\ref{100}) are manifestly invariant under the continuous duality transformations, \begin{equation} \label{90a} x_\alpha\rightarrow R^+_{\alpha\beta}x_\beta \end{equation} which may be equivalently expressed as, \begin{eqnarray} E_\alpha&\rightarrow& R^+_{\alpha\beta}E_\beta\nonumber\\ B_\alpha&\rightarrow& R^+_{\alpha\beta}B_\beta \label{100a} \end{eqnarray} where $R^+_{\alpha\beta}$ is the usual $SO(2)$ rotation matrix (\ref{matrix1}). The generator of the infinitesimal symmetry transformation is given by, \begin{equation} \label{gen} Q^\pm =\pm\frac{1}{2}x_\alpha x_\alpha \end{equation} so that the complete transformations (\ref{90a}) are generated as, \begin{eqnarray} x_\alpha \rightarrow x'_\alpha & =& e^{-i\theta Q} x_\alpha e^{i\theta Q} \nonumber\\
gids_spiking, gids_spiking + 100, rasterized=True, lw=0.15) ax2 = ax.twinx() ax2.hist(times, bins=np.linspace(-t_window, 0, 51), histtype='step', weights=np.zeros(times.size) + (1000.0/10.0)/gids.size) ax2.set_ylabel('FR (Hz)') #ax2.set_ylim([0, 3]) #ax2.set_yticks([0, 1, 2, 3]) spikes = bluepy.Simulation(continue_bcs[n_bc]).v2.reports['spikes'] df = spikes.data(t_end=t_window) gids_spiking = np.array(df.axes[0]) gids_spiking = np.vectorize(sort_dict.get)(gids_spiking) times = np.array(df) ax.vlines(times, gids_spiking, gids_spiking + 100, rasterized=True, lw=0.15) ax2.hist(times, bins=np.linspace(0, t_window, 51), histtype='step', weights=np.zeros(times.size) + (1000.0/10.0)/gids.size) ax = axs[1,k] spikes = bluepy.Simulation(base_bcs[n_bc]).v2.reports['spikes'] df = spikes.data(t_start=t_middle - t_window) gids_spiking = np.array(df.axes[0]) gids_spiking = np.vectorize(sort_dict.get)(gids_spiking) times = np.array(df) - t_middle ax.vlines(times, gids_spiking, gids_spiking + 100, rasterized=True, lw=0.15) ax2 = ax.twinx() ax2.hist(times, bins=np.linspace(-t_window, 0, 51), histtype='step', weights=np.zeros(times.size) + (1000.0/10.0)/gids.size) ax2.set_ylabel('FR (Hz)') #ax2.set_ylim([0, 3]) spikes = bluepy.Simulation(change_bcs[n_bc]).v2.reports['spikes'] df = spikes.data(t_end=t_window) gids_spiking = np.array(df.axes[0]) gids_spiking = np.vectorize(sort_dict.get)(gids_spiking) times = np.array(df) ax.vlines(times, gids_spiking, gids_spiking + 100, rasterized=True, lw=0.15) ax2.hist(times, bins=np.linspace(0, t_window, 51), histtype='step', weights=np.zeros(times.size) + (1000.0/10.0)/gids.size) ax = axs[2, k] for bc in base_bcs: soma = bluepy.Simulation(bc).v2.reports['soma'] time_range = soma.time_range[soma.time_range >= t_middle - t_window] - t_middle data = soma.data(t_start=t_middle - t_window) ax.plot(time_range, data.mean(axis=0), linewidth=1, alpha=0.5, rasterized=True, color='#1f77b4') for bc in continue_bcs: soma = bluepy.Simulation(bc).v2.reports['soma'] time_range_cont = soma.time_range[soma.time_range < t_middle + t_window] - t_middle data_cont = soma.data(t_end=t_middle + t_window) ax.plot(time_range_cont, data_cont.mean(axis=0), linewidth=1, alpha=0.5, rasterized=True, color='#ff7f0e') for bc in change_bcs: soma = bluepy.Simulation(bc).v2.reports['soma'] time_range_cont = soma.time_range[soma.time_range < t_middle + t_window] - t_middle data_cont = soma.data(t_end=t_middle + t_window) ax.plot(time_range_cont, data_cont.mean(axis=0), linewidth=1, alpha=0.5, rasterized=True, color='#ff7f0e') ax_s = [axs[3, k].twinx(), axs[3, k]] n = 50 lines = ['-', '--'] for index_correlation in range(2): ax = ax_s[index_correlation] corrs, bins = get_correlations(parameter_continue=params[0][1], parameter_change=params[0][2], dt=10.0, ca=cas[k]) errs = np.hstack([np.zeros(n), corrs.mean(axis=0).std(axis=-1, ddof=1)[:, index_correlation]]) #/np.sqrt(corrs.shape[-1])]) means = np.hstack([np.zeros(n) + index_correlation, corrs.mean(axis=0).mean(axis=-1)[:, index_correlation]]) xs = np.hstack([-(bins[1:(n+1)] - t_middle)[::-1], bins[:(n+1)] - t_middle]) + (bins[1] - bins[0])/2.0 ys = means[:(n+n+1)] print xs.shape print ys.shape ax.errorbar(xs, ys, yerr=errs[:(n+n+1)]/np.sqrt(20), linestyle=lines[index_correlation], linewidth=0.8, color='black') ax_s[0].set_ylim([0, 5.3]) ax_s[0].set_yticks([0, 1, 2, 3, 4, 5]) ax_s[1].set_ylim([0, 1.1]) ax_s[1].set_yticks([0, 0.5, 1]) ax_s[0].set_ylabel('RMSD (mV)') ax_s[1].set_ylabel('r') for ax in axs[:2, :].flatten(): ax.set_xlabel('t (ms)') 
ax.set_ylim([0, 31346]) ax.set_yticks([0, 10000, 20000, 30000]) ax.set_ylabel('Neuron') ax.set_xlim([-t_window, t_window]) for ax in axs[2, :].flatten(): ax.set_xlabel('t (ms)') #ax.set_ylim([-63, -60]) #ax.set_yticks([-63, -62, -61, -60]) ax.set_ylabel('V (mV)') ax.set_xlim([-t_window, t_window]) for ax in axs[3, :].flatten(): ax.set_xlabel('t (ms)') ax.set_xlim([-t_window, t_window]) plt.tight_layout() plt.savefig('figures/raster_example_3_ca_scan.pdf', dpi=300) def plot_time_bins_fitting(): fig, ax = plt.subplots(1, figsize=(5, 4)) norms, times = get_initial_divergence(parameter_continue='', parameter_change='abcd') print norms.shape times -= 2000 ax.plot(np.vstack([times for i in range(40)]).T, norms.T, alpha=0.3, color='#a1d76a', lw=0.5) ax.plot(times, norms.mean(axis=0), '-', color='red', lw=1.0) ax.plot(times, norms.mean(axis=0) - norms.std(axis=0), '--', color='red', lw=1.0) ax.plot(times, norms.mean(axis=0) + norms.std(axis=0), '--', color='red', lw=1.0) ax.legend(loc='upper right', prop={'size':6}) ax.set_xlabel('t (ms)') ax.set_ylabel('RMSD or abs(diff)') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') plt.tight_layout() plt.savefig('figures/time_bins_fitting.pdf') def lorenz_func(t, a, lamb, c): return 1 / (1 + c * np.exp(-lamb * t)) def lorenz_func_2(t, c): return (0.01)**(np.exp(-c*t)) def simple_func(t, a): return 1 - np.exp(-a * t) def cell_type_analysis(normalize=True, ca='1p25'): """ Mtypes Etypes layers synapse_types in_degree in_ei_ratio """ circuit = bluepy.Simulation(get_base_bcs()[0]).circuit mtypes = circuit.mvddb.mtype_id2name_map().values() cells = circuit.v2.cells({Cell.HYPERCOLUMN: 2}) etypes = circuit.mvddb.etype_id2name_map().values() # print cells layers = [0, 1, 2, 3, 4, 5] synapse_classes = ['INH', 'EXC'] print "loading in degree" file_path_indegree = 'in_degree.npz' if not os.path.isfile(file_path_indegree): in_degree = connections(circ=circuit, base_target='mc2_Column').sum(axis=0) np.savez(open(file_path_indegree, 'w'), in_degree=in_degree) else: in_degree = np.load(file_path_indegree)['in_degree'] #in_degree = np.random.permutation(31346) print "loaded" file_path_indegree_exc = 'in_degree_exc.npz' if not os.path.isfile(file_path_indegree_exc): in_degree_exc = connections(circ=circuit, base_target_pre='mc2_Excitatory', base_target_post='mc2_Column').sum(axis=0) np.savez(open(file_path_indegree_exc, 'w'), in_degree_exc=in_degree_exc) else: in_degree_exc = np.load(file_path_indegree_exc)['in_degree_exc'] #in_degree = np.random.permutation(31346) print "loaded" file_path_indegree_inh = 'in_degree_inh.npz' if not os.path.isfile(file_path_indegree_inh): in_degree_inh = connections(circ=circuit, base_target_pre='mc2_Inhibitory', base_target_post='mc2_Column').sum(axis=0) np.savez(open(file_path_indegree_inh, 'w'), in_degree_inh=in_degree_inh) else: in_degree_inh = np.load(file_path_indegree_inh)['in_degree_inh'] #in_degree = np.random.permutation(31346) print "loaded" indices = np.array(cells['etype'] == 'dNAC') print "Number of dNAC" print indices.sum() bins_n = np.array([50, 100, 150, 200, 250, 300, 350, 400, 450]) #bins_n = np.linspace(in_degree.min() + 20, in_degree.max() - 20, 9) percentiles_in_degree = np.digitize(in_degree, bins_n) index_correlation = 1 n_plot = 15 dt = 10.0 #1.0, 10.0, 20.0 if ca=='1p25': corrs_shuffle, bins = get_correlations(parameter_continue='', parameter_change='abcd', dt=dt, n=40, shuffle_level=1) corrs, bins = 
get_correlations(parameter_continue='', parameter_change='abcd', dt=dt, n=40, shuffle_level=0) else: corrs_shuffle, bins = get_correlations(parameter_continue='_ca_abcd', parameter_change='ca_abcd', dt=dt, n=20, shuffle_level=1, ca=ca) corrs, bins = get_correlations(parameter_continue='_ca_abcd', parameter_change='ca_abcd', dt=dt, n=20, shuffle_level=0, ca=ca) print corrs.shape start_mean = [-1, 1][index_correlation] * ([0, 1][index_correlation] - corrs_shuffle[:, : , index_correlation, :].mean(axis=(-2, -1))) base_means = np.copy(start_mean) corrs = [-1, 1][index_correlation] * (corrs - corrs_shuffle) print "Number of messed up cells:" print (start_mean <= 0).sum() if normalize == True: corrs /= start_mean[:, None, None, None] corrs_shuffle /= start_mean[:, None, None, None] start_mean /= start_mean fig, ax = plt.subplots() ax.scatter(base_means, corrs.mean(axis=(-1))[:, 1, index_correlation], rasterized=True) plt.savefig('figures/scatter_base_vs_speed_ca%s.pdf' % ca, dpi=300) bins -= (2000.0 + dt/2) bins[0] = 0 means = {} errs = {} means_indegree_complete = {} for mtype in mtypes: indices = np.array(cells['mtype'] == mtype) errs[mtype] = np.hstack([np.array([mean_confidence_interval([-1, 1][index_correlation] * ([0, 1][index_correlation] - corrs_shuffle[:, : , index_correlation, :].mean(axis=(0, -1))))]), np.apply_along_axis(mean_confidence_interval, -1, corrs[indices, ...].mean(axis=0))[:, index_correlation]]) means[mtype] = np.hstack([np.array([start_mean[indices].mean()]), corrs[indices, ...].mean(axis=0).mean(axis=-1)[:, index_correlation]]) for layer in layers: indices = np.array(cells['layer'] == layer) errs[layer] = np.hstack([np.array([mean_confidence_interval([-1, 1][index_correlation] * ([0, 1][index_correlation] - corrs_shuffle[:, : , index_correlation, :].mean(axis=(0, -1))))]), np.apply_along_axis(mean_confidence_interval, -1, corrs[indices, ...].mean(axis=0))[:, index_correlation]]) means[layer] = np.hstack([np.array([start_mean[indices].mean()]), corrs[indices, ...].mean(axis=0).mean(axis=-1)[:, index_correlation]]) for synapse_class in synapse_classes: indices = np.array(cells['synapse_class'] == synapse_class) errs[synapse_class] = np.hstack([np.array([mean_confidence_interval([-1, 1][index_correlation] * ([0, 1][index_correlation] - corrs_shuffle[:, : , index_correlation, :].mean(axis=(0, -1))))]), np.apply_along_axis(mean_confidence_interval, -1, corrs[indices, ...].mean(axis=0))[:, index_correlation]]) means[synapse_class] = np.hstack([np.array([start_mean[indices].mean()]), corrs[indices, ...].mean(axis=0).mean(axis=-1)[:, index_correlation]]) for etype in etypes: indices = np.array(cells['etype'] == etype) errs[etype] = np.hstack([np.array([mean_confidence_interval([-1, 1][index_correlation] * ([0, 1][index_correlation] - corrs_shuffle[:, : , index_correlation, :].mean(axis=(0, -1))))]), np.apply_along_axis(mean_confidence_interval, -1, corrs[indices, ...].mean(axis=0))[:, index_correlation]]) means[etype] = np.hstack([np.array([start_mean[indices].mean()]), corrs[indices, ...].mean(axis=0).mean(axis=-1)[:, index_correlation]]) if etype == 'dNAC': fig, ax = plt.subplots() means_neurons = corrs[indices, ...].mean(axis=-1)[:, :, index_correlation] print means_neurons.shape ax.plot(bins[1:101], means_neurons[0:50, :100].T, linewidth=0.2) ax.plot(bins[1:101], means_neurons[0:50, :100].T.mean(axis=1), linewidth=2.2, color='orange') ax.plot(bins[1:101], means[etype][1:101], linewidth=2.2, color='red') plt.savefig('figures/dNAC_ca%s.pdf' % ca) for percentile_id in 
range(bins_n.size + 1): indices = percentiles_in_degree == percentile_id errs["p%d" % percentile_id] = np.hstack([np.array([mean_confidence_interval([-1, 1][index_correlation] * ([0, 1][index_correlation] - corrs_shuffle[:, : , index_correlation, :].mean(axis=(0, -1))))]), np.apply_along_axis(mean_confidence_interval, -1, corrs[indices, ...].mean(axis=0))[:, index_correlation]]) means["p%d" % percentile_id] = np.hstack([np.array([start_mean[indices].mean()]), corrs[indices, ...].mean(axis=0).mean(axis=-1)[:, index_correlation]]) means_indegree_complete["p%d" % percentile_id] = corrs[indices, ...].mean(axis=0)[1, index_correlation, :] print "CORRS SHAPE" print corrs.shape all_means_neurons = corrs.mean(axis=-1)[:, 1, index_correlation] fig, axs = plt.subplots() # ax.scatter(in_degree_exc, in_degree_inh, c=all_means_neurons) lowers = [50, 150, 250, 350, 450, 550, 650] uppers = [150, 250, 350, 450, 550, 650, 750] for i, upper in enumerate(uppers): ax = axs indices = np.logical_and(in_degree <= upper, in_degree > lowers[i]) ei_ratio = in_degree_exc[indices] / in_degree_inh[indices].astype(float) # ax.scatter(ei_ratio, all_means_neurons[indices], rasterized=True) lowers_2 = [5, 10, 15, 20] uppers_2 = [10, 15, 20, 25] values = [] errors =[] for j, upper_2 in enumerate(uppers_2): indices_2 = np.logical_and(ei_ratio <= upper_2, ei_ratio > lowers_2[j]) values.append(all_means_neurons[indices][indices_2].mean()) errors.append(all_means_neurons[indices][indices_2].std(ddof=1)/np.sqrt(all_means_neurons[indices][indices_2].size)) ax.errorbar(lowers_2, values, yerr=errors, marker='.', label=lowers[i]) axs.legend() ax.set_ylabel('s_r_10-20ms') ax.set_xlabel('n_E/n_I') plt.tight_layout() plt.savefig('figures/in_degree_ratio_ei.pdf', dpi=300) fig, axs = plt.subplots(4, 2, figsize=(9, 9)) # ax = axs[0, 0] # for mtype in mtypes: # ax.errorbar(bins[:(n_plot+1)], means[mtype][:(n_plot+1)], yerr=errs[mtype][:(n_plot+1)], linewidth=0.8) means_indegree = np.array([means["p%d" % percentile_id][:(n_plot+1)][2] for percentile_id, value in enumerate(np.insert(bins_n, 0, 0))]) errs_indegree = np.array([errs["p%d" % percentile_id][:(n_plot+1)][2] for percentile_id, value in enumerate(np.insert(bins_n, 0, 0))]) for percentile_id, value in enumerate(np.insert(bins_n, 0, 0)): ax = axs[0, 0] ax.errorbar(bins[:(n_plot+1)], means["p%d" % percentile_id][:(n_plot+1)], yerr=errs["p%d" % percentile_id][:(n_plot+1)], linewidth=0.8, label=value) ax = axs[2, 0] ax.errorbar(np.arange(len(np.insert(bins_n, 0, 0))), means_indegree, yerr=errs_indegree, color='black', marker='.') # individual sims print means_indegree_complete list_means_indegree_complete = [] for percentile_id, value in enumerate(np.insert(bins_n, 0, 0)): list_means_indegree_complete.append(means_indegree_complete["p%d" % percentile_id]) means_indegree_complete = np.vstack(list_means_indegree_complete) ax.plot(np.arange(len(np.insert(bins_n, 0, 0))), means_indegree_complete, color='red', alpha=0.3) print means_indegree_complete print means_indegree_complete.shape ax.set_xticks(np.arange(len(np.insert(bins_n, 0, 0)))) ax.set_xticklabels(np.insert(bins_n, 0, 0) + 50) if index_correlation == 1: # ax.set_ylim([0.2, 0.55]) # ax.set_ylim([0.0, 1.0]) # ax.set_yticks([0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55]) ax.set_yticks([0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 1]) in_bins = np.arange(len(np.insert(bins_n, 0, 0))) dcorr = means_indegree dcorr_errs = errs_indegree for layer in layers: ax = axs[1, 0] ax.errorbar(bins[:(n_plot+1)], means[layer][:(n_plot+1)], 
yerr=errs[layer][:(n_plot+1)], linewidth=0.8, label=layer) ax = axs[3, 0] ax.bar(layer, means[layer][:(n_plot+1)][2], yerr=errs[layer][:(n_plot+1)][2], color='#cccccc', edgecolor='#636363', width=0.6) ax.set_xticks(layers) ax.set_xticklabels(['L%d' % l for l in range(1, 7)]) for i, synapse_class in enumerate(synapse_classes): ax = axs[0, 1] ax.errorbar(bins[:(n_plot+1)], means[synapse_class][:(n_plot+1)], yerr=errs[synapse_class][:(n_plot+1)], linewidth=0.8, label=synapse_class) ax = axs[2, 1] ax.bar(i, means[synapse_class][:(n_plot+1)][2], yerr=errs[synapse_class][:(n_plot+1)][2], color='#cccccc', edgecolor='#636363', width=0.6) ax.set_xticks(np.arange(len(synapse_classes))) ax.set_xticklabels(synapse_classes) etype_means = np.array([means[etype][:(n_plot+1)][2] for etype in etypes]) indices = np.argsort(etype_means) print indices for j, i in enumerate(indices): etype = etypes[i] ax = axs[1, 1] ax.errorbar(bins[:(n_plot+1)], means[etype][:(n_plot+1)], yerr=errs[etype][:(n_plot+1)], label=etype, linewidth=0.8) ax = axs[3, 1] ax.bar(j, means[etype][:(n_plot+1)][2], yerr=errs[etype][:(n_plot+1)][2], color='#cccccc', edgecolor='#636363', width=0.6) ax.set_xticks(np.arange(len(etypes)))
as the momentum thickness $\delta_\theta = 1/[\rho_0 (\Delta v)^2] \int \rho (v_l - v)(v - v_r)~dx$ and the vorticity thickness $\delta_\omega = |v_l - v_r|/(\upartial v/\upartial y)_{max}$. A further complication is that lab experiments of the plane mixing layer measure a spatial spreading rate, $\delta'(x) \equiv d \delta / d x$. In our experiment, we move in a frame of reference at the convective velocity $v_{\rm c} = (1/2)(v_l+v_r)$ (assuming $c_l = c_r$) and therefore measure a temporal spreading rate \citep[e.g.,][]{Vreman1996,Pantano2002} \begin{equation} \delta'(t) = \frac{d \delta}{d t} = \frac{d x}{d t} \frac{ d \delta}{d x} = v_c \delta'(x). \end{equation} Values for $C_\delta$ estimated from plane mixing layer experiments \citep{Brown1974,Papamoschou1988} and high-resolution numerical simulations \citep{Pantano2002,Barone2006} are reported in Table \ref{tab:shearresults}, where the subscript on $C$ indicates the corresponding shear layer thickness definition. \subsection{Mixing layer results}\label{ss:mlresults} \begin{figure} \includegraphics[width=\linewidth]{fig1.eps} \caption{Time evolution of the one-dimensional subsonic ($M_{\rm c} = 0.10$) shear flow test with the LS74 $k$-$\varepsilon$ turbulence model. From the top, profiles of the $y$-velocity $v$, specific turbulent kinetic energy $k$, and turbulent length scale $L = C_\mu^{3/4} k^{3/2} \varepsilon^{-1}$. Profiles are shown at times $t = $ 0, 50, 100, and 200 $\mu$s, indicated by colour.} \label{f:chiravalle_profiles} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{fig2.eps} \caption{Growth of the shear layer width $\delta(t)$ in the subsonic ($M_{\rm c} = 0.10$) mixing layer with the LS74 $k$-$\varepsilon$ turbulence model. The shear layer definition is indicated by colour. All definitions produce linear growth but at different rates.} \label{f:chiravalle_growth} \end{figure} Figure \ref{f:chiravalle_profiles} shows the time evolution of a subsonic ($M_{\rm c} = 0.1$) mixing layer with the LS74 $k$-$\varepsilon$ model. The profiles of the $y$-velocity $v$, turbulent kinetic energy $k$, and turbulent length $L$ all spread in time; as noted, the exact spreading rate depends on how the layer thickness is defined. Figure \ref{f:chiravalle_growth} shows the growth of the shear layer thickness $\delta(t)$ for different layer definitions. All definitions show linear growth in time. The 1 per cent velocity thickness grows at the greatest rate, while the momentum thickness increases at the lowest rate. We use a $\chi^2$ minimization linear fit to estimate $C_\delta$; the results are presented in Table \ref{tab:shearresults}. Table \ref{tab:shearresults} also shows the growth rates at $M_{\rm c} = 0.10$ for all RANS models tested. We find that the various turbulence models lead to differing growth rates on the same test problem. Although most models do not reproduce the measured growth rate for all thickness definitions, all models do produce linear growth in time and roughly agree with the measured value for at least one definition, leading us to conclude that our models are implemented correctly in \textsc{Athena}. Variations in numerical method between codes could lead to discrepancies with previous work; further, there is significant uncertainty on the measured values. Interestingly, there is no clear relation between the different measures and models; for example, $C_{b10}$ is much greater with the GS11 model compared to the LS74 model, but $C_\omega$ is slightly less.
This suggests no single measure should be preferred. Finally, we note that C06 and GS11 calibrated their turbulence models using a 1 per cent velocity definition for the mixing layer. While their models show good agreement with this definition, we find that these models largely do not predict spreading rates in agreement with measured values when using other definitions. This suggests that a 1 per cent criterion may not be the best definition for comparison. \subsection{Compressible mixing layer}\label{ss:mixcompcorr} \begin{figure} \includegraphics[width=\linewidth]{fig3.eps} \caption{The compressibility factor $\Phi \equiv \delta'/\delta'_i$ as a function of convective Mach number $M_{\rm c}$. Results are shown for the standard LS74 model with no compressibility correction (purple dots) and with the compressibility corrections of S89 (blue upward triangles), Z90 (green downward triangles), and W92 (gold diamonds); as well as for the GS11 model (red squares), which includes a stress modification ($\tau_{\rm KH}$). The empirical curves of \citet[][dashed]{Dimotakis1991} and \citet[][dot-dashed]{Barone2006} are also shown for comparison.} \label{f:chiravalle_comp} \end{figure} The spreading rate of a compressible mixing layer is found to decrease with increasing convective Mach number \citep{Birch1972,Brown1974,Papamoschou1988}. The difference is expressed as the compressibility factor $\Phi \equiv \delta'/\delta'_i$, where $\delta'_i$ is the incompressible growth rate. Experiments have yielded different relations between $M_c$ and $\Phi$, such as the popular ``Langley'' curve \citep{Birch1972}, the results of \citet{Papamoschou1988}, and the fit of \citet{Dimotakis1991}. We perform simulations with increasing convective Mach number up to $M_c = 10$. We use the growth rate determined at $M_c = 0.1$ with thickness $\delta_{b10}$ as our incompressible growth rate $\delta'_i$. Results obtained with the LS74 model are presented as solid circles in Figure \ref{f:chiravalle_comp}, with two experimental curves shown for comparison. Although the spreading rate does decrease with increasing Mach number, it does not follow the experimental trend. This is consistent with previous work which shows that standard two-equation RANS turbulence models do not reproduce the observed reduction in spreading rate without modifications. As described in \S\ref{ss:compcorr}, three authors (S89, Z90, and W92) have proposed ``compressibility corrections'' to better capture the decrease. These corrections work by increasing the dissipation rate due to pressure-dilatation effects. Although direct numerical simulation results have shown that this is not actually the case \citep{Vreman1996}, these \textit{ad hoc} compressibility corrections are still widely used because they produce more accurate results (at least in the transonic regime). Figure \ref{f:chiravalle_comp} also shows results obtained when the three compressibility corrections are applied to the LS74 model. All three corrections do decrease the spreading rate to roughly the experimental values, at least up to $M_c = 5$; above this, the growth rate is slightly below the experimental estimate. The difference between the corrections of S89, Z90, and W92 is negligible. Similar results are obtained when applied to the MS13, W88, and W06 models. There is no straightforward way to apply these corrections to the model of C06; however, GS11 does include a compressibility correction through the variable $\tau_{\rm KH}$ (see Section \ref{sss:GS11}). 
Results obtained with the model of GS11 are also shown on Figure \ref{f:chiravalle_comp}. The asymptotic nature of the $\tau_{\rm KH}$ function (Eq. \ref{eq:tauKH}) reproduces the observed behavior of compressible layers up to $M_c \approx 1$; however, above this point the GS11 formulation leads to growth rates that are too small. Indeed, data points are not available for $M_c > 2.5$ for GS11 because the model did not evolve. \section{Stratified medium test}\label{s:rttest} \begin{figure} \includegraphics[width=\linewidth]{fig4.eps} \caption{Time evolution of the stratified medium test with the buoyant turbulence models (dotted: C06; dashed: GS11; dot-dashed: MS13). From the top, profiles of the turbulent length scale $L$, specific turbulent kinetic energy $k$, density $\rho$, temperature $T$, and heavy-fluid mass fraction $F_{\rm h}$. Profiles are shown at times $t = $ 50, 100, 200, and 300 $\mu$s, indicated by colour. Analytic solutions are shown for $L$ and $k$ with solid lines. While GS11 matches well, C06 grows too slowly and MS13 too quickly.} \label{f:dimonte_profiles} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{fig5.eps} \caption{Growth of the bubble height $h$ as a function of $A g t^2$ for the buoyant turbulence models (C06, GS11, and MS13). The growth should be linear with slope equal to the DT06 experimental bubble constant $\alpha = 0.06$, shown in black. GS11 matches well with linear growth at $\alpha \approx 0.05$. C06 is also linear but with a lower value of $\alpha \approx 0.038$. MS13 matches well initially with $\alpha \approx 0.06$, but eventually the evolution becomes non-linear and diverges.} \label{f:dimonte_growth} \end{figure} Three of the models here considered include buoyant effects to capture the RT instability, namely MS13, C06, and GS11. To further verify the implementation of these models, we perform a two-dimensional stratified medium test. Our set-up is nearly identical to that described in section 2.2.1 of GS11, which was itself adapted from section 5 of DT06. We accelerate a heavy fluid of density $\rho_1 = 1.0$~g~cm$^{-3}$ into a lighter fluid of density $\rho_2 = 0.9$~g~cm$^{-3}$ from an initially hydrostatic state. The acceleration acts in the $-y$ direction at $g = 9.8\times10^8$~cm~s$^{-2}$. The grid is $0.02\times1.0$~cm with
|v_{\rm d}(a)-v_{\rm d}(a^{\prime})|$. $C_0$ will here be considered a universal constant, although $\Delta {v}_{\rm d}$ is obviously a function of grain size. By eq. (\ref{SCEmom2}), the coagulation term for the zeroth order moment then becomes \begin{equation} \left({d\mathcal{K}_0\over dt}\right)_{\rm coag} = -{C_0\over a_0^2} [\mathcal{K}_2(t) \mathcal{K}_0(t) + \mathcal{K}_1^2(t) ], \end{equation} and (due to the conservation of mass) \begin{equation} \left({d\mathcal{K}_3\over dt}\right)_{\rm coag} = 0. \end{equation} Orders $\ell = 6, 9, 12, 15$ follow analogously (see Appendix \ref{ODEs} for further details). The remaining (intermediate) orders have no corresponding exact equations when coagulation is included according to eq. (\ref{SCEmom2}), which requires the use of an unorthodox numerical technique (see below). The galaxy formation/evolution model contains several parameters that need to be set by a procedure in which the model is calibrated against observational constraints (see Table \ref{calibration}). For the solar neighbourhood, the present-day density of stars, gas and metals in The Galaxy is known with fairly good precision \citep[see, e.g.][]{Asplund09,Dame93,Dickey93,Holmberg04,McKee15}. The star-formation rate is known to at least the right order of magnitude \citep{Rana91} and the rate of infall is known within a factor of a few \citep{Braun04}. With the figures given in Table \ref{calibration}, the parameters of the galaxy formation model are severely constrained and one may therefore lock the infall timescale, star-formation timescales and IMF/return fraction to the values given in Table \ref{calibration}. The parameters of the dust processing model are not arbitrary either. First, it is well established that the Galactic dust-to-metals (mass) ratio is $\approx 0.5$ and the local number density of dust grains is $\sim 10^{-12}$~cm$^{-3}$ \citep[see][and references therein]{Inoue03,Draine07}. Second, the typical/average grain size in the ISM must be of the order $0.1\,\mu$m \citep{Weingartner01}. Bringing these numbers together, it therefore seems likely that stars produce relatively small grains on average (or at least the grains that reach the ISM are small) and that the size of interstellar grains is mainly due to growth by condensation and coagulation. The growth-velocity constant $\xi_0$ can be constrained for a given stellar dust yield (with a specified stellar GSD), generic dust-destruction timescale/efficiency (which is set by $\langle M_{\rm ISM}\rangle$) and a given material bulk density for the dust. One finds that calibration of $\xi_0$, assuming $\rho_{\rm gr} = 3.0$~g~cm$^{-3}$, translates into a growth velocity of the order $\xi_0 \sim 10^{-15}$~cm~yr$^{-1}$. The constants $\rho_{\rm gr}$ and $\xi_0$ correspond to a reference timescale at the onset of disc formation $\tau_{{\rm cond}}^{\rm ref}$ (see eq. \ref{condtscale}), which is given in Table \ref{parameters} for each considered model. The coagulation kernel sets the efficiency of the coagulation by adjustment of the constant $C_0$, which cannot be explicitly constrained from observations. However, in order for coagulation to play a significant role in the interstellar processing of dust, the present-day coagulation timescale $\tau_{\rm coag}(t_0)$ (see eq. \ref{tau_coag}) should be comparable to the condensation timescale $\tau_{\rm cond}(t_0)$.
In terms of physics, changing $C_0$ corresponds to changing the typical relative velocity between grains and/or the fraction of interstellar gas that is found in environments where coagulation is an important growth factor for the dust grains. The dynamics of the ISM also plays an important role \citep[see, e.g.,][]{Hirashita09}, which leaves room for variation of $C_0$. \citet{Hirashita09} used the results of \citet{Yan04} and a threshold velocity $v_{\rm coag}$ (above which particles are unlikely to stick) to prescribe the relative motions of grains in different ISM phases. Because of the threshold at $v_{\rm coag}$, coagulation is expected to be efficient only in molecular clouds (in particular the so-called ``dark clouds'' which are dust rich and very opaque). According to the MHD simulations by \citet{Yan04}, the typical value of $\Delta \bar{v}_{\rm d}$ in molecular clouds is of the order $\Delta \bar{v}_{\rm d} \sim 1$~km~s$^{-1}$, which suggests that $C_0/a_0^2$ is at most a few km~s$^{-1}$. The adopted coagulation efficiencies are given in Table \ref{parameters}, where the reference timescale $\tau_{\rm coag,\,0}$ (at $\tau_{\rm G} = 1.0$~Gyr) is given instead of $C_0$ and $a_0$. Dust destruction by SN-induced sputtering is hard to quantify \citep[see, e.g.,][]{Jones11}. The most commonly adopted calibration \citep[based on the prescription by][]{McKee89} for the solar neighbourhood suggests that $\langle M_{\rm ISM}\rangle = 800\,M_\odot$ (corresponding to a present-day timescale $\tau_{\rm d,\,0} = 0.8$~Gyr) for carbonaceous dust and $\langle M_{\rm ISM}\rangle = 1200\,M_\odot$ ($\tau_{\rm d,\,0} = 0.6$~Gyr) for silicates \citep{Jones94,Tielens94}. Thus, $\tau_{\rm d,\,0} = 0.7$~Gyr is adopted in the present work as a representative figure for the present-day timescale of SN-induced dust destruction. To match the level of total dust destruction obtained in one Galactic age when using the simple prescription by \citet{McKee89}, the dimensionless scaling constants $\upsilon$ and $\mathcal{H}_0$ (here, the latter is fixed to unity -- see Table \ref{parameters}), introduced in the prescription assuming a constant sputtering rate (with $\Delta a = 0.001\,\mu$m), must be chosen such that the total amount of dust destruction over one Galactic age $\tau_{\rm G} = 13.5$~Gyr is similar to that obtained from the prescription by \citet{McKee89} with $\langle M_{\rm ISM}\rangle = 1000\,M_\odot$. This is based on the assumption that the ``standard calibration'' for the solar neighbourhood is essentially correct, but there is reason to doubt this assumption. First, the assessment by \citet{Jones11} shows how difficult it is to constrain the destruction timescale. Second, as hypothesised by \citet{Mattsson14a}, grain--grain interactions inside the shocks may play an important role as these can lead to shattering/fragmentation of grains and it is well known that small grains are more susceptible to destruction by sputtering \citep[see, e.g.,][]{Slavin04,Bocchio14}. \subsubsection{Numerical solution} \label{numerical} As described above, the method of moments leads to a system of coupled ODEs. Because the coagulation term can only be expressed without approximations for $\ell = 0,3,6,9,\dots$, it is required that one either uses an analytical approximation or an unorthodox numerical procedure called interpolative closure. The latter does, in fact, turn out to be the most accurate and stable approach.
Closure schemes based on analytical approximations will therefore not be considered here \citep[but see][for one such example]{McGraw97}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[clip=true]{moments.eps}} \caption{Time evolution of the moment hierarchy up to $\ell = 10$. The black dots show the direct numerical solution of the system of ODEs for the moment hierarchy of model A. Lines show cubic spline fits to the moments of order $\ell = 0, 3, 6, 9$. \label{moments}} \end{figure} The equations for orders $\ell = 0,3,6,9,\dots$ can be integrated using, e.g., a standard fourth-order Runge-Kutta routine, but since moments of intermediate orders appear on the right-hand side of these equations, a good estimate of the moments of order $\ell = 1,2,4,5,7,8,\dots$ must be obtained at each new time step. Fortunately, the moments $\mathcal{K}_\ell$ show a very predictable trend with moment order $\ell$. Fig. \ref{moments} shows the evolution of the logarithm of the moments (normalised such that $\mathcal{K}_3$ is the dust-mass density) for model A, solving the system of ODEs for {\it all} orders $\ell$ with a fourth-order Runge-Kutta routine up to $\ell = 15$ (tests have shown that the hierarchy can safely be truncated at $\ell = 15$). Cubic spline fits (see below) are shown as solid lines. Clearly, interpolation between the moments of order $\ell = 0,3,6,9,\dots$ can provide accurate estimates of moments of order $\ell = 1,2,4,5,7,8,\dots$, which then provides closure of the system of ODEs, albeit with an error that will of course propagate as the numerical solution is advanced from one time step to the next. However, the relative errors are small ($\sim 10^{-4}$ or less) and should not be a problem here. \begin{figure*} \center \resizebox{0.73\hsize}{!}{\includegraphics[clip=true]{MW_sol_test_0.eps} \includegraphics[clip=true]{MW_sol_test_1.eps}} \\ \resizebox{0.73\hsize}{!}{\includegraphics[clip=true]{MW_sol_test_lgr.eps} \includegraphics[clip=true]{MW_sol_test_hms.eps}} \caption{Dust-to-metals ratio as a function of metallicity. Upper left: the simplistic case ($\upsilon = 0$) of dust destruction due to sputtering associated with the passage of SN shock waves. Upper right: the case of a constant sputtering rate and $\Delta a = 0.001\,\mu$m. Lower left: constant sputtering rate with $\Delta a = 0.001\,\mu$m, but for $a^\star_{\rm min} = 0.05\,\mu$m, i.e., a case where stars inject relatively large grains. Lower right: a case with very substantial ($y_{\rm d} =
Actually, TEventList objects are also added to the current directory, but at this time, we don’t have to worry about those. If the canvas is not in the current directory then where is it? Because it is a canvas, it was added to the list of canvases. This list can be obtained by the command gROOT->GetListOfCanvases()->ls(). The ls() will print the contents of the list. In our list, we have one canvas called c1. It has a TFrame, a TProfile, and a TPaveStats.

root[] gROOT->GetListOfCanvases()->ls()
Canvas Name=c1 Title=c1 Option=TCanvas fXlowNDC=0 fYlowNDC=0 fWNDC=1 fHNDC=1
Name= c1 Title= c1 Option=TFrame X1= -4.000000 Y1=0.000000 X2=4.000000 Y2=19.384882
OBJ: TProfile hprof Profile of pz versus px : 0
TPaveText X1=-4.900000 Y1=20.475282 X2=-0.950000 Y2=21.686837 title
TPaveStats X1=2.800000 Y1=17.446395 X2=4.800000 Y2=21.323371 stats

Let’s proceed with our example and draw one more histogram, and we see one more OBJ entry.

root[] hpx->Draw()
root[] f->ls()
TFile** hsimple.root
TFile* hsimple.root
OBJ: TProfile hprof Profile of pz versus px : 0
OBJ: TH1F hpx This is the px distribution : 0
KEY: TH1F hpx;1 This is the px distribution
KEY: TH2F hpxpy;1 py vs px
KEY: TProfile hprof;1 Profile of pz versus px
KEY: TNtuple ntuple;1 Demo ntuple

TFile::ls() loops over the list of objects in memory and the list of objects on disk. In both cases, it calls the ls() method of each object. The implementation of the ls method is specific to the class of the object; all of these objects are descendants of TObject and inherit the TObject::ls() implementation. The histogram classes are descendants of TNamed, which in turn is a descendant of TObject. In this case, TNamed::ls() is executed, and it prints the name of the class, and the name and title of the object. Each directory keeps a list of its objects in memory. You can get this list by TDirectory::GetList(). To see the contents of the in-memory list you can do:

root[] f->GetList()->ls()
OBJ: TProfile hprof Profile of pz versus px : 0
OBJ: TH1F hpx This is the px distribution : 0

Since the file f is the current directory (gDirectory), this will yield the same result:

root[] gDirectory->GetList()->ls()
OBJ: TProfile hprof Profile of pz versus px : 0
OBJ: TH1F hpx This is the px distribution : 0

### 11.2.4 Saving Histograms to Disk

At this time, the objects in memory (OBJ) are identical to the objects on disk (KEY). Let’s change that by adding a fill to the hpx we have in memory.

root[] hpx->Fill(0)

Now the hpx in memory is different from the histogram (hpx) on disk. Only one version of the object can be in memory, however, on disk we can store multiple versions of the object. The TFile::Write method will write the list of objects in the current directory to disk. It will add a new version of hpx and hprof.

root[] f->Write()
root[] f->ls()
TFile** hsimple.root
TFile* hsimple.root
OBJ: TProfile hprof Profile of pz versus px : 0
OBJ: TH1F hpx This is the px distribution : 0
KEY: TH1F hpx;2 This is the px distribution
KEY: TH1F hpx;1 This is the px distribution
KEY: TH2F hpxpy;1 py vs px
KEY: TProfile hprof;2 Profile of pz versus px
KEY: TProfile hprof;1 Profile of pz versus px
KEY: TNtuple ntuple;1 Demo ntuple

The TFile::Write method wrote the entire list of objects in the current directory to the file. You see that it added two new keys: hpx;2 and hprof;2 to the file. Unlike memory, a file is capable of storing multiple objects with the same name. Their cycle number, the number after the semicolon, differentiates objects on disk with the same name.
If you wanted to save only hpx to the file, but not the entire list of objects, you could use the TH1::Write method of hpx:

root[] hpx->Write()

A call to obj->Write without any parameters will call obj->GetName() to find the name of the object and use it to create a key with the same name. You can specify a new name by giving it as a parameter to the Write method.

root[] hpx->Write("newName")

If you want to re-write the same object, with the same key, use the overwrite option.

root[] hpx->Write("",TObject::kOverwrite)

If you give a new name and use kOverwrite, the object on disk with the matching name is overwritten if such an object exists. If not, a new object with the new name will be created.

root[] hpx->Write("newName",TObject::kOverwrite)

The Write method did not affect the objects in memory at all. However, if the file is closed, the directory is emptied and the objects on the list are deleted.

root[] f->Close()
root[] f->ls()
TFile** hsimple.root
TFile* hsimple.root

In the code snippet above, you can see that the directory is now empty. If you followed along so far, you can see that c1, which was displaying hpx, is now blank. Furthermore, hpx no longer exists.

root[] hpx->Draw()
Error: No symbol hpx in current scope

This is important to remember: do not close the file until you are done with the objects, or any attempt to reference the objects will fail.

### 11.2.5 Histograms and the Current Directory

When a histogram is created, it is added by default to the list of objects in the current directory. You can get the list of histograms in a directory and retrieve a pointer to a specific histogram.

TH1F *h = (TH1F*)gDirectory->Get("myHist"); // or
TH1F *h = (TH1F*)gDirectory->GetList()->FindObject("myHist");

The method TDirectory::GetList() returns a TList of objects in the directory. You can change the directory of a histogram with the SetDirectory method.

h->SetDirectory(newDir);

If the parameter is 0, the histogram is no longer associated with a directory.

h->SetDirectory(0);

Once a histogram is removed from the directory, it will no longer be deleted when the directory is closed. It is now your responsibility to delete this histogram object once you are finished with it. To change the default that automatically adds the histogram to the current directory, you can call the static function:

TH1::AddDirectory(kFALSE);

In this case, you will need to do all the bookkeeping for all the created histograms.

### 11.2.6 Saving Objects to Disk

In addition to histograms and trees, you can save any object in a ROOT file. For example, to save a canvas to the ROOT file you can use either TObject::Write() or TDirectory::WriteTObject(). The example:

root[] c1->Write()

This is equivalent to:

root[] f->WriteTObject(c1)

For objects that do not inherit from TObject use:

root[] f->WriteObject(ptr,"nameofobject")

Another example:

root[] TFile *f = new TFile("hsimple.root","UPDATE")
root[] hpx->Draw()
<TCanvas::MakeDefCanvas>: created default TCanvas with name c1
root[] c1->Write()
root[] f->ls()
TFile** hsimple.root
TFile* hsimple.root
OBJ: TH1F hpx This is the px distribution : 0
KEY: TH1F hpx;2 This is the px distribution
KEY: TH1F hpx;1 This is the px distribution
KEY: TH2F hpxpy;1 py vs px
KEY: TProfile hprof;2 Profile of pz versus px
KEY: TProfile hprof;1 Profile of pz versus px
KEY: TNtuple ntuple;1 Demo ntuple
KEY: TCanvas c1;1 c1

### 11.2.7 Saving Collections to Disk

All collection classes inherit from TCollection and hence inherit the TCollection::Write() method.
When you call TCollection::Write(), each object in the container is written individually into its own key in the file. To write all objects into one key you can specify the name of the key and use the option TObject::kSingleKey. For example:

root[] TList * list = new TList;
root[] TNamed * n1, * n2;
root[] n1 = new TNamed("name1","title1");
root[] n2 = new TNamed("name2","title2");
root[] list->Add(n1);
root[] list->Add(n2);
root[] gFile->WriteObject(list,"list",TObject::kSingleKey);

### 11.2.8 A TFile Object Going Out of Scope

There is another important point to remember about TFile::Close and TFile::Write. When a variable is declared on the stack in a function such as in the code below, it will be deleted when it goes out of scope.
[121, 179, 17], [124, 179, 18], [127, 180, 18], [130, 181, 19], [132, 182, 19], [135, 182, 20], [138, 183, 20], [141, 184, 20], [144, 184, 21], [147, 185, 21], [150, 186, 22], [153, 186, 22], [155, 187, 23], [158, 188, 23], [161, 188, 24], [164, 189, 24], [166, 190, 25], [169, 190, 25], [172, 191, 25], [175, 192, 26], [177, 192, 26], [180, 193, 27], [183, 194, 27], [186, 194, 28], [188, 195, 28], [191, 195, 29], [194, 196, 29], [196, 197, 30], [199, 197, 30], [202, 198, 30], [204, 199, 31], [207, 199, 31], [210, 200, 32], [212, 200, 32], [215, 201, 33], [217, 201, 33], [220, 202, 34], [223, 202, 34], [225, 202, 34], [227, 203, 35], [230, 203, 35], [232, 203, 35], [234, 203, 36], [236, 203, 36], [238, 203, 36], [240, 203, 36], [241, 202, 36], [243, 202, 36], [244, 201, 36], [245, 200, 36], [246, 200, 36], [247, 199, 36], [248, 197, 36], [248, 196, 36], [249, 195, 36], [249, 194, 35], [249, 192, 35], [250, 191, 35], [250, 190, 35], [250, 188, 34], [250, 187, 34], [250, 185, 34], [250, 184, 33], [250, 182, 33], [250, 180, 33], [250, 179, 32], [249, 177, 32], [249, 176, 32], [249, 174, 31], [249, 173, 31], [249, 171, 31], [249, 169, 30], [249, 168, 30], [249, 166, 30], [248, 165, 29], [248, 163, 29], [248, 161, 29], [248, 160, 29], [248, 158, 28], [248, 157, 28], [248, 155, 28], [247, 153, 27], [247, 152, 27], [247, 150, 27], [247, 148, 26], [247, 147, 26], [246, 145, 26], [246, 143, 26], [246, 142, 25], [246, 140, 25], [246, 138, 25], [245, 137, 24], [245, 135, 24], [245, 133, 24], [245, 132, 24], [244, 130, 23], [244, 128, 23], [244, 127, 23], [244, 125, 23], [244, 123, 22], [243, 121, 22], [243, 119, 22], [243, 118, 22], [243, 116, 21], [242, 114, 21], [242, 112, 21], [242, 110, 21], [241, 109, 21], [241, 107, 21], [241, 105, 21], [241, 103, 21], [240, 101, 21], [240, 100, 22], [240, 98, 22], [240, 96, 23], [240, 95, 24], [240, 93, 26], [240, 92, 27], [240, 90, 29], [240, 89, 31], [240, 88, 33], [240, 87, 36], [240, 87, 38], [241, 86, 41], [241, 86, 44], [242, 86, 47], [242, 86, 51], [243, 86, 54], [243, 87, 58], [244, 88, 62], [245, 88, 65], [245, 89, 69], [246, 90, 73], [247, 91, 77], [247, 92, 82], [248, 94, 86], [249, 95, 90], [249, 96, 94], [250, 97, 98], [251, 99, 102], [251, 100, 106], [252, 101, 111], [252, 103, 115], [253, 104, 119], [253, 105, 123], [254, 107, 128], [254, 108, 132], [255, 109, 136], [255, 111, 140], [255, 112, 145], [255, 114, 149], [255, 115, 153], [255, 116, 157], [255, 118, 162], [255, 119, 166], [255, 120, 170], [255, 122, 175], [255, 123, 179], [255, 125, 183], [255, 126, 188], [255, 127, 192], [255, 129, 196], [255, 130, 201], [255, 132, 205], [255, 133, 210], [255, 134, 214], [255, 136, 219], [255, 137, 223], [255, 139, 227], [255, 140, 232], [255, 141, 236], [254, 143, 241], [254, 144, 245], [253, 146, 250]] function Rainbow(v) { var i = Math.floor((Math.min(v, 1), Math.max(v, 0)) * 255) r = RB[i][0] g = RB[i][1] b = RB[i][2] return { r: r, g: g, b: b } } You have a heat source at temperature TTop at the top of a stack of materials. At the other side of the stack you either have the temperature "floating" or it's connected to a heat source/sink at temperature TBelow. At time t=0 the stack material is all at Tstart. What we want to know is what the temperature is at any given point in the stack after any given time. And the graph (plus the mouse readout) tells you everything you need to know. ### The Rainbow Graph To see what's happening to temperature and time we have to plot 3 variables in a 2D graph. 
It turns out to be clearest to plot temperature on the X-axis, the stack itself on the Y-axis (with dotted lines showing the different layers) and to show the time evolution of temperature as a rainbow-coloured set of lines from blue at short times to red at long times. It takes a while to get used to it, but you will. And if you move your mouse you get a readout of everything.

### Thermal Diffusivity

For each layer you have to enter the thickness (obviously) and the thermal diffusivity, D. Most of us have heard of thermal conductivity, K (W/m.K), so why use diffusivity? We are interested in how the temperature changes throughout the system. That depends, of course, on the heat flowing in via conductivity, but it also depends on the heat capacity Cp (J/kg.K) (the heat needed to raise the temperature of 1kg by a degree) and, therefore, on the density ρ (kg/m³), the mass per unit volume. Fortunately, the thermal diffusivity, in m²/s, combines all three parameters. D=K/(Cp.ρ) You can find a lot of values on Wikipedia's Thermal Diffusivity page. If you happen to know the thermal conductivity but not the diffusivity, use the calculator. If you don't know the heat capacity or density, enter 2000 and 1000 respectively. The calculated values are inconveniently small, hence the 10⁻⁷ adjustment implied when you enter the D values.

### Where are the units?

For thickness and time no units are specified. In fact the underlying units in the calculation are μm and s. However, because the calculation is general, the thickness units could be in mm or m and the calculations remain correct provided you mentally adjust the time units. So if you have a structure defined in mm, you simply tell yourself that 1 thickness unit is 1mm and therefore, because diffusion times scale with the square of the length, 1 time unit is 10⁶ s. Why have I done it this way? Because any slider that went from 1μm to 1km or from 1ms to 1ks would be useless. The single app lets us explore heat flows through packaging films (tens of μm), heat flows through brick walls with insulation panels, and heat flows through kms of rocks. What cannot be left unitless are the D values. A slider that covers everything from insulating aerogel to highly conductive silver is a bit impractical so there are some restrictions on what you can model.

### Floating or not

If TBelow is set to 0 then it is assumed that the lowest surface makes no thermal contact, so the temperature can "float". In fact this can simulate something like heat sealing of a symmetrical structure - it's not that the middle of the heat seal is "floating", it's just that via symmetry we can calculate what's happening via this numerical trick. So you can simulate an 8-layer sealing process with a 4-layer model.

### Thermal Contact Resistance

This model assumes that the top heat source is in perfect contact with the top layer and the heat can flow with infinite ease. In practice there is a thermal contact resistance from imperfections in contact. It can be modelled in the Contacts app. I could add a thermal resistance input, but that would make the interface even more complex!

### Numerics

The calculations are numerical iterations based on cutting the layers into virtual slices of thickness hStep and calculating what's happened after a short time interval of tStep. If you choose a larger hStep, calculations are faster but you lose detail of smaller layers. If you go smaller in hStep the calculations can blow up (you see nothing in the graph or get funny lines).
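To make the slice-by-slice description above concrete, here is a minimal sketch of that kind of explicit update, written in Python rather than the app's own JavaScript. The function name, its arguments, and the crude treatment of layer interfaces are illustrative assumptions, not the app's implementation; the point is the update rule and the stability check that explains why a smaller hStep forces a smaller tStep.

```python
import numpy as np

def simulate_stack(layers, t_top, t_start, t_below=None,
                   h_step=5e-6, t_step=5e-5, t_end=0.5):
    """layers: list of (thickness_m, diffusivity_m2_per_s), top to bottom.
    t_below=None leaves the bottom surface insulated ("floating")."""
    # One diffusivity value per virtual slice of thickness h_step.
    D = np.concatenate([np.full(max(int(round(th / h_step)), 1), d)
                        for th, d in layers])
    T = np.full(D.size, float(t_start))
    # The explicit update is only stable if D*t_step/h_step**2 <= 0.5,
    # which is why shrinking hStep without shrinking tStep blows up.
    if np.max(D) * t_step / h_step ** 2 > 0.5:
        raise ValueError("unstable: reduce t_step or increase h_step")
    for _ in range(int(round(t_end / t_step))):
        Tpad = np.empty(T.size + 2)
        Tpad[1:-1] = T
        Tpad[0] = t_top                                   # heat source held at TTop
        Tpad[-1] = T[-1] if t_below is None else t_below  # floating or fixed bottom
        T = T + D * t_step / h_step ** 2 * (Tpad[2:] - 2.0 * Tpad[1:-1] + Tpad[:-2])
    return T

# e.g. a 100 um film on a 500 um substrate, heated from 20 C by a 150 C source:
# profile = simulate_stack([(100e-6, 1e-7), (500e-6, 2e-7)], t_top=150, t_start=20)
```

The D·tStep/hStep² ≤ ½ condition is the standard stability limit for this kind of explicit scheme, and it matches the advice above: if you need fine detail (small hStep), you also have to reduce tStep, which is why the calculation gets doubly slow.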
The fix is to reduce the tStep value, but then calculations are doubly slower. The default values of 5 are fine for many purposes but if you need very fine detail (small hStep) then you might need a small
""" This work heavily relies on the implementation by Icarte: https://bitbucket.org/RToroIcarte/bnn/src/master/ """ import gurobipy as gp from gurobipy import GRB import numpy as np from helper.misc import infer_and_accuracy from globals import INT, BIN, CONT, EPSILON, GUROBI_ENV, LOG, DO_CUTOFF vtypes = { INT: GRB.INTEGER, BIN: GRB.BINARY, CONT: GRB.CONTINUOUS } class MIP_NN: def __init__(self, data, architecture, bound, reg, fair): model = gp.Model("Gurobi_NN", env=GUROBI_ENV) if not LOG: model.setParam("OutputFlag", 0) self.N = len(data["train_x"]) self.architecture = architecture self.data = data self.train_x = data["train_x"] self.oh_train_y = data["oh_train_y"] self.bound = bound self.reg = reg self.fair = fair self.m = model if len(architecture) > 2: self.out_bound = (self.architecture[-2]+1)*self.bound else: self.out_bound = np.mean(data['train_x'])*architecture[0] self.init_params() if reg: # When compressing model, we need slightly more precision becaues of the sparse weight matrices self.m.setParam('IntFeasTol', 1e-7) self.add_examples() if fair in ["EO", "DP"]: self.add_fairness() def init_params(self): self.weights = {} self.biases = {} self.var_c = {} self.act = {} # All pixels that are 0 in every example are considered dead self.dead = np.all(self.train_x == 0, axis=0) for lastLayer, neurons_out in enumerate(self.architecture[1:]): layer = lastLayer + 1 neurons_in = self.architecture[lastLayer] self.weights[layer] = np.full((neurons_in, neurons_out), None) self.biases[layer] = np.full(neurons_out, None) if layer > 1: self.var_c[layer] = np.full((self.N, neurons_in, neurons_out), None) if layer < len(self.architecture) - 1: self.act[layer] = np.full((self.N, neurons_out), None) for j in range(neurons_out): for i in range(neurons_in): if layer == 1 and self.dead[i]: # Dead inputs should have 0 weight self.weights[layer][i,j] = 0 else: self.weights[layer][i,j] = self.add_var(INT,"w_%s-%s_%s" % (layer,i,j), self.bound) if layer > 1: # Var c only needed after first activation for k in range(self.N): self.var_c[layer][k,i,j] = self.add_var(CONT,"c_%s-%s_%s_%s" % (layer,i,j,k), self.bound) # Bias only for each output neuron self.biases[layer][j] = self.add_var(INT,"b_%s-%s" % (layer,j), self.bound) if layer < len(self.architecture) - 1: for k in range(self.N): # Each neuron for every example is either activated or not self.act[layer][k,j] = self.add_var(BIN, "act_%s-%s_%s" % (layer,j,k)) def add_examples(self): #self.pre_acts = {} for lastLayer, neurons_out in enumerate(self.architecture[1:]): layer = lastLayer + 1 neurons_in = self.architecture[lastLayer] #self.pre_acts[layer] = np.full((self.N, neurons_out), None) for k in range(self.N): for j in range(neurons_out): inputs = [] for i in range(neurons_in): if layer == 1: inputs.append(self.train_x[k,i]*self.weights[layer][i,j]) else: self.add_constraint(self.var_c[layer][k,i,j] - self.weights[layer][i,j] + 2*self.bound*self.act[lastLayer][k,i] <= 2*self.bound) self.add_constraint(self.var_c[layer][k,i,j] + self.weights[layer][i,j] - 2*self.bound*self.act[lastLayer][k,i] <= 0*self.bound) self.add_constraint(self.var_c[layer][k,i,j] - self.weights[layer][i,j] - 2*self.bound*self.act[lastLayer][k,i] >= -2*self.bound) self.add_constraint(self.var_c[layer][k,i,j] + self.weights[layer][i,j] + 2*self.bound*self.act[lastLayer][k,i] >= 0*self.bound) inputs.append(self.var_c[layer][k,i,j]) pre_activation = sum(inputs) + self.biases[layer][j] #self.pre_acts[layer][k,j] = pre_activation if layer < len(self.architecture) - 1: 
self.add_constraint((self.act[layer][k,j] == 1) >> (pre_activation >= 0)) self.add_constraint((self.act[layer][k,j] == 0) >> (pre_activation <= -EPSILON)) def add_regularizer(self): self.H = {} for lastLayer, neurons_out in enumerate(self.architecture[1:-1]): layer = lastLayer + 1 neurons_in = self.architecture[lastLayer] self.H[layer] = np.full(neurons_out, None) for j in range(neurons_out): self.H[layer][j] = self.add_var(BIN, "h_%s-%s" % (layer,j)) for i in range(neurons_in): if not (layer == 1 and self.dead[i]): self.add_constraint((self.H[layer][j] == 0) >> (self.weights[layer][i,j] == 0)) self.add_constraint((self.H[layer][j] == 0) >> (self.biases[layer][j] == 0)) for n in range(self.architecture[layer+1]): self.add_constraint((self.H[layer][j] == 0) >> (self.weights[layer+1][j,n] == 0)) #for k in range(self.N): # self.add_constraint((self.H[layer][j] == 0) >> (self.var_c[layer+1][k,j,n] == 0)) #self.add_constraint((self.H[layer][j] == 0) >> (self.act[layer][j] == 1)) # Last hidden layer should have at least as many neurons as the output layer self.add_constraint(self.H[layer].sum() >= self.architecture[-1]) def add_fairness(self): layer = len(self.architecture) - 1 lastLayer = layer - 1 neurons_in = self.architecture[lastLayer] neurons_out = self.architecture[layer] self.pred_labels = np.full(self.N, None) for k in range(self.N): self.pred_labels[k] = self.add_var(BIN, name="label_%s" % k) females = self.data['train_x'][:,64] males = self.data['train_x'][:,65] labels = self.data['train_y'] false_labels = 1 - labels for k in range(self.N): pre_acts = 0 for j in range(neurons_out): inputs = [] for i in range(neurons_in): if layer == 1: inputs.append(self.train_x[k,i]*self.weights[layer][i,j]) else: inputs.append(self.var_c[layer][k,i,j]) pre_activation = sum(inputs) + self.biases[layer][j] pre_activation = 2*pre_activation/self.out_bound if j == 0: pre_acts += pre_activation else: pre_acts -= pre_activation self.add_constraint((self.pred_labels[k] == 0) >> (pre_acts >= 0)) self.add_constraint((self.pred_labels[k] == 1) >> (pre_acts <= -EPSILON)) if self.fair == "EO": self.female_pred1_true1 = (females*labels*self.pred_labels).sum() / (females*labels).sum() self.male_pred1_true1 = (males*labels*self.pred_labels).sum() / (males*labels).sum() self.female_pred1_true0 = (females*false_labels*self.pred_labels).sum() / (females*false_labels).sum() self.male_pred1_true0 = (males*false_labels*self.pred_labels).sum() / (males*false_labels).sum() fair_constraint = 0.02 self.add_constraint(self.female_pred1_true1 - self.male_pred1_true1 <= fair_constraint) self.add_constraint(self.female_pred1_true1 - self.male_pred1_true1 >= -fair_constraint) self.add_constraint(self.female_pred1_true0 - self.male_pred1_true0 <= fair_constraint) self.add_constraint(self.female_pred1_true0 - self.male_pred1_true0 >= -fair_constraint) elif self.fair == "DP": self.female_pred1 = (females*self.pred_labels).sum() / females.sum() self.male_pred1 = (males*self.pred_labels).sum() / males.sum() fair_constraint = 0.05 #self.add_constraint(self.female_pred1 - self.male_pred1 <= fair_constraint) #self.add_constraint(self.female_pred1 - self.male_pred1 >= -fair_constraint) self.add_constraint(self.female_pred1 >= 0.8*self.male_pred1) self.add_constraint(self.male_pred1 >= 0.8*self.female_pred1) def update_bounds(self, bound_matrix={}): for lastLayer, neurons_out in enumerate(self.architecture[1:]): layer = lastLayer + 1 neurons_in = self.architecture[lastLayer] for j in range(neurons_out): for i in range(neurons_in): if 
"w_%s_lb" % layer in bound_matrix and type(self.weights[layer][i,j]) != int: self.weights[layer][i,j].lb = bound_matrix["w_%s_lb" % layer][i,j] if "w_%s_ub" % layer in bound_matrix and type(self.weights[layer][i,j]) != int: self.weights[layer][i,j].ub = bound_matrix["w_%s_ub" % layer][i,j] if "b_%s_lb" % layer in bound_matrix: self.biases[layer][j].lb = bound_matrix["b_%s_lb" % layer][j] if "b_%s_ub" % layer in bound_matrix: self.biases[layer][j].ub = bound_matrix["b_%s_ub" % layer][j] def add_output_constraints(self): raise NotImplementedError("Add output constraints not implemented") def calc_objective(self): raise NotImplementedError("Calculate objective not implemented") def add_var(self, precision, name, bound=None, lb=None, ub=None): if precision not in vtypes: raise Exception('Parameter precision not known: %s' % precision) if precision == BIN: return self.m.addVar(vtype=GRB.BINARY, name=name) else: if not bound: if lb != None and ub != None: return self.m.addVar(vtype=vtypes[precision], lb=lb, ub=ub, name=name) elif lb != None: return self.m.addVar(vtype=vtypes[precision], lb=lb, name=name) elif ub != None: return self.m.addVar(vtype=vtypes[precision], ub=ub, name=name) else: return self.m.addVar(vtype=vtypes[precision], lb=-bound, ub=bound, name=name) def add_constraint(self, constraint): self.m.addConstr(constraint) def set_objective(self, sense="min"): if sense == "min": self.m.setObjective(self.obj, GRB.MINIMIZE) else: self.m.setObjective(self.obj, GRB.MAXIMIZE) def train(self, time=None, focus=None): if time: self.m.setParam('TimeLimit', time) if focus: self.m.setParam('MIPFocus', focus) #self.m.setParam('Threads', 1) self.m._lastobjbst = GRB.INFINITY self.m._lastobjbnd = -GRB.INFINITY self.m._progress = [] self.m._val_acc = 0 # Needed to access values for NN objects self.m._self = self self.m.update() self.m.optimize(mycallback) def get_objective(self): return self.m.ObjVal def get_runtime(self): return self.m.Runtime def get_data(self): data = { 'obj': self.m.ObjVal, 'bound': self.m.ObjBound, 'gap': self.m.MIPGap, 'nodecount': self.m.NodeCount, 'num_vars': self.m.NumVars, 'num_int_vars': self.m.NumIntVars - self.m.NumBinVars, 'num_binary_vars': self.m.NumBinVars, 'num_constrs': self.m.NumConstrs, 'num_nonzeros': self.m.NumNZs, 'periodic': self.m._progress, 'variables': self.extract_values(), } return data # get_func needed to know how to access the variable def get_val(self, maybe_var, get_func): tmp = np.zeros(maybe_var.shape) for index, count in np.ndenumerate(maybe_var): try: # Sometimes solvers have "integer" values like 1.000000019, round it to 1 if maybe_var[index].VType in ['I', 'B']: tmp[index] = round(get_func(maybe_var[index])) else: tmp[index] = get_func(maybe_var[index]) except: tmp[index] = 0 return tmp def extract_values(self, get_func=lambda z: z.x): varMatrices = {} for layer in self.weights: varMatrices["w_%s" %layer] = self.get_val(self.weights[layer], get_func) varMatrices["b_%s" %layer] = self.get_val(self.biases[layer], get_func) if layer > 1: varMatrices["c_%s" %layer] = self.get_val(self.var_c[layer], get_func) if layer < len(self.architecture) - 1: varMatrices["act_%s" %layer] = self.get_val(self.act[layer], get_func) if self.fair in ["EO", "DP"]: varMatrices["pred_labels"] = self.get_val(self.pred_labels, get_func) return varMatrices def print_values(self): for layer in self.weights: print("Weight %s" % layer) print(self.get_val(self.weights[layer])) print("Biases %s" % layer) print(self.get_val(self.biases[layer])) if layer > 1: print("C %s" % 
layer) print(self.get_val(self.var_c[layer])) if layer < len(self.architecture) - 1: print("Activations %s" % layer) print(self.get_val(self.act[layer])) def mycallback(model, where): if where == GRB.Callback.MIP: nodecnt = model.cbGet(GRB.Callback.MIP_NODCNT) objbst = model.cbGet(GRB.Callback.MIP_OBJBST) + 1e-15 objbnd = model.cbGet(GRB.Callback.MIP_OBJBND) runtime = model.cbGet(GRB.Callback.RUNTIME) gap = 1 - objbnd/objbst if objbst < model._lastobjbst or objbnd > model._lastobjbnd: model._lastobjbst = objbst model._lastobjbnd = objbnd model._progress.append((nodecnt, objbst, objbnd, runtime, gap, model._val_acc)) elif where == GRB.Callback.MIPSOL: nodecnt = model.cbGet(GRB.Callback.MIPSOL_NODCNT) objbst = model.cbGet(GRB.Callback.MIPSOL_OBJBST) + 1e-15 objbnd = model.cbGet(GRB.Callback.MIPSOL_OBJBND) runtime = model.cbGet(GRB.Callback.RUNTIME) gap = 1 - objbnd/objbst model._lastobjbst = objbst model._lastobjbnd = objbnd data = model._self.data architecture = model._self.architecture varMatrices = model._self.extract_values(get_func=model.cbGetSolution) train_acc = infer_and_accuracy(data['train_x'], data['train_y'], varMatrices, architecture) val_acc = infer_and_accuracy(data['val_x'], data['val_y'], varMatrices, architecture) if LOG: print("Train accuracy: %s " % (train_acc)) print("Validation accuracy: %s " % (val_acc)) if model._self.reg: for layer in model._self.H: hl = varMatrices["H_%s" % layer].sum() print("Hidden layer %s length: %s" % (layer, int(hl))) model._progress.append((nodecnt, objbst, objbnd, runtime, gap, val_acc)) model._val_acc = val_acc # ModelSense == 1 makes sure it is minimization if DO_CUTOFF and int(objbst) <= model._self.cutoff and model.ModelSense == 1:# and model._self.reg <= 0: if model._self.reg == -1: #print("Cutoff first optimization from cutoff value: %s" % model._self.cutoff) model.cbStopOneMultiObj(0) elif model._self.reg > 0: hls = 0 for layer in model._self.H: hls += varMatrices["H_%s" % layer].sum() if hls == architecture[-1] and int(objbst) - hls <= model._self.cutoff: model.terminate() else: #print("Terminate from
is obtained from $c(n-1)$ by adding $+2$ if $\varepsilon_n=-1$ or adding $+1$ if $\varepsilon_n=+1$. Clearly, $n \mapsto c(n)$ is strictly monotone with $c(n) \to +\infty$ as $n \to +\infty$. \begin{lem}\label{L:fractions-related} For all $n\geq -1$, the following hold: \begin{itemize} \item[(i)] if $\varepsilon_{n+1}=-1$, $\alpha_{n+1}= 1- \tilde{\alpha}_{c(n)+1}$, and if $\varepsilon_{n+1}=+1$, $\alpha_{n+1}=\tilde{\alpha}_{c(n)+1}$; \item[(ii)] $\alpha_{n+1}= \prod_{c(n)+1}^{c(n+1)} \tilde{\alpha}_i$; \item[(iii)] $\tilde{\beta}_{c(n)}=\beta_n$; \item[(iv)] $\C{B}(\tilde{\alpha}_{c(n)+1})= \C{B}(\alpha_{n+1})$. \end{itemize} \end{lem} \begin{proof} We prove (i) by induction on $n$. We start with $n=-1$. If $\varepsilon_0=+1$, $\alpha_0=\tilde{\alpha}_0= \tilde{\alpha}_{c(-1)+1}$. If $\varepsilon_0=-1$, $\alpha_0 =1-\tilde{\alpha}_0=1-\tilde{\alpha}_{c(-1)+1}$. Now assume that the assertion in (i) is true for $n-1$. To prove it for $n$, we consider two cases: First assume that $\varepsilon_n=+1$. By the induction hypothesis for $n-1$, $\alpha_n= \tilde{\alpha}_{c(n-1)+1}=\tilde{\alpha}_{c(n)}$. Hence, $1/\alpha_n=1/\tilde{\alpha}_{c(n)}$. Now, if $\varepsilon_{n+1}=+1$, $1/\alpha_n=1/\tilde{\alpha}_{c(n)}$ leads to $\alpha_{n+1}=\tilde{\alpha}_{c(n)+1}$. If $\varepsilon_{n+1}=-1$, $1/\alpha_n=1/\tilde{\alpha}_{c(n)}$ leads to $\alpha_{n+1}=1-\tilde{\alpha}_{c(n)+1}$. Now assume that $\varepsilon_n=-1$. By the induction hypothesis, $\alpha_n=1-\tilde{\alpha}_{c(n-1)+1}=1-\tilde{\alpha}_{c(n)-1}$. As $\alpha_n \in (0,1/2)$, $\tilde{\alpha}_{c(n)-1} \in (1/2,1)$. Then, $\tilde{\alpha}_{c(n)}= 1/\tilde{\alpha}_{c(n)-1}-1= 1/(1-\alpha_n)-1= \alpha_n /(1-\alpha_n)$. Hence, $1/\tilde{\alpha}_{c(n)}= 1/\alpha_n-1$. Now, if $\varepsilon_{n+1}=+1$, $1/\tilde{\alpha}_{c(n)}= 1/\alpha_n-1$ leads to $\tilde{\alpha}_{c(n)+1}= \alpha_{n+1}$. If $\varepsilon_{n+1}=-1$, $1/\tilde{\alpha}_{c(n)}= 1/\alpha_n-1$ leads to $1-\tilde{\alpha}_{c(n)+1}=\alpha_{n+1}$. Part (ii): If $\varepsilon_{n+1}=+1$, by Part (i), $\alpha_{n+1}= \tilde{\alpha}_{c(n)+1}= \tilde{\alpha}_{c(n+1)}$. If $\varepsilon_{n+1}=-1$, by Part (i), $\alpha_{n+1}=1-\tilde{\alpha}_{c(n)+1}$. As $\alpha_{n+1}\in (0,1/2)$ we conclude that $\tilde{\alpha}_{c(n)+1}\in (1/2,1)$, which implies that $\tilde{\alpha}_{c(n)+2}= 1/\tilde{\alpha}_{c(n)+1}-1$. Therefore, \[\tilde{\alpha}_{c(n+1)} \tilde{\alpha}_{c(n)+1}=\tilde{\alpha}_{c(n)+2} \tilde{\alpha}_{c(n)+1}=1-\tilde{\alpha}_{c(n)+1}= \alpha_{n+1}.\] Part (iii): By the formula in Part (ii), and the definition of $c(n)$, \[\textstyle{ \beta_n=\prod_{m=0}^n \alpha_m =\prod_{m=0}^n \Big (\prod_{i=c(m-1)+1}^{c(m)} \tilde{\alpha}_i\Big) =\prod_{i=0}^{c(n)} \tilde{\alpha}_i= \tilde{\beta}_{c(n)}. }\] Part (iv): If $\varepsilon_{n+1}=+1$, by Part (i), $\alpha_{n+1}= \tilde{\alpha}_{c(n)+1}$, and hence $\C{B}(\tilde{\alpha}_{c(n)+1})= \C{B}(\alpha_{n+1})$. If $\varepsilon_{n+1}=-1$, by Part (i), $\alpha_{n+1}= 1-\tilde{\alpha}_{c(n)+1}$. Using \refE{E:Brjuno-functional-equations}, \[\C{B}(\alpha_{n+1})=\C{B}(1-\tilde{\alpha}_{c(n)+1})=\C{B}(-\tilde{\alpha}_{c(n)+1})= \C{B}(\tilde{\alpha}_{c(n)+1}). \qedhere\] \end{proof} \begin{proof}[Proof of \refP{P:brjuno-standard-vs-modified}] Fix $n \geq 0$. If $\varepsilon_n=+1$ then $c(n)=c(n-1)+1$, and by \refL{L:fractions-related}-(iii), \begin{equation}\label{E:P:brjuno-standard-vs-modified-1} \beta_{n-1}\log \beta_n = \tilde{\beta}_{c(n-1)}\log \tilde{\beta}_{c(n-1)+1}= \tilde{\beta}_{c(n)-1}\log \tilde{\beta}_{c(n)}. 
\end{equation} If $\varepsilon_n=-1$ then $c(n)= c(n-1)+2$, and by \refL{L:fractions-related}-(iii), \[\tilde{\beta}_{c(n)-1} =\tilde{\beta}_{c(n-1)+1} = \tilde{\beta}_{c(n-1)} \tilde{\alpha}_{c(n-1)+1} = \beta_{n-1} (1-\alpha_n)= \beta_{n-1}- \beta_{n},\] and therefore \begin{equation}\label{E:P:brjuno-standard-vs-modified-2} \begin{aligned} \big(\tilde{\beta}_{c(n-1)}\log\tilde{\beta}_{c(n-1)+1}&+\tilde{\beta}_{c(n)-1}\log\tilde{\beta}_{c(n)} \big) -\beta_{n-1}\log\beta_n\\ &=\big( \beta_{n-1}\log (\beta_{n-1}- \beta_n) + (\beta_{n-1} - \beta_n) \log \beta_{n}\big) - \beta_{n-1}\log\beta_n\\ &= \beta_{n-1}\log (\beta_{n-1}- \beta_n) - \beta_n \log \beta_{n}. \end{aligned} \end{equation} Combining \eqref{E:P:brjuno-standard-vs-modified-1} and \eqref{E:P:brjuno-standard-vs-modified-2}, and using $\beta_{n-1}- \beta_n = \beta_{n-1}(1-\alpha_n)$, we conclude that for all $m\geq 0$ we have \begin{align*} \textstyle{ \sum_{i=0}^{c(m)} \tilde{\beta}_{i-1} \log \tilde{\beta}_i} & - \textstyle{\sum_{n=0}^m \beta_{n-1} \log \beta_n} \\ &=\textstyle{ \sum_{n=0}^m \left (\sum_{i=c(n-1)+1}^{c(n)} (\tilde{\beta}_{i-1} \log \tilde{\beta}_i) - \beta_{n-1}\log \beta_n \right) }\\ &= \textstyle{ \sum_{n=0\, ;\, \varepsilon_n=-1}^m \left (\beta_{n-1}\log \beta_{n-1} + \beta_{n-1} \log (1-\alpha_n) - \beta_n \log \beta_n\right).} \end{align*} On the other hand, \begin{equation*} \textstyle{ \sum_{n=0}^m \beta_{n-1} \log (1/\alpha_n) + \sum_{n=0}^m \beta_{n-1} \log \beta_n = \sum_{n=0}^m \beta_{n-1} \log \beta_{n-1}, } \end{equation*} and similarly, \[\textstyle{ - \sum_{i=0}^{c(m)} \tilde{\beta}_{i-1} \log \tilde{\beta}_i - \sum_{i=0}^{c(m)} \tilde{\beta}_{i-1} \log (1/\tilde{\alpha}_i) = - \sum_{i=0}^{c(m)} \tilde{\beta}_{i-1} \log \tilde{\beta}_{i-1}. }\] Recall that $\tilde{\beta}_{-1}=\beta_{-1}=1$. Adding the above three equations, we conclude that \begin{equation*} \begin{aligned} \textstyle{ \left |\sum_{n=0}^m \beta_{n-1} \log (1/\alpha_n) \right. } & \textstyle{ \left .- \sum_{i=0}^{c(m)} \tilde{\beta}_{i-1} \log (1/\tilde{\alpha}_i) \right |} \\ & \textstyle{ \leq 3 \sum_{n=0}^{+\infty} |\beta_{n} \log \beta_{n}| + \sum_{n=0}^{+\infty} |\beta_{n-1} \log (1-\alpha_n)| } + \textstyle{ \sum_{i=0}^{+\infty} |\tilde{\beta}_{i} \log \tilde{\beta}_{i}|.} \end{aligned} \end{equation*} Since $|x \log x| \leq 2 \sqrt{x}$ for $x\in (0,1)$, \begin{equation}\label{E:P:Brjuno-Yoccoz-equivalent} \begin{aligned} \textstyle{ \sum_{n=0}^{+\infty} |\tilde{\beta}_{n}\log \tilde{\beta}_{n}| } \textstyle{ \leq 2 \sum_{n=0}^{+\infty} (\tilde{\beta}_{n})^{1/2} } & \textstyle{ \leq 2 \sum_{n=0}^{+\infty} (\tilde{\beta}_{2n})^{1/2} + 2 \sum_{n=0}^{+\infty} (\tilde{\beta}_{2n+1})^{1/2} } \\ & \textstyle{ \leq 2 (\tilde{\beta}_0)^{1/2} \sum_{n=0}^{+\infty} 2^{-n/2} + 2 \sum_{n=1}^{+\infty} 2^{-n/2} } \\ & \leq 6+ 4 \cdot 2^{1/2}. \end{aligned} \end{equation} On the other hand, $\beta_n\leq 2^{-n-1}$, for all $n\geq 0$. Using $|x \log x| \leq 2 \sqrt{x}$, for $x \in (0,1)$, and $\alpha_j \in (0, 1/2)$, we obtain \begin{equation*} \textstyle{ \sum_{n=0}^k |\beta_{n} \log \beta_{n}| \leq 2 \sum_{n=0}^{+\infty} \sqrt{\beta_{n}} \leq 2 \sum_{n=0}^{+\infty} 1/2^{(n+1)/2} = 2+ 2 \sqrt{2}, } \end{equation*} and \[ \textstyle{\sum_{n=0}^{+\infty} |\beta_{n-1} \log (1-\alpha_n)| \leq \log 2 \sum_{n=0}^{+\infty} \beta_{n-1} = 2 \log 2. }\] This completes the proof of the proposition.
\end{proof} \begin{lem}\label{L:h-vs-cocycle} We have, \begin{itemize} \item[(i)] for all $r \in (0,1)$, \[h_r (\C{B}(r)) \geq \C{B}(1/r)+1, \quad h_r (\tilde{\C{B}}(r)) \geq \tilde{\C{B}}(1/r)+1.\] \item[(ii)] if there are $m\geq n \geq 0$ satisfying $h_{\tilde{\alpha}_{m-1}} \circ \dots \circ h_{\tilde{\alpha}_n} (0)\geq \tilde{\C{B}}(\tilde{\alpha}_{m})$, then \[\lim_{m \to +\infty} h_{\tilde{\alpha}_{m-1}} \circ \dots \circ h_{\tilde{\alpha}_n} (0) - \tilde{\C{B}}(\tilde{\alpha}_{m})=+\infty.\] \item[(iii)] if there are $m\geq n \geq 0$ satisfying $h_{\alpha_{m-1}} \circ \dots \circ h_{\alpha_n} (0)\geq \C{B}(\alpha_{m})$, then \[\lim_{m\to +\infty} h_{\alpha_{m-1}} \circ \dots \circ h_{\alpha_n} (0) - \C{B}(\alpha_{m})=+\infty.\] \end{itemize} \end{lem} \begin{proof} Note that for all $r\in (0,1)$ and all $y \in \D{R}$, \[h_r(y) \geq r^{-1} y + r^{-1} \log r+ 1.\] If $y\geq \log r^{-1}$, $h_r(y)=r^{-1}y + r^{-1} \log r + r^{-1} \geq r^{-1}y + r^{-1} \log r +1$. Using the inequality $x \geq 1+ \log x$, for $x>0$, we note that $r e^y \geq 1+ \log (re^y) = y + \log r +1 \geq y + \log r + r$. This implies the above inequality for $y < \log r^{-1}$. By the above inequality, as well as \eqref{E:Brjuno-functional-equations-standard} and \eqref{E:Brjuno-functional-equations}, we obtain \[h_r (\C{B}(r)) \geq r^{-1} \C{B}(r)+ \log r +1 = \C{B}(1/r)+1, \quad h_r (\tilde{\C{B}}(r)) \geq r^{-1} \tilde{\C{B}}(r)+ \log r +1 = \tilde{\C{B}}(1/r)+1.\] If the inequalities in (ii) and (iii) hold, we may use the inequality in (i) and $h_r(y+1)\geq h_r(y)+1$ in \refE{E:h_r-properties}, to obtain \[h_{\tilde{\alpha}_{m+j-1}} \circ \dots \circ h_{\tilde{\alpha}_n} (0)\geq \tilde{\C{B}}(\tilde{\alpha}_{m+j})+j, \quad h_{\alpha_{m+j-1}} \circ \dots \circ h_{\alpha_n} (0)\geq \C{B}(\alpha_{m+j})+j.\qedhere\] \end{proof} \begin{lem}\label{L:herman-blocks} Let $r_1 \in (1/2,1)$, $r_2=1/r_1-1\in (0,1)$, and $r=r_1 r_2 \in (0, 1/2)$. Then, for all $y \geq e^2$ we have \[\big| h^{-1}_r(y)- h^{-1}_{r_1} \circ h^{-1}_{r_2}(y)\big| \leq 1+ e^{-1}.\] \end{lem} \begin{proof} The inverse map $h_r^{-1}: (0, +\infty) \to \D{R}$ is given by the formula \[h_r^{-1}(y)= \begin{cases} r y + \log r^{-1} -1 & \text{if } y \geq 1/r, \\ \log y & \text{if } 0 < y \leq 1/r. \end{cases}\] Since $y\geq e^2$, one can see that $h_{r_2}^{-1}(y)\geq 1+ r_2 = 1/r_1$. Thus, $h^{-1}_{r_1} \circ h^{-1}_{r_2}(y)= r_1 h_{r_2}^{-1}(y)+ \log r_1^{-1}-1$. Using $r_2=1/r_1-1$ and the elementary inequality $|x \log x| \leq 1/e$, for $x\in (0,1)$, we have \[(1-r_1) \log (1/r_2) = (r_2/(1+ r_2)) \log r^{-1}_2 \leq e^{-1}/(1+r_2) \leq e^{-1}.\] We consider three cases: \noindent (1) $y \leq 1/r_2$: Since $1/r_2 \leq 1/r$, we get \begin{align*} \big| h^{-1}_r(y)- h^{-1}_{r_1} \circ h^{-1}_{r_2}(y)\big| &= \big | \log y - (r_1 \log y + \log r_1^{-1} -1)\big| \\ & \leq (1-r_1) \log y + \big |1- \log r^{-1}_1 \big| \\ & \leq (1-r_1) \log r_2^{-1} + \big |1- \log r^{-1}_1 \big| \leq e^{-1}+ 1. \end{align*} \noindent (2) $1/r_2 \leq y \leq 1/r$: Then, \begin{align*} h^{-1}_r(y)- h^{-1}_{r_1} \circ h^{-1}_{r_2}(y) & = \log y - (r_1 (r_2 y+\log r_2^{-1}-1) + \log r_1^{-1} -1) \\ &= \log (yr_1) - r_1 \log r_2^{-1} + r_1 (1-r_2 y) + 1. 
\end{align*} On the other hand, we have \[-1/2 \leq r_1 -1 \leq r_1 - ry = r_1 - r_1 r_2 y = r_1(1-r_2y) \leq 0,\] and, using $y r_1 r_2= y r \leq 1$, we get \[\log yr_1- r_1 \log r_2^{-1} \leq \log r^{-1}_2 - r_1 \log r_2^{-1} = (1-r_1) \log r_2^{-1} \leq e^{-1},\] and, using $y\geq 1/r_2$, \[\log yr_1- r_1 \log r_2^{-1} \geq \log (r_1/r_2) - r_1 \log r_2^{-1} = \log r_1 + (1-r_1) \log r_2 ^{-1} \geq \log r_1 \geq - \log 2.\] Combining the above inequalities we get \[-1 - e^{-1} \leq 1-1/2 -\log 2 \leq h^{-1}_r(y)- h^{-1}_{r_1} \circ h^{-1}_{r_2}(y) \leq 1+ e^{-1}.\] \noindent (3) $y \geq 1/r$: Using $r_1 r_2=r$, we get \begin{align*} \big| h^{-1}_r(y)- h^{-1}_{r_1} \circ h^{-1}_{r_2}(y)\big| & = \big | r y + \log r^{-1} -1 - (r_1 (r_2 y+\log r_2^{-1}-1) + \log r_1^{-1} -1)\big| \\ &= (1-r_1) \log r_2^{-1} + r_1 \leq e^{-1}+ 1. \end{align*} This completes the proof of the lemma. \end{proof} \begin{lem}\label{L:herman-chains} Let $m > n \geq 0$ and $y \geq e^2$. Assume that at least one of the following holds: \begin{itemize} \item[(i)] $h^{-1}_{\alpha_n} \circ h^{-1}_{\alpha_{n+1}} \circ \dots \circ h^{-1}_{\alpha_m}(y)$ is defined and is at least $e^2$, \item[(ii)] $h^{-1}_{\tilde{\alpha}_{c(n-1)+1}} \circ h^{-1}_{\tilde{\alpha}_{c(n-1)+2}} \circ \dots \circ h^{-1}_{\tilde{\alpha}_{c(m)}}(y)$ is defined and is at least $e^2$. \end{itemize} Then, \[\big |h^{-1}_{\alpha_n} \circ h^{-1}_{\alpha_{n+1}} \circ \dots \circ h^{-1}_{\alpha_m}(y) - h^{-1}_{\tilde{\alpha}_{c(n-1)+1}} \circ h^{-1}_{\tilde{\alpha}_{c(n-1)+2}} \circ \dots \circ h^{-1}_{\tilde{\alpha}_{c(m)}}(y) \big |\leq 2(1+e^{-1}).\] \end{lem} In the above lemma, it is part of the conclusion that if one of the compositions in items (i) and (ii) is defined and is at least $e^2$, then the composition in the other item is defined as well. \begin{proof} First assume that item (i) holds. By \refE{E:h_r-properties}, $h_r(t) \geq t$ for all $t \geq 0$ and $r\in (0,1)$. This implies that for all $j$ with $n \leq j \leq m$, $h^{-1}_{\alpha_j} \circ h^{-1}_{\alpha_{j+1}} \circ \dots \circ h^{-1}_{\alpha_m}(y)\geq e^2$. Let \[y_j=h^{-1}_{\alpha_j} \circ h^{-1}_{\alpha_{j+1}} \circ \dots \circ h^{-1}_{\alpha_m}(y), \;
do some math. My son was born this year. With an average life span of 76 years, he should most likely die by 2090. But, I will also make the assumption that in the years between 2014 and 2090 we will find ways to extend the average life span a bit longer, let's say by 30 years. So now, his average life span is 106 years and the "death year" is extended to 2120. But between 2090 and 2120 science will continue to advance and we will probably have a life expectancy of 136 years by then, which now makes his death year "2150". And so forth until science finds a way to keep him alive forever. Even if it takes the better part of a century, some of the younger people will still be beyond the cutoff. Now. If you actually talk to real scientists who have studied this in much more detail, they are saying that this will not take a century, and should just take a few decades to achieve "escape velocity" for immortality. There is a book written about this, and how in the next few decades, we will unlock the mysteries of aging. The Singularity Is Near: When Humans Transcend Biology u/TehGinjaNinja · 3 points · r/confession There are two books I recommend to everyone who is frustrated and/or saddened by the state of the world and has lost hope for a better future. The first is The Better Angels of Our Nature by Steven Pinker. It lays out how violence in human societies has been decreasing for centuries and is still declining. Despite the prevalence of war and crime in our media, human beings are less likely to suffer violence today than at any point in our prior history. The west suffered an upswing in social violence from the 1970s to the 1990s, which has since been linked to lead levels, but violence in the west has been declining since the early 90s. Put simply, the world is a better place than most media coverage would have you believe and it's getting better year by year. The second book I recommend is The Singularity is Near by Ray Kurzweil. It explains how technology has been improving at an accelerating rate. Technological advances have already had major positive impacts on society, and those effects will become increasingly powerful over the next few decades. Artificial intelligence is already revolutionizing our economy. The average human life span is increasing every year. Advances in medicine are offering hope for previously untreatable diseases. Basically, there is a lot of good tech coming which will significantly improve our quality of life, if we can just hang on long enough. Between those two forces, decreasing violence and rapidly advancing technology, the future looks pretty bright for humanity. We just don't hear that message often, because doom-saying gets better ratings. I don't know what disability you're struggling with but most people have some marketable skills, i.e. they aren't "worthless". Based on your post, you clearly have good writing/communicating skills. That's a rare and valuable trait. You could look into a career leveraging those skills (e.g. as a technical writer or transcriptionist) which your disability wouldn't interfere with too badly (or which an employer would be willing to accommodate). As for being powerless to change the world, many people feel that way because most of us are fairly powerless on an individual level. We are all in the grip of powerful forces (social, political, historical, environmental, etc.) which exert far more influence over our lives than our own desires and dreams.
The books I recommended present convincing arguments that those forces have us on a positive trend line, so a little optimism is not unreasonable. We may just be dust on the wind, but the wind is blowing in the right direction. That means the best move may simply be to relax and enjoy the ride as best we can. u/bombula · 3 points · r/Futurology Any futurist or regular reader of /r/futurology can rehearse all of the arguments for why uploading is likely to be feasible by 2100, including the incremental replacement of biological neurons by artificial ones which avoids the "copy" issue. If you're not already familiar with these, the easiest single reference is probably The Singularity is Near. u/maurice_jello · 3 points · r/elonmusk Read Superintelligence. Or check out Bostrom's TED talk. u/j4nds4 · 3 points · r/elonmusk > Really? It's still their opinion, there's no way to prove or disprove it. Trump has an opinion that global warming is faked but it doesn't mean it's true. From my perspective, you have that analogy flipped. Even if we run with it, it's impossible to ignore the sudden dramatic rate of acceleration in AI capability and accuracy over just the past few years, just as it is with the climate. Even the CEO of Google was caught off-guard by the sudden acceleration within his own company. Scientists also claim that climate change is real and that it's an existential threat; should we ignore them though because they can't "prove" it? What "proof" can be provided for the future? You can't, so you predict based on the trends. And their trend lines have a lot of similarities. > Also, even if it's a threat (I don't think so, but let's assume it is), how would putting it in your brain help? That's kind of ridiculous. Nowadays you can turn your PC off or even throw it away. You won't be able to do that once it's in your brain. Also, what if the chip decides to take control over your arms and legs one day? It's insane to say that AI is a threat but to plan to put it inside humans' brains. AI will change your perception input and you will be thinking you are living your life but in reality you will be sitting in a cell somewhere. Straight up some Matrix stuff. Don't want that. The point is that, in a hypothetical world where AI becomes so intelligent and powerful that you are effectively an ant in comparison, both in intelligence and influence, a likely outcome is death, just as it is for the billions of ants that we step on or displace without knowing or caring; think of how many species we humans have made extinct. Or, if an AI is harnessed by a single entity, those controlling it become god-like dictators because they can prevent the development of any further AIs and have unlimited resources to grow and impose. So the Neuralink "solution" is to 1) enable ourselves to communicate with computer-like bandwidth and elevate ourselves to a level comparable to AI instead of being left in ant territory, and 2) make each person an independent AI on equal footing so that we aren't controlled by a single external force. It sounds creepy in some ways to me too, but an existential threat sounds a lot worse. And there's a lot of potential for amazement as well. Just like with most technological leaps. I don't know how much you've read on the trends and future of AI. I would recommend Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies", but it's quite lengthy and technical. For a shorter thought experiment, consider the Paperclip Maximizer scenario.
Even if the threat is exaggerated, I see no problem with creating this if it's voluntary. u/mdd · 3 points · r/TheTeslaShow u/Mohayat · 3 points · r/ElectricalEngineering Read Superintelligence by Nick Bostrom, it answered pretty much all the questions I had about AI and I learned a ton of new things from it. It's not too heavy on the math but there is a lot of info packed into it, highly recommend it. u/NotebookGuy · 3 points · r/de One example would be that it sees humans as an obstacle to its plan. There is the well-known example of the machine that is built to optimize the production of paperclips. This machine could
SES-based skills gaps in various ways. First, the gaps between the top- and bottom-SES quintiles shrink, showing that SES-based gaps are partially explained by the variation in the controls (which is not visible in the tables).17 Second, controls do not significantly change the SES-based gaps over time, in general; i.e., the coefficients associated with changes in the gaps between high- and low-SES children remain almost the same, or change very minimally, depending on the skill measured. The statistical significance of the SES-based skills gaps in 1998 is not affected by the inclusion of the controls (see rows “Gap in 1998–1999” in tables), but the statistical significance of the changes in the gaps between 1998 and 2010 (see rows “Change in gap by 2010–2011” in tables) is somewhat affected by the inclusion of the controls (note that the sizes of the coefficients measuring gaps in 1998 change after the inclusion of the controls, but that the sizes of the coefficients measuring changes in them between 1998 and 2010 do not change significantly). In reading, the change in the gap between 1998 and 2010 diminishes and becomes statistically insignificant in the last model (the relative gap increases by 0.08 sd but this change is not statistically significant), meaning that adding parental expectations of education accounts for some of the increase in the gap detected in Models 1 to 3. The only SES-based skills gap that shows a statistically significant increase from 1998 to 2010 once parental expectations are controlled for is the gap associated with parents’ assessment of approaches to learning, which increases by 0.11 sd. Gaps between high- and low-SES children in cognitive and noncognitive skills after adjustments are made are shown in Figure B. As mentioned above, the fact that the skills gaps decrease after controls are taken into consideration affirms that SES-based gaps are due in part to variation in the controls among high- versus low-SES children. This trend can be seen in Table 5, which, as noted above, shows the overall reduction in gaps that results from controlling for child and family characteristics, early literacy practices, and parental expectations of educational achievement. With respect to cognitive skills, the 1998 gaps shrink by 46 percent and 53 percent, respectively, after the inclusion of the covariates. About half of the gaps are thus due to other factors that are associated both with SES status and with the outcomes themselves. The reduction in the 1998 gaps for noncognitive skills varies from 28 percent (approaches to learning as reported by teachers) to 74 percent (approaches to learning as reported by parents). (For self-control as reported by teachers, the reduction is 51 percent versus 35 percent when reported by parents.) While the gaps hold after the inclusion of controls across outcomes, gaps in 2010 are less sensitive to the inclusion of the covariates than they were in 1998. This trend can also be seen in Table 5.18 Declining values from 1998 to 2010 indicate that factors such as early literacy activities and other controls are not, as a group, explaining SES-based gaps as much as they had a decade prior. This change could be due to the failure of the index to fully capture parents’ efforts to nurture their children’s development and/or the index becoming somewhat out-of-date. In any event, the resistance of gaps to these controls should worry researchers and policymakers. 
The waning influence of these controls makes it harder to understand what drives SES gaps. It also suggests that the gaps may be growing more intractable or, at least, are less easily narrowed via the enactment of known policy interventions. Finally, we examine the association of performance outcomes (not performance gaps) with selected early educational practices, including having attended center-based pre-K, literacy/reading activities and other activities, and total number of children's books in the home (Table 6).19 We are mainly interested in two potential patterns: whether these factors are associated with outcomes (and, if so, how intense the associations are), and whether the relationships have changed over time. In keeping with established research, having attended center-based pre-K is positively associated with children's early reading and math skills. For 1998, the estimated coefficients are 0.11 sd for reading skills and 0.10 sd for math skills, substantial associations that do not change significantly over time. In other words, attending pre-K in 1998 improved kindergartners' reading skills by 0.11 sd and improved kindergartners' math skills by 0.10 sd relative to not attending pre-K. However, while center-based pre-K continues to reduce self-control as reported by teachers in 2010, the effect is less negative in 2010 (the 0.06 improvement from 1998 to 2010 shown in the bottom panel of the table shows us that the effect in 2010 was -0.07 [-0.13 plus 0.06], compared with -0.13 sd in 1998). We find no independent effect of center-based prekindergarten schooling (i.e., no effect in addition to SES, in addition to other individual and family characteristics, or in addition to other SES-mediated factors) on approaches to learning or on self-control as reported by parents.20 The number of books children have at home likewise supports their skills at the beginning of kindergarten. Indeed, this factor is positively associated with all outcomes but self-control reported by parents. The coefficients are very small, of about 0.01 to 0.02 sd (associated with changes in outcomes for each 10 additional/fewer books the child has, as expressed by the continuous scale with which number of books in the home is measured, which is divided by 10 for the analyses, as mentioned in Appendix A), and these relationships do not change over the time period. The two types of parenting activities that are summarized by the indices "reading/literacy activities" and "other activities" show interesting correlations with performance and patterns over time. On the one hand, the "reading/literacy activities" index (a composite of how frequently parents read books to their child, tell stories, sing songs, and talk about nature, and how frequently the child reads picture books and reads outside of school) is strongly and positively associated with all outcomes other than children's self-control as reported by the teacher. The associations with cognitive skills, especially with reading, are strong and statistically significant—0.17 sd for reading performance and 0.07 sd for math—and these associations did not change significantly between 1998 and 2010. For noncognitive skills, the relationships are strong for those assessed by parents, though they shrink by about half over time: self-control is 0.14 sd in 1998 and decreases by 0.08 sd by 2010; approaches to learning is 0.32 sd in 1998 and decreases by 0.17 sd by 2010.
The relationship is much weaker, though still statistically significant, for teachers' assessed approaches to learning (it is 0.03 sd in 1998 and does not change significantly by 2010). On the other hand, the index that measures other enrichment activities that parents do with their children (a composite of how frequently parents and children play games, do sports, build things, work on puzzles, do arts and crafts, and do chores) shows significant correlations with all of the skills, but these can be either positive or negative, depending on the skill. For cognitive skills, the associations are statistically significant and negative, though stronger and more meaningful with reading achievement (-0.12 sd in 1998) than with math achievement (-0.04 sd).21 These associations neither intensified nor weakened over time. For noncognitive skills, the associations are highly positive and statistically significant, and very strong for parents' assessment of approaches to learning (0.29 sd in 1998). As explained by García (2015), these correlations between "other activities" and noncognitive skills as assessed by parents could be bidirectional: engaging children in enrichment activities might enhance their noncognitive skills, but, at the same time, parents who are
would've been to do it in Python. Python has a lot of handy libraries for that kind of thing and it wouldn't have taken very long. I've been on a Go kick lately though, and I ran across colly, which looked like a pretty solid scraping framework for Go, so I decided to implement it with that. First, using colly, I wrote a very basic scraper for bandcamp to give me a nice layer of abstraction. Then I threw together a real simple program using it to go through my list of links, scrape the data for each, and generate the markdown syntax. I just run that like: go run yearly.go | sort --ignore-case > output.txt And a minute or two later, I end up with a nice sorted list that I can paste into my blog software and I'm done. ## 2017 Music For the third year in a row, here is my roundup of music released in 2017 that I enjoyed. One of the reasons that I've been making these lists is to counteract a sentiment that I encounter a lot, especially with people my age or older. I often hear people say something to the effect of "The music nowadays just isn't as good as [insert time period when they were in their teens and twenties]". Sometimes this also comes with arguments about how the internet/filesharing/etc. have killed creativity because artists can't make money anymore so all that's left is the corporate friendly mainstream stuff. I'm not going to get into the argument about filesharing and whether musicians are better or worse off than in the past (hot take: musicians have always been screwed over by the music industry, the details of exactly how are the only thing that technology is changing). But I think the general feeling that music now isn't like the "good old days" is bullshit and the result of mental laziness and stagnation. We naturally fall into habits of just listening to the music that we know we like instead of going out looking for new stuff and exploring with an open mind. My tastes run towards weird dark heavy metal, so that's what you'll see here, but I guarantee that for whatever other genres you are into, if you put the effort into looking just a little off the beaten path, you could find just as much great new music coming out every year. I certainly love many of the albums and bands of my youth, but I also feel like the sixteen-year-old me would be really into any one of these as well. OK, I know I've said that I don't do "top 10" lists or anything like that, but if you've made it all the way to the bottom of this post, I do want to highlight a few that were particularly notable: Bell Witch, Boris, Chelsea Wolfe, Goatwhore, Godflesh, King Woman, Lingua Ignota, Myrkyr, Pallbearer, Portal, The Bug vs Earth, Woe, and Wolves in the Throne Room. Plus special mention to Tyrannosorceress for having my favorite band name of the year. TAGS: music ## In the Wild Last night, I was scanning the /r/guitarpedals subreddit. Something I have been known to do... occasionally. I see this post: OK, someone's trying to identify a pedal they came across in a studio. I'm not really an expert on boutique pedals, but I have spent a little time on guitar forums over the years so who knows, maybe I can help? Clicking the link, there's a better shot of the pedal: Hmm... nope, don't recognize it. Close the tab... OK, yeah, the pedal doesn't look familiar, but the artwork on it sure does... That's one of my drawings from about 2008.
So at some point, someone out there built a custom guitar pedal, used one of my drawings for it, it ended up in a recording studio somewhere, someone else found the pedal in the studio, took a picture, posted it on reddit, and I stumbled on it. Anyone who's known me for very long knows that I post all of my artwork online under a Creative Commons Public Domain license. I'm not a career artist and it's not worth the hassle for me to try to restrict access on my stuff and I'd rather just let anyone use it for whatever they want. So this obviously makes me very happy. I've had plenty of people contacting me over the years asking to use them. That's unnecessary but appreciated. My paintings and drawings have appeared on dozens of websites and articles. There are a couple books out there that include them (besides the Abstract Comics Anthology that I was actively involved in). I know that there's at least one obscure death metal album out there that uses one of my paintings for the cover. I've had a few people say they were going to get tattoos, but I've never seen a photo of the results, so I can't say for certain whether anyone followed through on that. This is the first time that I've run into my own work like this randomly in a place that I wasn't looking. BTW, no one has yet identified the pedal, so obviously, if you know anything about who built it, let me know. TAGS: art ## Ghosts of Frameworks Past Recently, Github added the ability to archive repositories. That prompted me to dig up some code that I wrote long ago. Stuff that really shouldn't be used anymore but that I'm still proud of. In general, this got me reminiscing about old projects and I thought I'd take a moment to talk about a couple of them here, which are now archived on Github. Both are web frameworks that I wrote in Python and represent some key points in my personal programming history. I started writing Python in about 2003, after spending the previous five years or so working mostly in Perl. Perl was an important language for me and served me well, but around that point in time, it felt like every interesting new project I saw was written in Python. I started using Python on some non web-based applications and immediately liked it. Back then, many smaller dynamic sites on the web were still using CGI. Big sites used Java but it was heavy-weight and slow to develop in. MS shops used ASP. PHP was starting to gain some popularity and there were a ton of other options like Cold Fusion that had their own niches. Perl CGI scripts running on shared Linux hosts were still super popular. With Perl, if you needed a little more performance than CGI scripts offered and you ran your own Apache instance, you could install mod_perl and see a pretty nice boost (along with some other benefits). Once you outgrew simple guestbooks and form submission CGI scripts, you needed a bit more structure to your app. Perl had a number of templating libraries, rudimentary ORMs, and routing libraries. Everyone generally picked their favorites and put together a basic framework. Personally, I used CGI::Application, Class::DBI, and HTML::Template. It was nowhere near as nice as frameworks that would come later like Ruby on Rails or Django, but at the time, it felt pretty slick. I could develop quickly and keep the code pretty well structured with cleanly separated models, views, and templates. Python wasn't really big in the web world yet. There was Zope, which had actually been around for quite a while and had proven itself to be quite capable.
Zope was... different though. Like, alien technology different. It included an object database and basically took a completely different approach to solving pretty much every problem. Later, I would spend quite a bit of time with Zope and Plone, and I have a great deal of respect for it, but as a newcomer to Python,
- Definition: A PDA is a 7-tuple P := (Q, ∑, Γ, δ, q0, Z0, F), where Q is the set of states, ∑ is the input alphabet, Γ is the set of stack symbols, δ is the transition function, q0 is the start state, Z0 is the initial stack-top symbol, and F is the set of final/accepting states. Basically, a pushdown automaton is a "finite state machine" plus "a stack". A pushdown automaton has three components: an input tape, a finite control, and a stack. The tape is divided into finitely many cells, each cell contains a symbol in an alphabet Σ, and the stack head scans the top symbol of the stack. While taking a transition from state p to state q, the input symbol 'a' is consumed, and the top of the stack 'T' is replaced by a new string 'α'. Pushdown automata (PDAs) can thus be thought of as combining an NFA "control-unit" with a "memory" in the form of an infinite stack; they are more capable than finite-state machines. A DFA can remember only a finite amount of information, but a PDA can remember an infinite amount of information. In the increasing sequence of expressive power of machines, the FA is less powerful than any of the other machines. A pushdown automaton is used to implement a context-free grammar in the same way we design a DFA for a regular grammar. Its most common use is in compilers.
A pushdown automaton is a finite automaton with extra memory, called a stack, which is what allows it to recognize context-free languages: a PDA accepts exactly the context-free languages. An automaton in general is a machine that can accept the strings of a language L over an input alphabet, and automata theory is the basis for the theory of formal languages; it has come into prominence in recent years with a plethora of applications in fields ranging from verification to XML processing and file compression. A proper treatment of formal language theory begins with some basic definitions: a symbol is simply a character, an abstraction that is meaningless by itself. It is important to note that DFA and NFA have the same power, because every NFA can be converted into a DFA and every DFA into an NFA. Intuition for the PDA: think of an ε-NFA with the additional power that it can manipulate a stack. A PDA may or may not read an input symbol, but it has to read the top of the stack in every transition; the transition function (usually represented by labels on the arrows between the state circles) tells the automaton what to do. The stack gives the automaton a limited form of memory with two operations: Push adds a symbol on top of the stack, and Pop reads and removes the top symbol. Note: if we want zero or more moves of a PDA, we use the symbol (⊢*). Pushdown automata are used in theories about what can be computed by machines, in the design of the lexical analysis phase of a compiler, in pattern recognition with regular expressions (regular expressions themselves are applied to compilers, networks, and operating systems), and in applications such as artificial intelligence and genetic programming; EPDAs (embedded pushdown automata) were first described by K. Vijay-Shanker in his 1988 doctoral thesis. Equivalences: PDA ≡ finite automaton with a stack; Turing machine ≡ PDA with an additional stack ≡ FA with two stacks. Most programming languages have deterministic PDAs, and a shift-reduce parser is nothing but a pushdown automaton.
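To make the "finite control plus a stack" idea concrete, here is a minimal illustrative sketch (not taken from any of the sources above) of a deterministic PDA that accepts the classic context-free language { 0^n 1^n : n ≥ 1 }, which no finite automaton can recognize; the state names and stack symbols are invented for the example.

```python
# Toy deterministic PDA for { 0^n 1^n : n >= 1 }.
# Finite control: states q_push (reading 0s) and q_pop (reading 1s).
# Stack alphabet: "Z" (bottom marker Z0) and "A" (one A per unmatched 0).
def accepts(word, bottom="Z"):
    stack = [bottom]                 # initial stack contains only Z0
    state = "q_push"                 # start state
    for symbol in word:
        if state == "q_push" and symbol == "0":
            stack.append("A")        # push one A per leading 0
        elif state in ("q_push", "q_pop") and symbol == "1" and stack[-1] == "A":
            stack.pop()              # pop one A per matching 1
            state = "q_pop"
        else:
            return False             # no transition defined: reject
    # accept by final state with all pushed symbols matched
    return state == "q_pop" and stack == [bottom]

assert accepts("01") and accepts("000111")
assert not accepts("0101") and not accepts("001") and not accepts("")
```

The unbounded stack is exactly the "infinite memory" that separates the PDA from a DFA: the number of pending A symbols is not bounded by any fixed number of states.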
Automata have also been used to model transaction processing: one study models the on-line transaction processing system of a banking organization with timed automata. Application of Finite Automata (FA): there are several applications based on finite automata and finite state machines, for example the implementation of spell checkers. A word is a finite string of symbols from a given alphabet. A context-free grammar (CFG) is a set of rewriting rules that can be used
of 13 particle velocities for each grid point. \section{Measured characteristic of the flow field} \subsection{Velocity and deviation fields} \begin{figure*}[htbp] \centering \includegraphics[width=172mm]{Fig_4.pdf} \caption{ Direct comparison between the measured velocity fields (a--c), the analytical solution of Moffatt \cite{moffatt64} (d--f) and the corresponding deviation fields (g--i). Each column represents one measurement solution and the contact line velocity is always \SI{200}{\micro\meter\per\second}. The first column is the measured flow field of pure water (a), (d) is the corresponding analytical solution and (g) is the calculated deviation field between (a) and (d). The second column shows the measured flow field for a \SI{30}{\percent CMC} $\mathrm{C_{12}E_5}$ solution (b) and the calculated deviation field (h). The third column shows the results for a \SI{30}{\percent CMC} $\mathrm{C_8E_3}$ solution. The comparison between the measured flow fields and the analytical solution shows that in the case of surfactant solutions, the flow at the interface between liquid and air is reduced. This is shown clearly by the deviation fields (h, i). } \label{fig:3_Deviation_fields} \end{figure*} To compare experimental flow profiles (e.g. Figure \ref{fig:3_Deviation_fields} (a--c)) to hydrodynamic calculations, we assume a simplified geometry near the contact line: Rather than treating the full three--dimensional dewetting problem, we take a two-dimensional wedge geometry, because we do not observe significant transverse flow within the resolution of our measurements. For this geometry, hydrodynamic flow is known for simple liquids, like water \cite{moffatt64, huh71,voinov76, cox86}. These theories provide an analytical solution for the region very near the three--phase contact line (inner solution) and the bulk flow (outer solution). Due to the size of the tracer particles, we could not resolve experimentally the flow in the inner region. Therefore, we compare our measurements only to the outer theoretical solution, which is the same for all these theories. We use the theory of Moffatt \cite{moffatt64} for stress--free liquid-gas interface and constant contact line velocity. Figure \ref{fig:3_Deviation_fields} (g) depicts the deviation field between measurement (a) and theory (d) for pure water. This deviation field does not show a flow field but the normalized velocity difference between measurement and theory as calculated by: \begin{equation} u_{d} = u_{m} - u_{t},\quad w_{d} = w_{m} - w_{t}, \label{eqn:deviation_field} \end{equation} \begin{equation} S_d = \frac{|\vec{v}_d|}{|\vec{v}_t|} = \frac{\sqrt{u_d^2+w_d^2}}{\sqrt{u_t^2+w_t^2}} . \label{eqn:deviation_field_2} \end{equation} Here, $u$ and $w$ are the mean local velocity components in $x$ and $z$ direction at a given ($x$, $z$) position. The indices indicate deviation $d$, measurement $m$ and theory $t$. The relative strength of the deviation $S_d$ is represented by the color code and the length of the arrows, the angular deviation by the orientation of the arrows. The deviation field of water (Figure \ref{fig:3_Deviation_fields} (g)) has a randomly distributed angular deviation and its magnitude is \SI{<30}{\percent}. Due to this randomness and the small absolute deviation, measurement and theory agree very well. The random noise shows that no bias errors are present. At the liquid-gas interface, little deviation occurs. As expected, the liquid-gas interface is stress--free. 
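As an aside, the deviation field defined by Equations (\ref{eqn:deviation_field}) and (\ref{eqn:deviation_field_2}) is straightforward to evaluate numerically. The following short numpy sketch (ours, not the authors' code; the array names are assumptions) shows the computation for measured and theoretical velocity components given on the same (x, z) grid.

```python
# Minimal numpy sketch of Eqs. (deviation_field) and (deviation_field_2):
# u_m, w_m are measured mean velocity components, u_t, w_t the corresponding
# values of the analytical (Moffatt) solution evaluated on the same grid.
import numpy as np

def deviation_field(u_m, w_m, u_t, w_t):
    u_d = u_m - u_t                                   # deviation, x component
    w_d = w_m - w_t                                   # deviation, z component
    s_d = np.hypot(u_d, w_d) / np.hypot(u_t, w_t)     # relative strength S_d
    angle = np.arctan2(w_d, u_d)                      # orientation of the deviation arrows
    return u_d, w_d, s_d, angle
```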
In the presence of surfactants (\SI{30}{\percent CMC}), the receding contact angle at a velocity of \SI{200}{\micro\meter\per\second} decreased from \ang{78} (pure water) to \ang{54} ($\mathrm{C_{12}E_5}$, \SI{0.38}{ppm}) and \ang{25} ($\mathrm{C_8E_3}$, \SI{40}{ppm}). Because of this change in contact angle, it is impossible to directly compare the measured flow fields of surfactant solutions to those measured in pure water. No hydrodynamic theory is available to describe the flow of surfactant solutions near moving contact lines. As shown above, the water measurements match the hydrodynamic theory for water. So, we can compare the surfactant measurements with the hydrodynamic theory for pure water \cite{moffatt64} at the respective contact angle. For both surfactants, the region near the liquid-gas interface has a reduced velocity; the deviation field points towards the contact line (Figure \ref{fig:3_Deviation_fields} (h), (i)). The surface flow is slower than in pure water. The surface velocity is reduced in the entire observation volume near the free surface. As will be shown in the next section, this reduction in surface velocity results from a Marangoni stress caused by gradients in surfactant concentration, which opposes the surface flow. If we consider the shown fluid volume of Figure \ref{fig:3_Deviation_fields} (h) as a control volume, we can apply the conservation of mass. Fluid enters the volume at \mbox{$x\,=\,$\SI{100}{\micro\meter}} from the right and leaves the volume at \mbox{$z\,=\,$\SI{60}{\micro\meter}} parallel to the liquid-gas interface. Since the outflow of the fluid near the free surface is reduced due to the Marangoni stress, the inflow of the fluid into the volume is also reduced to fulfil mass conservation; the deviation field points to the right at \mbox{$x\,=\,$\SI{100}{\micro\meter}}. The same general qualitative behavior is observed for surfactants with very different $\alpha$, showing this does not depend on the characteristic length scale $\alpha$ but is a generic feature. \subsection{Deduced changes in surface tension} To quantify the gradient in surface tension and surface excess, i.e., the surface concentration of surfactant molecules, along the free surface, we use the already mentioned Equation (\ref{eqn:shear_stress}). We first calculate the shear stress from the measured velocity profiles. Since any tangential stress occurring at a liquid-gas interface originates from a surface tension gradient, we calculate the surface tension gradient along the free surface $\nabla_{\parallel} \gamma$ from the shear stress $\tau = \mu \nabla_{\perp} \vec{v}_p$ in the flow profile just under the free surface \cite{landau_87}, Equation (\ref{eqn:shear_stress}): $ \tau = \mu \nabla_{\perp} \vec{v}_p = \nabla_{\parallel} \gamma $. The velocity $\vec{v}_p$ is the velocity component parallel to the free surface. To calculate the velocity derivative, we fit the velocity in the direction normal to the free surface by a cubic polynomial. We differentiate the polynomial function to calculate the shear stress $\tau$. We use the viscosity of water $\mu = \SI{1}{\milli\pascal\second}$ to calculate the stress for the different solutions. Due to the scatter of the experimental data, the uncertainty of the calculated stress is around \SI{20}{\percent}. Since the statistical error becomes too large closer than \SI{20}{\micro\meter} to the contact line, we exclude this region from further analysis.
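The procedure just described (cubic fit of the near-surface velocity, differentiation at the surface, and integration of the resulting stress) can be summarized in a short numerical sketch. The following is our own hedged illustration, not the authors' evaluation code; array names, shapes, and the sign convention of the right-to-left integration are assumptions.

```python
# Hedged sketch of the surface-tension reconstruction described above:
# at each position x along the free surface, the velocity component v_p
# parallel to the surface is fitted by a cubic polynomial in the normal
# distance zeta; mu times its derivative at zeta = 0 gives the shear stress
# tau = grad_parallel(gamma), which is integrated from the far field (right)
# towards the contact line (left) to obtain Delta gamma.
import numpy as np

MU = 1.0e-3   # viscosity of water in Pa*s (1 mPa*s), as used in the text

def surface_tension_change(x, zeta, v_p):
    """x: (n,) positions along the surface; zeta: (m,) normal distances;
    v_p: (n, m) parallel velocity component; returns tau(x), Delta_gamma(x)."""
    tau = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        coeffs = np.polyfit(zeta, v_p[i], 3)               # cubic fit of v_p(zeta)
        tau[i] = MU * np.polyval(np.polyder(coeffs), 0.0)   # mu * dv_p/dzeta at the surface
    dgamma = np.zeros_like(tau)
    for i in range(len(x) - 2, -1, -1):                     # trapezoidal integration, right to left
        dgamma[i] = dgamma[i + 1] - 0.5 * (tau[i] + tau[i + 1]) * (x[i + 1] - x[i])
    return tau, dgamma
```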
The surface tension gradient $\nabla_{\parallel} \gamma$ is integrated from right to left to obtain the change in surface tension $\Delta \gamma$ (Figure \ref{fig:4_Delta_Gamma}). For all measured concentrations, a clear surface tension gradient was measurable. Far away from the contact line (\SI{>70}{\micro\meter}) the surface--tension gradient vanishes. At this distance the gradient in surface concentration is too small to produce measurable effects, i.e., we consider the surface tension to be close to constant. When approaching the contact line, the surface tension deviates from its equilibrium value. For both surfactants the magnitude in surface tension difference $\Delta \gamma$ increases with increasing surfactant concentration, Figure \ref{fig:4_Delta_Gamma}. Although the change in surface tension for the \SI{5}{\percent CMC} solutions is small, it is significantly larger than our resolution limit that is given by the apparent surface tension gradient for water, see Supplemental Material \cite{SI}. We quantify this range of the increased surface tension by a decay length $L_D$ over which $\Delta \gamma$ reduces to a fraction of $1/e$ of its initial value. The decay lengths of the surface tension gradient differ by less than a factor of two between the used surfactants and concentrations, Table \ref{tab:surfactant_transport_process}. Additionally, there is no clear tendency of how the decay length depends on the characteristic length $\alpha$, i.e., on the type of the surfactant, or on the Laplace number $La$, i.e., on the balance of the surface forces to the viscous forces in the liquid. So neither molecular nor local dynamic properties seem to play a dominating role in defining the hydrodynamic flow close to the receding contact line in these surfactant solutions. Although the surface tension near the contact line is only increased by \mbox{\num{1}--\num{2}$\,$\si{\micro\newton\per\meter}}, the gradient in surface tension, i.e. the Marangoni stress, is around \mbox{\num{70}--\num{130}$\,$\si{\micro\newton\per\meter\squared}}. This increase in surface tension corresponds to a decreasing surface concentration, more precisely surface excess, of surfactant near the contact line. \begin{figure}[htbp] \center \includegraphics[width=75.6mm]{Vergleich_Surfactant_Shear_Stress_10_bw_safe_plus_5CMC_single_column.png} \caption{ (a) and (b) show the calculated change in surface tension $\Delta \gamma$ for different concentrations of $\mathrm{C_{12}E_5}$ and $\mathrm{C_8E_3}$ along the free surface $x_{Int}$. See
if there is any further literature on this model beyond this paper (though the number field analogue of low-lying zeroes of Dirichlet ${L}$-functions is certainly well studied). In this model it is possible to set ${N}$ fixed and let ${T}$ go to infinity, thus providing a simple finite-dimensional model problem for problems involving the statistics of zeroes of the zeta function. In this post I would like to record this analogue precisely. We will need a finite field ${{\mathbb F}}$ of some order ${q}$ and a natural number ${N}$, and set $\displaystyle T := q^{N+1}.$ We will primarily think of ${q}$ as being large and ${N}$ as being either fixed or growing very slowly with ${q}$, though it is possible to also consider other asymptotic regimes (such as holding ${q}$ fixed and letting ${N}$ go to infinity). Let ${{\mathbb F}[X]}$ be the ring of polynomials of one variable ${X}$ with coefficients in ${{\mathbb F}}$, and let ${{\mathbb F}[X]'}$ be the multiplicative semigroup of monic polynomials in ${{\mathbb F}[X]}$; one should view ${{\mathbb F}[X]}$ and ${{\mathbb F}[X]'}$ as the function field analogue of the integers and natural numbers respectively. We use the valuation ${|n| := q^{\mathrm{deg}(n)}}$ for polynomials ${n \in {\mathbb F}[X]}$ (with ${|0|=0}$); this is the analogue of the usual absolute value on the integers. We select an irreducible polynomial ${Q \in {\mathbb F}[X]}$ of size ${|Q|=T}$ (i.e., ${Q}$ has degree ${N+1}$). The multiplicative group ${({\mathbb F}[X]/Q{\mathbb F}[X])^\times}$ can be shown to be cyclic of order ${|Q|-1=T-1}$. A Dirichlet character of modulus ${Q}$ is a completely multiplicative function ${\chi: {\mathbb F}[X] \rightarrow {\bf C}}$ of modulus ${Q}$, that is periodic of period ${Q}$ and vanishes on those ${n \in {\mathbb F}[X]}$ not coprime to ${Q}$. From Fourier analysis we see that there are exactly ${\phi(Q) := |Q|-1}$ Dirichlet characters of modulus ${Q}$. A Dirichlet character is said to be odd if it is not identically one on the group ${{\mathbb F}^\times}$ of non-zero constants; there are only ${\frac{1}{q-1} \phi(Q)}$ non-odd characters (including the principal character), so in the limit ${q \rightarrow \infty}$ most Dirichlet characters are odd. We will work primarily with odd characters in order to be able to ignore the effect of the place at infinity. Let ${\chi}$ be an odd Dirichlet character of modulus ${Q}$. The Dirichlet ${L}$-function ${L(s, \chi)}$ is then defined (for ${s \in {\bf C}}$ of sufficiently large real part, at least) as $\displaystyle L(s,\chi) := \sum_{n \in {\mathbb F}[X]'} \frac{\chi(n)}{|n|^s}$ $\displaystyle = \sum_{m=0}^\infty q^{-sm} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n).$ Note that for ${m \geq N+1}$, the set ${n \in {\mathbb F}[X]': |n| = q^m}$ is invariant under shifts ${h}$ whenever ${|h| < T}$; since this covers a full set of residue classes of ${{\mathbb F}[X]/Q{\mathbb F}[X]}$, and the odd character ${\chi}$ has mean zero on this set of residue classes, we conclude that the sum ${\sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n)}$ vanishes for ${m \geq N+1}$. 
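To make the vanishing of these character sums concrete, here is a small brute-force check in Python (our own sketch, not part of the original discussion). The specific choices $q=5$ and $Q = X^2+2$, which is irreducible over ${\mathbb F}_5$ so that $N=1$, are assumptions made only for this example; the character is built from a generator of the (cyclic) unit group, which automatically makes it odd.

```python
# Brute-force check that the sums of an odd character over monic polynomials
# of degree m vanish once m >= N+1, so L(s, chi) is a polynomial of degree <= N.
# Assumed example: q = 5, Q = X^2 + 2 (irreducible over F_5), hence N = 1.
import cmath
from itertools import product

p = 5
# Residues mod Q are pairs (c0, c1) representing c0 + c1*X, with X^2 = -2 = 3 (mod 5).

def mul(a, b):
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 + 3 * a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

units = [(c0, c1) for c0 in range(p) for c1 in range(p) if (c0, c1) != (0, 0)]

def order(u):
    k, x = 1, u
    while x != (1, 0):
        x, k = mul(x, u), k + 1
    return k

g = next(u for u in units if order(u) == len(units))   # generator of the unit group (size 24)
dlog, x = {}, (1, 0)
for k in range(len(units)):
    dlog[x] = k
    x = mul(x, g)

def chi(r):
    # A faithful character of the cyclic unit group; it is nontrivial on the
    # constants F_5^x, hence odd.  chi vanishes on the zero residue.
    if r == (0, 0):
        return 0.0
    return cmath.exp(2j * cmath.pi * dlog[r] / len(units))

def reduce_monic(coeffs):
    # coeffs = (a_0, ..., a_{m-1}) encodes the monic polynomial X^m + ... + a_0.
    r, xk = (0, 0), (1, 0)                 # xk runs over X^k mod Q
    for a in coeffs + (1,):                # append the leading coefficient 1
        r = ((r[0] + a * xk[0]) % p, (r[1] + a * xk[1]) % p)
        xk = mul(xk, (0, 1))
    return r

for m in (1, 2, 3):
    S = sum(chi(reduce_monic(c)) for c in product(range(p), repeat=m))
    print(m, abs(S))
# Expected: |S| = sqrt(5) for m = 1 (consistent with the Riemann hypothesis
# stated below), and S = 0 (up to rounding) for m = 2, 3, i.e. for m >= N+1.
```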
In particular, the ${L}$-function is entire, and for any real number ${t}$ and complex number ${z}$, we can write the ${L}$-function as a polynomial $\displaystyle L(\frac{1}{2} + it - \frac{2\pi i z}{\log T},\chi) = P(Z) = P_{t,\chi}(Z) := \sum_{m=0}^N c_m(t,\chi) Z^m$ where ${Z := e(z/N) = e^{2\pi i z/N}}$ and the coefficients ${c_m = c_m(t,\chi)}$ are given by the formula $\displaystyle c_m(t,\chi) := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n).$ Note that ${t}$ can easily be normalised to zero by the relation $\displaystyle P_{t,\chi}(Z) = P_{0,\chi}( q^{-it} Z ). \ \ \ \ \ (2)$ In particular, the dependence on ${t}$ is periodic with period ${\frac{2\pi}{\log q}}$ (so by abuse of notation one could also take ${t}$ to be an element of ${{\bf R}/\frac{2\pi}{\log q}{\bf Z}}$). Fourier inversion yields a functional equation for the polynomial ${P}$: Proposition 1 (Functional equation) Let ${\chi}$ be an odd Dirichlet character of modulus ${Q}$, and ${t \in {\bf R}}$. There exists a phase ${e(\theta)}$ (depending on ${t,\chi}$) such that $\displaystyle c_{N-m} = e(\theta) \overline{c_m}$ for all ${0 \leq m \leq N}$, or equivalently that $\displaystyle P(1/Z) = e(\theta) Z^{-N} \overline{P}(Z)$ where ${\overline{P}(Z) := \overline{P(\overline{Z})}}$. Proof: We can normalise ${t=0}$. Let ${G}$ be the finite field ${{\mathbb F}[X] / Q {\mathbb F}[X]}$. We can write $\displaystyle c_{N-m} = q^{-(N-m)/2} \sum_{n \in X^{N-m} + H_{N-m}} \chi(n)$ where ${H_j}$ denotes the subgroup of ${G}$ consisting of (residue classes of) polynomials of degree less than ${j}$. Let ${e_G: G \rightarrow S^1}$ be a non-trivial character of ${G}$ whose kernel contains the subspace ${H_N}$ (this is easily achieved by pulling back a non-trivial character from the quotient ${G/H_N \equiv {\mathbb F}}$). We can use the Fourier inversion formula to write $\displaystyle c_{N-m} = q^{(m-N)/2} \sum_{\xi \in G} \hat \chi(\xi) \sum_{n \in X^{N-m} + H_{N-m}} e_G( n\xi )$ where $\displaystyle \hat \chi(\xi) := q^{-N-1} \sum_{n \in G} \chi(n) e_G(-n\xi).$ From change of variables we see that ${\hat \chi}$ is a scalar multiple of ${\overline{\chi}}$; from Plancherel we conclude that $\displaystyle \hat \chi = e(\theta_0) q^{-(N+1)/2} \overline{\chi} \ \ \ \ \ (3)$ for some phase ${e(\theta_0)}$. We conclude that $\displaystyle c_{N-m} = e(\theta_0) q^{-(2N-m+1)/2} \sum_{\xi \in G} \overline{\chi}(\xi) e_G( X^{N-m} \xi) \sum_{n \in H_{N-m}} e_G( n\xi ). \ \ \ \ \ (4)$ The inner sum ${\sum_{n \in H_{N-m}} e_G( n\xi )}$ equals ${q^{N-m}}$ if ${\xi \in H_{m+1}}$, and vanishes otherwise, thus $\displaystyle c_{N-m} = e(\theta_0) q^{-(m+1)/2} \sum_{\xi \in H_{m+1}} \overline{\chi}(\xi) e_G( X^{N-m} \xi).$ For ${\xi}$ in ${H_m}$, ${e_G(X^{N-m} \xi)=1}$ and the contribution of these terms vanishes as ${\chi}$ is odd. Thus we may restrict ${\xi}$ to ${H_{m+1} \backslash H_m}$, so that $\displaystyle c_{N-m} = e(\theta_0) q^{-(m+1)/2} \sum_{h \in {\mathbb F}^\times} e_G( X^{N} h) \sum_{\xi \in h X^m + H_{m}} \overline{\chi}(\xi).$ By the multiplicativity of ${\chi}$, this factorises as $\displaystyle c_{N-m} = e(\theta_0) q^{-(m+1)/2} (\sum_{h \in {\mathbb F}^\times} \overline{\chi}(h) e_G( X^{N} h)) (\sum_{\xi \in X^m + H_{m}} \overline{\chi}(\xi)).$ From the one-dimensional version of (3) (and the fact that ${\chi}$ is odd) we have $\displaystyle \sum_{h \in {\mathbb F}^\times} \overline{\chi}(h) e_G( X^{N} h) = e(\theta_1) q^{1/2}$ for some phase ${e(\theta_1)}$. The claim follows.
$\Box$ As one corollary of the functional equation, ${c_N}$ is a phase rotation of ${\overline{c_0} = 1}$ and thus is non-zero, so ${P}$ has degree exactly ${N}$. The functional equation is then equivalent to the ${N}$ zeroes of ${P}$ being symmetric across the unit circle. In fact we have the stronger Theorem 2 (Riemann hypothesis for Dirichlet ${L}$-functions over function fields) Let ${\chi}$ be an odd Dirichlet character of modulus ${Q}$, and ${t \in {\bf R}}$. Then all the zeroes of ${P}$ lie on the unit circle. We derive this result from the Riemann hypothesis for curves over function fields below the fold. In view of this theorem (and the fact that ${c_0=1}$), we may write $\displaystyle P(Z) = \mathrm{det}(1 - ZU)$ for some unitary ${N \times N}$ matrix ${U = U_{t,\chi}}$. It is possible to interpret ${U}$ as the action of the geometric Frobenius map on a certain cohomology group, but we will not do so here. The situation here is simpler than in the number field case because the factor ${\exp(A)}$ arising from very small primes is now absent (in the function field setting there are no primes of size between ${1}$ and ${q}$). We now let ${\chi}$ vary uniformly at random over all odd characters of modulus ${Q}$, and ${t}$ uniformly over ${{\bf R}/\frac{2\pi}{\log q}{\bf Z}}$, independently of ${\chi}$; we also make the distribution of the random variable ${U}$ conjugation invariant in ${U(N)}$. We use ${{\mathbf E}_Q}$ to denote the expectation with respect to this randomness. One can then ask what the limiting distribution of ${U}$ is in various regimes; we will focus in this post on the regime where ${N}$ is fixed and ${q}$ is being sent to infinity. In the spirit of the Sato-Tate conjecture, one should expect ${U}$ to converge in distribution to the circular unitary ensemble (CUE), that is to say Haar probability measure on ${U(N)}$. This may well be provable from Deligne’s “Weil II”
implicitly identified by Ter-Martirosyan and Skornyakov by means of the condition \eqref{eq:TMS-eqn-3bosons} for its eigenstates at given two-body scattering length $-\alpha^{-1}$, is \emph{not} a self-adjoint operator and it admits a one-parameter family of self-adjoint extensions, labelled by $\beta\in\mathbb{R}$; for each $\beta$, the corresponding self-adjoint Hamiltonian has a countable discrete spectrum accumulating exponentially to $-\infty$ with corresponding eigenfunctions that collapse onto the barycentre; the union of the negative spectra of all such self-adjoint extensions is the whole negative real line. Motivated by the scheme of Ter-Martirosyan and Skornyakov for the three-body problem with point interaction and by Danilov's observation, Minlos and Faddeev \cite{Minlos-Faddeev-1961-1,Minlos-Faddeev-1961-2} in the same year 1961 provided essentially the whole explanation above, including the asymptotics \eqref{eq:Danilov-many} and \eqref{eq:3bosons_EV_asymptotics_Thomas}, in the form of two beautiful short announcements, albeit with no proofs or further elaborations. Theirs can be considered as the beginning of the mathematics of quantum systems with zero-range interactions. This is even more so because for the first time the problem was placed within a general mathematical framework, the theory of self-adjoint extensions of semi-bounded symmetric operators, which Kre{\u\i}n, Vi\v{s}ik, and Birman had developed between the mid 1940's and the mid 1950's (see Appendix \ref{app:KVB}). A somewhat different approach characterised the start of the mathematical study of the \emph{two-body} problem. In 1960-1961, a few months before the works of Minlos and Faddeev on the three-body problem, Berezin and Faddeev \cite{Berezin-Faddeev-1961} published the first rigorous analysis of a three-dimensional model with two particles coupled by a delta-like interaction. The emphasis was put on realising the formal Hamiltonian $-\Delta+\delta(x)$ as a self-adjoint extension of the restriction $-\Delta|_{C^\infty_0(\mathbb{R}^3\setminus\{0\})}$ (in the relative variable $x=x_1-x_2$ between the two particles). Working in Fourier transform, they recognised that the latter operator has deficiency indices $(1,1)$ and they characterised the whole family $\{H_\alpha\,|\,\alpha\in\mathbb{R}\}$ of its self-adjoint extensions as the operators \begin{equation}\label{eq:Berezin-Faddeev-1} \widehat{(H_\alpha\psi)}(p)\;=\;p^2\widehat{\psi}(p)-\lim_{R\to\infty}\frac{1}{4\pi R}\int_{\substack{\,q\in\mathbb{R}^3 \\ \! |q|<R}}\widehat{\psi}(q)\,\mathrm{d} q \end{equation} defined on the domain of $L^2(\mathbb{R}^3)$-functions $\psi$ such that, as $R\to\infty$, \begin{equation}\label{eq:Berezin-Faddeev-2} \int_{\substack{\,q\in\mathbb{R}^3 \\ \! |q|<R}}\widehat{\psi}(q)\,\mathrm{d} q\;=\;c \,(R+2\pi^2\alpha)+o(1)\quad\textrm{ and }\quad \int_{\mathbb{R}^3}|H_\alpha\psi|^2\mathrm{d} x<\infty\,. \end{equation} (For a more direct comparison -- see \eqref{eq:TMS_cond_asymptotics_1} in the following -- we have replaced here the parameter $\alpha$ of the notation of \cite{Berezin-Faddeev-1961} with $-(8\pi^3\alpha)^{-1}$.)
As \eqref{eq:Berezin-Faddeev-1}-\eqref{eq:Berezin-Faddeev-2} were only announced with no derivation, with a sole reference to the monograph \cite{Akhiezer-Glazman-1961-1993} of Akhiezer and Glazman on linear operators in Hilbert space, we are to understand that Berezin and Faddeev came to their conclusion by methods of von Neumann's self-adjoint extension theory, as presented in \cite[Chapter VII]{Akhiezer-Glazman-1961-1993}, combined with explicit calculations in Fourier transform. This leaves open the question of why they did not approach the extension problem within the same language of Kre{\u\i}n, Vi\v{s}ik, and Birman, as used by Minlos and Faddeev for the three-body case. In this language, as we work out in Section \ref{sec:2body-point}, \eqref{eq:Berezin-Faddeev-1}-\eqref{eq:Berezin-Faddeev-2} would have emerged as a very clean application of the general theory and, most importantly, the asymptotics in \eqref{eq:Berezin-Faddeev-2} would have arisen with a natural and intimate connection with the TMS equation \eqref{eq:TMS-eqn-3bosons}. Berezin and Faddeev rather focused on re-interpreting the action of the Hamiltonian $H_\alpha$ as a renormalised rank-one perturbation of the free Laplacian, re-writing \eqref{eq:Berezin-Faddeev-1} in position coordinates as \begin{equation} H_\alpha\psi\;=\;-\Delta\psi-\frac{1}{4\pi\alpha}\,\lim_{R\to\infty}\frac{1}{\,2\pi^2+R/\alpha\,}\frac{\sin R|x|}{|x|}\int_{\mathbb{R}^3}\frac{\sin R|y|}{|y|}\psi(y)\,\mathrm{d} y\,. \end{equation} We conjecture that they did not know the old work of Bethe and Peierls for two nucleons, or that they did not consider it relevant in their context, for no word is spent in \cite{Berezin-Faddeev-1961} on deriving the singularity $\psi(x)\sim |x|^{-1}$ as $|x|\to 0$ from their asymptotics \eqref{eq:Berezin-Faddeev-2}. With the subsequent theoretical and experimental advances in nuclear physics -- the initial playground for models of point interactions -- it became clear that the assumption of zero range was only a crude simplification, with no fundamental status. The lack of a physically stringent character for the idealisation of zero range in experimentally observed quantum-mechanical systems, and the somewhat obscure emergence of the unboundedness from below for the self-adjoint realisations of the three-body Hamiltonian, decreased the physical interest in point interactions, left their rigorous study in a relatively marginal position, and left the approach of Ter-Martirosyan and Skornyakov quiescent. Moreover, after Faddeev published in 1963 his fundamental work \cite{Faddeev-1963-eng-1965-3body} on the three-body problem with regular two-body forces, the concern of physicists switched to the numerical solution of the corresponding Faddeev equations. In the Russian physical literature, mainly under the input of Faddeev, methods and models of point interactions, albeit not fully rigorous, found their applicability in atomic and molecular physics, a mainstream that ideally culminates with the late 1970's monograph of Demkov and Ostrovskii \cite{Demkov-Ostrovskii-book} on the ``zero-range potentials'' and their application to atomic physics.
The use of formal delta-like potentials remained for some decades as a tool for a formal first-order perturbation theory; in addition, the Kre{\u\i}n-Vi\v{s}ik-Birman self-adjoint extension theory lost ground to von Neumann's theory in the literature in English language on the mathematics for quantum mechanics -- it rather evolved in more modern forms in application to boundary value problems for partial differential equations, mainly in the modern theory of boundary triplets. It is the merit of Albeverio, Gesztesy, and H\o{}egh-Krohn, and their collaborators (among whom, Streit and Wu), in the end of the 1970's and throughout the 1980's, to have unified an amount of previous investigations by establishing a proper mathematical branch on rigorous models of point interactions, with a systematic study of \emph{two-body} Hamiltonians and of \emph{one-body} Hamiltonians with finite or infinitely many \emph{fixed centers} of point interaction. We refer to the monograph \cite{albeverio-solvable} for a comprehensive overview on this production, and especially to the end-of-chapter notes in \cite{albeverio-solvable} for a detailed account of the previous contributions. The main tools in this new mainstream were: von Neumann's extension theory on the first place (hence with no reference any longer to the methods of Kre{\u\i}n-Vi\v{s}ik-Birman), by which point interaction Hamiltonians were constructed as self-adjoint extensions of the restriction of the free Laplacian to functions that vanish in a neighbourhood of the point where the interaction is supported; resolvent identities (of Kre{\u\i}n and of Konno-Kuroda type, see \cite[Appendices A and B]{albeverio-solvable}) by which these self-adjoint extensions were recognised to be finite-rank perturbations of the free Laplacian, in the resolvent sense, and were also re-obtained by resolvent limits of Schr\"{o}dinger Hamiltonians with shrinking potentials; plus an amount of additional methods (Dirichlet quadratic forms, non-standard analysis methods, renormalisation methods) for specific problems. Let us emphasize, in particular, that the original heuristic arguments of Bethe and Peierls and their two-body contact condition find a rigorous ground based on the fact, which can be proved within von Neumann's extension theory (see, e.g., \cite[Theorems I.1.1.1 and I.1.1.3]{albeverio-solvable}), that any self-adjoint extension of $\Delta|_{C^\infty_0(\mathbb{R}^3\setminus\{0\})}$ on $L^2(\mathbb{R}^3)$ has a domain whose elements behave as $\psi(x)\sim(|x|^{-1}+\alpha)$ as $|x|\to 0$, as an $s$-wave (hence a ``low-energy'') boundary conditions, for some $\alpha\in(-\infty,+\infty]$. As for the initial three-body problem with two-body point interaction, it finally re-gained centrality from the mathematical point of view (while physically a stringent experimental counterpart was still lacking) around the end of the 1980's and throughout the 1990's. 
This was first due to Minlos and his school \cite{Minlos-1987,Minlos-Shermatov-1989,mogilner-shermatov-PLA-1990,Menlikov-Minlos-1991,Menlikov-Minlos-1991-bis,Minlos-TS-1994,Shermatov-2003} (among whom Melnikov, Mogilner, and Shermatov), by means of the operator-theoretic approach used for three identical bosons by Minlos and Faddeev, and slightly later to Dell'Antonio and his school \cite{Teta-1989,dft-Nparticles-delta,DFT-proc1995} (among whom Figari and Teta), with an approach based on quadratic forms, where the ``physical'' energy form is first regularised by means of an ultra-violet cut-off and a suitable renormalisation procedure, and is then shown to be realised by a self-adjoint Hamiltonian. An alternative direction was initiated later by Pavlov and a school that included Kuperin, Makarov, Melezhik, Merkuriev, and Motovilov \cite{Kuperin-Makarov-Merk-Motovilov-Pavlov-1989-JMP1990,Makarov-Melezhik-Motovilov-1995}, by introducing internal degrees of freedom, i.e., a spin-spin contact interaction, so as to obtain three-body Hamiltonians that are bounded from below. After a further period of relative quiescence, the subject has been experiencing
Re}\,h_{ij}$ and ${\text {Im} } \, h_{ij}$ are i.i.d. with distribution ${\omega}_{ij}$ i.e., $\nu_{ij} = {\omega}_{ij}\otimes{\omega}_{ij}$ in the sense that $\nu_{ij}({\rm d} h) = {\omega}_{ij}({\rm d} {\rm Re}\,h) {\omega}_{ij}({\rm d} {\text {Im} } \, h)$, but this assumption is not essential for the result. The distribution $\nu_{ij}$ and its variance $\sigma_{ij}^2$ may depend on $N$, but we suppress this in the notation. We assume that, for any $j$ fixed, \begin{equation} \sum_{i} \sigma^2_{ij} = 1 \, . \label{sum} \end{equation} Matrices with independent, zero mean entries and with the normalization condition \eqref{sum} will be called {\it universal Wigner matrices.} For a forthcoming review on this matrix class, see \cite{Spe}, where the terminology of {\it random band matrices} was used. Define ${ C_{inf}}$ and ${ C_{sup}}$ by \begin{equation}\label{defCiCs} { C_{inf}}:= \inf_{N, i,j}\{N\sigma^2_{ij}\}\leq \sup_{N, i,j}\{N\sigma^2_{ij}\}=:{ C_{sup}}. \end{equation} Note that ${ C_{inf}}={ C_{sup}}$ corresponds to the standard Wigner matrices and the condition $0< { C_{inf}} \le { C_{sup}} <\infty$ defines more general Wigner matrices with comparable variances. We will also consider an even more general case when $\sigma_{ij}$ for different $(i,j)$ indices are not comparable. The basic parameter of such matrices is the quantity \begin{equation}\label{defM} M:= \frac{1}{\max_{ij} \sigma_{ij}^2}. \end{equation} A special case is the band matrix, where $\sigma_{ij}=0$ for $|i-j|>W$ with some parameter $W$. In this case, $M$ and $W$ are related by $M\le CW$. Denote by $B:=\{ \sigma^2_{ij}\}_{i,j=1}^N$ the matrix of variances which is symmetric and doubly stochastic by \eqref{sum}, in particular it satisfies $-1\leq B\leq 1$. Let the spectrum of $B$ be supported in \begin{equation}\label{de-de+} \mbox{Spec}(B)\subset [-1+\delta_-, 1-\delta_+]\cup\{1\} \end{equation} with some nonnegative constants $\delta_\pm$. We will always have the following spectral assumption \begin{equation}\label{speccond} \mbox{\it 1 is a simple eigenvalue of $B$ and $\delta_-$ is a positive constant, independent of $N$.} \end{equation} The local semicircle law will be proven under this general condition, but the precision of the estimate near the spectral edge will also depend on $\delta_+$ in an explicit way. For the orientation of the reader, we mention two special cases of universal Wigner matrices that provided the main motivation for our work. \bigskip \noindent {\it Example 1. Generalized Wigner matrix.} In this case we have \begin{equation}\label{VV} 0<{ C_{inf}}\leq { C_{sup}}<\infty, \end{equation} and one can easily prove that $1$ is a simple eigenvalue of $B$ and \eqref{de-de+} holds with \begin{equation}\label{de-de+2} \delta_\pm \ge C_{inf}, \end{equation} i.e., both $\delta_-$ and $\delta_+$ are positive constants independent of $N$. \bigskip \noindent {\it Example 2. Band matrix.} The variances are given by \begin{equation}\label{BM} \sigma^2_{ij} = W^{-1} f\Big(\frac{ [i-j]_N}{W}\Big), \end{equation} where $W\ge 1$, $f:{\mathbb R}\to {\mathbb R}_+$ is a bounded nonnegative symmetric function with $\int f =1$ and we defined $[i-j]_N\in{\mathbb Z}$ by the property that $[i-j]_N\equiv i-j \; \mbox{mod}\,\,\, N$ and $-\frac{1}{2}N < [i-j]_N \le\frac{1}{2}N $. Note that the relation \eqref{sum} holds only asymptotically as $W\to \infty$ but this can be remedied by an irrelevant rescaling. 
If the bandwidth is comparable with $N$, then we also have to assume that $f(x)$ is supported in $|x|\le N/(2W)$. The quantity $M$ defined in \eqref{defM} satisfies $M\le W/\|f\|_\infty$. In Appendix \ref{sec:spec} we will show that \eqref{speccond} is satisfied for the choice of \eqref{BM} if $W$ is large enough. \bigskip The Stieltjes transform of the empirical eigenvalue distribution of $H$ is given by \begin{equation}\label{defmz} m(z) \equiv{m_N} (z) = \frac{1}{N} \mbox{Tr\,}\, \frac{1}{H-z}\,,\,\,\, z=E+i\eta. \end{equation} We define the density of the semicircle law \begin{equation} \label{def rho sc} \varrho_{sc}(x) \;:=\; \frac{1}{2 \pi} \sqrt{[4 - x^2]_+}\,, \end{equation} and, for ${\text {Im} }\, z > 0$, its Stieltjes transform \begin{equation} \label{def m sc} m_{sc}(z) \;:=\; \int_{\mathbb R } \frac{\varrho_{sc}(x)}{x - z} \, {\rm d} x\,. \end{equation} The Stieltjes transform $m_{sc}(z) \equiv m_{sc}$ may also be characterized as the unique solution of \begin{equation} \label{defmsc} m_{sc} + \frac{1}{z + m_{sc}} \;=\; 0 \end{equation} satisfying ${\text {Im} }\, m_{sc}(z) > 0$ for ${\text {Im} }\, z > 0$, i.e., \begin{equation}\label{temp2.8} m_{sc}(z)=\frac{-z+\sqrt{z^2-4}}{2}\,. \end{equation} Here the square root function is chosen with a branch cut along the positive real axis. This guarantees that the imaginary part of $m_{sc}$ is non-negative. The Wigner semicircle law states that $m_N(z) \to m_{sc} (z) $ for any fixed $z$ provided that $\eta={\text {Im} } \, z>0$ is independent of $N$. The local version of this result for universal Wigner matrices is the content of the following Theorem. \begin{theorem}[Local semicircle law] \label{mainls}{} Let $H=(h_{ij})$ be a Hermitian $N\times N$ random matrix where the matrix elements $h_{ij}=\overline{h}_{ji}$, $ i \le j$, are independent random variables with ${\mathbb E }\, h_{ij}=0$, $1\leq i,j\leq N$, and assume that the variances $\sigma_{ij}^2 ={\mathbb E } |h_{ij}|^2$ satisfy \eqref{sum}, \eqref{de-de+} and \eqref{speccond}. Suppose that the distributions of the matrix elements have a uniformly subexponential decay in the sense that there exist constants $\alpha$, $\beta>0$, independent of $N$, such that for any $x> 0$ we have \begin{equation}\label{subexp} \P(|h_{ij}|\geq x^\alpha |\sigma_{ij}|)\leq \beta e^{- x}. \end{equation} Then there exist constants $C_1$, $C_2$, $C$ and $c>0$, depending only on $\alpha$, $\beta$ and $\delta_-$ in \eqref{speccond}, such that for any $z=E+i\eta$ with $\eta={\mbox Im} \, z>0$, $|z|\leq 10$ and \begin{equation}\label{fakerelkaeta} \frac{1}{\sqrt{M\eta}}\leq \frac{\kappa^2}{(\log N)^{C_1}}, \end{equation} where $\kappa : = \big| \, |E|-2 \big|$, the Stieltjes transform of the empirical eigenvalue distribution of $H $ satisfies \begin{equation}\label{fakemainlsresult} \P\left(|{m_N}(z)-m_{sc}(z)|\geq (\log N)^{C_2} \frac{1}{\sqrt{M\eta}\,\kappa}\right)\leq CN^{-c(\log \log N)} \end{equation} for sufficiently large $N$. In fact, the same result holds for the individual matrix elements of the Green's function $G_{ii}(z) = (H-z)^{-1}(i,i)$: \begin{equation}\label{Gii} \P\left(\max_i | G_{ii}(z)-m_{sc}(z)|\geq (\log N)^{C_2} \frac{1}{\sqrt{M\eta}\,\kappa}\right)\leq CN^{-c(\log \log N)}. \end{equation} \end{theorem} We remark that once a local semicircle law is obtained on a scale essentially $M^{-1}$, it is straightforward to show that eigenvectors are delocalized on a scale at least of order $M$. The precise statement will be formulated in Corollary \ref{cor}. 
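As a quick numerical illustration of Theorem \ref{mainls} (purely for orientation; the concrete choices below, complex Gaussian entries, a flat periodic band profile of half-width $W$, and a bulk energy with $\eta$ well above $1/M$, are made only for demonstration), one can compare the empirical Stieltjes transform \eqref{defmz} with $m_{sc}(z)$ from \eqref{temp2.8}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, W = 2000, 100                               # so that M is of order 2W+1

idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                       # periodic distance [i-j]_N
sigma2 = (d <= W).astype(float)
sigma2 /= sigma2.sum(axis=0, keepdims=True)    # enforce sum_i sigma_ij^2 = 1

A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
H = np.sqrt(sigma2) * A
H = (H + H.conj().T) / np.sqrt(2)              # Hermitian; variances stay sigma_ij^2

z = 0.3 + 0.15j                                # bulk energy, eta >> 1/M
m_N = np.mean(1.0 / (np.linalg.eigvalsh(H) - z))
m_sc = (-z + np.sqrt(z * z - 4 + 0j)) / 2      # principal branch; Im m_sc > 0 here
print(abs(m_N - m_sc), abs(m_sc))
\end{verbatim}
For such parameters the discrepancy should be small compared with $|m_{sc}(z)|=O(1)$, in line with \eqref{fakemainlsresult}.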
We will prove Theorem \ref{mainls} in Sections \ref{sec:proofloc}--\ref{proofsolvesleq} by extending the approach of \cite{ESY1, ESY2, ESY3}. The main ingredients of this approach consist of i) a derivation of a self-consistent equation for the Green's function and ii) an induction on the scale of the imaginary part of the energy. The key novelty in this paper is that the self-consistent equation is formulated for the array of the diagonal elements of the Green's function $(G_{11}, G_{22}, \ldots , G_{NN})$ instead of the Stieltjes transform $m =\frac{1}{N} \mbox{Tr\,} G = \frac{1}{N}\sum_i G_{ii}$ itself as in \cite{ESY3}. This yields for the first time a strong pointwise control on the diagonal elements $G_{ii}$, see \eqref{Gii}. The subexponential decay condition \eqref{subexp} can be weakened if we are not aiming at error estimates faster than any power law of $N$. This can be easily carried out and we will not pursue it in this paper. \bigskip Denote the eigenvalues of $H$ by $\lambda_1, \ldots , \lambda_N$ and let $p_N(x_1, \ldots , x_N)$ be their (symmetric) probability density. For any $k=1,2,\ldots, N$, the $k$-point correlation function of the eigenvalues is defined by \begin{equation} p^{(k)}_N(x_1, x_2,\ldots, x_k):= \int_{{\mathbb R}^{N-k}} p_N(x_1, x_2, \ldots , x_N){\rm d} x_{k+1}\ldots {\rm d} x_N. \label{corrfn} \end{equation} We now state our main result concerning these correlation functions. \begin{theorem}[Universality for generalized Wigner matrices] \label{mainsk} We consider a generalized hermitian Wigner matrix such that \eqref{VV} holds. Assume that the distributions $\nu_{ij}$ of the $(i,j)$ matrix elements have a uniformly subexponential decay in the sense of \eqref{subexp}. Suppose that the real and imaginary parts of $h_{ij}$ are i.i.d., distributed according to ${\omega}_{ij}$, i.e., $\nu_{ij}({\rm d} h) = {\omega}_{ij}({\rm d} {\text {Im} } \, h){\omega}_{ij}({\rm d} {\text {Re} } h)$. Let $m_k(i,j)=\int x^k{\rm d}{\omega}_{ij}(x)$, $1\leq k\leq 4$, denote the $k$-th moment of ${\omega}_{ij}$ ($m_1=0$). Suppose that \begin{equation}\label{no3no4} \inf_{N} \min_{1\leq i,j\leq N}\left\{\frac{m_4(i,j)}{(m_2(i,j))^2}- \frac{(m_3(i,j))^2}{(m_2(i,j))^3}\right\}>1, \end{equation} then, for any $k\ge 1$ and for any compactly supported continuous test function $O:{\mathbb R}^k\to {\mathbb R}$, we have \begin{equation} \begin{split} \lim_{b\to0}\lim_{N\to \infty} \frac{1}{2b} \int_{E-b}^{E+b}{\rm d} E' \int_{{\mathbb R }^k} & {\rm d}\alpha_1 \ldots {\rm d}\alpha_k \; O(\alpha_1,\ldots,\alpha_k) \\ &\times \frac{1}{\varrho_{sc}(E)^k} \Big ( p_{N}^{(k)} - p_{GU\! E, N} ^{(k)} \Big ) \Big (E'+\frac{\alpha_1}{N\varrho_{sc}(E)}, \ldots, E'+\frac{\alpha_k}{N \varrho_{sc}(E)}\Big) =0, \label{matrixthm} \end{split} \end{equation} where $p_{GU\! E, N} ^{(k)}$ is the $k$-point correlation function of the GUE ensemble. The same statement holds for generalized symmetric Wigner matrices, with GOE replacing the GUE ensemble. \end{theorem} The limiting correlation functions of the GUE ensemble are given by the sine kernel $$ \frac{1}{\varrho_{sc}(E)^k} p_{GU\! E, N} ^{(k)} \Big (E+\frac{\alpha_1}{N\varrho_{sc}(E)}, \ldots, E+\frac{\alpha_k}{N \varrho_{sc}(E)}\Big) \to \det\{ K(\alpha_i-\alpha_j)\}_{i,j=1}^k, \qquad K(x) = \frac{\sin \pi x}{\pi x}, $$ and similar universal formula is available for the limiting gap distribution. 
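For concreteness, in the simplest nontrivial case $k=2$ the determinant above reduces to
$$
\det\{ K(\alpha_i-\alpha_j)\}_{i,j=1}^2 \;=\; 1-\Big(\frac{\sin \pi(\alpha_1-\alpha_2)}{\pi(\alpha_1-\alpha_2)}\Big)^2,
$$
since $K(0)=1$ and $K$ is even; in particular, the rescaled pair correlation vanishes quadratically as $\alpha_1\to\alpha_2$, which is the familiar level repulsion of the GUE.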
\bigskip {\it Remark}: The quantity in the bracket in \eqref{no3no4} is always greater than or equal to 1 for any real distribution with mean zero. Indeed, by the Schwarz inequality,
$$ m_3^2= \Big[ \int x^3 \,{\rm d} {\omega} \Big]^2 = \Big[ \int x(x^2-m_2)\, {\rm d}{\omega} \Big]^2 \le m_2\int (x^2-m_2)^2\, {\rm d}{\omega} = m_2\big(m_4-m_2^2\big)\,, $$
and dividing by $m_2^3$ gives $\frac{m_3^2}{m_2^3}\le \frac{m_4}{m_2^2}-1$.
00:00 - 22:0022:00 - 00:00 12:00 AM @Morwenn Yup, basically that's what I'm thinking. Just wondering if there's any better way. Rebol has all these macros, and one option would be to say "just use inline functions" but part of the puzzle here is writing something weird. It doesn't make much sense to be writing something weird if we aren't compiling the weird though. So I'd like to see us build for a platform which doesn't have inline functions... Return<int> is convertible to int, Parameter<int> can be constructed from int and is convertible to int. Double conversions are not allowed by the language. That's settled. Or so I think. But I may be a little tired. 2 hours later… 1:48 AM Thanks for the upvotes guys. This is all kinda new to me so I'll probably be in lurker mode a while yet. I'm pleasantly surprised by both the level of activity/engagement here, and the "Rebol spirit" that comes through clearly in both the conversation and technical direction. 4 @Ashley Glad to hear it. If you want to read some high points, or general notes, the "starred posts" at right can walk you through some history. If you click on the link icon at the left of a message, you can jump to read a conversation in context. Chat supports basic markdown constructs. So in the above I had typed: [the "starred posts" at right](http://chat.stackoverflow.com/rooms/info/291/rebol-and-red/?tab=stars). Two asterisks for **bold** (bold) and one asterisk for *italic* (italic) 0 Here's a stab at handling the "*" case: like: funct [ series [series!] search [series!] ][ rule: copy [] remove-each s b: parse/all search "*" [empty? s] foreach s b [ append rule reduce ['to s] ] append rule [to end] all [ parse series rule ... 2:45 AM posted on July 25, 2015 by shawndumas [Hacker News] The Rebol Scripting Language on drdobbs.com (1 point) 3:31 AM @HostileFork Here's a late-night thought on this. Consider how dates with a time become a date followed by a time in paths, one element becoming two. There are three ways to "deal with" "the ugliness of" this behaviour: 1) Change the separator character from / to some other character like \ 2) Leave it as is so that they break on / in paths; to keep joined you could use a paren or some other separator like \ 3) Make dates "grabby" so that they incorporate the time; a paren will break them if you want that Coincidentally this seems to parallel the three ways I've thought of to handle urls and files in paths: 1) Change the separator character from / to some other character like \ 2) Make them "giving" so that they break on / in paths; to join you could use a paren or some other separator like \ 3) Leave it as is; a paren can break them if you want that To me, it seems like for consistency we should choose the same option number from both triples. If I'm right, it means we must change something. I'm of the feeling that we really need to get a NewPath-enabled Rebol in our hands to see what happens. It's very hard to predict all the ups and downs of it and what will work well and what won't. I'm of the feeling that we should make what we have consistent so that we can understand the consequences of change. A good idea as well. But if you're talking about file literals in paths you're not really talking about what we have today, anymore. Not a great idea maybe, but perhaps something that can work in the short term while real language design takes place. @HostileFork Pardon? We do have file literals in paths, that behave as in option 3 ("grabby"). foo/%bar/baz works? Hm, didn't know. 
3:41 AM Yah, it's length 2. Basically where I'm coming from here is to change all those letter groups into YYY and NNN and as few other cases as possible. I think you should have to say foo/%"bar/baz" or foo/file!{bar/baz} or foo/%{bar/baz} to get that. So my only question is if foo/%bar/baz should give you a three element path or be an error. The problem with %"foo/bar" is that it's not the canonical output. If we're going for mold/load consistency, we need a paren. It could be canonical output if it's in a path. Hmmm, didn't think of that! I'd be wary of varying canonical output by context, but, that's a knee-jerk reaction I have to anything ... The main thing I've been hmmming over is this issue of whether to ease up on the demands NewPath has where it would turn every %foo/baz/bar into a path and then depend on evaluation to make element 1 a FILE!, then stringify baz and append /baz, and then stringify bar and append that... or to let it be a prefixed string literal as today. 3:46 AM I don't even know if files can have slashes in their names if you encode the / to %2F ... ! Which was my original thought, because I wanted to be able to say %foo/(some expression to do)/[some expression to combine or join] And I can't do that if %foo/ gets "grabby" and tries to go eat the parentheses and brackets. But... bending the compromise another way, it can be done with %"foo"/(...)/[...] Does that mean you'd go for "slashes stop files if you are in a path"? I am kind of on the fence here. I'm wondering if for consistency's sake, making it illegal is the thing to do. This is one of the two questions I told everybody I was going to need their help on. You have to put it in quotes, or use parentheses, or something. 3:49 AM @HostileFork I don't want to force () circumlocution unless it's driven by strong reasoning. You can do it with quotes or construction syntax. But parentheses if you want (e.g. building that exact structure isn't important) As long as it's consistent, any default behaviour becomes more reasonable, even if it's the rarer option for your use case. I think it seems reasonable. If you start a sequence out with %foo... then it can consume the slashes all the way down the line. @HostileFork Right. That's why it would work in the parentheses. Same with @ for URLs. But if you don't start with it, then you're in a path. 3:52 AM Essato. And once in a path, any files must have an additional form of delimiting to know their ending. Hm, but even if they're terminal? a/b/c/%foo :-/ Terminal is fine as-is. Well, we can think on what's consistent there. I don't know about the dates. My original thought was that in path dispatch, if a date! saw a TIME! as the next value in the path it would evaluate to pick them both up. @HostileFork Right, my option 3. At first I thought the default was dead wrong. Now I like it better and want the same for files, go figure ... And it's like Carl saw this coming, since the \ separator as it works now would provide infix file joining in mid-path. I don't think in general we're going to see that many file literals in the middle of paths. 3:58 AM @HostileFork No, of course not, I would be surprised to see one, ever. But we're talking consistency, and that's ... never really real. You will see perhaps things like %"foo"/[(either condition %bar %baz) %.html)] It's what the language does when used in a way we can't even think of that "really" matters. Hm, you'd think FILE! would be more popular for doing short string literal appends. 
append "abc" %def is one character fewer than append "abc" "def" or append {abc} {def} In fact, I was just thinking the other day, with all these types I'm supporting in the lexer, I doubt if the kind of coding I want to use Rebol for would use more than 3. I'm all about utypes and language extensions, and I can't wait ... @MarkI It sometimes happens like that. But if you design things, sometimes they're used as intended. 4:03 AM @HostileFork Cool idea, never
that is reflexive, antisymmetric and transitive. \end{definition} \begin{example} For any set $X$, the power set of $X$ ordered by the set inclusion relation $\subseteq$ forms a poset $( \mathcal{P}(X),\subseteq)$ \end{example} \begin{definition} Two partially ordered sets $P=(X,\leq)$ and $Q=(X,\leq')$ are said to be isomorphic if there exist a bijection $f:X \rightarrow X'$ such that $x \leq y$ if and only if $f(x)\leq'f(y).$ \end{definition} \begin{definition}\label{ConnPoset} A poset $(X,\leq )$ is connected if for all $x,y\in X$, there exists sequence of elements $x=x_1,x_2, \ldots, x_n=y $ such that every two consecutive elements $x_i$ and $x_{i+1}$ are comparable (meaning $x_i < x_{i+1}$ or $x_{i+1} < x_i$). \end{definition} \noindent {\bf Notation:} Given an order $\leq$ on a set $X$, we will denote $x<y$ whenever $x \neq y$ and $x \leq y$. Finite posets $(X,\leq)$ can be drawn as directed graphs where the vertex set is $X$ and an arrow goes from $x$ to $y$ whenever $x \leq y$. For simplicity, we will not draw loops which correspond to $x \leq x$. We will then use the notation $(X,<)$ instead of $(X,\leq)$ whenever we want to ignore the reflexivity of the partial order \begin{example}. Let $X=\mathbb{Z}_8$ be the set of integers modulo $8$. The map $f:X \rightarrow X$ given by $f(x)=3x-2$ induces an isomorphism between the following two posets $(X,<)$ and $(X,<')$. \begin{center} \includegraphics{example1.png}, \qquad \includegraphics{example2.png} \end{center} \end{example} \begin{definition} A chain in a poset $(X, <)$ is a subset $C$ of $X$ such that the restriction of $<$ to $C$ is a total order (i.e. every two elements are comparable). \end{definition} Now we recall some basics about topological spaces called $T_0$ and $T_1$ spaces. \begin{definition} A topological space $X$ is said to have the property $T_0$ if for every pair of distinct points of $X$, at least one of them has a neighborhood not containing the other point. \end{definition} \begin{definition} A topological space $X$ is said to have the property $T_1$ if for every pair of distinct points of $X$, each point has a neighborhood not containing the other point. \end{definition} Obviously the property $T_1$ implies the property $T_0$. Notice also that this definition is equivalent to saying singletons are closed in $X$. Thus a $T_1$-topology on a \emph{finite} set is a discrete topology. Since any finite $T_1$-space is discrete, we will focus on the category of finite $T_0$-spaces. First we need some notations. Let $X$ be a finite topological space. For any $x \in X$, we denote \[ U_x:=\textit{the smallest open subset of $X$ containing $x$} \] It is well known \cite{Alex} that the category of $T_0$-spaces is isomorphic to the category of posets. We have $x \leq y$ if and only if $U_y \subseteq U_x$ which is equivalent to $C_x \subset C_y$, where $C_v$ is the complelement $U_v^{c}$ of $U_v$ in $X$. Thus one obtain that $U_x=\{w \in X;\; x \leq w\}$ and $C_x=\{v \in X; \;v < x\}$. Under this correspondence of categories, the subcategory of finite posets is equivalent to the category of finite $T_0$-spaces. \\ Through the rest of this article we will use the notation of $x<y$ in the poset whenever $x\neq y$ and $x \leq y$. 
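As a small illustration of this correspondence (the three-point space is chosen here purely for concreteness), take $X=\{a,b,c\}$ with the $T_0$-topology $\{\emptyset,\{a\},\{a,b\},X\}$. Then $U_a=\{a\}$, $U_b=\{a,b\}$ and $U_c=X$, so $U_a\subseteq U_b\subseteq U_c$ and the associated partial order is $c<b<a$. Accordingly, $U_x=\{w \in X;\; x \leq w\}$ and $C_x=\{v \in X; \;v < x\}$ give $C_a=\{b,c\}=U_a^{c}$, $C_b=\{c\}=U_b^{c}$ and $C_c=\emptyset=U_c^{c}$, as expected.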
\begin{comment} $$\begin{array}{|c|c| c| c| c| c|} \hline n \ & Distinct\\& Topologies & Distinct\\&& T_0-Topologies & inequivalent\\&&& topologies & T_0-inequivalent\\&&&& topologies \\ \hline \ 1 & 1 &1 & 1 & 1 \\ \hline \ 2 & 4 & 3 & 3 & 2 \\ \hline \ 3 & 29 & 19 & 9 & 5 \\ \hline \ 4 & 355 & 219 & 33 & 16 \\ \hline \ 5 & 6942 & 4231 & 139 & 63 \\ \hline \ 6 & 209527 & 130023 & 718 & 318 \\ \hline \end{array} $$ \end{comment} \section{Topologies on non- connected Quandles}\label{Main} As we mentioned earlier, since $T_1$-topologies on a finite set are discrete, we will focus in this article on $T_0$-topologies on \emph{finite quandles}. A map on finite spaces is continuous if and only if it preserves the order. It turned out that on a finite quandle with a $T_0$-topology, left multiplications can not be continuous as can be seen in the following theorem \begin{theorem}\label{left} Let $X$ be a finite quandle endowed with a $T_0$-topology. Assume that for all $z \in X$, the map $L_z$ is continuous, then $x \leq y$ implies $L_z(x)=L_z(y)$. \end{theorem} \begin{proof} We prove this theorem by contradiction. Let $X$ be a finite quandle endowed with a $T_0$-topology. Assume that $x \leq y$ and $L_z(x) \neq L_z(y)$. If $x=y$, then obviously $L_z(x)=L_z(y)$. Now assume $x <y$, then for all $a \in X$, the continuity of $L_a$ implies that $a*x \leq a*y$. Assume that there exist $a_1 \in X$ such that, $z_1:=a_1*x=L_{a_1}(x) < a_1*y=L_{a_1}(y)$. The invertibility of right multiplications in a quandle implies that there exist unique $a_2$ such that $a_2*x=a_1*y$ hence $a_1*x<a_2*x$ which implies $a_1 \neq a_2.$ Now we have $a_1*x<a_2*x \leq a_2 *y=z_2$. We claim that $a_2*x < a_2 *y$. if $a_2*y=a_2*x$ and since $a_2*x=a_1*y$ we will have $a_2*y=a_2*x=a_1*y$ hence $a_2*y=a_1*y$ but $a_1 \neq a_2$, thus contradiction. Now that we have proved $a_2*x < a_2 *y$, then there exists $a_3$ such that $a_2*y=a_3*x$ we get, $a_2*x<a_3*x$ repeating the above argument we get, $a_3*x<a_3*y$. Notice that $a_1,a_2$ and $a_3$ are all pairwise disjoint elements of $X$. Similarly, we construct an \emph{infinite} chain, $a_1 * x< a_2*x <a_3 *x < \cdots $, which is impossible since $X$ is a finite quandle. Thus we obtain a contradiction. \end{proof} \begin{comment} Theorem~\ref{left} states that we can not have a notion of \emph{semitopological quandles} (continuity with respect to each variable considered separately) on finite quandles. For more on semitopological groups see \cite{Husain}. Thus through the rest of the paper, the word continuity will mean continuity of \emph{right multiplications} in the finite quandle. We will then say that the quandle with its topology is \emph{right continuous}. We borrow the following terminologies from the book \cite{Rup}. \end{comment} We have the following Corollary \begin{corollary} Let $X$ be a finite quandle endowed with a $T_0$-topology. If $C$ is a chain of $X$ as a poset then any left continuous function $L_x$ on $X$ is a constant function on $C$. \end{corollary} \begin{definition} A quandle with a topology in which right multiplications (respectively left multiplications) are continuous is called \emph{right topological quandle} (respectively \emph{left topological quandle}). \end{definition} In other words, right topological quandle means that for all $x,y,z \in X$, \[ x<y \implies x*z < y*z. \] and, since left multiplications are not necessarly bijective maps, left topological quandle means that for all $x,y,z \in X$, \[ x<y \implies z*x \leq z*y. 
\] \begin{comment} \begin{definition} A quandle with a topology in which both right multiplications and left multiplications are continuous is called \emph{semitopological quandle}. \end{definition} Obviously topological quandle implies semitopological quandle which implies right topological quandles, but the converses are not true. \end{comment} \begin{theorem}\label{noT0} There is no $T_0$-topology on a finite connected quandle $X$ that makes $X$ into a right topological quandle. \end{theorem} \begin{proof} Let $x <y$. Since $X$ is connected quandle, there exists $\phi \in Inn(X)$ such that $y=\phi(x)$. Since $X$ is finite, $\phi$ has a finite order $m$ in the group $Inn(X)$. Since $\phi$ is a continous automorphism then $x<\phi(x)$ implies $x<\phi^m(x)$ giving a contradiction. \end{proof} \begin{corollary}\label{noodd} There is no $T_0$-topology on any latin quandle that makes it into a right topological quandle. \end{corollary} \begin{comment} \begin{proof} For Dihedral quandle $R_y(x)=2y-x$\\ If n is odd then 2 is invertible in $Z_n$, hence for every\\ $z \in \mathbb(z)_n$ there exist $y=\frac{z+x}{2}$ such that $R_y(x)=z$ hence $R_y(x)$ is both injective in $x$ and $y$ hence quandle will have only one orbit.\\ Hence it is connected. \end{proof} \end{comment} Thus Theorem~\ref{noT0} leads us to consider quandles $X$ that are not connected, that is $X=X_1 \cup X_2\cup \ldots X_k$ as orbit decomposition, search for $T_0$-topology on $X$ and investigate the continuity of the binary operation. \begin{proposition}\label{Prop} Let $X$ be a finite quandle with orbit decomposition $X=X_1 \cup \{a\}$, then there exist unique non trivial $T_0$-topology which makes $X$ right continuous. \end{proposition} \begin{proof} Let $X=X_1 \cup \{a\}$ be the orbit decomposition of the quandle $X$. For any $x,y \in X_1$, there exits $\phi \in Inn(X)$ such that $\phi(x)=y$ and $\phi(a)=a$. Declare that $x<a$, then $\phi(x)<a$. Thus for any $z\in X_1$ we have $z<a$. Uniqueness is obvious. \end{proof} The $T_0$-topology in Proposition~\ref{Prop} is precisely given
fine-tuning the generator so the exact appearance would be reconstructed with this code. In addition, they ensure the process does not impair the disentangled latent space through regularization. This simple approach produces significantly better reconstructions (see Fig.~\ref{fig:inv}), and allows employing off-the-shelf editing techniques with high quality, essentially bypassing the notorious distortion-editability trade-off (see Section~\ref{sec:encoding}). Such fine-tuning sessions are typically brief, lasting an order of a single minute. As described in Section \ref{sec:inversion}, this generator tuning can alternatively be performed as a forward pass procedure using hypernetworks \cite{alaluf2021hyperstyle, dinh2021hyperinverter}. Tzaban~{{et al}. }~\shortcite{tzaban2022stitch} further improve the tuning scheme to semantically edit a video while preserving temporal coherence. First, they observe that using an encoder for the initial inversion allows for a temporally-smooth edit after tuning the generator. Second, they propose to further tune the generator to better stitch the edited cropped face back to the original frame. Bau~{{et al}. }~\shortcite{bau2020rewriting} perform a similar tuning operation, but where the awareness is to the task instead of the data. The authors propose changing semantic and physical properties (or \textit{rules}) of deep generative networks, relying on the concept of linear associative memory. While current methods for image editing allow users to manipulate single images, this method allows changing semantic rules and properties of the network, so that all images generated by the network have the desired property. This includes removing undesired patterns such as watermarks and adding objects such as human crowds or trees. Kwong~{{et al}. }~\shortcite{kwong2021unsupervised} outline a method for cross-domain editing by inverting images into a source domain and re-synthesizing them in a fine-tuned model using the same code. Cherepkov~{{et al}. }~\shortcite{cherepkov2021navigating} expand the range achievable by existing state-of-the-art generative models used for image editing and manipulation, such as StyleGAN2. While existing methods find interpretable directions in the model’s latent space and operate on latent codes, they find interpretable directions in the space of generator parameters and use them to manipulate images and expand the range of possible visual effects. They show that their discovered manipulations, such as changing car wheel size, cannot be achieved by manipulating the latent code. Finally, Liu~{{et al}. }~\shortcite{liu2022selfconditioned} demonstrate that brief fine-tuning sessions can be used to condition a model on labels derived from the latent-space itself, thereby ``baking'' editing directions into the GAN and improving treatment of rare data modalities, such as extreme poses or underrepresented ethnicities. \section{Introduction} The ability of GANs to generate images of phenomenal realism at high resolutions is revolutionizing the field of image synthesis and manipulation. More specifically, StyleGAN \cite{karras2019style} has reached the forefront of image synthesis, gaining recognition as the state-of-the-art generator for high-quality images and becoming the de-facto golden standard for the editing of facial images. See Figure~\ref{fig:teaser}, top for some visual examples. \input{Figures/teaser/teaser} \input{Figures/teaser/teaser_editing} StyleGAN presents a fascinating phenomenon. 
It is unsupervised, and yet its latent space is surprisingly well behaved. As it turns out, it is so well behaved that it even supports linear latent arithmetic. For example, it supports adding a vector representing age to a set of latent codes, resulting in images representing the original individuals, but older. Similarly, it has been demonstrated that StyleGAN arranges its latent space not only linearly, but also in a disentangled manner, where traversal directions exist that alter only specific image properties, while not affecting others. Such properties include global, domain-agnostic aspects (e.g., viewing angles or zoom), but also domain-specific properties such as expressions or gender for human faces, car colors, dog breeds, and more (see Figure~\ref{fig:teaser}, and Figure \ref{fig:edit}). Exploring what these qualities entail, recent StyleGAN-based work has presented astounding realism, impressive control, and inspiring insights into how neural networks operate. As groundbreaking as it may be, these powerful editing capabilities only reside within the model's latent space, and hence only operate on images generated by StyleGAN itself. Seeking to bring real-world images to the power of StyleGAN's latent control, inversion into StyleGAN's latent space has received considerable attention. Further harnessing StyleGAN's powers, other applications have also arisen, bringing contributions to the worlds of segmentation, augmentation, explainability, and others. In this report, we map out StyleGAN's phenomenal success story, along with analyzing its severe drawbacks. We start by discussing the architecture itself and analyze the role it plays in creating the leading generative model since its conception in 2018. % We then shift the discussion to the resources and characteristics StyleGAN's training requires, and lay out the work that reduces, re-uses, and recycles it. In Section \ref{sec:editing}, we discuss StyleGAN's latent spaces. We show how linear editing directions can be found, encouraged, and leveraged into powerful semantic editing. We inquire into what properties StyleGAN can and cannot disentangle well and dive into a surprisingly wide variety of approaches to achieve meaningful semantic latent editing. In Section \ref{sec:encoding}, the quest for applying StyleGAN's power in real-world scenarios turns to a discussion about StyleGAN inversion. To express a given real image in StyleGAN's domain, many different approaches have been suggested, all of which thoroughly analyze and exploit the generator's architecture. Some propose latent code optimization and others apply data-driven inference. Some works seek an appropriate input seed vector, while others interface with StyleGAN at other points along the inference path, greatly increasing its expressive power. Unsurprisingly though, it turns out that the well-behaved nature of StyleGAN's latent space diminishes in regions far from its well-sampled distribution. This in practice means that given a real-life image, its accurate reconstruction quality (or \textit{distortion}) comes at the cost of \textit{editability}. Finding different desired points on this reconstruction-editability trade-off is a main point of discussion in the works covered in this section. Encoding an image into StyleGAN's latent space has more merit than for image inversion per se. There are many applications where the image being encoded is not the one the desired latent code should represent. 
Such encoding allows for various image-to-image translation methods \cite{Nitzan2020FaceID,richardson2020encoding,alaluf2021matter}. In Section \ref{sec:encoding}, we present and discuss such supervised and unsupervised methods. In Section \ref{sec:discriminative}, we show the competence of StyleGAN beyond its generative power and discuss the discriminative capabilities StyleGAN can be leveraged for. This includes applications in explainability, regression, segmentation, and more. In most works and applications, the pre-trained StyleGAN generator is kept fixed. However, in Section \ref{sec:fine-tuning}, we present recent works that fine-tune the StyleGAN generator and modify its weights to bridge the gap between the training domain (in-domain) and the target domain, which could possibly be out-of-domain. Each section addresses both the newcomer, with basic concepts and conceptual intuition, and the experienced, with a summary of the most established and promising approaches, along with some pointers regarding when to use them. \section{Evaluation Metrics}~\label{sec:metrics} While many aspects of GAN quality can be evaluated qualitatively, it is often desirable to assess the model quality more objectively. Evaluation metrics can be used to produce reliable, standardized benchmarks and to better gauge the advancement of the field. As we discuss below, this problem is not restricted to StyleGAN editing alone, but to the evaluation of most GANs and editing operations. \paragraph*{GAN Evaluation} The evaluation of generative models is straightforward when ground truth is at hand. For example, GAN inversion can be measured by various metrics assessing the distortion, such as pixel-wise distance using mean-squared error, perceptual similarity using LPIPS \cite{zhang2018unreasonable}, structural similarity using MS-SSIM \cite{wang2003multiscale}, or identity similarity~\cite{marriott2020assessment}, employed for facial images using a face recognition network \cite{deng2019arcface}. In the absence of such ground truth for the task of unconditional image synthesis, the evaluation of GAN quality remains an open challenge. Undoubtedly, the most popular metric is the Frechet Inception Distance (FID) \cite{heusel2017gans}. FID measures the similarity between two distributions using the Frechet Distance, where each distribution consists of visual features extracted by utilizing a pretrained recognition network ~\cite{szegedy2016rethinking}. Namely, given two sets of images, low FID indicates these sets share similar visual statistics. For
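To make the Frechet distance just described concrete, the following is a minimal sketch, assuming the two sets of Inception features have already been extracted as arrays of shape $(N, d)$; the function and variable names are illustrative and not taken from any specific FID package.
\begin{verbatim}
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """FID between Gaussians fitted to two feature sets of shape (N, d)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # numerics can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
\end{verbatim}
Since FID estimates are sensitive to the number of samples, large and equally sized feature sets are typically used when reporting the metric.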
new data-driven approach for appearance learning capable of compensating motion artifacts. Our network architecture for motion appearance learning is based on a siamese triplet network trained in a multi-task scenario. Therefore, we incorporate not only a single axial slice but make use of information from 9 slices, extracted from axial, sagittal and coronal orientations. Using a multi-task loss, we estimate both (1) an overall motion score of the reconstructed volume similar to \cite{preuhs2019image} and (2) a prediction which projections are affected by the motion. To stabilize the network prediction, we deploy a novel pre-processing scheme to compensate for training data variability. These extensions allow us to learn realistic motion appearance, composed of three translation and three rotation parameters per acquired view. We evaluate the accuracy of the motion appearance learning in dependence of the patient anatomy and also the motion type. In a rigid motion estimation benchmark, we demonstrate the performance of the appearance learning approach in comparison to state-of-the-art methods. Finally, we demonstrate its applicability to real clinical data using a motion-affected clinical scan. We devise the proposed framework for CBCT, however, by exchanging the backward model and training data, this approach is seamlessly applicable to radial sampled MRI or \textit{positron emission tomography} (PET). In addition, by adjusting the regression target, also for Cartesian sampled MRI. \section{Rigid Motion Model for CBCT} \subsection{Cone-Beam Reconstruction} \label{subsec:conebeamreconstruction} In tomographic reconstruction we compute anatomical structures denoted by $\vec{x}$ from measurements $\vec{y}$ produced with a forward model $\vec{A}$ by $\vec{A}\vec{x}=\vec{y}$. For X-ray transmission imaging $\vec{x}$ are attenuation coefficients and $\vec{y}$ are the attenuation line integrals measured at each detector pixel. The system geometry~---~e.\,g., pixel spacing, detector size and source-detector orientation~---~is part of the forward model $\vec{A}$. Using the pseudo-inverse \begin{equation} \vec{x} = \vec{A}^\top (\vec{A}\sysmat^\top)^{-1} \vec{y} \end{equation} we get an analytic solution to this inverse problem, which consists of the back-projection $\vec{A}^\top$ of filtered projection data $(\vec{A}\sysmat^\top)^{-1} \vec{y}$ \cite{maier2019learning}, commonly known as \textit{filtered back-projection} (FBP). For CBCT with circular trajectories, an approximate solution is provided by the \textit{Feldkamp-Davis-Kress} (FDK) algorithm \cite{feldkamp1984practical}. The algorithm is regularly used for autofocus approaches \cite{Kingston2011,sisniega2017motion} (see\,Sec.\,\ref{subsubsec:reconstructionconsistency}) due to its low computational costs. Rit\,et\,al.\,\cite{rit2009comparison} have further shown that even due to its approximate nature, an FDK-based motion-compensated CBCT reconstruction is capable of correcting most motion artifacts. Thus, we use the FDK reconstruction algorithm, having the benefit of only filtering the projection images once and thereafter only altering the back-projection operator for motion trajectory estimation. It is possible to formulate the FDK algorithm using a tuple of projection matrices $\vec{P} = (\vec{P}_0, \vec{P}_1, ...,\vec{P}_N)$ describing the geometry of operator $\vec{A}$. The measurements $\vec{y}$ are reshaped to a tuple of \mbox{2-D} projection images $\vec{Y} = (\vec{Y}_0, \vec{Y}_0, ...,\vec{Y}_N)$. 
In analogy to \cite{feldkamp1984practical}, we implement the FDK for a short scan trajectory using Parker redundancy weights $W_i(u,v)$ \cite{parker1982optimal}, where $i \in [1,2, ..., N]$ describes the projection index and $(u,v)$ denotes a 2-D pixel. The first step is a weighting and filtering of the projection images \begin{equation} \vec{Y}^\prime_i (u,v) = W_i(u,v) \int_{\mathbb{R}} \mathcal{F} \, \tilde{\vec{Y}}_i(\eta,v) \, \textnormal{e}^{i 2 \pi u v} \frac{|\eta|}{2} \, \textnormal{d} \eta \enspace, \end{equation} with $\mathcal{F} \, \tilde{\vec{Y}}_i$ being the 1-D Fourier transform of the $i$\textsuperscript{th} cosine weighted projection image along the tangential direction of the scan orbit. Thereafter, a distance-weighted voxel-based back-projection is applied mapping a homogeneous world point $\vec{a} \in \mathbb{P}^3$ to a detector pixel described in the projective two-space $\mathbb{P}^2$ \begin{equation} f_\textnormal{FDK}(\vec{a},\vec{P},\vec{Y}) = \sum_{i \in N} U(\vec{P}_i,\vec{a}) \vec{Y}^\prime_i(\phi_u(\vec{P}_i \vec{a}), \phi_v(\vec{P}_i \vec{a})) \label{eq:fdk} \end{equation} with $\vec{P}_i$ describing the system calibration associated with $\vec{Y}_i$. (see\,Fig.\,\ref{fig:rpevisualization}). The mapping function $\phi_\diamond : \mathbb{P}^2 \rightarrow \mathbb{R}$ is a dehomogenization \begin{equation} \phi_\diamond((x,y,w)^\top) = \begin{cases} \frac{x}{w} & \textnormal{if $\diamond = u$} \\ \frac{y}{w} & \textnormal{if $\diamond = v$} \end{cases} \enspace, \label{eq:dehomogenization} \end{equation} and $U(\vec{P}_i,\vec{a})$ is the distance weighting according to \cite{feldkamp1984practical}. \subsection{Rigid Motion Model} \label{subsec:rigidmotionmodel} \begin{figure} \centering \includegraphics[width=\linewidth]{fig_1.pdf} \caption{Visualization of the geometry for a point $\vec{a}$ and two geometries $\vec{P}_i$ and $\tilde{\vec{P}}_i$. The $L_2$ distance between the two projected points on the 2-D detector defines the RPE of the scene.} \label{fig:rpevisualization} \end{figure} We assume the rigid motion to be discrete w.\,r.\,t.~the acquired projections. To this end, we define the motion trajectory $\vec{M}$ as a tuple of motion states $\vec{M}_i \in \mathbb{SE}(3)$ describing the orientation of the patient during the acquisition of the $i$\textsuperscript{th} projection $\vec{Y}_i$. Each motion state is associated to a projection matrix $\vec{P}_i$. The motion modulated trajectory is obtained by \begin{equation} \vec{P} \circ \vec{M} = (\vec{P}_0\vec{M}_0, \, \vec{P}_1\vec{M}_1, \, ..., \, \vec{P}_N\vec{M}_N) \enspace, \end{equation} where $\circ$ is the element-wise matrix multiplication of two tuples. Typically, the motion trajectory is unknown and the task of motion compensation is to find a tuple of matrices $\vec{C}_i \in \mathbb{SE}(3)$ annihilating the resulting geometry corruption produced by $\vec{M}$. The compensation is successful if an annihilating trajectory $\vec{C} = (\vec{C}_0, ...\vec{C}_N)$ is found that suffices $\vec{C} \circ \vec{M} = \vec{1}$, with $\vec{1}$ being a tuple of identities. Each motion matrix defined in $\mathbb{SE}(3)$ is parameterized by 3 rotations ($r_x,r_y,r_z$) and 3 translations ($t_x,t_y,t_z)$, describing Euler angles and translations along the corresponding coordinate axis, respectively. Therefore, the annihilating trajectory has $6N$ free parameters for an acquisition with $N$ projections. To reduce the high dimensionality, we model the trajectory using Akima splines \cite{akima1970new}. 
This reduces the free parameters to $6M$, where $M$ is the number of nodes typically chosen as $M \ll N$. Based on the expected frequency of the motion the number of spline nodes can be adapted. \section{Appearance Learning} Conventionally, autofocus approaches are based on hand-crafted features, selected due to their correlation with an artifact-free reconstruction. For example, entropy gives a measure on contingency. As the human anatomy consists of mostly homogeneous tissues, entropy of the gray-value histogram can be expected to be minimal if all structures are reconstructed correctly. Motion blurs the anatomy or produces ghosting artifacts distributing the gray values more randomly. A similar rational is arguable for TV, which is also regularly used for constraining algebraic reconstruction \cite{taubmann2016convex}. Contrary to algebraic reconstruction, the motion estimation scenario is non-convex and optimization of a cost function based on hand-crafted image features is hardly solvable for geometric deviations exceeding a certain bound \cite{herbst2019misalignment}. We aim to overcome this problem by designing a tailored image-based metric, which reflects the appearance of the motion structure independent of the object. \subsection{Object-Independent Motion Measure} Several metrics have been proposed to quantify image quality of motion affected reconstructions based on a given ground truth: the \textit{structural similarity} (SSIM) \cite{silvia2019towards}, the $L_2$ distance \cite{braun2018motion} or binary classification to motion-free and -affected \cite{meding2017automatic}. However, they were not used for the compensation of motion, but merely for the assessment of image quality, which is of high relevance in the field of MRI to automize prospective motion compensation techniques. We choose the object-independent RPE for motion quantification. Its geometric interpretation is schematically illustrated in Fig.\,\ref{fig:rpevisualization}. The RPE measures reconstruction relevant deviations in the projection geometry and is defined by a 3-D marker position $\vec{a} \in \mathbb{P}^3$ and two projection geometries $\vec{P}_i, \tilde{\vec{P}}_i$. We consider $\vec{P}_i$ as the system calibration and $\tilde{\vec{P}}_i = \vec{P}_i \vec{M}_i$ as the actual geometry due to the patient motion. Accordingly, the RPE for a patient movement at projection $i$ is defined by \begin{equation} d_\textnormal{RPE}(\vec{P}_i,\tilde{\vec{P}}_i,\vec{a}) = \left\Vert \, \begin{pmatrix} \phi_u(\vec{P}_i \vec{a}) \\[0.3em] \phi_v(\vec{P}_i \vec{a}) \\[0.3em] \end{pmatrix} - \begin{pmatrix} \phi_u({\tilde{\vec{P}}_i} \vec{a}) \\[0.3em] \phi_v({\tilde{\vec{P}}_i} \vec{a}) \\[0.3em] \end{pmatrix}\, \right\Vert_2^2 \label{eq:projectionrpe} \end{equation} where $\phi_\diamond$ denotes the dehomogenization described in Eq.\,\eqref{eq:dehomogenization}. Using a single marker, the RPE is insensitive to a variety of motion directions. Therefore, we use $K=90$ virtual marker positions $\vec{a}_k$, distributed homogeneously at three sphere surfaces with the radii $30$\,mm, $60$\,mm, and $90$\,mm. The high number of markers ensures that the RPE is view-independent, i.\,e., a displacement of a projection at the beginning of the trajectory has the same effect on the RPE as a displacement of a projection at the end of the trajectory. 
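The per-marker error of Eq.\,\eqref{eq:projectionrpe}, together with the averaging over markers used below, can be sketched in a few lines; this is illustrative code only (marker positions and projection matrices are supplied by the user, e.g.\ points on spheres of radii 30/60/90\,mm and a motion-modulated geometry $\tilde{\vec{P}}_i=\vec{P}_i\vec{M}_i$), not the implementation used in this work.
\begin{verbatim}
import numpy as np

def rpe(P, P_tilde, markers):
    """Mean squared reprojection error between two 3x4 projection matrices,
    evaluated on homogeneous 3-D marker positions of shape (K, 4)."""
    def project(Q):
        x = markers @ Q.T            # (K, 3) points in the projective plane
        return x[:, :2] / x[:, 2:3]  # dehomogenise to detector coordinates (u, v)
    d = project(P) - project(P_tilde)
    return np.mean(np.sum(d ** 2, axis=1))
\end{verbatim}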
Accordingly, the overall RPE for a single view is \begin{equation} d_\textnormal{RPE}(\vec{P}_i,\tilde{\vec{P}}_i) = \frac{1}{K} \sum_{k=1}^K d_\textnormal{RPE}(\vec{P}_i,\tilde{\vec{P}}_i,\vec{a}_k) \enspace. \label{eq:rpe} \end{equation} As shown by Strobel et\,al. \cite{strobel2003improving}, Eq.\,\eqref{eq:rpe} can be rewritten in terms of a measurement matrix $\vec{X}$ containing the \mbox{3-D} marker positions, a vector $\vec{p}$ containing the elements of $\tilde{\vec{P}}_i$ and
# Prime factors buddies Given an integer N > 1, output all other numbers which prime decompositions have the same digits as the prime decomposition of N. For example, if N = 117, then the output must be [279, 939, 993, 3313, 3331], because 117 = 3 × 3 × 13 therefore, the available digits are 1, 3, 3 and 3 and we have 279 = 3 × 3 × 31 939 = 3 × 313 993 = 3 × 331 3313 = 3313 3331 = 3331 Those are the only other possible numbers, because other combination of these digits yield non-prime integers, which can't be the result of prime factorization. If N is any of 117, 279, 939, 993, 3313 or 3331, then the output will contain the five other numbers: they are prime factors buddies. You cannot use leading zeroes to get primes, e.g. for N = 107, its only buddy is 701 (017 is not considered). ### Input and Outputs • The input and output buddies must be taken and returned in the decimal base. • N will always be strictly greater than 1. • The output can be formatted rather freely, as long as it only contains the buddies and separators/list syntactic elements. • The ordering of the output is unimportant. • You may take the input through STDIN, as a function argument or anything similar. • You may print the output to STDOUT, return it from a function, or anything similar. ### Test cases Your program should solve any of the test cases below in less than a minute. N Buddies 2 [] 4 [] 8 [] 15 [53] 16 [] 23 [6] 42 [74, 146, 161] 126 [222, 438, 483, 674, 746, 851, 1466, 1631, 1679] 204 [364,548,692,762,782,852,868,1268,1626,2474,2654,2921,2951,3266,3446,3791,4274,4742,5426,5462,6233,6434,6542,7037,8561,14426,14642,15491,15833,22547] ### Scoring This is , so the shortest answer in bytes wins. # Jelly, 14 bytes ÆfVṢṚḌ ÇÇ€=ÇTḟ Execution time could be halved with DF instead of V, but it still completes the combined test cases in under thirty seconds. ### How it works ÆfVṢṚḌ Helper link. Argument: k (integer) Æf Decompose k into an array of primes with product k. V Eval. Eval casts a 1D array to string first, so this computes the integer that results of concatenating all primes in the factorization. Ṣ Sort. Sort casts a number to the array of its decimal digits. Ṛ Reverse. This yields the decimal digits in descending order. Ḍ Undecimal; convert the digit array from base 10 to integer. ÇÇ€=ÇTḟ Main link. Argument: n (integer) Ç Call the helper link with argument n. This yields an upper bound (u) for all prime factorization buddies since the product of a list of integers cannot exceed the concatenated integers. Ç€ Apply the helper link to each k in [1, ..., u]. Ç Call the helper link (again) with argument n. = Compare each result to the left with the result to the right. T Truth; yield all 1-based indices of elements of [1, ..., u] (which match the corresponding integers) for which = returned 1. ḟ Filter; remove n from the indices. • I think that Ç€=$ would be a bit faster than Ç€=Ç, given the time constraint. – Erik the Outgolfer Jul 18 '17 at 13:58 • Thanks, but for input 117, your improvement means the helper link will be called 3331 times instead of 3332 times, so the speed-up isn't measurable. Anyway, the newer (faster) TIO doesn't even need 20 seconds for the combined test cases. 
– Dennis Jul 18 '17 at 16:13 ## PowerShell v3+, 450 bytes param($n)function f{param($a)for($i=2;$a-gt1){if(!($a%$i)){$i;$a/=$i}else{$i++}}}$y=($x=@((f$n)-split'(.)'-ne''|sort))|?{$_-eq(f$_)} $a,$b=$x$a=,$a while($b){$z,$b=$b;$a=$a+($a+$y|%{$c="$_";0..$c.Length|%{-join($c[0..$_]+$z+$c[++$_..$c.Length])};"$z$c";"$c$z"})|select -u} $x=-join($x|sort -des) $l=@();$a|?{$_-eq(f$_)}|%{$j=$_;for($i=0;$i-le$x;$i+=$j){if(0-notin($l|%{$i%$_})){if(-join((f $i)-split'(.)'|sort -des)-eq$x){$i}}}$l+=$j}|?{$_-ne$n} Finally! PowerShell doesn't have any built-ins for primality checking, factorization, or permutations, so this is completely rolled by hand. I worked through a bunch of optimization tricks to try and reduce the time complexity down to something that will fit in the challenge restrictions, and I'm happy to say that I finally succeeded -- PS C:\Tools\Scripts\golfing> Measure-Command {.\prime-factors-buddies.ps1 204} Days : 0 Hours : 0 Minutes : 0 Seconds : 27 Milliseconds : 114 Ticks : 271149810 TotalDays : 0.000313830798611111 TotalHours : 0.00753193916666667 TotalMinutes : 0.45191635 TotalSeconds : 27.114981 TotalMilliseconds : 27114.981 ### Explanation There's a lot going on here, so I'll try to break it down. The first line takes input $n and defines a function, f. This function uses accumulative trial division to come up with a list of the prime factors. It's pretty speedy for small inputs, but obviously bogs down if the input is large. Thankfully all the test cases are small, so this is sufficient. The next line gets the factors of $n, -splits them on every digit ignoring any empty results (this is needed due to how PowerShell does regex matching and how it moves the pointer through the input and is kinda annoying for golfing purposes), then sorts the results in ascending order. We store that array of digits into $x, and use that as the input to a |?{...} filter to pull out only those that are themselves prime. Those prime digits are stored into $y for use later. We then split $x into two components. The first (i.e., smallest) digit is stored into $a, while the rest are passed into $b. If $x only has one digit, then $b will be empty/null. We then need to re-cast $a as an array, so we use the comma operator quick-like to do so. Next, we need to construct all possible permutations of the digits. This is necessary so our division tests later skip a bunch of numbers and make things faster overall. So long as there's element left in $b, we peel off the first digit into $z and leave the remaining in $b. Then, we need to accumulate into $a the result of some string slicing and dicing. We take $a+$y as array concatenation, and for each element we construct a new string $c, then loop through $c's .length and insert $z into every position, including prepending $z$c and appending $c$z, then selecting only the -unique elements. That's again array-concatenated with $a and re-stored back into $a. Yes, this does wind up having goofy things happen, like you can get 3333 for input 117, which isn't actually a permutation, but this is much shorter than attempting to explicitly filter them out, ensures that we get every permutation, and is only very marginally slower. So, now $a has an array of all possible (and then some) permutations of the factor's digits. We need to re-set $x to be our upper-bound of possible results by |sorting the digits in -descending order and -joining them back together. Obviously, no output value can be larger than this number. 
We set our helper array $l to be an array of values that we've previously seen. Next, we're pulling out every value from $a (i.e., those permutations) that are prime, and enter a loop that is the biggest time sink of the whole program... Every iteration, we're looping from 0 to our upper bound $x, incrementing by the current element $j. So long as the $i value we're considering is not a multiple of a previous value (that's the 0-notin($l|%{$i%$_}) section), it's a potential candidate for output. If we take the factors of $i, sort them, and they -equal $x, then add the value to the pipeline. At the end of the loop, we add our current element $j into our $l array for use next time, as we've already considered all those values. Finally, we tack on |?{$_-ne$n} to pull out those that are not the input element. They're all left on the pipeline and output is implicit. ### Examples PS C:\Tools\Scripts\golfing> 2,4,8,15,16,23,42,117,126,204|%{"$_ --> "+(.\prime-factors-buddies$_)} 2 --> 4 --> 8 --> 15 --> 53 16 --> 23 --> 6 42 --> 74 146 161 117 --> 279 939 993 3313 3331 126 --> 222 438 674 746 1466 483 851 1679 1631 204 --> 782 2921 3266 6233 3791 15833 2951 7037 364 868 8561 15491 22547 852 762 1626 692 548 1268 2654 3446 2474 5462 4742 5426 4274 14426 6542 6434 14642 • That's the most dollars I've ever seen!
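For reference (not a golfed entry), the brute-force idea used by the answers above can be written out plainly in Python: factor every candidate up to the digit-descending upper bound and compare digit multisets. Since candidates are built from their factorizations rather than from digit permutations, the no-leading-zero rule is handled automatically.

    def prime_digits(n):
        # sorted list of the digits occurring in n's prime factorization
        digits, d, m = [], 2, n
        while d * d <= m:
            while m % d == 0:
                digits += str(d)
                m //= d
            d += 1
        if m > 1:
            digits += str(m)
        return sorted(digits)

    def buddies(n):
        target = prime_digits(n)
        # a product of factors > 1 never exceeds their concatenation,
        # so the digits in descending order give an upper bound
        upper = int(''.join(sorted(target, reverse=True)))
        return [k for k in range(2, upper + 1)
                if k != n and prime_digits(k) == target]

    print(buddies(117))   # [279, 939, 993, 3313, 3331]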
# Chapter 13 Working with NIMBLE models Here we describe how one can get information about NIMBLE models and carry out operations on a model. While all of this functionality can be used from R, its primary use occurs when writing nimbleFunctions (see Chapter 15). Information about node types, distributions, and dimensions can be used to determine algorithm behavior in setup code of nimbleFunctions. Information about node or variable values or the parameter and bound values of a node would generally be used for algorithm calculations in run code of nimbleFunctions. Similarly, carrying out numerical operations on a model, including setting node or variable values, would generally be done in run code. ## 13.1 The variables and nodes in a NIMBLE model Section 6.2 defines what we mean by variables and nodes in a NIMBLE model and discusses how to determine and access the nodes in a model and their dependency relationships. Here we’ll review and go into more detail on the topics of determining the nodes and node dependencies in a model. ### 13.1.1 Determining the nodes in a model One can determine the variables in a model using getVarNames and the nodes in a model using getNodeNames, with optional arguments allowing you to select only certain types of nodes. We illustrate here with the pump model from Chapter 2. pump$getVarNames() ## [1] "lifted_d1_over_beta" "theta" "lambda" ## [4] "x" "alpha" "beta" pump$getNodeNames() ## [1] "alpha" "beta" "lifted_d1_over_beta" ## [4] "theta[1]" "theta[2]" "theta[3]" ## [7] "theta[4]" "theta[5]" "theta[6]" ## [10] "theta[7]" "theta[8]" "theta[9]" ## [13] "theta[10]" "lambda[1]" "lambda[2]" ## [16] "lambda[3]" "lambda[4]" "lambda[5]" ## [19] "lambda[6]" "lambda[7]" "lambda[8]" ## [22] "lambda[9]" "lambda[10]" "x[1]" ## [25] "x[2]" "x[3]" "x[4]" ## [28] "x[5]" "x[6]" "x[7]" ## [31] "x[8]" "x[9]" "x[10]" pump$getNodeNames(determOnly = TRUE) ## [1] "lifted_d1_over_beta" "lambda[1]" "lambda[2]" ## [4] "lambda[3]" "lambda[4]" "lambda[5]" ## [7] "lambda[6]" "lambda[7]" "lambda[8]" ## [10] "lambda[9]" "lambda[10]" pump$getNodeNames(stochOnly = TRUE) ## [1] "alpha" "beta" "theta[1]" "theta[2]" "theta[3]" "theta[4]" ## [7] "theta[5]" "theta[6]" "theta[7]" "theta[8]" "theta[9]" "theta[10]" ## [13] "x[1]" "x[2]" "x[3]" "x[4]" "x[5]" "x[6]" ## [19] "x[7]" "x[8]" "x[9]" "x[10]" pump$getNodeNames(dataOnly = TRUE) ## [1] "x[1]" "x[2]" "x[3]" "x[4]" "x[5]" "x[6]" "x[7]" "x[8]" "x[9]" ## [10] "x[10]" You can see one lifted node (see next section), lifted_d1_over_beta, involved in a reparameterization to NIMBLE’s canonical parameterization of the gamma distribution for the theta nodes. We can determine the set of nodes contained in one or more nodes or variables using expandNodeNames, illustrated here for an example with multivariate nodes. The returnScalarComponents argument also allows us to return all of the scalar elements of multivariate nodes. multiVarCode2 <- nimbleCode({ X[1, 1:5] ~ dmnorm(mu[], cov[,]) X[6:10, 3] ~ dmnorm(mu[], cov[,]) for(i in 1:4) Y[i] ~ dnorm(mn, 1) }) multiVarModel2 <- nimbleModel(multiVarCode2, dimensions = list(mu = 5, cov = c(5,5)), calculate = FALSE) multiVarModel2$expandNodeNames("Y") ## [1] "Y[1]" "Y[2]" "Y[3]" "Y[4]" multiVarModel2$expandNodeNames(c("X", "Y"), returnScalarComponents = TRUE) ## [1] "X[1, 1]" "X[1, 2]" "X[1, 3]" "X[6, 3]" "X[7, 3]" "X[8, 3]" ## [7] "X[9, 3]" "X[10, 3]" "X[1, 4]" "X[1, 5]" "Y[1]" "Y[2]" ## [13] "Y[3]" "Y[4]" As discussed in Section 6.2.6, you can determine whether a node is flagged as data using isData. 
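For example, one can query individual nodes (a minimal illustration using the pump model; isData accepts vectors of node or variable names, and the expected results are indicated in the comments):

pump$isData('x[1]')    # TRUE, since x was provided as data
pump$isData('alpha')   # FALSE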
### 13.1.2 Understanding lifted nodes In some cases, NIMBLE introduces new nodes into the model that were not specified in the BUGS code for the model, such as the lifted_d1_over_beta node in the introductory example. For this reason, it is important that programs written to adapt to different model structures use NIMBLE’s systems for querying the model graph. For example, a call to pump$getDependencies("beta") will correctly include lifted_d1_over_beta in the results. If one skips this step and assumes the nodes are only those that appear in the BUGS code, one may not get correct results. It can be helpful to know the situations in which lifted nodes are generated. These include: 1. When distribution parameters are expressions, NIMBLE creates a new deterministic node that contains the expression for a given parameter. The node is then a direct descendant of the new deterministic node. This is an optional feature, but it is currently enabled in all cases. 2. As discussed in Section 5.2.6, the use of link functions causes new nodes to be introduced. This requires care if you need to initialize values in stochastic declarations with link functions. 3. Use of alternative parameterizations of distributions, described in Section 5.2.4 causes new nodes to be introduced. For example when a user provides the precision of a normal distribution as tau, NIMBLE creates a new node sd <- 1/sqrt(tau) and uses sd as a parameter in the normal distribution. If many nodes use the same tau, only one new sd node will be created, so the computation 1/sqrt(tau) will not be repeated redundantly. ### 13.1.3 Determining dependencies in a model Next we’ll see how to determine the node dependencies (or ‘descendants’ or child nodes) in a model. There are a variety of arguments to getDependencies that allow one to specify whether to include the node itself, whether to include deterministic or stochastic or data dependents, etc. By default getDependencies returns descendants up to the next stochastic node on all edges emanating from the node(s) specified as input. This is what would be needed to calculate a Metropolis-Hastings acceptance probability in MCMC, for example. pump$getDependencies("alpha") ## [1] "alpha" "theta[1]" "theta[2]" "theta[3]" "theta[4]" "theta[5]" ## [7] "theta[6]" "theta[7]" "theta[8]" "theta[9]" "theta[10]" pump$getDependencies(c("alpha", "beta")) ## [1] "alpha" "beta" "lifted_d1_over_beta" ## [4] "theta[1]" "theta[2]" "theta[3]" ## [7] "theta[4]" "theta[5]" "theta[6]" ## [10] "theta[7]" "theta[8]" "theta[9]" ## [13] "theta[10]" pump$getDependencies("theta[1:3]", self = FALSE) ## [1] "lambda[1]" "lambda[2]" "lambda[3]" "x[1]" "x[2]" "x[3]" pump$getDependencies("theta[1:3]", stochOnly = TRUE, self = FALSE) ## [1] "x[1]" "x[2]" "x[3]" # get all dependencies, not just the direct descendants pump$getDependencies("alpha", downstream = TRUE) ## [1] "alpha" "theta[1]" "theta[2]" "theta[3]" "theta[4]" ## [6] "theta[5]" "theta[6]" "theta[7]" "theta[8]" "theta[9]" ## [11] "theta[10]" "lambda[1]" "lambda[2]" "lambda[3]" "lambda[4]" ## [16] "lambda[5]" "lambda[6]" "lambda[7]" "lambda[8]" "lambda[9]" ## [21] "lambda[10]" "x[1]" "x[2]" "x[3]" "x[4]" ## [26] "x[5]" "x[6]" "x[7]" "x[8]" "x[9]" ## [31] "x[10]" pump$getDependencies("alpha", downstream = TRUE, dataOnly = TRUE) ## [1] "x[1]" "x[2]" "x[3]" "x[4]" "x[5]" "x[6]" "x[7]" "x[8]" "x[9]" ## [10] "x[10]" In addition, one can determine parent nodes using getParents. 
pump$getParents("alpha") ## character(0) ## 13.2 Accessing information about nodes and variables ### 13.2.1 Getting distributional information about a node We briefly demonstrate some of the functionality for information about a node here, but refer readers to the R help on modelBaseClass for full details. Here is an example model, with use of various functions to determine information about nodes or variables. code <- nimbleCode({ for(i in 1:4) y[i] ~ dnorm(mu, sd = sigma) mu ~ T(dnorm(0, 5), -20, 20) sigma ~ dunif(0, 10) }) m <- nimbleModel(code, data = list(y = rnorm(4)), inits = list(mu = 0, sigma = 1)) m$isEndNode('y') ## y[1] y[2] y[3] y[4] ## TRUE TRUE TRUE TRUE m$getDistribution('sigma') ## sigma ## "dunif" m$isDiscrete(c('y', 'mu', 'sigma')) ## y[1] y[2] y[3] y[4] mu sigma ## FALSE FALSE FALSE FALSE FALSE FALSE m$isDeterm('mu') ## mu ## FALSE m$getDimension('mu') ## value ## 0 m$getDimension('mu', includeParams = TRUE) ## value mean sd tau var ## 0 0 0 0 0 Note that any variables provided to these functions are expanded into their constituent node names, so the length of results may not be the same length as the input vector of node and variable names. However the order of the results should be preserved relative to the order of the inputs, once the expansion is accounted for. ### 13.2.2 Getting information about a distribution One can also get generic information about a distribution based on the name of the distribution using the function getDistributionInfo. In particular, one can determine whether a distribution was provided by the user (isUserDefined), whether a distribution provides CDF and quantile functions (pqDefined), whether a distribution is a discrete distribution (isDiscrete), the parameter names (include alternative parameterizations) for a distribution (getParamNames), and the dimension of the distribution and its parameters (getDimension). For more extensive information, please see the R help for getDistributionInfo. ### 13.2.3 Getting distribution
y is the axis of symmetry, as if you are making shell analysis, but do not constraint nodal rotations. Elliptical galaxies might well be spheroidal (but could also be ellipsoidal), while disk galaxies almost certainly are axisymmetric (though highly flattened). Education: B. Axisymmetric analysis requires that the center of rotation and radial axis of the axisymmetric model be properly aligned to the absolute coordinate system. or adj being symmetrical Axisymmetric - definition of axisymmetric by The Free Dictionary. Axisymmetric definition is - symmetric in respect to an axis. Beam Examples. The pressure drop is measured across an interval approximately 60m downstream of the inlet:. This program is based on the explicit dynamic finite element (FE) method and incorporates the dynamic characteristics involved in this process. We are going to define 3 overlapping rectangles as defined in the following table: 4. The first case illustrates the inverse piezoelectric effect, and the second case shows the direct piezoelectric effect. 04363323129985824) 3. An axisymmetric projectile with flat head is used. Meanwhile, in the. An Overview of Methods for Modelling Bolts in ANSYS Bolted joints are commonly used to assemble mechanical structures. A case can be classified as either a verification or validation case depending on whether the comparison data for the case is exact (analytical) or experimental, respectively. Airlock valve, and Multi Jet Venting CFD. Our model of the polymer boss is not only high symmetric, it's perfect symmetric. "An Investigation Into the Effects of Modelling Cylindrical Nozzle to Cylindrical Vessel Intersections Using 2D Axisymmetric Finite Element Models and a Proposed Method for Correcting the Results. axisymmetric finite element modeling for the design and analysis of cylindrical adhesive joints based on dimensional stability 4. But, if you observe the flow of water from its top view, its 2D planar flow(let the plane be xy). Toggle Main Navigation. Elements: Axisymmetric (2-D) elements Contact: Use bonded contact to simulate the threads, rather than modeling them explicitly. Rotational symmetry of order n, also called n-fold rotational symmetry, or discrete rotational symmetry of the nth order, with respect to a particular point (in 2D) or axis (in 3D) means that rotation by an angle of 360°/n (180°, 120°, 90°, 72°, 60°, 51 3 ⁄ 7 °, etc. This approach has recently been used to study the reset dynamics of TaO x-based resistive switches, revealing that the gradual nature of the reset process can be attributed to interacting ionic drift and diffusion processes approachi ng equilibrium [4]. The number crunching took about 10 minutes of CPU time on my P5-200 MMX computer. 1, you can solve a 2D axisymmetric problem that includes the prediction of the circumferential or swirl velocity. However, due to the assumption that azimuthal variations can be neglected, it is still sufficient to limit the simulations to axisymmetric 2D domains, thus saving significant amounts of computational time and effort. Type' to be 2D (it defaults to 3D) and then launch Simulation. Aqueous solutions of graphene oxide (GO) with a very large average flake aspect ratio, AR = O ( 10 4 ), are probed using capillary breakup experiments and shear rheological measurements for concent. 
Ti-6Al-4V Titanium alloy and AISI 316 stainless steel workpieces were rolled during this research and after rolling, they were mounted, sectioned and then examined under a microscope to determine the presence, location and magnitude of any defects. B Therm Stress - Thermal stresses in a vessel with spherical end caps. The yield criterion is expressed in terms of three quadratic constraints so that the problem can be solved using second order cone programming (SOCP). 3D Results Plot 2D Results Plot For the axisymmetric assumption, the results are the same for all cross sections about the axis of revolution. The pressure vessel has an inner radius of R = 0. Whether or not classical solutions of the 3D incompressible MHD equations with full dissipation and magnetic diffusion can develop finite-time singularities is a long standing open problem of fluid dynamics and PDE theory. On a personal computer, we obtain speedups up to 80 using the GPU versus the conventional processor. Feb 21, 2018 · I have a model with a 2D axisymmetric geometry and now, for the postprocessing, I need a 1D plot line that goes from -r to r, instead of just 0 to r. All other stress (strain) components are zero) Recall the (1) equilibrium, (2) strain-displacement and (3) stress-strain laws 2. who used the Zeus 2D code. These are labeled SZ. The 2D axisymmetric type of OSBPD model is recently developed by Zhang and Qiao (2018c). For me personally AXISYMMETRIC means something different. solved using the axisymmetric element developed in this chapter. The numbering of the local nodes is indicated in the figure below and is in accordance with the MSC. 1 Introduction Consider a thick walled cylinder with a given internal temperature and convection at its external radius. Axisymmetric geometry type: Model is axisymmetric about the Z axis and exists only in the positive Y quadrant of the YZ plane. Therefore, to create the geometry mentioned above, we must define a U-shape. I would probably answer my question on my own if I could create the revolved tyre from the 2D axisymmetric model. A significant advantage of the 3D FEM is the capability of modeling full geometry of the analyzed object, including end-region of the windings. $\begingroup$ @ValterMoretti, the boundary of my ''diffusion source'' is an upward convex axisymmetric surface, so I feel that it could be convenient for solving a Laplace's eq. We will use SI system units for this tutorial: length = m, mass = kg, time = sec, force = N, stress/pressure = Pa. 22 synonyms for symmetry: balance, proportion, regularity, form, order, harmony. Draw Model Allows you to access the 2D Modeler and build the objects that make. I'm wondering why there is no option to use non-uniform forces/pressures in 2D axisymmetric. The nozzle designs created are based on a perfect gas, inviscid nozzle solution algorithm. *4918 Malibu Drive Bloomfield Hills, MI 48302 Abstract: Electroplating is a vital technology widely employed for many technological appli-cations ranging from decorative or anti-corrosion coatings to high precision nanotechnology pas-. I would probably answer my question on my own if I could create the revolved tyre from the 2D axisymmetric model. Jun 01, 2011 · Read "Development of a parallelized 2D/2D-axisymmetric Navier–Stokes equation solver for all-speed gas flows, Computers & Fluids" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. 
In an axisymmetric analysis, plates may be used as circular plates. In this paper, we report a direct implicit and electrostatic PIC/MC simulation. Axisymmetric b. is less than the minref. OCC Exercise 6: Example of usage mesh 3D algorithms and hypothesis This exercise illustrates the use of SMESH SALOME 3D algorithms and hypothesis and functionalities for meshing of the prism shape. A C Program code to solve for Heat diffusion in 2D Axi-symmetric grid. Mar 13, 2014 · The prototype is an artificial lens. But, if you observe the flow of water from its top view, its 2D planar flow(let the plane be xy). Cfd-online. 3 Plane Stress and Plane Strain Two cases arise with plane axisymmetric problems: in the plane stress problem, the feature is very thin and unloaded on its larger free-surfaces, for example a thin disk under external pressure, as shown in Fig. 2D axisymmetric models are a special case as Corus pointed out. ABAQUS Tutorial - Axisymmetric Analysis Consider a steel (E=200 GPa, ν=0. The 2D-axisymmetric model with 290 (15-node) triangular elements was built with an equivalent diameter, (d e = 1. ax′i·sym′me·try n. Initially the elements define an axisymmetric reference geometry with respect
all the terms contributing the dynamics. In contrast, in an \textsc{Lnn}\xspace, the learned $L$ is used in the EL equation to obtain the dynamics. Similarly, in \textsc{Lnn}\xspace with explicit constraints, the learned Lagrangian is used to obtain the individual terms in the Eq.~\ref{eq:gnode_const} by differentiating the \textsc{Lnn}\xspace, which is then substituted to obtain the acceleration. However, this approach is transductive in nature. To overcome this challenge, we present a graph-based version of the \textsc{Node}\xspace namely, \textsc{Gnode}\xspace{}, which models the physical system as a graph. We show, that \textsc{Gnode}\xspace{} can generalize to arbitrary system size after learning the dynamics $\hat{F}$. \vspace{-0.10in} \subsection{Graph neural ODE (\textsc{Gnode}\xspace) for physical systems} \vspace{-0.10in} \begin{figure} \vspace{-0.30in} \centering \includegraphics[width=\columnwidth]{graph_architecture.png} \vspace{-0.20in} \caption{Graph Neural ODE architecture.} \label{fig:graph_architecture} \vspace{-0.20in} \end{figure} In this section, we describe the architecture of the \textsc{Gnode}\xspace{}. The graph topology is used the learn the approximate dynamics $\hat{F}$ by minimizing the loss on the predicted positions. The graph architecture is discussed in detail in the sequel.\\ {\bf Graph structure.} An $n$-particle system is represented as a undirected graph $\mathcal{G}=\{\mathcal{V,E}\}$, where the nodes represent the particles and the edges represents the connections or interactions between them. For instance, in pendulum or springs system, the nodes correspond to bobs or balls, respectively, and the edges correspond to the bars or springs, respectively.\\ {\bf Input features.} Each node is characterized by the features of the particles themselves, namely, the particle \textit{type} ($t$), \textit{position} ($q_i$), and \textit{velocity} ($\dot q_i$). The \textit{type} distinguishes particles of differing characteristics, for instance, balls or bobs with different masses. Further, each edge is represented by the edge features $w_{ij}=(q_i-q_j)$, which represents the relative displacement of the nodes connected by the given edge.\\ \textbf{Pre-Processing.} In the pre-processing layer, we construct a dense vector representation for each node $v_i$ and edge $e_{ij}$ using $\texttt{MLP}_{em}$ as: \begin{alignat}{2} \mathbf{h}\xspace^0_i &= \texttt{squareplus}\xspace(\texttt{MLP}\xspace_{em}(\texttt{one-hot}(t_i),q_i,\dot{q}_i)) \label{eq:one-hot}\\ \mathbf{h}\xspace^0_{ij} &= \texttt{squareplus}\xspace(\texttt{MLP}\xspace_{em}(w_{ij})) \end{alignat} $\texttt{squareplus}\xspace$ is an activation function. Note that the $\texttt{MLP}\xspace_{em}$ corresponding to the node and edge embedding functions are parametrized with different weights. Here, for the sake of brevity, we simply mention them as $\texttt{MLP}\xspace_{em}$.\\ \textbf{Acceleration prediction.} In many cases, internal forces in a system that govern the dynamics are closely dependent on the topology of the structure. To capture this information, we employ multiple layers of \textit{message-passing} between the nodes and edges. 
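Before turning to the message-passing updates, the pre-processing step above can be summarized in a short sketch (plain NumPy; the two-layer MLPs and the handling of the weights are simplifying assumptions on our part, not the authors' implementation).
\begin{verbatim}
import numpy as np

def squareplus(x):                       # smooth ReLU-like activation
    return 0.5 * (x + np.sqrt(x * x + 4.0))

def mlp(x, W1, b1, W2, b2):              # two-layer MLP with given weights
    return squareplus(x @ W1 + b1) @ W2 + b2

def embed_nodes(types, q, qdot, params, n_types=2):
    # one-hot particle type concatenated with position and velocity
    one_hot = np.eye(n_types)[types]
    feats = np.concatenate([one_hot, q, qdot], axis=1)
    return squareplus(mlp(feats, *params))

def embed_edges(q, edges, params):
    # edge feature is the relative displacement w_ij = q_i - q_j
    w = q[edges[:, 0]] - q[edges[:, 1]]
    return squareplus(mlp(w, *params))
\end{verbatim}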
In the $l^{th}$ layer of message passing, the node embedding is updated as: \begin{equation} \mathbf{h}\xspace_i^{l+1} = \texttt{squareplus} \left( \mathbf{h}\xspace_i^{l}+\sum_{j \in \mathcal{N}_i}\mathbf{W}\xspace_{\mathcal{V}\xspace}^l\cdot\left(\mathbf{h}\xspace_j^l || \mathbf{h}\xspace_{ij}^l\right) \right) \end{equation} where, $\mathcal{N}_i=\{v_j\in\mathcal{V}\xspace\mid e_{ij}\in\mathcal{E}\xspace \}$ are the neighbors of $v_i$. $\mathbf{W}\xspace_{\mathcal{V}\xspace}^{l}$ is a layer-specific learnable weight matrix. $\mathbf{h}\xspace_{ij}^l$ represents the embedding of incoming edge $e_{ij}$ on $v_i$ in the $l^{th}$ layer, which is computed as follows. \begin{equation} \mathbf{h}\xspace_{ij}^{l+1} = \texttt{squareplus} \left( \mathbf{h}\xspace_{ij}^{l} + \mathbf{W}\xspace_{\mathcal{E}\xspace}^{l}\cdot\left(\mathbf{h}\xspace_i^l || \mathbf{h}\xspace_{j}^l\right) \right) \end{equation} Similar to $\mathbf{W}\xspace_{\mathcal{V}\xspace}^{l}$, $\mathbf{W}\xspace_{\mathcal{E}\xspace}^{l}$ is a layer-specific learnable weight matrix specific to the edge set. The message passing is performed over $L$ layers, where $L$ is a hyper-parameter. The final node and edge representations in the $L^{th}$ layer are denoted as $\mathbf{z}\xspace_i=\mathbf{h}\xspace_i^L$ and $\mathbf{z}\xspace_{ij}=\mathbf{h}\xspace_{ij}^L$ respectively. Finally, the acceleration of the particle $\ddot q_i$ is predicted as: \begin{equation} \ddot q_i=\texttt{squareplus}(\texttt{MLP}_{\mathcal{V}\xspace}(\mathbf{z}\xspace_i)) \end{equation} \textbf{Trajectory prediction and training.} Based on the predicted $\ddot{q}$, the positions and velocities are predicted using the \textit{velocity Verlet} integration. The loss function of \textsc{Gnode}\xspace{} is computed by using the predicted and actual accelerations at timesteps $2, 3,\ldots,\mathcal{T}$ in a trajectory $\mathbb{T}$, which is then back-propagated to train the MLPs. Specifically, the loss function is as follows. \begin{equation} \label{eq:lossfunction} \mathcal{L}= \frac{1}{n}\left(\sum_{i=1}^n \left(\ddot{q}_i^{\mathbb{T},t}-\left(\hat{\ddot{q}}_i^{\mathbb{T},t}\right)\right)^2\right) \end{equation} Here, $(\hat{\ddot{q}}_i^{\mathbb{T},t})$ is the predicted acceleration for the $i^{th}$ particle in trajectory $\mathbb{T}$ at time $t$ and $\ddot{q}_i^{\mathbb{T},t}$ is the true acceleration. $\mathbb{T}$ denotes a trajectory from $\mathfrak{T}$, the set of training trajectories. Note that the accelerations are computed directly from the ground truth trajectory using the Verlet algorithm as: \begin{equation} \ddot{q}(t)=\frac{1}{(\Delta t)^2}[q(t+\Delta t)+q(t-\Delta t)-2q(t)] \end{equation} Since the integration of the equations of motion for the predicted trajectory is also performed using the same algorithm as: $q(t+\Delta t)=2q(t)-q(t-\Delta t)+\ddot{q}(\Delta t)^2$, this method is equivalent to training from trajectory/positions. \rev{It should be noted that the training approach presented here may lead to learning the dynamics as dictated by the Verlet integrator. Thus, the learned dynamics may not represent the ``true'' dynamics of the system, but one that is optimal for the Verlet integrator.} \vspace{-0.10in} \subsection{Decoupling the dynamics and constraints (\textsc{CGnode}\xspace)} \vspace{-0.10in} The proposed \textsc{Gnode}\xspace{} architecture learns the dynamics directly without decoupling the constraints and other components governing the dynamics from the equation $\ddot q = \hat{F}(q,\dot q, t)$. 
Next, we analyze the effect of providing the constraints explicitly on the learning process and enhance the performance of \textsc{Gnode}\xspace{}. To this extent, while keeping the graph architecture exactly the same, we use Eq.~\ref{eq:gnode_const} to compute the acceleration of the system. In this case, the output of \textsc{Gnode}\xspace{} is $N_i = \texttt{squareplus}(\texttt{MLP}_{\mathcal{V}\xspace}(\mathbf{z}\xspace_i))$ instead of $\ddot{q}_i$, \rev{where $N_i$ is the conservative force on particle $i$ (refer Eq.~\ref{eq:node_decoupled})}. Then, the acceleration is predicted using Eq.~\ref{eq:gnode_const} with explicit constraints. This approach, termed as \textit{constrained} \textsc{Gnode}\xspace{} (\textsc{CGnode}\xspace) enables us to isolate the effect of imposing constraints explicitly on the learning, and the performance of \textsc{Gnode}\xspace{}. It is worth noting that for particle systems in Cartesian coordinates, the mass matrix is a diagonal one, with the diagonal entries, $m_{ii}$s, representing the mass of the $i^{th}$ particle . Thus, the $m_{ii}$s are learned as trainable parameters. As a consequence of this design, the graph directly predicts $N_i$ of each particle. \rev{Since we consider a particle based system with no external force and drag, $N_i = \Upsilon_i = 0$}. Note that in the case of systems with drag, the graph would contain cumulative of $N_i$ and $\Upsilon_i$. \looseness=-1 \vspace{-0.10in} \subsection{Decoupling the internal and body forces (\textsc{CDGnode}\xspace)} \vspace{-0.10in} In a physical system, internal forces arise due to interactions between constituent particles. On the contrary, the body forces are due to the interaction of each particle with an external field such as gravity or electromagnetic field. While internal forces are closely related to the topology of the system, the effect of external fields on each particle is independent of the topology of the structure. In addition, while the internal forces are a function of the relative displacement between the particles, the external forces due to fields depend on the actual position of the particle. To decouple the internal and body forces, we modify the graph architecture of the \textsc{Gnode}\xspace{}. Specifically, the node inputs are divided into local and global features. Only the local features are involved in the message passing. The global features are passed through an MLP to obtain an embedding, which is concatenated with the node embedding obtained after message passing. This updated embedding is passed through an MLP to obtain the particle level force, $N_i$. Here, the local node features are the particle types and the global features are position and velocity. Note that in the case of systems with drag, the $\Upsilon_i$ can be learned using a separate node MLP using particle type and velocity as global input features. This architecture is termed as \textit{constrained and} \textit{decoupled} \textsc{Gnode}\xspace{} (\textsc{CDGnode}\xspace). The architecture of \textsc{CDGnode}\xspace~is provided in the Appendix. \vspace{-0.10in} \subsection{Newton's third law} \vspace{-0.10in} According to Newton's third law, \textit{``every action has an equal and opposite reaction''}. In particular, the internal forces on each particle exerted by the neighboring particles should be equal and opposite so that the total system is in equilibrium. That is, $f_{ij}=-f_{ji}$, where $f_{ij}$ represents the force on particle $i$ due to its neighbor $j$. 
Note that this is a stronger requirement than the EL equation itself, i.e., $\frac{d}{dt} \left(\nabla_{\dot{q_i}}L\right)=\nabla_{q_i} L$. The EL equation only enforces that the rate of change of momentum of each particle is equal to the total force on the particle, $N_i = \sum_{j=1}^{n_i}f_{ij}$, where $n_i$ is the number of neighbors of $i$ --- it does not enforce any restrictions on the individual component forces $f_{ij}$ from the neighbors that add up to this force. Thus, although the total force on each particle may be learned by the \textsc{Gnode}\xspace, the individual components may not be learned correctly. We show that \textsc{Gnode}\xspace
effect can be achieved with the pages option, if an empty page is inserted in front of the first page. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: openright=false)pagecommand Declares LATEX commands, which are executed on each sheet of paper. (Default: pagecommand={\thispagestyle{empty}})turn By default pages in landscape format are displayed in landscape orien- tation (if the PDF viewer supports this). With turn=false this can be prohibited. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: turn=true)noautoscale By default pages are scaled automatically. This can be sup- pressed with the noautoscale option. In combination with the scale option (from graphicx) the user has full control over the scaling process. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: noautoscale=false)fitpaper Adjusts the paper size to the one of the inserted document. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: fitpaper=false)reflect Reflects included pages. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: reflect=false)signature Creates booklets by rearranging pages into signatures and setting nup=1x2 or nup=2x1, respectively. This option takes one argument spec- ifying the size of the signature, which should be a multiple of 4.An example for documents in portrait orientation:\includepdf[pages=-, signature=8, landscape]{portrait-doc.pdf}An example for documents in landscape orientation:\includepdf[pages=-, signature=8]{landscape-doc.pdf}signature* Similar to signature, but now for right-edge binding.booklet This option is just a shortcut of the ‘signature’ option, if you choose a signature value so large that all pages fit into one signature. Either 4 ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: booklet=false)picturecommand Declares picture commands which are executed on every page within a picture environment with the base point at the lower left corner of the page. (The base point does not change if the page is rotated, e.g. by the landscape option.) (Default: picturecommand={})picturecommand* Like picturecommand, but with the restriction that picturecommand* executes its picture commands only on the very first page. (Default: picturecommand*={})pagetemplate By default the first inserted page will be used as a template. This means that all further pages are scaled such that they match within the contour of this first page. This option allows to declare another page to be used as a template; which is only useful if a PDF document contains different page sizes or page orientations. The argument should be a page number. (Default: pagetemplate= first inserted page )templatesize This option is similar to the pagetemplate option, but its arguments specify the size of the template directly. Its syntax is: templatesize={ width }{ height } Note: The two lengths should be a bit larger than desired, to keep away from rounding errors. (Default: templatesize= size of the first inserted page )rotateoversize This option allows to rotate oversized pages. E.g. pages in landscape orientation are oversized relatively to their portrait counter- part, because they do not match within the contour of a portrait page without rotating them. By default oversized pages are scale and are not rotated. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: rotateoversize=false)doublepages Inserts every page twice. 
This is useful for 2-up printing, if one wants to cut the stack of paper afterwards to get two copies. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: doublepages=false)doublepagestwist Whereas with doublepages the cutting edge is once on the inner side and ones on the outer side, doublepagestwist turns the pages such, that the cutting edge is always on the inner side. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: doublepagestwist=false)doublepagestwistodd Turns the pages such, that the cutting edge is always on the outer side. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: doublepagestwistodd=false)doublepagestwist* Like doublepagestwist but for double side printing. Ei- ther ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: doublepagestwist*=false)doublepagestwistodd* Like doublepagestwistodd but for double side printing Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: doublepagestwistodd*=false)duplicatepages Duplicates each page n times, with n being the argument to this option. (Default: duplicatepages=2) 5 • Miscellaneous options: lastpage In DVI mode pdfpages cannot determine the number of pages of the included document. So this option is suitable to specify the number of pages. This option is only used in DVI mode and has no meaning in any other mode. The argument should be a page number. (Default: lastpage=1)• Hypertext options: link Inserted pages become a target of a hyperlink. The name of the link is ‘ filename . page number ’. The filename extension of filename must not be stripped. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: link=false) linkname Changes the default linkname created by the option link. Instead of filename the value of this option is used. E.g. linkname=mylink produces the linknames ‘mylink. page number ’. thread Combines inserted pages to an article thread. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: thread=false) threadname Several threads are distinguished by their threadnames. By default the threadname is equal to the filename (plus filename exten- sion), but it can be changed with this option. This is useful if the same file is inserted twice or more times and should not be combined to one single thread. Or the other way round if pages from differ- ent documents should be combined to one single thread. (Default: threadname= filename.ext ) linktodoc Lets the inserted pages be hyperlinks to the document from which they were extracted. Note that the PDF-Viewer will not find the file, if filename has not filename extension (.pdf). Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: linktodoc=false)• Additional hypertext options: linkfit Specifies, how the viewer displays a linked page. This option changes the default behavior of the option link. Possible values are: Fit, FitH top , FitV left , FitB, FitBH top , FitBV left , and Region. See [2] for a details description of these PDF destinations. The region destination was added by pdfpages and is not a real PDF destinations. It scales a page such that the included page fits exactly into the window of the PDF viewer. Note that not all of these options are supported by all TEX-engines or drivers, respectively. (Default: linkfit=fit) linktodocfit By default the option linktodoc opens the page in ‘Fit in Win- dow’ view. 
Another view can be specified with this option. Possible values are the legal PDF tokens: /FitH top , /FitV left , etc. (See [2] for more details.) (Default: linktodocfit=/Fit) newwindow By default option linktodoc opens a new window. This can be changed with option newwindow. Either ‘true’ or ‘false’ (or no value, which is equivalent to ‘true’). (Default: newwindow=true) 6 linkfilename Sets the name (with path) of the file to be linked to by the option linktodoc. You will hardly ever need this option. (Default: linkfilename= filename.ext )• Experimental options: (Syntax may change in future versions!) addtotoc Adds an entry to the table of contents. This option requires five arguments, separated by commas: addtotoc={ page number , section , level , heading , label } page number : Page number of the inserted page. section : LATEX sectioning name – e.g., section, subsection, . . . level : Number, denoting depth of section – e.g., 1 for section level, 2 for subsection level, . . . heading : Title inserted in the table of contents. label : Name of the label. This label can be referred to with \ref and \pageref. Note: The order of the five arguments must not be mixed. Otherwise you will get very strange error messages.
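An example that inserts a complete document and adds a single entry for its first page to the table of contents (the file name is only a placeholder): \includepdf[pages=-, addtotoc={1,section,1,Inserted document,sec:inserted}]{document.pdf}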
of $\mathbf{J} \cdot \mathbf{Y}$. } \label{alg:L2sample} \begin{algorithmic}[1] \STATE Sample a block $i \in \mathcal{B}$ with probability \[\frac{\left\langle \text{root}(\tau_i(\mathbf{T}_1)) , \text{root}(\tau_i(\mathbf{T}_2))\right\rangle}{\sum_{j} \left\langle \text{root}(\tau_j(\mathbf{T}_1)) , \text{root}(\tau_j(\mathbf{T}_2))\right\rangle}\] \STATE Sample $ l_1 \in[ s_{(i),1}]$ with probability \[p_{l_1} = \frac{\left\langle v^{(i),1}_{l_1} , \text{root}(\tau_i(\mathbf{T}_2))\right\rangle}{\sum_{l} \left\langle v^{(i),1}_{l} , \text{root}(\tau_j(\mathbf{T}_2))\right\rangle}\] \label{line:l1} \STATE Sample $ l_2 \in[ s_{(i),2}]$ with probability \[p_{l_2 \; | \; l_1} = \frac{\left\langle v^{(i),1}_{l_1} , v^{(i),2}_{l_2} \right\rangle}{\sum_{l} \left\langle v^{(i),1}_{l_1} , v^{(i),2}_{l} \right\rangle}\] \label{line:l2} \STATE Return the row corresponding to $(l_1,l_2)$ in block $i$. \end{algorithmic} \end{algorithm} We start our proof by showing Algorithm \ref{alg:L2sample} can sample one row with probability according to the $\ell_2$ norm quickly after running the pre-processing Algorithm \ref{alg:L2presample}. \begin{lemma}\label{lem:sampfast} Let $\mathbf{J} = \mathbf{T}_1 \Join \mathbf{T}_2 \in \mathbb{R}^{N \times d}$ be any arbitrary join on two tables, with $\mathbf{T}_i \in \mathbb{R}^{n_i \times d_i}$, and fix any $\mathbf{Y} \in \mathbb{R}^{d \times t}$. Then Algorithms \ref{alg:L2presample} and \ref{alg:L2sample}, after an $O(( \mathtt{nnz}(\mathbf{T}_1) + \mathtt{nnz}(\mathbf{T}_2)) (t + \log N))$-time pre-processing step (Algorithm \ref{alg:L2presample}), can produce samples $i^* \sim \mathcal{D}_{\mathbf{Y}}$ (Algorithm \ref{alg:L2sample}) from the distribution $\mathcal{D}_{\mathbf{Y}}$ over $[N]$ given by \[ \text{Pr}_{i^* \sim \mathcal{D}_{\mathbf{Y}}} \left[ i^* = j \right] = \frac{\|(\mathbf{J} \cdot \mathbf{Y})_{j,*}\|_2^2}{ \|\mathbf{J} \cdot \mathbf{Y}\|_F^2 } \] such that each sample is produced in $O(\log N)$ time. \end{lemma} \begin{proof} We begin by arguing the correctness of Algorithms \ref{alg:L2presample} and \ref{alg:L2sample}. Let $j^*$ be any row of $\mathbf{J} \cdot \mathbf{Y}$. Note that the row $j^*$ corresponds to a unique block $i \in \mathcal{B} = \mathcal{B}(\mathbf{J})$, and two rows $l_1 \in [s_{(i),1}],l_2 \in [s_{(i),2}]$, such that $\mathbf{J}_{j^*,*} = (\hat{T}_1)_{l_1',*} + (\hat{T}_2)_{l_2',*}$, where $l_1' \in [n_1], l_2' \in [n_2]$ are the indices which correspond to $l_1,l_2$. For $l \in [s_{(i),j}]$, let $v_l^j, v^j_{l,q}$ be defined as in Algorithm \ref{alg:L2presample}. We first observe that if $j^*$ corresponds to the block $i \in \mathcal{B}$, then $\langle v^{(i),1}_{l_1} , v^{(i),2}_{l_2} \rangle = \|(\mathbf{J} \mathbf{Y})_{j^*,*}\|_2^2$, since \begin{equation} \begin{split} & \langle v^{(i),1}_{l_1} , v^{(i),2}_{l_2} \rangle\\ &= \sum_{q=1}^r v^{(i),1}_{l_1,q} \cdot v^{(i),2}_{l_2,q}\\ &= \sum_{q=1}^r (a_{(i)})_{l_1,q}^2 + 2 (a_{(i)})_{l_1,q} (b_{(i)})_{l_2,q}+ (b_{(i)})_{l_2,q}^2 \\ & =\sum_{q=1}^r \left( (a_{(i)})_{l_1,q}+ (b_{(i)})_{l_2,q}\right)^2 \\ & = \|(\mathbf{J} \mathbf{Y})_{j^*,*}\|_2^2 \end{split} \end{equation} where for each block $i \in \mathcal{B}$, we compute $a_{(i)} = \hat{T}_1^{(i)} \mathbf{Y}$ and $b_{(i)} = \hat{T}_2^{(i)}$ as defined in Algorithm \ref{alg:L2presample}. 
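The computation above rests on the coordinate-wise identity $a^2 + 2ab + b^2 = (a+b)^2$. A small self-contained check (Python/NumPy) of one possible encoding that realizes this identity as an inner product is given below; the actual vectors $v^{(i),j}_l$ are those constructed in Algorithm~\ref{alg:L2presample}, so the concrete layout used here is only an assumption for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r = 7                                   # dimension over which the sum runs
a = rng.standard_normal(r)              # row of T1-hat^{(i)} times Y
b = rng.standard_normal(r)              # row of T2-hat^{(i)} times Y

# per coordinate q: (a_q^2, a_q, 1) . (1, 2*b_q, b_q^2) = (a_q + b_q)^2
v1 = np.stack([a**2, a, np.ones(r)], axis=1).ravel()
v2 = np.stack([np.ones(r), 2*b, b**2], axis=1).ravel()

assert np.isclose(v1 @ v2, np.sum((a + b)**2))
\end{verbatim}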
Thus it suffices to sample a row $j^*$, indexed by the tuple $(i,l_1,l_2)$ where $i \in \mathcal{B}, l_1 \in [s_{(i),1}],l_2 \in [s_{(i),2}]$, such that the probability we sample $j^*$ is given by $p_{j^*} = \langle v^{(i),1}_{l_1} , v^{(i),2}_{l_2} \rangle/( \sum_{i',l_1',l_2'}\langle v^{(i'),1}_{l_1'} , v^{(i'),2}_{l_2'} \rangle )$. We argue that Algorithm \ref{alg:L2sample} does precisely this. First note that for any $i \in \mathcal{B}$, we have \[\langle v^{(i),1}_{l_1} , \text{root}(\tau_i(\mathbf{T}_2))\rangle =\sum_{l_1\in [s_{(i),1}] ,l_2 \in [s_{(i),2}]}\langle v^{(i),1}_{l_1} , v^{(i),2}_{l_2} \rangle\] Thus we first partition the set of all rows $j^*$ by sampling a block $i$ with probability $\frac{\left\langle \text{root}(\tau_i(\mathbf{T}_1)) , \text{root}(\tau_i(\mathbf{T}_2))\right\rangle}{\sum_{j} \left\langle \text{root}(\tau_j(\mathbf{T}_1)) , \text{root}(\tau_j(\mathbf{T}_2))\right\rangle}$, which is exactly the distribution over blocks induced by the $\ell_2$ mass of the blocks. Conditioned on sampling $i \in \mathcal{B}$, it suffices now to sample $l_1,l_2$ from that block. To do this, we first sample $l_1$ with probability $\langle v^{(i),1}_{l_1} , \text{root}(\tau_i(\mathbf{T}_2))\rangle/ \allowbreak (\sum_{l} \langle v^{(i),1}_{l} , \text{root}(\tau_j(\mathbf{T}_2)) \allowbreak\rangle)$, which is precisely the distribution over indices $l_1 \in [s_{(i),1}]$ induced by the contribution of $l_1$ to the total $\ell_2$ mass of block $i$. Similarly, once conditioned on $l_1$, we sample $l_2$ with probability $\langle v^{(i),1}_{l_1} , v^{(i),2}_{l_2} \rangle/(\sum_{l} \langle v^{(i),1}_{l_1} , v^{(i),2}_{l} \rangle)$, which is the distribution over indices $l_2\in [s_{(i),2}]$ induced by the contribution of the row $(l_1,l_2)$, taken over all $l_2$ with $l_1$ fixed. Taken together, the resulting sample $j^* \cong (i,l_1,l_2)$ is drawn from precisely the desired distribution. Finally, we bound the runtime of this procedure. First note that computing $a_{(i)} = \hat{T}_1^{(i)} \mathbf{Y}$ and $b_{(i)} = \hat{T}_2^{(i)}$ for all blocks $i$ can be done in $O(t (\mathtt{nnz}(\mathbf{T}_1) + \mathtt{nnz}(\mathbf{T}_2)))$ time, since each row of the tables $\mathbf{T}_1,\mathbf{T}_2$ is in exactly one of the blocks, and each row is multiplied by exactly $t$ columns of $\mathbf{Y}$. Once the $a_{(i)},b_{(i)}$ are computed, each tree $\tau_{(i)}(\mathbf{T}_j)$ can be computed bottom up in time $O(\log N)$, giving a total time of $O( \log N(\mathtt{nnz}(\mathbf{T}_1) + \mathtt{nnz}(\mathbf{T}_2)))$ for all trees. Given this, the values $\left\langle \text{root}(\tau_i(\mathbf{T}_1)) , \text{root}(\tau_i(\mathbf{T}_2))\right\rangle$ can be computed in less than the above runtime. Thus the total pre-processing time is bounded by $O((t+ \log N) (\mathtt{nnz}(\mathbf{T}_1) + \mathtt{nnz}(\mathbf{T}_2)))$ as needed. For the sampling time, it then suffices to show that we can carry out Lines 2 and 3 in $O(\log N)$ time. But these samples can be samples from the root down, by first computing $\langle \text{root}_{\text{lchild}}(\tau_i(\mathbf{T}_1)) , \allowbreak \text{root}(\tau_i(\mathbf{T}_2))\rangle$ and $\langle \text{root}_{\text{rchild}}(\tau_i(\mathbf{T}_1)) , \allowbreak \text{root}(\tau_i(\mathbf{T}_2))\rangle$, sampling one of the left or right children with probability proportional to its size, and recursing into that subtree. 
Similarly, $l_2$ can be sampled by first computing $\langle v^{(i),1}_{l_1} , \text{root}_{\text{lchild}}(\tau_i(\mathbf{T}_2)) \rangle$ and $\langle v^{(i),1}_{l_1} , \allowbreak \text{root}_{\text{rchild}}(\tau_i(\mathbf{T}_2)) \rangle$ sampling one of the left or right children with probability proportional to its size, and recursing into that subtree. This completes the proof of the $O( \log N)$ runtime for sampling after pre-processing has been completed. \end{proof} Then we show how we can construct $\mathbf{S}$ by invoking Algorithm \ref{alg:L2presample} and \ref{alg:L2sample}. \begin{lemma}\label{lem:makeS} Let $\mathbf{J}_{\text{small}} \in \mathbb{R}^{n_{\text{small}} \times d}$ be the matrix \allowbreak constructed as in Algorithm \ref{alg:1} in the dense case. Then the diagonal sampling matrix $\mathbf{S}$, as defined within lines 4 through 8 of Algorithm \ref{alg:1}, can be constructed in time $\tilde{O}(\mathtt{nnz}(\mathbf{T}_1) + \mathtt{nnz}(\mathbf{T}_2) + d^2\gamma^2 / \epsilon^2)$. \end{lemma} \begin{proof} We first show how we can quickly construct the matrix $\tilde{\mathbf{J}}_{\text{small}}$, which consists of $m= \Theta((n_1 + n_2)/\gamma)$ uniform samples from the rows of $\mathbf{J}_{\text{small}}$. First, to sample the rows uniformly, since we already know the size of each block $s_{(i)}$, we can first sample a block $i \in \mathcal{B}_{\text{small}}$ with probability proportional to its size, which can be done in $O(\log(|\mathcal{B}_{\text{small}}|) )= O(\log N)$ time after the $s_{(i)}$'s are computed. Next, we can sample a row uniformly from $T_{i}^j$ for each $j \in [2]$, and output the join of the two chosen rows, the result of which is a truly uniform row from $\mathbf{J}_{\text{small}}$. Since we need $m$ samples, and each sample has $d$ columns, the overall runtime is $\tilde{O}((n_1 + n_2)d/\gamma)$ to construct $\tilde{\mathbf{J}}_{\text{small}}$, which is $O(\mathtt{nnz}(\mathbf{T}_1) + \mathtt{nnz}(\mathbf{T}_2) )$ in the sparse case. Once we have $\tilde{\mathbf{J}}_{\text{small}}$, we compute in line \ref{line:4} of Algorithm \ref{alg:1} the sketch $\mathbf{W} \cdot \tilde{\mathbf{J}}_{\text{small}}$, where $\mathbf{W} \in \mathbb{R}^{t \times d}$ is the OSNAP Transformation of Lemma \ref{lem:OSNAP} with $\epsilon = 1/100$, where $t = \tilde{O}(d)$, which we can compute in $\tilde{O}(md) = \tilde{O}((n_1 + n_2)d/\gamma)$ time by Lemma \ref{lem:OSNAP}. Given this sketch $\mathbf{W} \cdot \tilde{\mathbf{J}}_{\text{small}} \in \mathbb{R}^{t \times d}$, the SVD of the sketch can be computed in time $O(d^\omega)$ \cite{demmel2007fast}, where $\omega < 2.373$ is the exponent of fast matrix multiplication. Since $\mathbf{W} \tilde{\mathbf{J}}_{\text{small}}$ is a $1/100$ subspace embedding for $\tilde{\mathbf{J}}$ with probability $99/100$ by Lemma \ref{lem:OSNAP}, by Proposition \ref{prop:tausubspace} we have $\tau^{\mathbf{W} \tilde{\mathbf{J}}_{\text{small}}}(\mathbf{A}) = (1 \pm 1/100)\tau^{\tilde{\mathbf{J}}_{\text{small}}}(\mathbf{A})$ for any matrix $\mathbf{A}$. Next, we can compute $\mathbf{V} \Sigma\mathbf{G}$ in the same $O(d^\omega)$ runtime, where $\mathbf{G} \in \mathbb{R}^{d \times t }$ is a Gaussian matrix with $t= \Theta( \log N)$ and with entries drawn independently from $\mathcal{N}(0,1/t^2)$. 
By standard arguments for Johnson Lindenstrauss random projections (see, e.g., Lemma 4.5 of \cite{li2013iterative}), we have that $\|( x^T \mathbf{G})\|_2^2 =(1 \pm 1/100) \|x\|_2^2$ for any fixed vector $x \in \mathbb{R}^d$ with probability at least $1-n^{-c}$ for any constant $c \geq 1$ (depending on $t$). We now claim that $\tilde{\tau_i}$ as defined in Algorithm \ref{alg:1} satisfies $C^{-1} \allowbreak\tau_i^{\tilde{\mathbf{J}}_{\text{small}}} (\mathbf{J}_{\text{small}}) \leq \tilde{\tau_i} \leq C\tau_i^{\tilde{\mathbf{J}}_{\text{small}}} (\mathbf{J}_{\text{small}}) $ for some fixed constant $C \geq 1$. As noted above, $\tau_i^{\tilde{\mathbf{J}}_{\text{small}}} (\mathbf{J}_{\text{small}}) = (1 \pm 1/100)\tau^{\mathbf{W} \tilde{\mathbf{J}}_{\text{small}}}(\mathbf{A})$, so it suffices to show $C^{-1} \tau_i^{\mathbf{W} \tilde{\mathbf{J}}_{\text{small}}} (\mathbf{J}_{\text{small}}) \allowbreak \leq \tilde{\tau_i} \allowbreak \leq C\tau_i^{\mathbf{W}\tilde{\mathbf{J}}_{\text{small}}} (\mathbf{J}_{\text{small}})$. To see this,
we obtain quasi-coverings of \ensuremath{\mathbf G}\xspace. See Fig.~\ref{fig:troncatureuniv}. The Reidemeister Theorem (Th.~\ref{reid}) is another tool to easily build quasi-covering of arbitrary radius. \end{remark} \section{Introduction} This paper presents results concerning two fundamental problems in the area of distributed computing: the termination detection problem and the election problem. The proofs are done in the model of local computations and use mainly common results and tools. Namely, they use Mazurkiewicz' algorithm \cite{MazurEnum}, the Szymanski-Shi-Prywes algorithm \cite{SSP}, coverings and quasi-coverings of graphs. \noindent \subsection{The Model} We consider networks of processors with arbitrary topology. A network is represented as a connected, undirected graph where vertices denote processors and edges denote direct communication links. Labels are attached to vertices and edges. The identities of the vertices, a distinguished vertex, the number of processors, the diameter of the graph or the topology are examples of labels attached to vertices; weights, marks for encoding a spanning tree or the sense of direction are examples of labels attached to edges. The basic computation step is to modify labels \emph{locally}, that is, on a subgraph of fixed radius $1$ of the given graph, according to certain rules depending on the subgraph only ({\em local computations}). The relabelling is performed until no more transformation is possible, {\it i.e.}, until a normal form is obtained. This is a model first proposed by A. Mazurkiewicz \cite{Mazur}. This model has numerous interests. As any rigorously defined model, it gives an abstract tool to think about some problems in the field of distributed computing independently of the wide variety of models used to represent distributed systems \cite{lamport}. As classical models in programming, it enables to build and to prove complex systems, and so, to get them right. And quoting D. Angluin in \cite{Angluin}, this kind of model makes it possible to put forward phenomena common to other models. It is true that this model is strictly stronger than other standard models (like message passing systems), but then, impossibility results remains true in weaker models. Furthermore, any positive solution in this model may guide the research of a solution in a weaker model or be implemented in a weaker model using randomised algorithms. Finally, this model gives nice properties and examples using classical combinatorial material, hence we believe this model has a very light overhead in order to understand and to explain distributed problems. We acknowledge, and underline, that the results presented here might be quantitatively different from other models, but we claim that they are not significantly different: they are qualitatively similar, as are all the impossibility results proved in different models since the seminal work of Angluin. All of them use the same ``lifting technique'', even though not on exactly the same kind of graph morphism \cite{Angluin,MazurEnum,YKsolvable,BVselfstab}. Thus it seems possible to extend the general results of this paper to more standard models like the ``message passing model''. Moreover, this direction has already given some results \cite{CGMT,CMelection,CGMlocalterm}. Note also that all the questions addressed in this paper are not specific of the model of local computations. 
E.g., is there a unique (universal) algorithm that can solve the election problem on the family {\ensuremath{\mathcal G_{\mathrm{min}}}}\xspace of networks that admit an election algorithm? Though this very set {\ensuremath{\mathcal G_{\mathrm{min}}}}\xspace can be different depending on the model of computations that is used, we claim that the generic answer is no and that our main impossibility result can be extended to any other model. The reader should note that this question has not previously been thoroughly answered in any model (see the discussion about the election problem in Section~\ref{BVelection}).

\subsection{Related Works}
Among the models related to ours are the local computation systems defined by Rosenstiehl et al.~\cite{Rosen}, Angluin \cite{Angluin}, Yamashita and Kameda \cite{Yaka1}, Boldi and Vigna \cite{BV,BV0} and Naor and Stockmeyer \cite{Stockmeyer}. In \cite{Rosen} a synchronous model is considered, where vertices represent (identical) deterministic finite automata. The basic computation step is to compute the next state of each processor according to its state and the states of its neighbours. In \cite{Angluin} an asynchronous model is considered. A basic computation step means that two adjacent vertices exchange their labels and then compute new ones. In \cite{Yaka1} an asynchronous model is studied where a basic computation step means that a processor either changes its state and sends a message or receives a message. In \cite{BV,BV0} networks are directed graphs coloured on their arcs; each processor changes its state depending on its previous state and on the states of its in-neighbours. Activation of processors may be synchronous, asynchronous or interleaved. In \cite{Stockmeyer} the aim is a study of distributed computations that can be done in a network within a time independent of the size of the network.

\noindent
\subsection{The Termination Detection Problem}
Starting with the works by Angluin \cite{Angluin} and Itai and Rodeh \cite{IR}, many papers have discussed the question: what functions can be computed by distributed algorithms in networks where knowledge about the network topology is limited? Two important factors limiting the computational power of distributed systems are {\em symmetry} and {\em explicit termination}. Some functions can be computed by an algorithm that terminates {\em implicitly} but not by an {\em explicitly} terminating algorithm. In an implicitly terminating algorithm, each execution is finite and in the last state of the execution each node has the correct result. However, the nodes are not aware that their state is the last one in the execution; with an explicitly terminating algorithm, nodes know when the algorithm has terminated locally or globally.

\subsubsection{Known Results about the Termination Detection Problem.}
Impossibility proofs for distributed computations quite often use the {\em replay} technique. Starting from a (supposedly correct) execution of an algorithm, an execution is constructed in which the same steps are taken by nodes in a different network. The mechanics of distributed execution dictate that this can happen if the nodes are {\em locally} in the same situation, and this is precisely what is expressed by the existence of coverings. The impossibility result implies that such awareness can never be obtained in a finite computation.
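As a small aside, the covering relation invoked here can be made concrete on a toy example (our own illustration, not material from the paper): the $6$-cycle covers the $3$-cycle via $i \mapsto i \bmod 3$, so every node of the larger network is locally in the same situation as its image in the smaller one. A minimal sketch of this check:

```python
# Illustrative sketch only (not from the paper): the 6-cycle covers the 3-cycle
# via the projection i -> i mod 3.  The projection restricts to a bijection
# between the neighbourhood of each vertex and the neighbourhood of its image,
# which is what makes nodes of the two networks locally indistinguishable.
def neighbours(cycle_len, v):
    return {(v - 1) % cycle_len, (v + 1) % cycle_len}

def is_covering(big_len, small_len, proj):
    for v in range(big_len):
        image_nbrs = {proj(w) for w in neighbours(big_len, v)}
        # Image of the neighbourhood must equal the neighbourhood of the image,
        # and have the same size (so the restriction is a bijection).
        if image_nbrs != neighbours(small_len, proj(v)):
            return False
        if len(neighbours(big_len, v)) != len(neighbours(small_len, proj(v))):
            return False
    return True

print(is_covering(6, 3, lambda v: v % 3))   # True: C6 covers C3
```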
During the nineteen eighties there were many proposals for {\em termination detection} algorithms: such algorithms transform implicitly terminating algorithms into explicitly terminating ones. Several conditions were found to allow such algorithms (thus closing the gap between implicitly and explicitly computable functions) and for each of these conditions a specific algorithm was given (see \cite{Mat87,Lynch,Tel}). These conditions include:
\begin{enumerate}
\item a unique {\em leader} exists in the network,
\item the network is known to be a tree,
\item the diameter of the network is known,
\item the nodes have different identification numbers.
\end{enumerate}

\subsubsection{The Main Result.}
In this paper we show that these four conditions are just special cases of one common criterion, namely that the local knowledge of nodes prohibits the existence of quasi-coverings of unbounded radius. We also prove, by generalising the existing impossibility proofs to the limit, that in families with quasi-coverings of unbounded radius, termination detection is impossible. Informally, we prove (see Theorem \ref{caracOTD}):\par
{\it A distributed task $T=(\gfam,S)$ is locally computable with explicit termination detection if and only if
\begin{theoenum}
\item $S$ is covering-lifting closed on \gfam,
\item there exists a recursive function $r$ such that for any $\ensuremath{\mathbf H}\xspace$, there is no strict quasi-covering of \ensuremath{\mathbf H}\xspace of radius $r(\ensuremath{\mathbf H}\xspace)$ in \gfam.
\end{theoenum}
}
Actually, we investigate different termination detection schemes: local termination detection, observed termination detection and global termination detection. This is explained later in this introduction. This is the first time, to our knowledge, that computability of a distributed task (which is known to relate to ``local symmetries'') is fully distinguished from the problem of detecting a kind of termination of a distributed computation.

\subsubsection{Structural Knowledge and Labelled Graphs}
The definitions of coverings and quasi-coverings are extended to include node and link labellings as well. In the extension it is required that a node is mapped to a node with the same label, and links are mapped to links with the same label. Our approach then naturally abstracts away the difference between anonymous or non-anonymous, centred or uniform networks. Indeed, the network being centred is modelled by considering as local knowledge that the graph family is the collection of graphs that contain exactly \emph{one} node with the label {\em leader}. Specific assumptions (leader, identities, sense of direction, knowledge of size) now are
\section{Introduction}
Quasiseparable matrices arise frequently in various problems of numerical analysis and are becoming increasingly important in computer algebra, e.g. through their application to handling linearizations of polynomial matrices~\cite{BEG17}. Structured representations for these matrices and their generalisations have been widely studied but, to our knowledge, they have not been compared with each other in detail. In this paper we aim to adapt \texttt{SSS}~\cite{Eidelman1999OnAN} and \texttt{HSS} \cite{changu03, lyons2005fast}, two of the most prominent formats in numerical analysis, to exact computations, and to compare them theoretically and experimentally to the \Bruhat format \cite{PS18}. These formats all have linear storage size in both the dimension and the structure parameter. We do not investigate the Givens weight representation~\cite{DeBa08} as it strongly relies on orthogonal transformations in \CC, whose transcription to the algebraic setting is more challenging. See \cite{Vandebril2005ABO, VVBM08, Hackbusch2015HierarchicalMA} for an extensive bibliography on computing with quasiseparable matrices.
\begin{definition}
An \(n \times n\) matrix \(A\) is \(s\)-quasiseparable if for all \(k \in \intset{1,n}\), \(\rank(\submat{A}{1..k}{k+1..n})\leq s\) and \(\rank(\submat{A}{k+1..n}{1..k})\leq s\).
\end{definition}
\smallskip
\noindent {\em Complexity bound notation.} We consider matrices over an abstract commutative field $\K$, and count arithmetic operations in $\K$. Our detailed comparison of formats aims in particular to determine the asymptotic multiplicative constants, an insightful measure of an algorithm's behaviour in practice. In this regard, we will use the leading term in the complexities as the measure for our comparison: namely a function \(\Time{XXX}(n,s)\) such that the number of field operations for running Algorithm XXX with parameters \(n,s\) is \(\Time{XXX}(n,s) + o(\Time{XXX}(n,s))\) asymptotically in \(n\) and \(s\). We proceed similarly for the space cost bounds with the notation \(\Space{XXX}(n,s)\). We denote by \(\omega\) a feasible exponent for square matrix multiplication, and by \(C_\omega\) the corresponding leading constant; namely, using the above notation, \(\Time{MM}(n)=C_\omega n^\omega\), where \texttt{MM} corresponds to the operation \(C = C + AB\) with \(A,B,C\in\K^{n \times n}\). The straightforward generalization gives \(\Time{MM}(m,k,n)=C_\omega mnk \min(m,k,n)^{\omega -3}\) for the product of an \(m\times k\) matrix by a \(k\times n\) matrix.
\subsection{Rank revealing factorizations}
Space efficient representations for quasiseparable matrices rely on rank revealing factorizations: a rank \(r\) matrix \(A\in\K^{m\times n}\) is represented by two matrices \(L\in\K^{m\times r}\) and \(R\in\K^{r\times n}\) such that \(A=LR\). In exact linear algebra, such factorizations are usually computed using Gaussian elimination, e.g. the PLUQ, CUP, PLE and CRE decompositions~\cite{JPS13,DPS17,Sto00}, which we will generically denote by \PLUQ. Cost estimates of the above factorization algorithms are either given as \(\bigO{mnr^{\omega-2}}\) or with explicit leading constants \(\Time{RF}(m,n,r)=K_\omega n^\omega\) under genericity assumptions: \(m=n=r\) and generic rank profile~\cite{JPS13,DPS17}. We refer to~\cite{PSV23} for an analysis, in the non-generic case, of the leading constants in the cost of the two main variants of divide and conquer Gaussian elimination algorithms.
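As a brief aside, the quasiseparability order in the definition above can be checked directly from the ranks of the off-diagonal blocks. The following sketch is only an illustration of the definition (it is not an algorithm from this paper), and it uses floating-point rank with a tolerance in place of exact rank computation over \(\K\):

```python
# Minimal sketch, not from the paper: compute the quasiseparable order of a
# matrix by taking the maximum rank of all off-diagonal blocks, exactly as in
# the definition.  numpy's tolerance-based rank stands in for exact rank over K.
import numpy as np

def quasiseparable_order(A, tol=1e-10):
    n = A.shape[0]
    s = 0
    for k in range(1, n):
        s = max(s,
                np.linalg.matrix_rank(A[:k, k:], tol=tol),   # block A[1..k, k+1..n]
                np.linalg.matrix_rank(A[k:, :k], tol=tol))   # block A[k+1..n, 1..k]
    return s

# A tridiagonal matrix is 1-quasiseparable.
T = np.diag(np.arange(1.0, 6.0)) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(quasiseparable_order(T))  # 1
```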
We may therefore assume that \(\Time{RF}(m,n,r)=C_\texttt{RF}mnr^{\omega-2}\) for a constant $C_\texttt{RF}$, for \(\omega \geq 1+\log_2 3\), which is the case for all practical matrix multiplication algorithms. Note that for \(\omega=3\), these costs are both equal to \(2mnr\). Unfortunately, the non-predictable rank distribution among the blocks being processed leads to an over-estimation of some intermediate costs, which forbids tighter constants (i.e. interpolating the known constant \(K_3=2/3\) in the generic case). The algorithms presented here still apply for smaller values of \(\omega\), but we chose to skip the more complex derivation of estimates on their leading constants for the sake of clarity.

Our algorithms for \SSS and \HSS can use any rank revealing factorization. On the other hand, the \Bruhat format requires a factorization revealing the additional information of the rank profile matrix, e.g. the CRE decompositions used here (see~\cite{DPS17}).
\begin{theorem}[\cite{MaHe07,DPS17}]\label{th:cre}
Any rank \(r\) matrix \(A\in\K^{m\times n}\) has a CRE decomposition \(A=CRE\) where \(C\in \K^{m\times r}\) and \(E\in\K^{r\times n}\) are in column and row echelon form, and \(R\in\K^{r\times r}\) is a permutation matrix.
\end{theorem}
The costs we give in relation to the \Bruhat generator therefore rely on constants \(C_\texttt{RF}\) from factorizations able to produce a CRE decomposition, like the ones in~\cite{PSV23}.
\begin{table*}[ht]
\caption{Summary of operation and storage costs}\label{tab}
\begin{tabular}{l|ccc|ccc}
\toprule
 & \multicolumn{3}{c|}{\(\omega\)} & \multicolumn{3}{c}{\(\omega=3\)}\\
 & \SSS & \HSS & \Bruhat & \SSS & \HSS & \Bruhat \\
\midrule
Storage & \(7ns\) & \(18ns\) & \(4ns\) & \(7ns\) & \(18ns\) & \(4ns\)\\
Gen. from Dense & \(2C_\texttt{RF}n^2s^{\omega-2}\) & \(2^{\omega}C_\texttt{RF}n^2s^{\omega-2}\) & \(C_\texttt{RF}n^2s^{\omega-2}\) & \(4n^2s\) & \(16n^2s\) & \(2n^2s\) \\
\(\times\) Dense block vector \((n\times v)\) & \(7C_\omega nsv^{\omega-2}\) & \(18C_\omega nsv^{\omega-2}\) & \(8C_\omega nsv^{\omega-2}\) & \(14 nsv\) & \(36 nsv\) & \(16 nsv\) \\
Addition & \((10+2^\omega)C_\omega ns^{\omega-1}\) & & \(\paren{\frac{9 \cdot 2^{\omega - 2} - 8}{2^{\omega - 2} - 1}\cw + 2\crf} n s^{\omega - 1} \log n/s\) & \(36ns^2\) & & \(24ns^2 \log n/s\)\\
Product & \((31+2^\omega)C_\omega ns^{\omega-1}\) & & & \(78 n s^2\) \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Contributions}
In \cref{sec:gen} we define the \SSS, \HSS and \Bruhat formats. We then adapt algorithms operating with \HSS and \SSS generators from the literature to the exact context. The \HSS generation algorithm is given in a new iterative version and the \SSS product algorithm has an improved cost. For \SSS we focus on the basic bricks on which other operations can be built. This opens the door to the adaptation of fast algorithms for inversion and system solving \cite{CDGPV03, EG05bsss, CDGPSVW05} and of format modeling operations such as merging, splitting and model reduction \cite{CDGPV03}. In \cref{sec:dtb} we give a generic \Bruhat generation algorithm from which we derive new fast algorithms for the generation from a sparse matrix and from a sum of matrices in \Bruhat form.

\Cref{tab} displays the best cost estimates for different operations on an \(n \times n\) \(s\)-quasiseparable matrix in the three formats presented in the paper. The best and optimal storage size is reached by the \Bruhat format, which also has the fastest generator computation algorithm.
However, this is not reflected in the costs of the subsequent operations: applying a quasiseparable matrix to a dense matrix is least expensive with an \SSS generator, and the addition and product of \(n \times n\) matrices given in \Bruhat form are super-linear in \(n\). We notice in \cref{prop:hcost} that \HSS is twice as expensive as \SSS and gives no advantage in our context. We thus stop the comparison at the generator computation. We still give in \cref{tab} the cost of the quasiseparable \(\times\) dense product, which is proportional to the generator size \cite{lyons2005fast}. We complete this analysis with experiments showing that, despite slightly worse asymptotic cost estimates, \SSS performs better than \Bruhat in practice for the construction in \Cref{sec:ecd} and for the product by a dense block vector in \Cref{sec:eca}.
\section{Presentation of the formats}
\label{sec:gen}
\subsection{\SSS generators}
\label{sec:SSSgen}
Introduced in~\cite{Eidelman1999OnAN}, \SSS generators were later improved independently in \cite{EG05bsss} and \cite{CDGPV03} using block versions, which we present here. In particular, the space was improved from \(\bigO{ns^2}\) to \(\bigO{ns}\). An \(s\)-quasiseparable matrix is sliced following a grid of \(s \times s\) blocks. Blocks on, over and under the diagonal are treated separately. On one side of the diagonal, each block is defined by a product depending on its row (left-most block of the product), its column (right-most block), and its distance to the diagonal (number of blocks in the product).
\begin{definition} \label{def:SSS}
Let \(A = \left[\begin{smallmatrix}A_{1,1} & \cdots & A_{1,N} \\ \vdots && \vdots \\ A_{N,1} & \cdots & A_{N,N} \end{smallmatrix}\right] \in \K^{n \times n}\) with \(t \times t\) blocks \(A_{i,j}\) for \(i,j <N\) and \(N = \ceil{n/t}\). \(A\) is given in sequentially semi-separable format of order \(t\) (\(t\)-\SSS) if it is given by the \(t \times t\) matrices \(\paren{P_i, V_i}_{i\in\intset{2,N}}\), \( \paren{Q_i, U_i}_{i\in\intset{1, N - 1}}, \paren{R_i, W_i}_{i\in\intset{2,N - 1}}, \paren{D_i}_{i\in\intset{1,N} }\) s.t.
\begin{equation} \label{eq:def}
A_{i,j} = \left\{\begin{matrix} P_i R_{i - 1} \dots R_{j + 1} Q_j & \text{if } i > j \\ D_i & \text{if } i = j \\ U_i W_{i + 1} \dots W_{j - 1} V_j & \text{otherwise} \end{matrix}\right.
\end{equation}
\end{definition}
\begin{proposition}
Any \(n \times n\) \(s\)-quasiseparable matrix has an \(s\)-\SSS representation. It uses \(\Space{\SSS}(n,s)=7ns\) field elements.
\end{proposition}
\begin{proof}
Direct consequence of \cref{prop:dts}.
\end{proof}
\subsection{\HSS generators}
\label{sec:HSSSgen}
The \HSS format was first introduced in \cite{CGP06ULV}, although the idea originated with the {\it uniform \(\mathcal H\)-matrices} of \cite{Hackbusch1999ASM} and, in more detail, with the {\it \(\mathcal H^2\)-matrices} of \cite{HKS99H2}, with algorithms relying on \cite{Starr91}. The \(\mathcal H^2\) format is slightly different from \HSS; see \cite{Hackbusch2015HierarchicalMA} for more details. The format is close to \SSS (see \cref{prop:altdef}) as the way of defining blocks is similar. Yet, the slicing grid is built recursively and the definition of the block products depends on the path to
'Conv' in sampledLayer: # Probably, transform conv layer in residual connection pick = np.random.uniform(0, 100) if pick <= prob_residual: components['stride'] = 1 pick = np.random.uniform(0, 100) # Probability of the bottleneck having downsampling if pick <= prob_residual_downsample: # If the image size is already under 3 pixels in any dimension, does not downsample if not (any(i < 3 for i in img_size)): components['stride'] = 2 #print("OIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII") channels = components['out_channels'] * Bottleneck.expansion # Calculate new img_size based on the stride of the bottleneck img_size = [int(Conv_Output(3, elem, components['stride'],1)) for elem in img_size] newModelLayers.append(('Bottleneck',components)) lastLayer = 'ReLU' else: newModelLayers.append((sampledLayer,components)) else: # Add sampled layer to the end of the model # Sampled layer becomes the x in P(x+1|x) newModelLayers.append((sampledLayer,components)) #print(newModelLayers[-1]) # Some generated networks might not end in linear layers (e.g, if squeezenet is the last parent) try: if 'Linear' not in newModelLayers[-1][0]: if 'ReLU' not in newModelLayers[-1][0]: newModelLayers.append(('ReLU',{})) if 'Dropout' not in newModelLayers[-1][0]: modelSelected = modelWeightedRouletteSelection(models, 'Dropout') components = {} if modelSelected is None: components['p'] = 0.5 # Default for pytorch else: components['p'] = componentSelectionFromModelLayer(models[modelSelected]['hiddenStates']['Dropout'], 'p') newModelLayers.append(('Dropout',components)) newModelLayers.append(('Linear', {'out_features': n_classes})) except: return None # Adjust last linear layer for output_size = n_classes if 'Linear' in newModelLayers[-1][0] and newModelLayers[-1][1] != n_classes: #newModelLayers[-1] = ('Linear', {'out_features': n_classes}) # Original newModelLayers[-1] = ('Linear', {'out_features': n_classes}) #print(newModelLayers[-1]) return newModelLayers def train_network(model, utils, args, train_loader, val_loader): # Get Criterion criterion = utils.getCrossEntropyLoss() # Train the model for n epochs (normal training) # Store model in best val acc, best val loss best_model_wts = copy.deepcopy(model.state_dict()) best_model_lowest_loss = []#copy.deepcopy(model.state_dict()) best_acc_val = [0.0, 0] # accuracy, epoch best_loss_val = [20.0, 0] # loss epoch # Store each epoch evolution all_train_acc = [] all_train_loss = [] all_val_acc = [] all_val_loss = [] # Create SGD Optimizer for this model optimizer = utils.getSGDOptimizer(model, learningRate=args.learning_rate, momentum=args.momentum, weight_decay=args.weight_decay) # Implemented SWA #optimizer = SWA(optimizer, swa_start=50, swa_freq=10, swa_lr=learning_rate) # Scheduler scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, args.epochs)#, eta_min=0.001) startTimeFinalTrain = time.time() # Store time that it takes to train final model print("### Start Regular Training ###") for epoch in range(0,args.epochs): print ("-----") # Train model, top1, top5, train_loss = utils.train(model, train_loader, criterion, optimizer, epoch) all_train_acc.append(top1.avg) all_train_loss.append(train_loss.avg) # Learning Rate Decay Scheduler scheduler.step() # Validation val_top1, val_top5, val_loss = utils.evaluate(model, val_loader, criterion, epoch) all_val_acc.append(val_top1.avg) all_val_loss.append(val_loss.avg) if val_top1.avg > best_acc_val[0]: # store best model so far, for later, based on best val acc best_model_wts = 
copy.deepcopy(model.state_dict()) best_acc_val[0], best_acc_val[1] = val_top1.avg, epoch #if val_loss.avg < best_loss_val[0]: # store best model according to loss # best_model_lowest_loss = copy.deepcopy(model.state_dict()) # best_loss_val[0], best_loss_val[1] = val_loss.avg, epoch if args.prunning: model = unstructured_prune(model, threshold=args.prunning_threshold) #print (f'Epoch {epoch:3} -> Train Accuracy: {top1.avg}, Loss: {train_loss}, Val Accuracy: {val_acc}') print('Epoch {epoch:3} : ' 'Acc@1: {top1.avg:.3f}\t\t' 'Acc@5: {top5.avg:.3f}\t\t' 'Val Acc@1: {val_top1.avg:.4f}\t\t' 'Val Acc@5: {val_top5.avg:.4f}\t\t' 'Train Loss: {train_loss.avg:.4f}\t\t' 'Val Loss: {val_loss.avg:.4f}\t\t' .format( epoch=epoch, top1=top1, top5=top5, val_top1=val_top1, val_top5=val_top5, train_loss=train_loss, val_loss=val_loss)) endTimeFinalTrain = time.time() - startTimeFinalTrain # Store time that it takes to train final model #optimizer.swap_swa_sgd() return model, best_model_wts, best_model_lowest_loss, all_train_acc, all_train_loss, all_val_acc, all_val_loss, best_acc_val, best_loss_val, endTimeFinalTrain #----------------------------------------------------------------------------# #----------------------------------MAIN--------------------------------------# def main(args, save_to = "./experiments/"): # Parameters device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') data_dir, batch_size, generations, populationSize, elitism, probResidual, probResidualDownsample, epochs, \ learning_rate, weight_decay, maxneurons, epochsTrainSearch, dataset, datasetType, cutout, cutout_length, \ auto_augment \ = \ args.data_dir, args.batch_size, args.generations, args.population, args.elitism, args.prob_residual, \ args.prob_residual_downsample, args.epochs, args.learning_rate, args.weight_decay, args.max_neurons, \ args.search_epochs, args.dataset.lower(), args.dataset_type.lower(), args.cutout, args.cutout_length, \ args.auto_augment initPopDefined = args.init_pop modelsPath = './models_pytorch/' if not os.path.exists(modelsPath): os.makedirs(modelsPath) modelsPath = os.path.join(modelsPath, dataset+'/') modelsPath = os.path.join(modelsPath, datasetType+'/') if "Fly" not in datasetType.lower() else os.path.join(modelsPath, 'partial/') # Store time for everything startTime = time.time() # Create utils object (for train, optimizers, ...) 
utils = Utils(batch_size, device) # Get initial population # generate dictionary of models - initial population based on human-designed models if initPopDefined == True: # Read population from json population = readPopFromJSONs(path=modelsPath, args=args) if args.random_search_model: for key, _ in population.items(): population[key]['fitness'] = 10 else: trainloader, valloader, n_classes = utils.get_train_dataset(args, data_dir=data_dir, \ dataset=dataset, datasetType=datasetType, resize=True) # Generate search space using pytorch models startTimeGenerateSearchSpace = time.time() if not os.path.exists(modelsPath): os.makedirs(modelsPath) population = generateGraphsOfTorchVisionModels(utils, True, data_dir, dataset, datasetType, True, storeModelToFile=True, args=args, path=modelsPath, trainloader=trainloader, valloader=valloader) endTimeGenerateSearchSpace = time.time() print(f'Time taken to generate complete spearch space: {endTimeGenerateSearchSpace-startTimeGenerateSearchSpace}') exit() # Get dataloaders for the dataset ##trainloader, valloader, n_classes = data_loader_cifar10.get_train_valid_loader("./data/", batch_size, ## datasetType=datasetType) trainloader, valloader, n_classes = utils.get_train_dataset(args, data_dir=data_dir, \ dataset=dataset, datasetType=datasetType) #testloader = utils.get_test_dataset(args, data_dir="./data/", dataset=dataset) # Get image shape for this dataset/problem for train_images, _ in trainloader: sample_image = train_images[0] img_size = sample_image.shape # image size of the given problem break print(f'Image size: {img_size}') # Get Criterion criterion = utils.getCrossEntropyLoss() startTimeSearch = time.time() # Store time that takes to perform search generationInfos = {} # store information about each generation for gen in range(0,generations): #trainloader, valloader, n_classes = utils.get_train_dataset(args, data_dir=data_dir, \ # dataset=dataset, datasetType=datasetType) print(f"### Starting Generation {gen} [...] 
###") # newPopulation will hold the new individuals (generated models) newPopulation = {} # Elitism : the best model from last gen continues #sortedPopulation -> tuples: (name, MODELINFO) sortedPopulation = sorted(population.items(), key=lambda x: x[1]['fitness'], reverse=True) del population utils.clean_memory() # avoid cases where the initial generation (search space) is much lower than the populationsize elitism = args.elitism if args.elitism < len(sortedPopulation) else len(sortedPopulation) # Elitism for generation 0, only 'search space models' are available if gen == 0: for i in range(elitism): newPopulation[i] = OrderedDict() newPopulation[i]['graph'] = sortedPopulation[i][1]['graph'] newPopulation[i]['hiddenStates'] = sortedPopulation[i][1]['hiddenStates'] newPopulation[i]['fitness'] = sortedPopulation[i][1]['fitness'] # Elitism for the rest of the generations else: iterator, countElitism, flagInitialModels = 0, 0, 1 #flaginitialmodels to 1, does not allow any of the search space models to be passed to next gens while(True): if countElitism == elitism: break #print(sortedPopulation) if 'model' not in sortedPopulation[iterator][1] and flagInitialModels == 1: iterator = iterator + 1 continue newPopulation[countElitism] = OrderedDict() newPopulation[countElitism]['graph'] = sortedPopulation[iterator][1]['graph'] newPopulation[countElitism]['hiddenStates'] = sortedPopulation[iterator][1]['hiddenStates'] newPopulation[countElitism]['fitness'] = sortedPopulation[iterator][1]['fitness'] if 'model' in sortedPopulation[iterator][1]: # initial pytorch models are ditched for memory optimization newPopulation[countElitism]['model'] = sortedPopulation[iterator][1]['model'] else: flagInitialModels = 1 iterator = iterator + 1 countElitism = countElitism + 1 print(f'Dictionary sizes -> newPopulation:{utils.get_size(newPopulation)} \t sortedPopulation:{utils.get_size(sortedPopulation)}') # transform list of Tuple(int, orderedict) into ordereddict for easier management sortedPopulation = OrderedDict(sortedPopulation) # Generate the rest of the population for individual in range(elitism,populationSize): #[elitism,popsize[ because the inicial indexes are the best from last generation # Generate individual based on the graphs from last generation modelLayersStringList, model = None, None # Initialize model variables graph, hiddenStates = None, None # Initilize HMC variables print("Generating new model [...]") while modelLayersStringList is None or model is None: # Generate models until one is ok # Generate a new model try: modelLayersStringList = generateNewModel(sortedPopulation, img_size[1:], n_classes=n_classes, prob_residual = probResidual, \ prob_residual_downsample = probResidualDownsample) except: #traceback.print_exc() modelLayersStringList = None utils.clean_memory() if modelLayersStringList is None: continue lengthModel = 0 for layer, components in modelLayersStringList: if 'Bottleneck' in layer: if components['stride'] == 2: lengthModel += 8 else: lengthModel += 7 else: lengthModel += 1 if lengthModel > 500: # Trying to avoid RAM segmentation fault modelLayersStringList = None continue # Transform model into pytorch try: num_classes=n_classes if args.without_training and args.mixed_training == False: num_classes = 1 model = processify(transformNetworkIntoPytorch, modelLayersStringList, input_shape=img_size, maxneurons=maxneurons, n_classes=num_classes) #model = transformNetworkIntoPytorch(modelLayersStringList, input_shape=img_size, # maxneurons=maxneurons, nClasses=num_classes) if model is 
None: continue model = model.to(device) except Exception as e: #traceback.print_exc() print(e) model = None #too big for memory #exit() utils.clean_memory() if 'broken pipe' in str(e).lower(): torch.cuda.empty_cache() continue # Generate Hidden-Markov Chain try: graph, hiddenStates = generateSummaryAndGraph(model, img_shape=img_size, automatedModel=True, input_image=sample_image) except Exception as e: #print (model) #print(modelLayersStringList) #if "NoneType" in str(e): print(f'Individual {individual} not capable of generating Hidden Markov Chain') print (e) del model model = None utils.clean_memory() continue #print (modelLayersStringList) # Add individual to the pool newPopulation[individual] = OrderedDict() fitness = 0 #np.random.randint(100) # Acquire fitness print("#### Starting Fitness Extraction [...] ####") try:
October 14, 2005

Expectations and Reality

Yesterday was Yom Kippur and, by coincidence, the 3-year anniversary of Musing. A time, therefore, for a bit of reflection on (if not atonement for) having started this blog. What I set out to do, three years ago, was test the idea that weblogs could provide a useful medium for exchanging ideas in physics, less formal than research papers, but still archived, searchable, and hyperlinked. I also hoped that whatever software modifications were required to enable mathematical blogging would be easily adopted by the hordes of fellow physicists who would, surely, flock to the medium. One of the things I feared was that the comments section would be overwhelmed by the noise that long ago made USENET and most Web Forums (cosmoCoffee being a notable exception) hopeless sinkholes. How did the reality measure up?

One thing I found was that this is harder than I thought it would be. I was lucky to have chosen what, at the time, was (and, in many ways, still is) the best weblogging platform available for my purpose. Still, creating a system that could reliably take TeX-like input and produce well-formed XHTML+MathML output is much harder than I ever imagined1. It took a long-running, concerted effort to beat the software into submission. A large fraction of my blog posts ended up devoted to documenting my efforts, so that someone wishing to replicate what I’ve done will not have to reinvent the wheel, the rack and the thumbscrew. I’d naïvely hoped that, in the end, setting up a weblog like this one would be as simple as installing MovableType and adding a few plugins. Someday, maybe that will be the case, but we’re still far from that day2.

The other thing that was harder than I’d expected was the actual physics-blogging part. Early on, I met a colleague at a conference, who said, “Hey, I’ve been reading your blog,” (the first person to own up to reading the damned thing), “but all you do is say stuff like, ‘I just read hep-th/yymmnnn. Looks pretty interesting.’” He was right, of course. If this project was to be worthwhile, I’d have to do better than ‘Looks pretty interesting.’ I would have, to use the Economists’ phrase, to “add value.” Adding value, however, takes thought, effort and, most of all, time. Time is not something I have a lot of, so I post much less frequently than I thought I was going to. Perhaps, because it’s hard (in both of the above senses), only a trickle of physicists have taken the plunge and started their own blogs.

On the whole, though, I’m pleased with the over 500 posts that I have made, and the over 1500 comments they have accrued. I learned a lot and I gather, from the feedback I’ve gotten, so have many of my readers. The quality of those comments has exceeded my wildest expectations. When I write about cosmic superstrings, Joe Polchinski and Koji Hashimoto chime in with explanations. When I write about using artificial diamonds as semiconductors, an expert on n-type doping (the hard case) comments. And, yes, when I write about some of the obscure (mis)features of XML, none other than Tim Bray himself posts a comment. So far, at least, USENET-style flame wars, trolls and crackpots have simply not been a problem. To the contrary, the comment threads on many of my posts are far more interesting and insightful than the posts themselves. Comment (and Trackback) spam proved to be a bit of a challenge (and who can forget the crapflooders).
But, so far, we seem to be winning the technological arms race with the spammers while hardly even trying.

Another unexpected side benefit of this little endeavour was that I’ve gotten to make the acquaintance of some of the top people in the Web Standards community. That’s been fun and educational, in ways I never expected 3 years ago. I don’t know about the Web as a whole, but this little corner of it has been immeasurably improved by their comments and insights. All in all, it’s been a fun 3 years. It’ll be interesting to see where the next 3 years takes us.

1 Truth be told, I didn’t even know what the phrase “well-formed XHTML+MathML” meant at the outset, so how could I have known what difficulties awaited me?

2 You’ll note that I haven’t upgraded this blog to MovableType 3.2. The effort involved in fixing the Administrative Interface to work as XHTML is simply not worth it for the new features of this release.

Posted by distler at October 14, 2005 10:50 AM

Re: Expectations and Reality

It has indeed been a pleasure reading and occasionally understanding whatever little I could (not much outside the area of web related posts, I have to confess). Thanks for the tireless work you have put into making the web a more developed medium for scientific discourse. A special thanks for helping out with OpenPGPComment. It was fun! Here’s to a couple more successful blogging years!

Posted by: Srijith on October 14, 2005 2:27 PM | Permalink | PGP Sig | Reply to this

OpenPGPComment

It has indeed been a pleasure reading and occasionally understanding whatever little I could (not much outside the area of web related posts, I have to confess).

My physicist readers have the same complaint about my web-related posts. :-)

A special thanks for helping out with OpenPGPComment. It was fun!

Yes, I really enjoyed working on OpenPGPComment. Thanks! One thing I’ve been wondering about is whether it’s crazy to dream of a bookmarklet or (more likely) a greasemonkey script which would do GPG signing of a text selection right in the browser, without requiring recourse to a separate application. It would still need access to your keyring and, maybe, the commandline gnupg. But there shouldn’t be a need for a separate GUI. Anyone tried this? Is it feasible?

Posted by: Jacques Distler on October 14, 2005 5:21 PM | Permalink | PGP Sig | Reply to this

Re: OpenPGPComment

Anyone tried this? Is it feasible?

Though I have not tried it, I know of one attempt to use the Javascript function to perform some simple hashing/encryption, though I don’t think PK signing will be that easy. But I think it will be feasible. If Aspell can be used from Firefox, it shouldn’t be that difficult to call gpg. The question would be, how safe is it?

Posted by: Srijith on October 17, 2005 2:48 AM | Permalink | PGP Sig | Reply to this

Re: Cosmocoffee

Though not a frequent poster, I read cosmocoffee pretty regularly. I guess moderated membership makes it faaar better than usenet. Any such attempt for hep-th?? Anyway, congrats on completing three years!

Posted by: Aswin on October 14, 2005 9:38 PM | Permalink | Reply to this

Re: Expectations and Reality

Happy Anniversary Jacques! One thing that always puzzles me is the small number of people commenting, since it seems almost everyone is reading… makes me think maybe I am missing something and I should anticipate some big disaster in my future, where I pay for all those half-baked comments. Hope not.
Posted by: Moshe on October 15, 2005 1:19 AM | Permalink | Reply to this Operant conditioning One thing that always puzzles me is the small number of people commenting, since it seems almost everyone is reading… One strong (but, evidently, not much considered) incentive for commenting is the subtle influence it exerts on yours truly. When a certain post receives a large number of comments, I tend to say to myself, “Hmmm… that went well. Maybe I should write more posts like that one …” It’s only half-conscious, but there’s no question that the level of response has an influence on my posting habits. So, if you want to see more of one type of post and fewer of another, there’s some easy Skinnerian conditioning you can engage in … Posted
## PHY456H1F: Quantum Mechanics II. Lecture 11 (Taught by Prof J.E. Sipe). Spin and Spinors

Posted by peeterjoot on October 17, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Generators.

Covered in section 26 of the text [1].

## Example: Time translation

\begin{aligned}{\lvert {\psi(t)} \rangle} = e^{-i H t/\hbar} {\lvert {\psi(0)} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.1)

The Hamiltonian “generates” evolution (or translation) in time.

## Example: Spatial translation

\begin{aligned}{\lvert {\mathbf{r} + \mathbf{a}} \rangle} = e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\mathbf{r}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{figure}[htp] \centering \includegraphics[totalheight=0.2\textheight]{qmTwoL11fig1} \caption{Vector translation.} \end{figure}

$\mathbf{P}$ is the operator that generates translations. Written out, we have

\begin{aligned}\begin{aligned}e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} &= e^{- i (a_x P_x + a_y P_y + a_z P_z)/\hbar} \\ &= e^{- i a_x P_x/\hbar}e^{- i a_y P_y/\hbar}e^{- i a_z P_z/\hbar},\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where the factorization was possible because $P_x$, $P_y$, and $P_z$ commute

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(2.4)

for any $i, j$ (including $i = j$ as I dumbly questioned in class … this is a commutator, so $\left[{P_i},{P_i}\right] = P_i P_i - P_i P_i = 0$). The fact that the $P_i$ commute means that successive translations can be done in any order and have the same result. In class we were rewarded with a graphic demo of translation component commutation as Professor Sipe pulled a giant wood carving of a cat (or tiger?) out from beside the desk and proceeded to translate it around on the desk in two different orders, with the cat ending up in the same place each time.

### Exponential commutation.

Note that in general

\begin{aligned}e^{A + B} \ne e^A e^B,\end{aligned} \hspace{\stretch{1}}(2.5)

unless $\left[{A},{B}\right] = 0$. To show this one can compare

\begin{aligned}\begin{aligned}e^{A + B} &= 1 + A + B + \frac{1}{{2}}(A + B)^2 + \cdots \\ &= 1 + A + B + \frac{1}{{2}}(A^2 + A B + BA + B^2) + \cdots \\ \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.6)

and

\begin{aligned}\begin{aligned}e^A e^B &= \left(1 + A + \frac{1}{{2}}A^2 + \cdots\right)\left(1 + B + \frac{1}{{2}}B^2 + \cdots\right) \\ &= 1 + A + B + \frac{1}{{2}}( A^2 + 2 A B + B^2 ) + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.7)

Comparing the second order (for example) we see that for equality we must have

\begin{aligned}A B + B A = 2 A B,\end{aligned} \hspace{\stretch{1}}(2.8)

or

\begin{aligned}B A = A B,\end{aligned} \hspace{\stretch{1}}(2.9)

or

\begin{aligned}\left[{A},{B}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.10)

### Translating a ket

If we consider the quantity

\begin{aligned}e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\psi} \rangle} = {\lvert {\psi'} \rangle} ,\end{aligned} \hspace{\stretch{1}}(2.11)

does this ket “translated” by $\mathbf{a}$ make any sense? The vector $\mathbf{a}$ lives in a 3D space and our ket ${\lvert {\psi} \rangle}$ lives in Hilbert space. A quantity like this deserves some careful thought and is the subject of some such thought in the Interpretations of Quantum mechanics course.
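As a quick numerical aside on the exponential identity above (my own check, not part of the lecture): for matrices, $e^{A+B} = e^A e^B$ holds when $[A,B]=0$ and generally fails otherwise.

```python
# Quick numerical check (not from the lecture): e^{A+B} = e^A e^B only when [A, B] = 0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.], [1., 0.]])    # sigma_x
B = np.array([[1., 0.], [0., -1.]])   # sigma_z, which does not commute with sigma_x
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # False

C = np.diag([1., 2.])                 # commuting (diagonal) matrices
D = np.diag([3., -1.])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # True
```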
For now, we can think of the operator and ket as a “gadget” that prepares a state. A student in class pointed out that ${\lvert {\psi} \rangle}$ can be dependent on many degrees of freedom, for example, the positions of eight different particles. This translation gadget in such a case acts on the whole kit and kaboodle.

Now consider the matrix element

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = {\langle {\mathbf{r}} \rvert} e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\psi} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.12)

Note that

\begin{aligned}{\langle {\mathbf{r}} \rvert} e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} &= \left( e^{i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\mathbf{r}} \rangle} \right)^\dagger \\ &= \left( {\lvert {\mathbf{r} - \mathbf{a}} \rangle} \right)^\dagger,\end{aligned}

so

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = \left\langle{{\mathbf{r} -\mathbf{a}}} \vert {{\psi}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.13)

or

\begin{aligned}\psi'(\mathbf{r}) = \psi(\mathbf{r} - \mathbf{a})\end{aligned} \hspace{\stretch{1}}(2.14)

This is what we expect of a translated function, as illustrated in figure (\ref{fig:qmTwoL11:qmTwoL11fig2})

\begin{figure}[htp] \centering \includegraphics[totalheight=0.2\textheight]{qmTwoL11fig2} \caption{Active spatial translation.} \end{figure}

## Example: Spatial rotation

We’ve been introduced to the angular momentum operator

\begin{aligned}\mathbf{L} = \mathbf{R} \times \mathbf{P},\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}L_x &= Y P_z - Z P_y \\ L_y &= Z P_x - X P_z \\ L_z &= X P_y - Y P_x.\end{aligned} \hspace{\stretch{1}}(2.16)

We also found that

\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum_k \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(2.19)

These non-zero commutators show that the components of angular momentum do not commute. Define

\begin{aligned}{\lvert {\mathcal{R}(\mathbf{r})} \rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar}{\lvert {\mathbf{r}} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.20)

This is the vector that we get by actively rotating the vector $\mathbf{r}$ by an angle $\theta$ counterclockwise about $\hat{\mathbf{n}}$, as in figure (\ref{fig:qmTwoL11:qmTwoL11fig3})

\begin{figure}[htp] \centering \includegraphics[totalheight=0.2\textheight]{qmTwoL11fig3} \caption{Active vector rotations} \end{figure}

An active rotation rotates the vector, leaving the coordinate system fixed, whereas a passive rotation is one for which the coordinate system is rotated, and the vector is left fixed. Note that rotations do not commute. Suppose that we have a pair of rotations as in figure (\ref{fig:qmTwoL11:qmTwoL11fig4})

\begin{figure}[htp] \centering \includegraphics[totalheight=0.2\textheight]{qmTwoL11fig4} \caption{An example pair of non-commuting rotations.} \end{figure}

Again, we get the graphic demo, with Professor Sipe rotating the big wooden cat sculpture. Did he bring that in to class just to make this point? (Too bad I missed the first couple of minutes of the lecture.) Rather amusingly, he points out that most things in life do not commute. We get very different results if we apply the operations of putting water into the teapot and turning on the stove in different orders.
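As a small numerical aside (my own illustration, not from the lecture), the non-commutativity of finite rotations is easy to see with ordinary $3\times 3$ rotation matrices:

```python
# Illustrative check (not from the lecture): finite rotations about different
# axes do not commute, e.g. Rx(90 deg) Ry(90 deg) != Ry(90 deg) Rx(90 deg).
import numpy as np

def Rx(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Ry(t):
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

t = np.pi / 2
r = np.array([0.0, 0.0, 1.0])   # the "wooden cat", initially pointing along z
print(Rx(t) @ Ry(t) @ r)        # one order of rotations ...
print(Ry(t) @ Rx(t) @ r)        # ... leaves the vector somewhere else entirely
```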
### Rotating a ket

For the rotated ket

\begin{aligned}{\lvert {\psi'} \rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }{\lvert {\psi} \rangle},\end{aligned} \hspace{\stretch{1}}(2.21)

we can form the matrix element

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = {\langle {\mathbf{r}} \rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }{\lvert {\psi} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.22)

In this we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }&=\left( e^{i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar } {\lvert {\mathbf{r}} \rangle} \right)^\dagger \\ &=\left( {\lvert {\mathcal{R}^{-1}(\mathbf{r}) } \rangle} \right)^\dagger,\end{aligned}

so

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = \left\langle{{\mathcal{R}^{-1}(\mathbf{r}) }} \vert {{\psi}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.23)

or

\begin{aligned}\psi'(\mathbf{r}) = \psi( \mathcal{R}^{-1}(\mathbf{r}) )\end{aligned} \hspace{\stretch{1}}(2.24)

# Generalizations.

Recall what you did last year, where $H$, $\mathbf{P}$, and $\mathbf{L}$ were defined mechanically. We found

\begin{itemize}
\item $H$ generates time evolution (or translation in time).
\item $\mathbf{P}$ generates spatial translation.
\item $\mathbf{L}$ generates spatial rotation.
\end{itemize}

For our mechanical definitions we have

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(3.25)

and

\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum_k \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(3.26)

These are the relations that show us the way translations and rotations combine. We want to move up to a higher plane, a new level of abstraction. To do so we define $H$ as the operator that generates time evolution. If we have a theory that covers the behaviour of how anything evolves in time, $H$ encodes the rules for this time evolution. Define $\mathbf{P}$ as the operator that generates translations in space. Define $\mathbf{J}$ as the operator that generates rotations in space. In order that these match expectations, we require

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(3.27)

and

\begin{aligned}\left[{J_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} J_k.\end{aligned} \hspace{\stretch{1}}(3.28)

In the simple theory of a spinless particle we have

\begin{aligned}\mathbf{J} \equiv \mathbf{L} = \mathbf{R} \times \mathbf{P}.\end{aligned} \hspace{\stretch{1}}(3.29)

We actually need a generalization of this since this is, in fact, not good enough, even for low energy physics.

## Many component wave functions.

We are free to construct tuples of spatial vector functions like

\begin{aligned}\begin{bmatrix}\Psi_I(\mathbf{r}, t) \\ \Psi_{II}(\mathbf{r}, t)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.30)

or

\begin{aligned}\begin{bmatrix}\Psi_I(\mathbf{r}, t) \\ \Psi_{II}(\mathbf{r}, t) \\ \Psi_{III}(\mathbf{r}, t)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.31)

etc. We will see that these behave qualitatively differently from one component wave functions. We also don’t have to be considering multiple particle wave functions, but just one particle that requires three functions in $\mathbb{R}^{3}$ to describe it (i.e.: we are moving in on spin).

Question: Do these live in the same vector space?
Answer: We will get to this.

### A classical analogy.

“There are only bad analogies, since if they were good they’d be describing the same thing.
We can however, produce some useful bad analogies” \begin{enumerate} \item A temperature field \begin{aligned}T(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(3.32) \item Electric field \begin{aligned}\begin{bmatrix}E_x(\mathbf{r}) \\ E_y(\mathbf{r}) \\ E_z(\mathbf{r}) \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.33) \end{enumerate} These behave in a much different way. If we rotate a scalar field like $T(\mathbf{r})$ as in figure (\ref{fig:qmTwoL11:qmTwoL11fig5}) \begin{figure}[htp] \centering \includegraphics[totalheight=0.2\textheight]{qmTwoL11fig5} \caption{Rotated temperature (scalar) field} \end{figure} Suppose we have a temperature field generated by, say, a match. Rotating the match above, we have \begin{aligned}T'(\mathbf{r}) = T(\mathcal{R}^{-1}(\mathbf{r})).\end{aligned} \hspace{\stretch{1}}(3.34) Compare this to the rotation of an electric field, perhaps one produced by a capacitor, as in figure (\ref{fig:qmTwoL11:qmTwoL11fig6}) \begin{figure}[htp] \centering \includegraphics[totalheight=0.2\textheight]{qmTwoL11fig6} \caption{Rotating a capacitance electric field} \end{figure} Is it true that we have \begin{aligned}\begin{bmatrix}E_x(\mathbf{r}) \\ E_y(\mathbf{r}) \\ E_z(\mathbf{r}) \end{bmatrix}\stackrel{?}{=}\begin{bmatrix}E_x(\mathcal{R}^{-1}(\mathbf{r})) \\ E_y(\mathcal{R}^{-1}(\mathbf{r})) \\ E_z(\mathcal{R}^{-1}(\mathbf{r})) \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.35) No. Because the components get mixed as well as the positions at which those components are evaluated. We will work with many component wave functions, some of
their centers are placed in the simulation box of size $L^3$, using a uniform distribution, before orientations are adjusted. The homogenization scale $L$ is chosen such that it significantly exceeds the size of the largest possible fracture $k$, i.e. $L\gg d_k$, with $d_k=d_\text{max}$ the diameter of the circumscribed circle of fracture $k$. The fracture orientation is represented by the normal vector of the fracture plane. The distribution of a random set of normal vectors can be described by means of a probability density function $f(\theta,\phi)$, expressed in standard spherical coordinates $\theta$ and $\phi$. The simplest probability density function would be the uniform distribution of the normal vectors on the unit sphere. However, in order to study anisotropic, three-dimensional (3D) DFN, which are closer to reality, the \textit{Fisher distribution} can be used \cite{Khamforoush2007,Khamforoush2008}. This distribution is the analogue of the Gaussian distribution on the sphere \cite{Fisher1953} and can be expressed in polar coordinates $\theta\in\left[0,\pi\right]$ and $\phi\in\left[0,2\pi\right]$ as follows:
\begin{equation}
f(\theta,\phi,\kappa) = \frac{\kappa}{4\pi\sinh\kappa} \sin\theta \exp{\left(\kappa\left[\cos\theta_0\cos\theta+\sin\theta_0\sin\theta\cos(\phi-\phi_0)\right]\right)} \ ,
\label{eq: eq_fisher_distr1}
\end{equation}
with $\kappa\ge 0$ denoting the concentration or \textit{dispersion parameter}. For the particular case where the initial coordinates $\theta_0$ and $\phi_0$ are equal to zero, the Fisher distribution reduces to
\begin{equation}
f(\theta,\kappa) = \frac{\kappa}{4\pi\sinh\kappa} \sin\theta\exp{\left(\kappa\cos\theta\right)} \ .
\label{eq: eq_fisher_distr2}
\end{equation}
This distribution is rotationally symmetric around the initial mean direction, which coincides with the $z$-axis. The greater the value of $\kappa$, the higher the concentration of the distribution around this axis. The distribution is unimodal for $\kappa>0$ and is uniform on the sphere for $\kappa=0$. Examples of generated artificial 3D DFNs with different $\kappa$ values are given in Figs.~\ref{fig: fig_AFN_DFN_model} a) and b). Note that only when fractures are highly connected does the system behave like a continuous medium. Since the interest lies in the movement of a fluid phase in the network, the fracture density $\rho_{\text{fr}}$ of a DFN, being the number of fractures per volume, has to be high enough to build a spanning cluster over the entire domain. The connectivity of the DFN can be characterized by the dimensionless density or the so-called concentration $\rho^{'}$. Huseby \textit{et al.} \cite{Huseby1997} have shown that the concentration $\rho^{'}$ equals the average number of intersections per fracture, $\rho^{'}=\left\langle\overline{N_\text{int}}\right\rangle$. The concentration is thereby a decreasing function of $\kappa$ \cite{Khamforoush2007}. Hence, a much smaller average number of intersections per fracture is expected for the highly anisotropic DFN (see Fig.~\ref{fig: fig_AFN_DFN_model} b)) than for the nearly isotropic one (see Fig.~\ref{fig: fig_AFN_DFN_model} a)). Furthermore, Khamforoush \textit{et al.} \cite{Khamforoush2008} noted that the asymptotic values (for the infinite system $L\rightarrow\infty$) of percolation thresholds (of OP) for anisotropic DFNs ($p_c\equiv \rho^{'}_c$) lie in
\begin{equation}
2.1 \leq \rho^{'}_{c,\infty,x,A} ,~ \rho^{'}_{c,\infty,y,A} \leq 2.3 \qquad \text{and} \qquad 2.3 \leq \rho^{'}_{c,\infty,z,A} \leq 2.44 \ .
\label{eq: eq_rho_crit_2}
\end{equation}
Here the $\rho^{'}_{c,\infty}$ denote the critical concentrations in the range $0\leq\kappa\leq 50$, with the mean direction of the Fisher distribution chosen to be the $z$-axis.

\subsection{Realistic fracture network models}
\label{subsec: subsec_WLB_DFN}
In addition to the artificial networks described in the previous section, it was thought useful to consider more realistic models conditioned on field data. Within the study, network models of the Excavation Damage Zone around tunnels in Opalinus Clay (not described here) and a large-scale model of the fracture system within a fractured marl were considered. The large-scale model is a simplified version of that described in Mazurek \textit{et al.} \cite{Mazurek1998} and Nagra \cite{Nagra1997}. It was selected as a model likely to show a well-connected 3D fracture system suitable for comparison with the analytic models. The model is conditioned on field data acquired from deep boreholes in the Valanginian Marl at Wellenberg in Central Switzerland during Nagra's investigations there in the early 1990s \cite{Nagra1997}. The model includes fracture ``sets'' corresponding to the inventory of water conducting features (WCFs) identified in the boreholes. The WCFs included cataclastic fault zones over a wide range of length scales and smaller structures including thin discrete shear zones, joints and isolated limestone beds. The full model also included a small number of larger-scale limestone banks that had been identified, but these were not in the stochastic model used here. The length distribution of the cataclastic zones was based on the observed thickness and roughly followed a power-law form with exponent between $2.3$ and $2.6$. Feature orientation was based on observations from core and showed a strong relationship to the folding axis of the rock, but includes discordant sets resulting in a well-connected network. Hydraulic aperture distributions were calculated from the observed transmissivity measured by pumping tests and fluid logging analyses. The Wellenberg DFN model shown in Fig.~\ref{fig: fig_AFN_DFN_model} c) is comparable to the 3D randomly generated artificial fracture networks and is built similarly in the form of a $L^3$ system with $L=100$~m. Large natural faults, spanning over hundreds of meters, are thereby tessellated into smaller fractures, represented by polygons with side length of $R_i=10$~m, using again the parallel plate model. The concentration in terms of the average number of intersections per fracture is $\rho^{'}\approx 15$ and thus much higher than for the artificially generated DFN. This leads to a higher connectivity of the DFN, and hence only a small amount of water is assumed to be trapped in the invasion percolation simulation afterwards.
\begin{figure}[H]
\includegraphics[width=1.0\textwidth]{AFN_DFN_model}
\caption[MIP on an isotropic artificial 3D DFN - breakthrough] {a) Nearly isotropic ($\kappa=1$) artificial 3D DFN with system size $L=10\ d_\text{max}$, where $d_\mathrm{max}$ is the diameter of the circumscribed circle of the largest fracture in the DFN, containing $4000$ fractures. b) Highly anisotropic ($\kappa=50$) artificial 3D DFN with the same size and number of fractures.
c) Sample of the Wellenberg DFN with system size $L=100$~m containing $12684$ fractures.}
\label{fig: fig_AFN_DFN_model}
\end{figure}

\section{Physical modifications for IP}
\label{sec: sec_physical_modifications}
IP in its simplest form fully invades an entire fracture once the aperture threshold is reached and at least one intersecting fracture is filled. For fluid invasion, previous experiments have shown that the detailed behavior at fracture intersections is essential \cite{Sarkar2004}. We use a modification to explicitly represent intersections and to implement rules at fracture intersections to mimic capillary barrier type behavior for the invading gas, in the form of an adjusted entry aperture $h^{\text{E}}_\text{adj}$. Given that some fractures of the fracture network models can be much bigger than others and that possibly only a small portion of them will be invaded, an additional modification of the hydraulic aperture $h^{\text{H}}$ of a fracture, accounting for the path length inside the fracture, is suggested. Consider an invaded fracture $i$ from the invasion front, connected to a non-invaded fracture $j$ that is a feasible candidate for invasion. The resistance of fracture $j$ is given by its entry aperture $h^{\text{E}}_\text{j}$, which has to be at least as high as the current threshold value to allow for invasion. In the following, the aperture value of fracture $j$ will be adjusted with respect to the effects mentioned above:
\begin{itemize}
\item The length of the \textit{line of intersection} $d_{\text{cl}}$ between fractures $i$ and $j$, which results from the connectivity calculation, is divided by the diameter of the circumscribed circle $d_{j}$, taken as the maximal width of fracture $j$ (see Fig.~\ref{fig: fig_MIP_adjustments}). Hence the discriminating factor concerning the line of intersection is
\begin{equation}
\text{n}_{\text{C}} = \displaystyle\frac{d_{\text{cl}}}{d_{j}} \ .
\label{eq: eq_intersection_factor}
\end{equation}
The corresponding factor for the invasion of entry fractures at the entry zone of the fluid is $n_{\text{C}}=1$.
\item To consider the \textit{inclination} of the fractures with respect to the flow direction, the aperture is corrected with a discriminating factor $\cos{(\phi)}$ (where $0 \leq\phi\leq\pi/2$), as proposed by Sarkar \textit{et al.} \cite{Sarkar2004}. The angle $\phi$ is either the contact angle between fractures $i$ and $j$ at the invading front (see Fig.~\ref{fig: fig_MIP_adjustments}) or the angle between the global inflow direction and the inclination of the considered entry fracture at the entry zone. This results in both cases in the discriminating factor
\begin{equation}
\text{n}_{\text{I}} = \cos{\left(C_{\text{I}} \phi\right)} \ ,
\label{eq: eq_inclination_factor}
\end{equation}
with the adjustment parameter $C_{\text{I}}$ between $0$ and $1$, used to adjust the strength of the inclination effect. Since $\cos{\left(\phi\right)}=0$ for $\phi=\pi/2$, the discriminating factor would vanish for fractures perpendicular to each other and flow between them would be prohibited. This is avoided by setting $C_{\text{I}}<1$. In the other extreme, no inclination adjustment is obtained by setting $C_{\text{I}}=0$.
\item The \textit{path length} adjustment is done using the path length discriminating factor \begin{equation} \text{n}_{\text{P}} = \displaystyle\frac{d_{i}}{d_{\text{pl}}} \ , \label{eq: eq_pathlength_factor} \end{equation} where $d_{\text{pl}}$ is the path length in fracture $i$ from the prior (invaded) entry node to the exit node to fracture $j$. $d_{i}$ is
C specifications to be directly targeted into Xilinx programmable devices without the need to manually create RTL.

A Gentle Introduction to Bilateral Filtering and its Applications, "Fixing the Gaussian Blur": the Bilateral Filter, Sylvain Paris (Adobe). The Gaussian blur can be seen as a refinement of the basic box blur; both techniques fall in the category of weighted-average blurs. A Gaussian kernel gives less weight to pixels further from the center of the window, and the kernel is a discrete approximation of a Gaussian function. In the frequency domain the Gaussian blur kernel acts as a low-pass filter, so the blurred image loses its high frequencies and the corresponding inverse (deblurring) kernel acts as a high-pass filter. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales (see scale-space representation and scale-space implementation). Convolution itself is "a mathematical operation on two functions (f and g) to produce a third function that is typically viewed as a modified version of one of the original functions" (Wikipedia).

In image editors the same idea appears as a filter: in Photoshop, for example, the Gaussian Blur filter can be applied as a Smart Filter with just enough radius (around 6-8 px) to soften large background detail, and a rasterized shadow layer can be softened by around 5 px and set to reduced opacity. Real-time implementations often exploit the fact that the Gaussian kernel is separable: a two-pass GLSL shader needs only 18 texture samples for a 9x9 kernel instead of sampling the full window in one pass. Blur estimation is also used the other way around; Chiang and Boult, for instance, present a super-resolution algorithm that includes image restoration with a local blur estimation.

In Python, convolution with OpenCV is applied using cv2.filter2D, and the built-in smoothing functions cv2.blur, cv2.GaussianBlur, cv2.medianBlur and cv2.bilateralFilter cover the common cases. In the Gaussian Blur operation the image is convolved with a Gaussian filter instead of the box filter, and all you have to specify is the size of the Gaussian kernel with which your image should be convolved (and optionally the standard deviation). In SciPy, the standard deviation is passed to gaussian_filter() directly, and the optional order argument selects derivatives of the Gaussian. Gaussian noise can likewise be added to an image with a few lines of NumPy. JavaFX provides a GaussianBlur effect whose only difference from the simple blur effect is that it uses a Gaussian convolution kernel to blur the nodes.
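To make the OpenCV part concrete, here is a minimal sketch (assuming OpenCV and NumPy are installed; the file name input.jpg is only a placeholder) that compares the built-in Gaussian blur with an explicit kernel passed to cv2.filter2D:

import cv2
import numpy as np

# Load a test image (path is a placeholder; any 8-bit BGR image works).
img = cv2.imread("input.jpg")

# Box blur: every pixel in the 9x9 window gets the same weight.
box = cv2.blur(img, (9, 9))

# Gaussian blur: weights fall off with distance from the window center.
# sigma=0 lets OpenCV derive the standard deviation from the kernel size.
gauss = cv2.GaussianBlur(img, (9, 9), 0)

# The same result via an explicit separable kernel and cv2.filter2D:
# getGaussianKernel returns a column vector; the outer product gives the 2D kernel.
k1d = cv2.getGaussianKernel(9, -1)   # sigma derived from the kernel size
kernel = k1d @ k1d.T
manual = cv2.filter2D(img, -1, kernel)

# The two Gaussian results should agree up to rounding (typically 0 or 1).
print(np.abs(gauss.astype(int) - manual.astype(int)).max())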
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Hamiltonians that are quadratic in the fermionic ladder operators.""" import warnings import numpy from scipy.linalg import schur from openfermion.ops import PolynomialTensor from openfermion.ops._givens_rotations import ( fermionic_gaussian_decomposition, givens_decomposition_square, swap_columns, swap_rows) class QuadraticHamiltonianError(Exception): pass class QuadraticHamiltonian(PolynomialTensor): r"""Class for storing Hamiltonians that are quadratic in the fermionic ladder operators. The operators stored in this class take the form .. math:: \sum_{p, q} (M_{pq} - \mu \delta_{pq}) a^\dagger_p a_q + \frac12 \sum_{p, q} (\Delta_{pq} a^\dagger_p a^\dagger_q + \text{h.c.}) + \text{constant} where - :math:`M` is a Hermitian `n_qubits` x `n_qubits` matrix. - :math:`\Delta` is an antisymmetric `n_qubits` x `n_qubits` matrix. - :math:`\mu` is a real number representing the chemical potential. - :math:`\delta_{pq}` is the Kronecker delta symbol. We separate the chemical potential :math:`\mu` from :math:`M` so that we can use it to adjust the expectation value of the total number of particles. Attributes: chemical_potential(float): The chemical potential :math:`\mu`. """ def __init__(self, hermitian_part, antisymmetric_part=None, constant=0.0, chemical_potential=0.0): r""" Initialize the QuadraticHamiltonian class. Args: hermitian_part(ndarray): The matrix :math:`M`, which represents the coefficients of the particle-number-conserving terms. This is an `n_qubits` x `n_qubits` numpy array of complex numbers. antisymmetric_part(ndarray): The matrix :math:`\Delta`, which represents the coefficients of the non-particle-number-conserving terms. This is an `n_qubits` x `n_qubits` numpy array of complex numbers. constant(float, optional): A constant term in the operator. chemical_potential(float, optional): The chemical potential :math:`\mu`. """ n_qubits = hermitian_part.shape[0] # Initialize combined Hermitian part if not chemical_potential: combined_hermitian_part = hermitian_part else: combined_hermitian_part = ( hermitian_part - chemical_potential * numpy.eye(n_qubits)) # Initialize the PolynomialTensor if antisymmetric_part is None: super(QuadraticHamiltonian, self).__init__( {(): constant, (1, 0): combined_hermitian_part}) else: super(QuadraticHamiltonian, self).__init__( {(): constant, (1, 0): combined_hermitian_part, (1, 1): 0.5 * antisymmetric_part, (0, 0): -0.5 * antisymmetric_part.conj()}) # Add remaining attributes self.chemical_potential = chemical_potential @property def combined_hermitian_part(self): """The Hermitian part including the chemical potential.""" return self.n_body_tensors[1, 0] @property def antisymmetric_part(self): """The antisymmetric part.""" if (1, 1) in self.n_body_tensors: return 2. 
* self.n_body_tensors[1, 1] else: return numpy.zeros((self.n_qubits, self.n_qubits), complex) @property def hermitian_part(self): """The Hermitian part not including the chemical potential.""" return (self.combined_hermitian_part + self.chemical_potential * numpy.eye(self.n_qubits)) @property def conserves_particle_number(self): """Whether this Hamiltonian conserves particle number.""" discrepancy = numpy.max(numpy.abs(self.antisymmetric_part)) return numpy.isclose(discrepancy, 0.0) def add_chemical_potential(self, chemical_potential): """Increase (or decrease) the chemical potential by some value.""" self.n_body_tensors[1, 0] -= (chemical_potential * numpy.eye(self.n_qubits)) self.chemical_potential += chemical_potential def ground_energy(self): """Return the ground energy.""" orbital_energies, _, constant = ( self.diagonalizing_bogoliubov_transform()) return numpy.sum(orbital_energies[ numpy.where(orbital_energies < 0.0)[0]]) + constant def majorana_form(self): r"""Return the Majorana represention of the Hamiltonian. Any quadratic Hamiltonian can be written in the form .. math:: \frac{i}{2} \sum_{j, k} A_{jk} f_j f_k + \text{constant} where the :math:`f_i` are normalized Majorana fermion operators: .. math:: f_j = \frac{1}{\sqrt{2}} (a^\dagger_j + a_j) f_{j + N} = \frac{i}{\sqrt{2}} (a^\dagger_j - a_j) and :math:`A` is a (2 * `n_qubits`) x (2 * `n_qubits`) real antisymmetric matrix. This function returns the matrix :math:`A` and the constant. """ hermitian_part = self.combined_hermitian_part antisymmetric_part = self.antisymmetric_part # Compute the Majorana matrix using block matrix manipulations majorana_matrix = numpy.zeros((2 * self.n_qubits, 2 * self.n_qubits)) # Set upper left block majorana_matrix[:self.n_qubits, :self.n_qubits] = numpy.real(-0.5j * ( hermitian_part - hermitian_part.conj() + antisymmetric_part - antisymmetric_part.conj())) # Set upper right block majorana_matrix[:self.n_qubits, self.n_qubits:] = numpy.real(0.5 * ( hermitian_part + hermitian_part.conj() - antisymmetric_part - antisymmetric_part.conj())) # Set lower left block majorana_matrix[self.n_qubits:, :self.n_qubits] = numpy.real(-0.5 * ( hermitian_part + hermitian_part.conj() + antisymmetric_part + antisymmetric_part.conj())) # Set lower right block majorana_matrix[self.n_qubits:, self.n_qubits:] = numpy.real(-0.5j * ( hermitian_part - hermitian_part.conj() - antisymmetric_part + antisymmetric_part.conj())) # Compute the constant majorana_constant = (0.5 * numpy.real(numpy.trace(hermitian_part)) + self.n_body_tensors[()]) return majorana_matrix, majorana_constant def diagonalizing_bogoliubov_transform(self, spin_sector=None): r"""Compute the unitary that diagonalizes a quadratic Hamiltonian. Any quadratic Hamiltonian can be rewritten in the form .. math:: \sum_{j} \varepsilon_j b^\dagger_j b_j + \text{constant}, where the :math:`b^\dagger_j` are a new set fermionic creation operators that satisfy the canonical anticommutation relations. The new creation operators are linear combinations of the original ladder operators. In the most general case, creation and annihilation operators are mixed together: .. math:: \begin{pmatrix} b^\dagger_1 \\ \vdots \\ b^\dagger_N \\ \end{pmatrix} = W \begin{pmatrix} a^\dagger_1 \\ \vdots \\ a^\dagger_N \\ a_1 \\ \vdots \\ a_N \end{pmatrix}, where :math:`W` is an :math:`N \times (2N)` matrix. 
However, if the Hamiltonian conserves particle number then creation operators don't need to be mixed with annihilation operators and :math:`W` only needs to be an :math:`N \times N` matrix: .. math:: \begin{pmatrix} b^\dagger_1 \\ \vdots \\ b^\dagger_N \\ \end{pmatrix} = W \begin{pmatrix} a^\dagger_1 \\ \vdots \\ a^\dagger_N \\ \end{pmatrix}, This method returns the matrix :math:`W`. Args: spin_sector (optional str): An optional integer specifying a spin sector to restrict to: 0 for spin-up and 1 for spin-down. Should only be specified if the Hamiltonian includes a spin degree of freedom and spin-up modes do not interact with spin-down modes. If specified, the modes are assumed to be ordered so that spin-up orbitals come before spin-down orbitals. Returns: orbital_energies(ndarray) A one-dimensional array containing the :math:`\varepsilon_j` diagonalizing_unitary (ndarray): A matrix representing the transformation :math:`W` of the fermionic ladder operators. If the Hamiltonian conserves particle number then this is :math:`N \times N`; otherwise it is :math:`N \times 2N`. If spin sector is specified, then `N` here represents the number of spatial orbitals rather than spin orbitals. constant(float) The constant """ n_modes = self.combined_hermitian_part.shape[0] if spin_sector is not None and n_modes % 2: raise ValueError( 'Spin sector was specified but Hamiltonian contains ' 'an odd number of modes' ) if self.conserves_particle_number: return self._particle_conserving_bogoliubov_transform(spin_sector) else: # TODO implement this if spin_sector is not None: raise NotImplementedError( 'Specifying spin sector for non-particle-conserving ' 'Hamiltonians is not yet supported.' ) return self._non_particle_conserving_bogoliubov_transform( spin_sector) def _particle_conserving_bogoliubov_transform(self, spin_sector): n_modes = self.combined_hermitian_part.shape[0] if spin_sector is not None: n_sites = n_modes // 2 def index_map(i): return i + spin_sector*n_sites spin_indices = [index_map(i) for i in range(n_sites)] matrix = self.combined_hermitian_part[ numpy.ix_(spin_indices, spin_indices)] orbital_energies, diagonalizing_unitary_T = numpy.linalg.eigh( matrix) else: matrix = self.combined_hermitian_part if _is_spin_block_diagonal(matrix): up_block = matrix[:n_modes//2, :n_modes//2] down_block = matrix[n_modes//2:, n_modes//2:] up_orbital_energies, up_diagonalizing_unitary_T = ( numpy.linalg.eigh(up_block)) down_orbital_energies, down_diagonalizing_unitary_T = ( numpy.linalg.eigh(down_block)) orbital_energies = numpy.concatenate( (up_orbital_energies, down_orbital_energies)) diagonalizing_unitary_T = numpy.block([ [up_diagonalizing_unitary_T, numpy.zeros((n_modes//2, n_modes//2))], [numpy.zeros((n_modes//2, n_modes//2)), down_diagonalizing_unitary_T]]) else: orbital_energies, diagonalizing_unitary_T = numpy.linalg.eigh( matrix) return orbital_energies, diagonalizing_unitary_T.T, self.constant def _non_particle_conserving_bogoliubov_transform(self, spin_sector): majorana_matrix, majorana_constant = self.majorana_form() # Get the orthogonal transformation that puts majorana_matrix # into canonical form canonical, orthogonal = antisymmetric_canonical_form(majorana_matrix) orbital_energies = canonical[ range(self.n_qubits), range(self.n_qubits, 2 * self.n_qubits)] constant = -0.5 * numpy.sum(orbital_energies) + majorana_constant # Create the matrix that converts between fermionic ladder and # Majorana bases normalized_identity = (numpy.eye(self.n_qubits, dtype=complex) / numpy.sqrt(2.)) 
majorana_basis_change = numpy.eye( 2 * self.n_qubits, dtype=complex) / numpy.sqrt(2.) majorana_basis_change[self.n_qubits:, self.n_qubits:] *= -1.j majorana_basis_change[:self.n_qubits, self.n_qubits:] = normalized_identity majorana_basis_change[self.n_qubits:, :self.n_qubits] = 1.j * normalized_identity # Compute the unitary and return diagonalizing_unitary = majorana_basis_change.T.conj().dot( orthogonal.dot(majorana_basis_change)) return orbital_energies, diagonalizing_unitary[:self.n_qubits], constant def diagonalizing_circuit(self): r"""Get a circuit for a unitary that diagonalizes this Hamiltonian This circuit performs the transformation to a basis in which the Hamiltonian takes the diagonal form .. math:: \sum_{j} \varepsilon_j b^\dagger_j b_j + \text{constant}. Returns ------- circuit_description (list[tuple]): A list of operations describing the circuit. Each operation is a tuple of objects describing elementary operations that can be performed in parallel. Each elementary operation is either the string 'pht' indicating a particle-hole transformation on the last fermionic mode, or a tuple of the form :math:`(i, j, \theta, \varphi)`, indicating a Givens rotation of modes :math:`i` and :math:`j` by angles :math:`\theta` and :math:`\varphi`. """ _, transformation_matrix, _ = self.diagonalizing_bogoliubov_transform() if self.conserves_particle_number: # The Hamiltonian conserves particle number, so we don't
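# --- Illustrative usage sketch (not part of the original module) -----------
# A minimal example of how the class above might be used. The matrices below
# are made up for illustration, and the import path can vary between
# OpenFermion versions.
import numpy
from openfermion import QuadraticHamiltonian

hermitian_part = numpy.array([[1.0, 0.5],
                              [0.5, 2.0]])        # M, Hermitian
antisymmetric_part = numpy.array([[0.0, 0.3],
                                  [-0.3, 0.0]])   # Delta, antisymmetric

quad_ham = QuadraticHamiltonian(hermitian_part,
                                antisymmetric_part,
                                constant=0.0,
                                chemical_potential=0.1)

print(quad_ham.conserves_particle_number)          # False, since Delta != 0
majorana_matrix, majorana_constant = quad_ham.majorana_form()
orbital_energies, transform, constant = (
    quad_ham.diagonalizing_bogoliubov_transform())
print(quad_ham.ground_energy())
# --------------------------------------------------------------------------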
from actual data, from running or overall average, and also from the additional channels.

## Combustion PV-graph

The p-V graph (pressure over volume) can show actual and averaged pressure data.

Image 62: p-V graph shows actual and averaged pressure data

## Standard display types

Cycle-based results such as MaxPressure are calculated for every cycle. We get a single value every two revolutions for 4-stroke engines, and one value for every revolution in 2-stroke engines. Cycle-based results can be shown in various displays. The common displays for cycle-based results are: Digital, Analogue, Bar, and Recorder.

Image 63: Standard display collection

## How to Store the Measurement Data?

You have several options to start and stop storing:

• manually, and
• automatically with various settings of the trigger conditions.

When using overall averaged data (single-value results of statistics), read the information in this chapter carefully. It is important to keep in mind when the statistics calculation starts and stops! Otherwise the overall statistic results will not match the stored data!

## Manual Start-Stop storing

If you want to manually start and stop storing, select the Storing type: always fast. Find the Storing settings under Measure -> Ch. setup -> Storing. You can start storing your data directly in the setup screen by pressing 'Store'. Or you can first go to Measure to take a look at the live data and then start storing whenever you like.

Image 64: Choose the storing type

To stop storing, just press the Stop button. Pressing Pause will stop storing the data. When you are in pause mode you have 2 options:

• press Resume to continue storing, and
• press Stop to end the measurement and close the data file.

In pause mode, the overall statistics calculation is not interrupted. When you stop the measurement, the overall statistic values are stored in the data file. The illustration below points out this difference more clearly. When you look at the graph you can see 2 marked ranges:

• The Stored data time: the time from the start of storing (when we pressed the Store button) to the moment we pressed the Pause button.
• The Average calculation time: the full time shown in the graph. After pressing Pause, the measurement still continues (and the statistics are still calculated), but the measurement data is not stored in the data file. When we press Stop at the end, the value of the statistics is written to the data file.

The overall averaged result (the green channel: Cyl/Ave/P1/MaxP) does not match the expected average of the data values (blue channel: P1MaxP) within the Stored data time range, because the calculation continued after the Pause button was pressed, right up to Stop.

Image 65: Average calculation and stored data time

When you open the data file in Analyse mode, the recorder will only contain the data of the Stored data time range and the average value will not match the expected value (the average is calculated from store start to Stop). Pause will stop storing the data but will not stop the overall statistic calculation. Use the Stop button to terminate your measurement if averaged values must match the data file. Storing will always reset the overall statistic calculation! When reloading such a data file, the average calculation may appear wrong because you can no longer see the complete data set that was used for it.
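To illustrate the mismatch numerically, here is a small sketch with made-up per-cycle MaxPressure values; the cycle count and the split between the Store-to-Pause range and the Pause-to-Stop range are purely hypothetical.

import numpy as np

# Made-up per-cycle MaxPressure values [bar] for 10 cycles.
max_p = np.array([98.0, 101.5, 99.2, 100.8, 102.1, 97.4, 103.0, 99.9, 101.2, 100.5])

# Store was pressed at cycle 0 and Pause at cycle 6, so only cycles 0..5 are stored.
stored = max_p[:6]

# The overall statistic keeps running until Stop, i.e. over all 10 cycles.
overall_average = max_p.mean()
stored_average = stored.mean()

print(f"average of the stored cycles : {stored_average:.2f} bar")
print(f"overall averaged result      : {overall_average:.2f} bar")
# The two values differ, which is exactly the mismatch described above.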
To correct this, it is possible to recalculate the complete CEA mathematics from the stored data file.

Image 66: Press Offline math

Press Offline math, enter the CEA module setup, and change the calculation state of the module from Calculated to Offline.

Image 67: Enter the CEA module setup

Go back to Review and you can Recalculate the CEA mathematics. The overall averaged pressure channel now matches the stored pressure data. Press Save to overwrite the original stored CEA data inside this data file.

Image 68: Recalculate results

With Offline math you can modify existing Math modules and/or add new Math modules. For example, inside the CA-Module you can add channels like Heat-Release which were not stored during acquisition.

## Storing a defined number of cycles

DewesoftX offers various trigger conditions for starting and stopping the acquisition. When you want to store a fixed number of cycles, you can use the Stop storing after XXX CEA cycles feature. In the example below, storing will be stopped after 100 cycles.

Image 69: Store a defined number of cycles

## Start-Stop on channel condition

The aim of triggered storing is to store on external events. The start and the stop trigger can be any channel. Additionally, pre-trigger and post-trigger times can be defined. DewesoftX offers various trigger conditions for starting and stopping the acquisition. As described in the chapter Manual Start-Stop storing, we need to take care of the calculation method of the overall statistics values to get the expected results. For triggered start and stop storing, you must select the Storing type: fast on trigger. Now we have the possibility to define Start storing and Stop storing conditions.

Image 70: Triggered start and stop storing

Let's make an example in which storing should start when the maximum pressure is above 100 bar and should stop after 100 cycles. The Start storing condition is a simple edge on channel P1/MaxP. The Stop storing condition uses the channel Cycle count; as Mode we need Delta amplitude to stop after 100 cycles.

Image 71: Start trigger condition

Image 72: Stop trigger condition

This will already work (i.e. it stores only the data of the 100 cycles), but we also need to take care of the overall statistics. With the current settings, the data file will remain open after the 100 cycles and the overall statistics will remain active. To avoid this, we can enable Stop storing after 1 trigger: the storing will then be stopped after 100 cycles and the DewesoftX data file will be closed (and will contain the overall statistics for the 100 cycles). When the next trigger occurs, a completely new data file will be created. With the settings above, the average channels are reset at ARM of the measurement, so the Average channels will include the cycle value before the trigger event occurs as well. If average values are required for further analysis, they must be recalculated in post-processing as described before.

## How to Analyze Combustion Data?

In the Analysis mode of DewesoftX you can load a data file and:

• review the data,
• modify or add math modules, and
• print the complete screen to generate a report.

For analyzing recorded cycles, the yellow cursor can be moved to browse through the cycles.

Image 73: Use the yellow cursor to move and browse through the cycles

Similar to the Measurement mode, you can modify or add new Visual controls or Displays.
All these modifications can be stored to the data file with Save file changes. You can also load the measurement screen layout and formulas from another data file with Load display and offline math. By pressing Edit in the top right corner, a context menu opens which gives you the possibility to copy the image, or the data shown on the current display, to the clipboard or to a file.

Image 74: Saving file changes

Image 75: Save to file option, which allows you to copy data and images

If further analysis is needed, various export data formats are supported.

## Angle-domain data

To export angle-domain data we have to change the Data presentation to CEA data. With this setting, all output channels from the CEA module can be exported as angle-domain data.

Image 77: For exporting the angle-domain data, set the Data presentation to CEA data

The results can be categorized into 4 groups:

• Cycle data as Vector: angle-domain data like pressure, integrated heat-release, additional channels...
• Averaged Cycle data as Vector: one angle vector for the complete measurement, like the average pressure.
• Once per Cycle data as Scalar: cycle-based data like max. pressure (value and position), I50, MEP values...
•
$X$ and $Y$ in the notation of that lemma chosen as $X:=Y:=Y_0$ ) to first obtain \begin{align*} \frac{d}{dt}\langle \Pi_t u(t), v(t) \rangle_{Y_0, Y_0^*} = \langle (\Pi_tu(t))', v(t) \rangle_{Y_0, Y_0^*} + \langle \Pi_tu(t), v'(t) \rangle_{Y_0, Y_0^*}, \end{align*} which upon integrating and using the compact support of $\varphi$ leads to \begin{align*} \int_0^T\langle (\Pi_tu(t))', v(t) \rangle_{Y_0, Y_0^*} &= -\int_0^T \langle \Pi_tu(t), v'(t) \rangle_{Y_0, Y_0^*}.\qedhere \end{align*} \end{proof} Finally, we are able to prove the main result. \begin{proof}[Proof of Theorem \ref{thm:equivalence}] Suppose $u\in\mathcal{W}^{p,q}(X_0, Y_0)$, then immediately $\phi^X_{(\cdot)} u(\cdot)\in L^p_X$, so it remains to prove that this function has a weak time derivative in $L^q_Y$. Let $\eta\in\mathcal{D}_{Y^*}$, then \begin{align*} \int_0^T \big\langle \phi_t^X u(t), \dot \eta(t)\big\rangle_{X(t), \, X^*(t)} &= \int_0^T \big\langle \phi_t^X u(t), (\phi_{-t}^Y)^*\big((\phi_t^Y)^* \eta(t)\big)'\big\rangle_{Y(t), \, Y^*(t)} \\ &= \int_0^T \big\langle \Pi_t u(t), \big((\phi_t^Y)^* \eta(t)\big)'\big\rangle_{Y_0,\, Y_0^*} \\ &= - \int_0^T \left\langle \bar{\Pi}_tu'(t), \left(\phi_{t}^{Y}\right)^* \eta(t)\right\rangle_{Y_0,\, Y_0^*} + \langle \hat{\Lambda}(t) u(t), \left(\phi_{t}^{Y}\right)^* \eta(t)\rangle_{Y_0^{**},Y_0^*} \tag{by Proposition \ref{prop:newCrit1}}\\ &= - \int_0^T \left\langle \phi_t^{Y} \big( \bar{\Pi}_tu'(t)\big), \eta(t)\right\rangle_{Y(t),\, Y^*(t)} - \int_0^T \lambda(t; \phi_t^X u(t), \eta(t)), \end{align*} from where we conclude that $t\mapsto \phi_t^X u(t)$ has a weak time derivative as desired. For the converse direction, we begin by fixing $u\in \W^{p,q}(X,Y)$. By definition, for any $\eta\in\mathcal{D}_{Y^*}$, \begin{align*} \int_0^T \big\langle \dot u(t), \eta(t)\big\rangle_{Y(t), Y^*(t)} = -\int_0^T \big\langle u(t), \dot \eta(t)\big\rangle_{X(t), X^*(t)} - \int_0^T \lambda(t; u(t), \eta(t)), \end{align*} which we can pullback, arguing as in the previous paragraph and rearrange to obtain \begin{align*} \int_0^T \left\langle \phi_{-t}^Y \dot u(t), \left(\phi_{t}^{Y}\right)^* \eta(t)\right\rangle_{Y_0, Y_0^*} + \Big\langle \hat{\Lambda}(t)\phi_{-t}^X &u(t), \left(\phi_{t}^{Y}\right)^* \eta(t)\Big\rangle_{Y_0^{**}, Y_0^*} = - \int_0^T \left\langle \Pi_t\left(\phi_{-t}^X u(t)\right), \big(\left(\phi_{t}^{Y}\right)^* \eta(t)\big)'\right\rangle_{Y_0, Y_0^*}. \end{align*} Letting $\varphi := \left(\phi_{(\cdot)}^{Y}\right)^* \eta \in \mathcal{D}((0,T); Y_0^*)$ and using assumption \eqref{ass:rangeOfJ}, this is equivalent to \begin{align*} \int_0^T \left\langle \phi_{-t}^Y \dot u(t) + \mathcal{J}_{Y_0}^{-1} \hat\Lambda(t)\phi_{-t}^X u(t), \varphi(t)\right\rangle_{Y_0, Y_0^*} = - \int_0^T \left\langle \Pi_t\left(\phi_{-t}^X u(t)\right), \varphi'(t) \right\rangle_{Y_0, Y_0^*} , \end{align*} from where we conclude that \begin{align*} \big(\Pi_{t} \phi^X_{-t} u(t) \big)' = \phi_{-t}^Y \dot u(t) + \mathcal{J}_{Y_0}^{-1} \hat\Lambda(t) \phi_{-t}^X u(t) \end{align*} with $\Pi\phi^X_{-(\cdot)} u\in \mathcal{W}^{p,p \land q}(X_0, Y_0)$. By Proposition \ref{prop:evolvingreverse} it now follows that $\phi_{-(\cdot)}^X u\in \mathcal{W}^{p,q}(X_0, Y_0)$ The equivalence of norms is a result of the uniform boundedness of the flow maps and their inverses, and from the assumptions on $\hat \Lambda$ and $\bar \Pi$. 
\end{proof} \section{The Gelfand triple $X(t)\subset H(t)\subset X^*(t)$ setting}\label{sec:gelfandtriple} We now specialise the theory and results of \S \ref{sec:generalCase} to the important case of a Gelfand triple (recall \S \ref{sec:gelfandtripleBackground}) \begin{align*} X(t) \xhookrightarrow{d} H(t) \hookrightarrow X^*(t) \end{align*} for all $t \in [0,T]$, that is, $X(t)$ is a reflexive Banach space continuously and densely embedded into a Hilbert space $H(t)$ which has been identified with its dual via the Riesz map. This setup arises frequently in the study of evolutionary variational problems and several concrete examples will be given in \S \ref{sec:examples} and \S \ref{sec:application}. In the context of \S \ref{sec:generalCase}, we are taking $Y(t) := X^*(t)$ with the inclusion of $X(t)$ into $Y(t)$ given through compositions of the maps involved in the Gelfand triple. Naturally, we wish to make use of the theory developed in the previous sections and the basic assumptions that one needs (namely, Assumption \ref{ass:generalcaseChanges}) translated into this Gelfand triple framework are as follows. \begin{ass}\label{ass:gelfandTripleCase} For all $t \in [0,T]$, assume the existence of maps \[\phi_t^H\colon H_0 \to H(t), \qquad \phi_t^X := \phi_t^H|_{X_0} \colon X_0 \to X(t)\] such that \[(H(t), \phi_t^H)_{t \in [0,T]} \quad\text{and}\quad (X(t), \phi_t^X)_{t \in [0,T]} \text{ are compatible pairs.}\] We assume the measurability condition \eqref{ass:measurabilityOfDual}, i.e., \[t \mapsto \norm{\phi_{-t}^*f}{X^*(t)} \text{ is measurable for all $f \in X_0^*$}.\] Furthermore, suppose that \begin{itemize} \item[(i)] for fixed $u\in H_0$, \[t\mapsto \|\phi_t^H u\|_{H(t)}^2 \text{ is continuously differentiable};\] \item[(ii)] for fixed $t\in [0,T]$, \[(u,v)\mapsto \dfrac{\partial }{\partial t}(\phi_t^H u, \phi_t^H v)_{H(t)} \text{ is continuous,} \] and there exists $C>0$ such that, for almost all $t\in [0,T]$ and for any $u, v\in H_0$, \begin{align}\label{eq:lambdaGF} \left|\dfrac{\partial }{\partial t}(\phi_t^H u, \phi_t^H v)_{H(t)}\right| \leq C \|u\|_{H_0} \|v\|_{H_0}. \end{align} \end{itemize} \end{ass} It follows that \[(X^*(t), (\phi_{-t}^X)^*)_{t\in [0,T]}\text{ is a compatible pair.}\] Under the final assumption above, the map $\hat{\Lambda}(t)\colon X_0 \to X_0^*$, defined in Definition \ref{defn:generalOperators}, is in fact such that $\hat{\Lambda}(t)\colon H_0\to H_0^*$ is bounded and linear with \begin{align*} \langle\hat{\Lambda}(t)u, v\rangle_{H_0^*, H_0} = \hat{\lambda}(t; u, v) \quad \forall u, v\in H_0. \end{align*} \begin{remark}\label{rem:changesGelfandTriple} Parts (i) and (ii) of Assumption \ref{ass:gelfandTripleCase} say that, with\footnote{See Remark \ref{rem:observationsHatBetc} (i).} \begin{equation}\label{eq:PiTAsHMap} \Pi_t = (\phi_t^H)^A \phi_t^H, \quad \pi(t;u,v) = (\phi_t^H u, \phi_t^H v)_{H(t)}, \end{equation} the map $(u,v)\mapsto \hat\lambda(t; u, v) = \partial \pi(t; u, v)\slash \partial t \text{ is continuous}$ and there exists $C>0$ such that, for almost all $t\in [0,T]$ and for any $u, v\in H_0$, \begin{align*} |\hat\lambda(t; u,v)| \leq C \|u\|_{H_0} \|v\|_{H_0}. \end{align*} \end{remark} Taking into view the Hilbert structure, the definition of the weak time derivative in \eqref{eq:genweakderiv} becomes the following. 
\begin{definition}[Weak time derivative] We say $u\in L^1_X$ has a \emph{weak time derivative} $v\in L^1_{X^*}$ if \begin{align*} \int_0^T (u(t), \dot \eta(t) )_{H(t)} \;\mathrm{d}t &= -\int_0^T \langle v(t), \eta(t)\rangle_{X^*(t), X(t)} \;\mathrm{d}t - \int_0^T \lambda(t; u(t), \eta(t)) \;\mathrm{d}t \quad \forall \eta\in \mathcal{D}_{X}. \end{align*} \end{definition} It is convenient to state Proposition \ref{prop:temamType} applied to this setting. \begin{prop}[Characterisation of the weak time derivative]\label{prop:temamTypeGT} Let $u \in L^p_X$ and $g \in L^q_{X^*}$. Then $\dot u =g$ if and only if \begin{equation*} \frac{d}{dt}(u(t), \phi_{t}^H v)_{H(t)} = \langle g(t), \phi_{t}^X v \rangle_{X^*(t), X(t)} + \lambda(t; u(t), \phi_{t}^H v) \quad \forall v \in X_0. \end{equation*} \end{prop} \subsection{Differentiating the inner product: transport theorem} We now specialise Theorem \ref{thm:transport} to this setting. We first obtain the extra regularity $\W(X, X^*)\hookrightarrow C^0_H$ as a consequence of the evolving space equivalence property, and then use it to obtain a general statement. \begin{theorem}[Transport theorem in the Gelfand triple setting]\label{thm:transportGelfandTriple} Let $p \in [1,\infty]$ and suppose that there exists an evolving space equivalence between $\mathcal{W}^{p,p'}(X_0, X_0^*)$ and $\W^{p,p'}(X, X^*)$. Then \begin{itemize} \item[(i)] the embedding $\W^{p,p'}(X, X^*)\hookrightarrow C^0_H$ is continuous; \item[(ii)] given $u, v\in \W^{p,p'}(X, X^*)$, the map \begin{align}\label{eq:TTgelfand1} t\mapsto (u(t), v(t))_{H(t)} \end{align} is absolutely continuous and we have, for almost all $t\in [0,T]$, \begin{align} \dfrac{d}{dt} (u(t), v(t))_{H(t)} = \big\langle \dot u(t), v(t)\big\rangle_{X^*(t), X(t)} + \big\langle u(t), \dot v(t)\big\rangle_{X(t), X^*(t)} + \lambda(t; u(t), v(t))\label{eq:transportTheoremGelfandTriple}. \end{align} \end{itemize} \end{theorem} \begin{proof} The proof of (i) follows from \begin{align*} u\in \W^{p,p'}(X,X^*) \Longleftrightarrow \phi_{-(\cdot)}^X u\in \mathcal W^{p,p'}(X_0, X_0^*) \Longrightarrow \phi_{-(\cdot)}^X u\in C\left([0,T]; H_0\right) \Longleftrightarrow u\in C^0_H, \end{align*} where we have used the assumption, Theorem \ref{lem:standardTransportTheoremDualityForGelfand} and the fact that $\phi_t^H|_{X_0} = \phi_t^X$. We now turn to the proof of (ii). The fact that \eqref{eq:TTgelfand1} is an element of $L^1(0,T)$ and that \eqref{eq:transportTheoremGelfandTriple} is the weak time derivative of \eqref{eq:TTgelfand1} follows as in the proof of Theorem \ref{thm:transport}, so it suffices now to check that the right-hand side of \eqref{eq:transportTheoremGelfandTriple} is also in $L^1(0,T)$. Due to (i) and the stronger assumption \eqref{eq:lambdaGF} we may conclude with \begin{equation*} \int_0^T |\lambda(t; u(t), v(t))| \;\mathrm{d}t \leq C \int_0^T \|u(t)\|_{H(t)} \|v(t)\|_{H(t)} \;\mathrm{d}t \leq C\, T \, \|u\|_{C^0_H} \|v\|_{C^0_H}.\qedhere \end{equation*} \end{proof} Let us now study criteria for the spaces $\W(X,X^*)$ and $\mathcal{W}(X_0,Y_0)$ to be equivalent, as in \S \ref{sec:criteriaESE}. \subsection{Criteria for evolving space equivalence} The evolving space equivalence criteria of Theorem \ref{thm:equivalence}, tailored to the situation under consideration, are as follows. It is worth pointing out that these conditions are considerably easier to check in practice than the ones given in \cite[Theorem 2.33]{AlpEllSti15a}.
\begin{theorem}[Criteria for evolving space equivalence in the Gelfand triple setting]\label{thm:ESEGelfandTriple} Let Assumption \ref{ass:gelfandTripleCase} hold. If for all $t \in [0,T]$, \begin{align} &\Pi_t\colon X_0 \to X_0 \text{ is bounded uniformly in $t$}\label{ass:gtTtrange},\\ &\Pi_t^{-1}\colon X_0 \to X_0 \text{ exists and is bounded uniformly in $t$}\label{ass:gtTtInvertible},\\ &\Pi_{(\cdot)}^{-1} u \in \mathcal{M}(0,T;X_0) \text{ for all $u \in X_0$} \label{ass:gtmeasurabilityOnTtAndInv},\\ &\Pi^{-1}\colon \mathcal{W}^{p,q}(X_0, X_0) \to \mathcal{W}^{p, p\land q}(X_0, X_0) \label{ass:gtremainsInSpace}, \end{align} then $\W^{p,q}(X,X^*)$ and $\mathcal{W}^{p,q}(X_0,X_0^*)$ are equivalent. \medskip \noindent More precisely, with $\bar \Pi_t$ as in \eqref{eq:defnOfBarPiGT}, \begin{itemize} \item[(i)] under \eqref{ass:gtTtrange}, if $u\in \mathcal{W}^{p,q}(X_0,X_0^*)$, then $\phi_{(\cdot)}^X u\in \W^{p,q}(X,X^*)$ and \begin{align} \partial^\bullet \phi_t^X u(t) = (\phi_{-t}^X)^* \bar \Pi_t u'(t).\label{eq:equiv1GT} \end{align} \item[(ii)] under \eqref{ass:gtTtrange}--\eqref{ass:gtremainsInSpace}, if $u\in \W^{p,q}(X,X^*)$, then $\phi_{-(\cdot)}^X u\in \mathcal W^{p,q}(X_0,X_0^*)$ and \begin{align} \big(\phi_{-t}^X u(t)\big)' = \bar \Pi_t^{-1} (\phi_{t}^X)^* \dot u(t).\label{eq:equiv2GT} \end{align} \end{itemize} \end{theorem} \begin{proof} The idea is to verify the assumptions of Theorem \ref{thm:equivalence}. Since we are in the reflexive setting, assumption \eqref{ass:rangeOfJ} is automatic and Remark \ref{rem:daggerVsDual} applies and we do not need to distinguish between $\Pi^\dagger$ and $\Pi^*$. Assumption \eqref{ass:gtTtrange} implies that the existence of the dual $\Pi^\#_t\colon X_0^* \to X_0^*$ to $\Pi_t$ considered as an operator $\Pi_t\colon X_0 \to X_0$ which is defined (as usual)
of constraints that are maximally uniquely solvable w.r.t.\ the causal ordering graph, both before and after a soft intervention, Theorem \ref{thm:soft interventions} shows that such a soft intervention does not have an effect on variables that cannot be reached by a directed path from that constraint in the causal ordering graph, while it may have an effect on other variables.\footnote{Our result generalizes Theorem 6.1 in \citet{Simon1953} for linear self-contained systems of equations. The proof of our theorem is similar.} \begin{restatable}{theorem}{softinterventions} \label{thm:soft interventions} Let $\mathcal{M}=\tuple{\B{\X}, \B{X}_W, \B{\Phi}, \BG}$ be a system of constraints with coarse decomposition $\mathrm{CD}(\BG)=\tuple{T_I,T_C,T_O}$. Suppose that $\mathcal{M}$ is maximally uniquely solvable w.r.t.\ the causal ordering graph $\mathrm{CO}(\BG)$ and let $\B{X}^*=\B{g}(\B{X}_W)$ be a solution of $\mathcal{M}$. Let $f\in (T_C\cup T_O)\cap F$ and assume that the intervened system $\mathcal{M}_{\mathrm{si}(f, \phi'_f,c'_f)}$ is also maximally uniquely solvable w.r.t.\ $\mathrm{CO}(\BG)$. Let $\B{X}' = \B{h}(\B{X}_W)$ be a solution of $\mathcal{M}_{\mathrm{si}(f, \phi'_f,c'_f)}$. If there is no directed path from $f$ to $v\in (T_C\cup T_O) \cap V$ in $\mathrm{CO}(\BG)$ then $X^*_v=X'_v$ almost surely. On the other hand, if there is a directed path from $f$ to $v$ in $\mathrm{CO}(\BG)$ then $X^*_v$ may have a different distribution than $X'_v$, depending on the details of the model $\mathcal{M}$. \end{restatable} Example \ref{ex:bathtub soft interventions} shows that the presence of a directed path in the causal ordering graph for the equilibrium equations of the bathtub system implies a causal effect for almost all parameter values. This illustrates that non-effects and \emph{generic} effects can be read off from the causal ordering graph.\footnote{If a directed path from an equation vertex $f$ to a variable vertex $v$ implies that an intervention on $f$ changes the distribution of the solution component $X_v$ for almost all values (w.r.t.\ Lebesgue measure) of the parameters, then we say that there is a \emph{generic} causal effect of $f$ on $v$.} \begin{example} \label{ex:bathtub soft interventions} Recall the system of constraints for the filling bathtub in Section \ref{sec:application to bathtub}. Think of an experiment where the gravitational constant $g$ is changed so that it takes on a different value $g'$ without altering the other equations that describe the bathtub system. Such an experiment is, at least in theory, feasible. For example, it can be accomplished by accelerating the bathtub system or by moving the bathtub system to another planet. We can model the effect on the equilibrium distribution in such an experiment by a soft intervention targeting $f_P$ that replaces the constraint $\Phi_{f_P}$ by \begin{align} \tuple{X_{V(f_P)}\mapsto U_{w_1}(g'U_{w_2} X_{v_D} - X_{v_P}),\,\, 0,\,\, V(f_P) = \{v_D,v_P,w_1,w_2\}}. \end{align} Which variables are and which are not affected by this soft intervention? We can read off the effects of this soft intervention from the causal ordering graph in Figure \ref{fig:bathtub full causal ordering}. There is no directed path from $f_P$ to $v_K, v_I, v_P$ or $v_O$. Therefore, perhaps surprisingly, Theorem \ref{thm:soft interventions} tells us that the soft intervention targeting $f_P$ neither has an effect on the pressure $X_{v_P}$ at equilibrium nor on the outflow rate $X_{v_O}$ at equilibrium. 
Since there is a directed path from $f_P$ to $v_D$, the water level $X_{v_D}$ at equilibrium may be different after a soft intervention on $f_P$. If the gravitational constant $g$ is equal to zero, then the system of constraints for the bathtub is not maximally uniquely solvable w.r.t.\ the causal ordering graph (except if $U_{w_I} = 0$ almost surely). For all other values of the parameter $g$ the generic effects and non-effects of soft interventions on other constraints of the bathtub system can be read off from the causal ordering graph and are presented in Table \ref{tab:soft interventions cog}. \end{example} \begin{table} \caption{The effects of soft interventions on constraints in the causal ordering graph for the bathtub system in Figure \ref{fig:bathtub full causal ordering}.} \label{tab:soft interventions cog} \begin{center} \begin{tabular}{l l l} \toprule target & generic effect & non-effect \\ \midrule $f_K$ & $X_{v_K}$, $X_{v_P}$, $X_{v_D}$ & $X_{v_I}$, $X_{v_O}$ \\ $f_I$ & $X_{v_I}$, $X_{v_P}$, $X_{v_O}$, $X_{v_D}$ & $X_{v_K}$ \\ $f_P$ & $X_{v_D}$ & $X_{v_K}$, $X_{v_I}$, $X_{v_P}$, $X_{v_O}$ \\ $f_O$ & $X_{v_P}$, $X_{v_D}$ & $X_{v_K}$, $X_{v_I}$, $X_{v_O}$ \\ $f_D$ & $X_{v_P}$, $X_{v_O}$, $X_{v_D}$ & $X_{v_K}$, $X_{v_I}$ \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{The effects of perfect interventions} \label{sec:effects perfect interventions} A \emph{perfect intervention} acts on a variable and a constraint. Definition \ref{def:perfect intervention} shows that it replaces the targeted constraint by a constraint that sets the targeted variable equal to a constant. Note that this definition of perfect interventions is very general and allows interventions for which the intervened system of constraints is not maximally uniquely solvable w.r.t.\ the causal ordering graph. In this work, we will only consider the subset of perfect interventions that target clusters in the causal ordering graph, for which the intervened system is also maximally uniquely solvable w.r.t.\ the causal ordering graph. We consider an analysis of necessary conditions on interventions for the intervened system to be consistent beyond the scope of this work. \begin{definition} \label{def:perfect intervention} Let $\mathcal{M}=\tuple{\B{\X}, \B{X}_W, \B{\Phi}, \BG=\tuple{V,F,E} }$ be a system of constraints and let $\xi_v\in\X_v$. A \emph{perfect intervention $\mathrm{do}(f,v,\xi_v)$ targeting the variable $v\in V\setminus W$ and the constraint $f\in F$} results in the intervened system $\mathcal{M}_{\mathrm{do}(f,v,\xi_v)} = \tuple{\B{\X}, \B{X}_W, \B{\Phi}_{\mathrm{do}(f,v,\xi_v)}, \BG_{\mathrm{do}(f,v)}}$ where \begin{enumerate} \item $\B{\Phi}_{\mathrm{do}(f,v,\xi_v)} = \left(\B{\Phi}\setminus \Phi_f\right) \cup \{\Phi'_f\}$ with $\Phi'_f=\tuple{X_v \mapsto X_v, \xi_v, \{v\}}$, \item $\BG_{\mathrm{do}(f,v)}=\tuple{V,F,E'}$ with $E'=\{(i-j)\in E: i,j \neq f \}\cup \{(v-f)\}$. \end{enumerate} \end{definition} Perfect interventions on a set of variable-constraint pairs $\{(f_1,v_1), \ldots, (f_n,v_n)\}$ in a system of constraints are denoted by $\mathrm{do}(S_F,S_V,\B{\xi}_{S_V})$ where $S_F=\tuple{f_1,\ldots, f_n}$ and $S_V=\tuple{v_1,\ldots,v_n}$ are tuples. 
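Both Theorem \ref{thm:soft interventions} and the results for perfect interventions below read off (generic) effects and non-effects as reachability along directed paths in the causal ordering graph, as in Table \ref{tab:soft interventions cog}. The following small sketch illustrates this mechanically; the edge list is a made-up toy causal ordering graph (not the bathtub graph of Figure \ref{fig:bathtub full causal ordering}), and networkx is assumed to be available.

import networkx as nx

# Hypothetical causal ordering graph: constraint vertices 'f1', 'f2', 'f3'
# and variable vertices 'v1', 'v2', 'v3' (purely illustrative edges).
co_graph = nx.DiGraph([
    ("f1", "v1"), ("v1", "f2"), ("f2", "v2"), ("f3", "v3"),
])
variables = {"v1", "v2", "v3"}

def intervention_effects(graph, target):
    """Split variable vertices into generic effects (reachable by a directed
    path from `target`) and non-effects (unreachable), as in the theorems."""
    reachable = nx.descendants(graph, target)
    effects = sorted(reachable & variables)
    non_effects = sorted(variables - reachable)
    return effects, non_effects

print(intervention_effects(co_graph, "f1"))   # (['v1', 'v2'], ['v3'])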
For a bipartite graph $\BG$ so that its subgraph induced by $(V\cup F)\setminus W$ is self-contained, Lemma \ref{lemma:self-contained after intervention} shows that the subgraph of the intervened bipartite graph $\BG_{\mathrm{do}(S_F,S_V)}$ induced by $(V\cup F)\setminus W$ is also self-contained when $S=(S_F\cup S_V)$ is a cluster in $\mathrm{CO}(\BG)$ with $S\cap W=\emptyset$. \begin{restatable}{lemma}{selfcontainedafterintervention} \label{lemma:self-contained after intervention} Let $\BG=\tuple{V,F,E}$ be a bipartite graph and $W\subseteq V$, so that the subgraph of $\BG$ induced by $(V\cup F)\setminus W$ is self-contained. Consider an intervention $\mathrm{do}(S_V,S_F)$ on a cluster $S= S_F\cup S_V$ with $S\cap W=\emptyset$ in the causal ordering graph $\mathrm{CO}(\BG)$. The subgraph of $\BG_{\mathrm{do}(S_F,S_V)}$ induced by $(V\cup F)\setminus W$ is self-contained. \end{restatable} Theorem \ref{theo:effects of perfect interventions on clusters} shows how the causal ordering graph can be used to read off the (generic) effects and non-effects of \emph{perfect interventions on clusters} in the complete and overcomplete sets of the associated bipartite graph under the assumption of unique solvability with respect to the complete and overcomplete sets in the causal ordering graph. \begin{restatable}{theorem}{perfectinterventions} \label{theo:effects of perfect interventions on clusters} Let $\mathcal{M}=\tuple{\B{\X}, \B{X}_W, \B{\Phi}, \BG=\tuple{V,F,E}}$ with coarse decomposition $\mathrm{CD}(\BG)=\tuple{T_I,T_C,T_O}$. Assume that $\mathcal{M}$ is maximally uniquely solvable w.r.t.\ $\mathrm{CO}(\BG)=\tuple{\V,\E}$ and let $\B{X}^*$ be a solution of $\mathcal{M}$. Let $S_F\subseteq (T_C\cup T_O)\cap F$ and $S_V\subseteq (T_C\cup T_O)\cap (V\setminus W)$ be such that $(S_F\cup S_V)\in\V$. Consider the intervened system $\mathcal{M}_{\mathrm{do}(S_F,S_V,\B{\xi}_{S_V})}$ with coarse decomposition $\mathrm{CD}(\BG_{\mathrm{do}(S_F,S_V)})=\tuple{T'_I,T'_C,T'_O}$. Let $\B{X}'$ be a solution of $\mathcal{M}_{\mathrm{do}(S_F,S_V,\B{\xi}_{S_V})}$. If there is no directed path from any $x\in S_V$ to $v\in (T_C\cup T_O)\cap V$ in $\mathrm{CO}(\BG)$ then $X^*_v=X'_v$ almost surely. On the other hand, if there is $x\in S_V$ such that there is a directed path from $x$ to $v$ in $\mathrm{CO}(\BG)$ then $X^*_v$ may have a different distribution than $X'_v$. \end{restatable} One way to determine whether a perfect intervention has an effect on a certain variable is to explicitly solve the system of constraints before and after the intervention and check which solution components are altered. In particular, when the distribution of a solution component is different for almost all parameter values, then we say that there is a generic effect. This way, we can establish the generic effects of a perfect intervention without solving the equations by relying on a solvability assumption. Example \ref{ex:bathtub perfect interventions} illustrates this notion of perfect intervention on the system of constraints for the filling bathtub that we first introduced in Example \ref{ex:bathtub intro} and shows how the generic effects and non-effects of perfect interventions on clusters can be read off from the causal ordering graph. \begin{table}[!htb] \caption{Solutions for system of constraints describing the bathtub system in Section \ref{sec:application to
# DGFF 3 – Gibbs-Markov property for entropic repulsion In the previous post, we saw that it isn’t much extra effort to define the DGFF with non-zero boundary conditions, by adding onto the zero-BC DGFF the unique (deterministic) harmonic function which extends the boundary values into the domain. We also saw how a Gibbs-Markov property applies, whereby the values taken by the field on some sub-region $A\subset D$ depend on the values taken on $D\backslash A$ only through values taken on $\partial A$. In this post, we look at how this property and some other methods are applied by Deuschel [1] to study the probability that the DGFF on a large box in $\mathbb{Z}^d$ is positive ‘everywhere’. This event can be interpreted in a couple of ways, all of which are referred to there as entropic repulsion. Everything which follows is either taken directly or paraphrased directly from [1]. I have tried to phrase this in a way which avoids repeating most of the calculations, instead focusing on the methods and the motivation for using them. Fix dimension $d\ge 2$ throughout. We let $P^0_N$ be the law of the DGFF on $V_N:=[-N,N]^d\subset \mathbb{Z}^d$ with zero boundary conditions. Then for any subset $A\subset \mathbb{Z}^d$, in an intuitively-clear abuse of notation, we let $\Omega^+(A):= \{ h_x\ge 0, x\in A\},$ be the event that some random field h takes only non-negative values on A. The goal is to determine $P^0_N ( \Omega^+(V_N))$. But for the purposes of this post, we will focus on showing bounds on the probability that the field is non-negative on a thin annulus near the boundary of $V_N$, since this is a self-contained step in the argument which contains a blog-friendly number of ideas. We set $(L_N)$ to be a sequence of integers greater than one (to avoid dividing by zero in the statement), for which $\frac{L_N}{N}\rightarrow 0$. We now define for each N, the annulus $W_N = \{v\in V_N: L_N\le d_{\mathbb{Z}^d}(v, V_N^c)\le 2L_N \}$ with radius $L_N$ set a distance $L_N$ inside the box $V_N$. We aim to control $P^N_0 (\Omega^+(W_N))$. This forms middle steps of Deuschel’s Propositions 2.5 and 2.9, which discuss $P^N_0(\Omega^+(V_{N-L_N}))$. Clearly there is the upper bound $P^N_0(\Omega^+(V_{N-L_N})) \le P^N_0(\Omega^+(W_N))$ (1) and a lower bound on $P^N_0(\Omega^+(V_{N-L_N}))$ is obtained in the second proposition by considering the box as a union of annuli then combining the bounds on each annulus using the FKG inequality. Upper bound via odds and evens After removing step (1), this is Proposition 2.5: $\limsup_{N\rightarrow \infty} \frac{L_N}{N^{d-1} \log L_N} \log P^N_0(\Omega^+(W_N)) < 0.$ (2) This is giving a limiting upper bound on the probability of the form $L_N^{-CN^{d-1}/L_N}$, though as with all LDP estimates, the form given at (2) is more instructive. Morally, the reason why it is unlikely that the field should be non-negative everywhere within the annulus is that the distribution at each location is centred, and even though any pair of values are positively correlated, this correlation is not strong enough to avoid this event being unlikely. But this is hard to corral into an upper bound argument directly. In many circumstances, we want to prove upper bounds for complicated multivariate systems by projecting to get an unlikely event for a one-dimensional random variable, or a family of independent variables, even if we have to throw away some probability. We have plenty of tools for tail probabilities in both of these settings. 
Since the DGFF is normal, a one-dimensional RV that is a linear combination (e.g. the sum) of all the field heights is a natural candidate. But in this case we would have thrown away too much probability, since the only way we could dominate is to demand that the sum $\sum_{x\in W_N}h^N_x\ge 0$, which obviously has probability 1/2 by symmetry. (3) So Deuschel splits $W_N$ into $W_N^o,W_N^e$, where the former includes all vertices with odd total parity in $W_N$ and the latter includes all the vertices with even total parity in the interior of $W_N$. (Recall that $\mathbb{Z}^d$ is bipartite in exactly this fashion). The idea is to condition on $h^N\big|_{W^o_N}$. But obviously each even vertex is exactly surrounded by odd vertices. So by the Gibbs-Markov property, conditional on the odd vertices, the values of the field at the even vertices are independent. Indeed, if for each $v\in W_N^e$ we define $\bar h_v$ to be the average of its neighbours (which is measurable w.r.t. the sigma-algebra generated by the odd vertices), then $\{h_v: v\in W_N^e \,\big|\, \sigma(h_w: w\in W_N^o)\}$ is a collection of independent normals with variance one, where the mean of $h_v$ is $\bar h_v$. To start finding bounds, we fix some threshold $m=m_N\gg 1$ to be determined later, and consider the odd-measurable event $A_N$ that at most half of the even vertices $v$ have $\bar h_v\ge m$. So $A_N^c\cap \Omega^+(W_N)$ says that all the odd vertices are non-negative and many are quite large. This certainly feels like a low-probability event, and unlike at (3), we might be able to obtain good tail bounds by projection into one dimension. In the other case, conditional on $A_N$, there are a large number of even vertices with conditional mean at most $m$, and so we can bound the probability that they are all non-negative by the product $(1-\varphi(m))^{\frac12 |W_N^e|}$. (4) Note that for this upper bound, we can completely ignore the other even vertices (those with conditional mean greater than $m$). So we’ll go back to $A_N^c \cap \Omega^+(W_N)$. For computations, the easiest one-dimensional variable to work with is probably the mean of the $\bar h_v$s across $v\in W_N^e$, since on $A_N^c\cap \Omega^+(W_N)$ this is at least $\frac{m}{2}$. Rather than focus on the calculations themselves involving $\bar S^e_N:= \frac{1}{|W_N^e|} \sum\limits_{v\in W_N^e} \bar h_v,$ let us remark that it is certainly normal and centered, and so there are many methods to bound its tail, for example $P^0_N \left( \bar S^e_N \ge \frac{m}{2} \right) \le \exp\left( \frac{-m^2}{8\mathrm{Var}(\bar S^e_N)} \right),$ (5) as used by Deuschel, just follows from an easy comparison argument within the integral of the pdf. We can tackle the variance using the Green’s function for the random walk (recall the first post in this set). But before that, it’s worth making an observation which is general and useful, namely that $\bar S^e_N$ is the expectation of $S^e_N:= \frac{1}{|W_N^e|}\sum\limits_{v\in W_N^e} h_v$ conditional on the odds. Directly from the law of total variance, the variance of any random variable $X$ is always at least as large as the variance of $\mathbb{E}[X|Y]$. So in this case, we can replace $\mathrm{Var}(\bar S^e_N)$ in (5) with $\mathrm{Var}(S^e_N)$, which can be controlled via the Green’s function calculation. Finally, we choose $m_N$ so that the probability at (4) matches the probability at (5) in scale, and this choice leads directly to (2).
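As a quick numerical sanity check on the tail bound (5) (it is just the standard Gaussian, Chernoff-type tail estimate), here is a short Python snippet comparing the exact tail of a centred normal with the exponential bound; the threshold and variance values are arbitrary.

import numpy as np
from scipy.stats import norm

m = 4.0          # threshold, playing the role of m_N
var = 1.7        # Var(S^e_N); any positive value works for the comparison

exact = norm.sf(m / 2, loc=0.0, scale=np.sqrt(var))   # P(S >= m/2) for S ~ N(0, var)
bound = np.exp(-m**2 / (8 * var))                      # right-hand side of (5)

print(exact, bound, exact <= bound)   # the exact tail never exceeds the bound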
In summary, we decomposed the event that everything is non-negative into two parts: either there are lots of unlikely local events in the field between an even vertex and its odd neighbours, or the field has to be atypically large at the odd sites. Tuning the parameter $m_N$ allows us to control both of these probabilities in the sense required. Lower bound via a sparse sub-lattice To get a lower bound on the probability that the field is non-negative on the annulus, we need to exploit the positive correlations in the field. We use a similar idea to the upper bound. If we know the field is positive and fairly large in many places, then it is increasingly likely that it is positive everywhere. The question is how many places to choose? We are going to consider a sub-lattice that lives in a slightly larger region than $W_N$ itself, and condition the field to be larger than $m=m_N$ everywhere on this lattice. We want the lattice to be sparse enough that even if we ignore positive correlations, the chance of this happening is not too small. But we also want the lattice to be dense enough that, conditional on
similar idea to what was used to construct the \hyperlink{cilsineq}{\textbf{CILS}}, we conclude that it is possible to fix \[ X_{ij} = 0, \mbox{ for all } i,j \in C, i < j. \] \end{remark} Given a solution $(\bar{x},\bar{X})$ of \textbf{CRel}, the following MIQP problem is a separation problem, which searches for a \hyperlink{cilsineq}{\textbf{CILS}\,} violated by $\bar{X}$. \hypertarget{miqp1}{ \[ \begin{array}{lll} z^*:=\, &\max_{\alpha,\beta,K} \trace(\bar{X}K) - \beta(\beta-1),&\multicolumn{1}{r}{(\mbox{\textbf{MIQP}\,}_1)}\\ \mbox{s.t.}&w'\alpha \geq c+1,\\ &\beta=e'\alpha -1,\\ &K(i,i)=0,&i=1, \ldots,n,\\ &K(i,j)\leq \alpha_i, & i,j=1, \ldots,n, \; i<j,\\ &K(i,j)\leq \alpha_j, & i,j=1, \ldots,n, \; i<j,\\ &K(i,j)\geq 0, & i,j=1, \ldots,n, \; i<j,\\ & K(i,j)\geq \alpha_i+\alpha_j-1, & i,j=1, \ldots,n, \; i<j,\\ &\alpha \in\{0,1\}^n, \; \beta\in \mathbb{R}, \; K\in \mathbb{S}^n.\\ \end{array} \] } If $\alpha^*,\beta^*,K^*$ solves \hyperlink{miqp1}{\textbf{MIQP}\,$_1$}, with $z^*>0$, the \hyperlink{cilsineq}{\textbf{CILS}\,} given by $\trace(K^*X)\leq \beta^*(\beta^* - 1)$ is violated by $\bar{X}$. The binary vector $\alpha^*$ defines the \hyperlink{ciineq}{\textbf{CI}\,} from which the cut is derived. The \hyperlink{ciineq}{\textbf{CI}\,} is specifically given by ${\alpha^*}^Tx\leq e^T{\alpha^*}-1$ and $\beta^*(\beta^*-1)$ determines the right-hand side of the \hyperlink{cilsineq}{\textbf{CILS}}. The inequality is multiplied by 2 because we consider the variable $K$ as a symmetric matrix, in order to simplify the presentation of the model. \begin{theorem}\label{rem_dom2} The valid inequality \hyperlink{cilsineq}{\textbf{CILS}\,} for \hyperlink{modelqkplifted}{\textbf{QKP}\,$_{\mbox{lifted}}$}, which is de\-ri\-ved from a valid \hyperlink{lciineq}{\textbf{LCI}\,} in the form \eqref{valid_ineq}, dominates any \hyperlink{cilsineq}{\textbf{CILS}\,} derived from a \hyperlink{ciineq}{\textbf{CI}\,} that can be lifted to the \hyperlink{lciineq}{\textbf{LCI}}. \end{theorem} \begin{proof} As $X$ is nonnegative, it is straightforward to verify that if $X$ satisfies a \hyperlink{cilsineq}{\textbf{CILS}\,} derived from a \hyperlink{lciineq}{\textbf{LCI}}, $X$ also satisfies any \hyperlink{cilsineq}{\textbf{CILS}\,} derived from a \hyperlink{ciineq}{\textbf{CI}\,} that can be lifted to the \hyperlink{lciineq}{\textbf{LCI}}. \end{proof} Any feasible solution of \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} such that $\trace(\bar{X}K) > \beta(\beta-1)$ generates a valid inequality for \hyperlink{modelqkplifted}{\textbf{QKP}\,$_{\mbox{lifted}}$} that is violated by $\bar{X}$. Therefore, we do not need to solve \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} to optimality to generate a cut. Moreover, to generate distinct cuts, we can solve \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} several times (not necessarily to optimality), each time adding to it the following ``no-good'' cut to avoid the previously generated cuts: \begin{equation} \label{no-gooda} \sum_{i\in N} \bar{\alpha}(i) (1-\alpha(i)) \geq 1, \end{equation} where $\bar{\alpha}$ is the value of the variable $\alpha$ in the solution of \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} when generating the previous cut. We note that, if $\alpha^*,\beta^*,K^*$ solves \hyperlink{miqp1}{\textbf{MIQP}\,$_1$}, then ${\alpha^*}'x\leq e'\alpha^* -1$ is a valid \hyperlink{ciineq}{\textbf{CI}\,} for $\hyperlink{modelqkp}{\textbf{QKP}}$; however, it may not be a minimal cover.
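As a purely illustrative aside (not part of the separation machinery above), the following brute-force sketch enumerates covers of a tiny, made-up instance and checks the corresponding \hyperlink{cilsineq}{\textbf{CILS}\,} for violation. It only mimics, for very small $n$, what \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} searches for; the instance data and the helper name are hypothetical.
\begin{verbatim}
from itertools import combinations

def separate_cils_bruteforce(w, c, X_bar, eps=1e-9):
    # Enumerate covers C (sum of weights >= c+1, |C| >= 2) and return the most
    # violated cover inequality in the lifted space (CILS):
    #     sum_{i<j in C} X[i][j] <= beta*(beta-1)/2,  with beta = |C| - 1.
    n = len(w)
    best = None
    for size in range(2, n + 1):
        for C in combinations(range(n), size):
            if sum(w[i] for i in C) < c + 1:
                continue  # not a cover
            beta = len(C) - 1
            lhs = sum(X_bar[i][j] for i, j in combinations(C, 2))
            violation = lhs - beta * (beta - 1) / 2.0
            if violation > eps and (best is None or violation > best[2]):
                best = (C, beta, violation)
    return best

# Hypothetical data: 4 items, capacity 5, and a fractional relaxation value X_bar.
w, c = [3, 3, 3, 2], 5
X_bar = [[0.0, 0.9, 0.9, 0.2],
         [0.9, 0.0, 0.9, 0.2],
         [0.9, 0.9, 0.0, 0.2],
         [0.2, 0.2, 0.2, 0.0]]
print(separate_cils_bruteforce(w, c, X_bar))  # most violated cover here: (0, 1, 2), beta = 2
\end{verbatim}
In practice one would of course use \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} (or a heuristic), since this enumeration is exponential in $n$.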
Aiming at generating stronger valid cuts, based on Theorem \ref{rem_dom2}, we might add to the objective function of \hyperlink{miqp1}{\textbf{MIQP}\,$_1$} the term $-\delta e'\alpha$, for some weight $\delta>0$. The objective function would then favor minimal covers, which could be lifted to a facet-defining \hyperlink{lciineq}{\textbf{LCI}}, which would finally generate the \hyperlink{cilsineq}{\textbf{CILS}}. We should also emphasize that if the \hyperlink{cilsineq}{\textbf{CILS}\,} derived from a \hyperlink{ciineq}{\textbf{CI}\,} is violated by a given $\bar{X}$, then clearly the \hyperlink{cilsineq}{\textbf{CILS}\,} derived from the \hyperlink{lciineq}{\textbf{LCI}\,} will also be violated by $\bar{X}$. Now, we also note that, besides defining one cover inequality in the lifted space considering all possible pairs of indexes in $C$, we can also define a set of cover inequalities in the lifted space, considering, for each inequality, a partition of the indexes in $C$ into subsets of cardinality 1 or 2. In this case, the right-hand side of the inequalities is never greater than $\beta/2$. The idea is made precise below. \hypertarget{scilsineq}{ \begin{definition}[Set of cover inequalities in the lifted space, \textbf{SCILS}] \label{defscils} Let $C \subset N $ and $\beta<|C|$ as in inequality \eqref{valid_ineq}. Let \begin{enumerate} \item $C_{s}:=\{(i_1,j_1), \ldots,(i_{p},j_{p})\}$ be a partition of $C$, if $|C|$ is even. \item $C_{s}:=\{(i_1,j_1), \ldots,(i_{p},j_{p})\}$ be a partition of $C\setminus \{i_0\} \,\, \mbox{for each} \,\, i_0 \in C$, if $|C|$ is odd and $\beta$ is odd. \item $C_{s}:=\{(i_0,i_0),(i_1,j_1), \ldots,(i_p,j_p)\}$, where $\{(i_1,j_1), \ldots,(i_{p},j_{p})\}$ is a partition of $C\setminus \{i_0\} \,\, \mbox{for each} \,\, i_0 \in C$, if $|C|$ is odd and $\beta$ is even. \end{enumerate} In all cases, $i_k<j_k$ for all $k=1, \ldots,p$.\\ The inequalities in the \textbf{SCILS}\, derived from \eqref{valid_ineq} are given by \begin{equation} \label{cut2} \displaystyle \sum_{(i,j)\in C_{s}} X_{ij} \leq \left\lfloor \frac{\beta}{2} \right\rfloor, \end{equation} for all partitions $C_{s}$ defined as above. \end{definition} } \begin{theorem} \label{thmscils} If inequality \eqref{valid_ineq} is valid for $\hyperlink{modelqkp}{\textbf{QKP}}$, then the inequalities in the \hyperlink{scilsineq}{\textbf{SCILS}\,} \; \eqref{cut2} are valid for \hyperlink{modelqkplifted}{\textbf{QKP}\,$_{\mbox{lifted}}$}. \end{theorem} \begin{proof} The proof of the validity of \hyperlink{scilsineq}{\textbf{SCILS}\,} is based on the lifting relation $X_{ij}=x_ix_j$. We note that if the binary variable $x_i$ indicates whether or not the item $i$ is selected in the solution, the variable $X_{ij}$ indicates whether or not the pair of items $i$ and $j$ are both selected in the solution. \begin{enumerate} \item If $|C|$ is even, $C_s$ is a partition of $C$ into exactly $|C|/2$ subsets with two elements each, and therefore, if at most $\beta$ elements of $C$ can be selected in the solution, clearly at most $ \left\lfloor \frac{\beta}{2} \right\rfloor$ subsets of $C_s$ can also be selected. \item If $|C|$ and $\beta$ are odd, $C_s$ is a partition of $C\setminus \{i_0\} $ into exactly $(|C|-1)/2$ subsets with two elements each, where $i_0$ can be any element of $C$. In this case, if at most $\beta$ elements of $C$ can be selected in the solution, clearly at most $\frac{\beta-1}{2} \left(= \left\lfloor \frac{\beta}{2} \right\rfloor\right)$ subsets of $C_s$ can also be selected.
\item If $|C|$ is odd and $\beta$ is even, $C_s$ is the union of $\{(i_0,i_0)\}$ with a partition of $C\setminus \{i_0\} $ into exactly $(|C|-1)/2$ subsets with two elements each, where $i_0$ can be any element of $C$. In this case, if at most $\beta$ elements of $C$ can be selected in the solution, clearly at most $\frac{\beta}{2}\left( = \left\lfloor \frac{\beta}{2} \right\rfloor\right)$ subsets of $C_s$ can also be selected. \end{enumerate} \end{proof} Given a solution $(\bar{x},\bar{X})$ of \textbf{CRel}, we now present a mixed integer linear programming (MILP) separation problem, which searches for an inequality in \hyperlink{scilsineq}{\textbf{SCILS}\,} that is most violated by $\bar{X}$. Let $A\in \{0,1\}^{n\times \frac{n(n+1)}{2}}$. In the first $n$ columns of $A$ we have the $n\times n$ identity matrix. In the remaining $n(n-1)/2$ columns of the matrix, there are exactly two elements equal to 1 in each column. All columns are distinct. For example, for $n=4$, $$ A:=\left(\begin{array}{cccccccccc} 1&0&0&0&1&1&1&0&0&0\\ 0&1&0&0&1&0&0&1&1&0\\ 0&0&1&0&0&1&0&1&0&1\\ 0&0&0&1&0&0&1&0&1&1\\ \end{array}\right). $$ The columns of $A$ represent all the subsets of items in $N$ with one or two elements. Let \hypertarget{milp2}{ \[ \begin{array}{lll} z^*:=\, &\max_{\alpha,v,K,y} \trace(\bar{X}K) - 2v,&\multicolumn{1}{r}{(\mbox{\textbf{MILP}\,}_2)}\\ \mbox{s.t.}&w'\alpha \geq c+1,\\ &K(i,i)=2y(i),&i=1, \ldots,n,\\ &\sum_{i=1}^n y(i) \leq 1, \\ &K(i,j) = \sum_{t=n+1}^{n(n+1)/2} A(i,t)A(j,t)y(t), & i,j=1, \ldots,n,i<j,\\ &v \geq (e'\alpha-1)/2 - 0.5,\\ &v \leq (e'\alpha-1)/2,\\ &y(t)\leq 1-A(i,t) + \alpha(i), & i=1, \ldots,n,\; t=1, \ldots,\frac{n(n+1)}{2},\\ &\alpha \leq Ay \leq \alpha +\left(\frac{n(n+1)}{2}\right)(1-\alpha),\\ &\alpha \in\{0,1\}^n, \; y\in\{0,1\}^{\frac{n(n+1)}{2}},\\ & v\in \mathbb{Z}, \; K\in \mathbb{S}^n. \end{array} \] } If $\alpha^*,v^*,K^*,y^*$ solves \hyperlink{milp2}{\textbf{MILP}\,$_2$}, with $z^*>0$, then the particular inequality in \hyperlink{scilsineq}{\textbf{SCILS}\,} given by \begin{equation} \label{partscils} \trace(K^*X)\leq 2v^* \end{equation} is violated by $\bar{X}$. The binary vector $\alpha^*$ defines the \hyperlink{ciineq}{\textbf{CI}\,} from which the cut is derived. As the \hyperlink{ciineq}{\textbf{CI}\,} is given by ${\alpha^*}'x\leq e'\alpha^*-1$, we can conclude that the cut generated belongs either to case (1) or to case (3) in Definition \ref{defscils}. This fact is considered in the formulation of \hyperlink{milp2}{\textbf{MILP}\,$_2$}. The vector $y^*$ defines a partition $C_{s}$ as presented in case (3), if $\sum_{i=1}^n y(i) =1$, and in case (1), otherwise. We finally note that the number 2 on the right-hand side of \eqref{partscils} is due to the symmetry of the matrix $K^*$. We may now repeat the observations made for \hyperlink{miqp1}{\textbf{MIQP}\,$_1$}. Any feasible solution of \hyperlink{milp2}{\textbf{MILP}\,$_2$} such that $\trace(\bar{X}K) > 2v$ generates a valid inequality for \textbf{CRel}\,, which is violated by $\bar{X}$. Therefore, we do not need to solve \hyperlink{milp2}{\textbf{MILP}\,$_2$} to optimality to generate a cut.
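As a small, hedged illustration of the matrix $A$ (using 0-based indices and a hypothetical helper \texttt{build\_A} that is not part of the formulation above), the sketch below reproduces the $n=4$ example and indicates how a vector $y$ selecting disjoint pair-columns encodes a partition $C_s$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def build_A(n):
    # Columns of A are the indicators of all subsets of {0,...,n-1} with one or
    # two elements: the n singletons first, then the n(n-1)/2 pairs.
    A = np.zeros((n, n * (n + 1) // 2), dtype=int)
    A[:, :n] = np.eye(n, dtype=int)
    for t, (i, j) in enumerate(combinations(range(n), 2), start=n):
        A[i, t] = A[j, t] = 1
    return A

print(build_A(4))  # reproduces the 4 x 10 example matrix shown above
# A vector y selecting the disjoint pair-columns {0,1} and {2,3} encodes the
# partition C_s of C = {0,1,2,3}, and the corresponding SCILS inequality reads
#     X[0,1] + X[2,3] <= floor(beta / 2).
\end{verbatim}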
Moreover, to generate distinct cuts, we can solve \hyperlink{milp2}{\textbf{MILP}\,$_2$} several times (not necessarily to optimality), each time adding to it the following suitable ``no-good'' cut to avoid the previously generated cuts: \begin{equation} \label{no-goodb} \sum_{i=1}^{\frac{n(n+1)}{2}} \bar{y}(i) (1-y(i)) \geq 1, \end{equation} where $\bar{y}$ is the value of the variable $y$ in the solution of \hyperlink{milp2}{\textbf{MILP}\,$_2$} when generating the previous cut. The \hyperlink{ciineq}{\textbf{CI}\,}\; ${\alpha^*}'x\leq e'\alpha^* -1$ may not be a minimal cover. Aiming at generating stronger valid cuts, we might again add to the objective function of \hyperlink{milp2}{\textbf{MILP}\,$_2$} the term $-\delta e'\alpha$, for some weight $\delta>0$. The objective function would then favor minimal covers, which could be lifted to a facet-defining \hyperlink{lciineq}{\textbf{LCI}}. In this case, however, after computing the \hyperlink{lciineq}{\textbf{LCI}}, we have to solve \hyperlink{milp2}{\textbf{MILP}\,$_2$} again, with $\alpha$ fixed at values that represent the \hyperlink{lciineq}{\textbf{LCI}}, and $v$ fixed so that the right-hand side of the inequality
you want to chnage the title, it should be moved to retained the history and not copied...ummm...in words of one syllable, please? :) In case you prefer the preposition in in the name of the article, let me know. Thank you for your patience. :) -- Frous (talk) 21:44, 30 November 2010 (UTC) • If a title (sorry, two syllables) is wrong, move the article to the right title--don't copy it. Copying "moves" only the text, not the history of the article and its many versions and edits; copying it does not preserve who did what to the article, and thus erases other editors' contributions. A move preserves all that. Drmies (talk) 23:15, 30 November 2010 (UTC) Thanks for helping our friend. I reversed the copy before rushing out to a conference sessions. --Bduke (Discussion) 01:41, 1 December 2010 (UTC) Yep, thanks! But is the redirecting function reserved only for admins? I see no "move" button on the toolbar when I'm editing articles, so might want to consult you, Bduke, in case the [[3]] prefers "at". -- Frous (talk) 15:11, 1 December 2010 (UTC) It is in the menu that drops down when you move your mouse over the down pointing triangle in the toolbar at the top. --Bduke (Discussion) 21:11, 1 December 2010 (UTC) ## Nomination for merging of Template:Infobox Australian winery Template:Infobox Australian winery has been nominated for merging with Template:Infobox winery. You are invited to comment on the discussion at the template's entry on the Templates for discussion page. Thank you. WOSlinker (talk) 21:20, 3 December 2010 (UTC) ## Talk:Progressive Award Scheme I take your point as well - my personal thought on it is get consensus to merge (which there seems to be - I'm certainly happy with that), do the merge then I don't think there would be any objections to closing the RM as moot. If I'm honest I was thinking I'd be happy to leave your closure but notice you hadn't removed the movereq template so it would still be listed on RM and so as I was going to edit it anyway I chose to undo your close even though I wasn't too bothered either way. If you reclose it I will not oppose as it seems the common sense, ignore all rules, thing to do. Dpmuk (talk) 00:20, 17 December 2010 (UTC) ## /* Religion in Scouts Australia */ Brian, I was sitting back re-reading and editing the Scouts Religious section and feel it is probably now getting more space than it deserves. Religion plays little part in the day to day australian scouts activities and it will end up being the biggest section on the page. By highlighting religion and religious intolerance I may be scaring kids off while outside of wiki I am actively trying to recruit kids. The original entry where it stated that scouts was open to non religious faiths was wrong but that has now been modified. Can I propose deleting paragraphs 1, 2, 4, 5, 8 and 9? I am happy with the wording as is that says that people unable to make the promise can't be members. Anyone with any sense can see that this excludes atheists. --Ozscout (talk) 04:41, 30 December 2010 (UTC) I have no problem with that. Some of it is rather unencyclopedic anyway. However, I think you should raise it on the talk page. On your last provocative sentence, I have been discussing the issue of religion in Scouting across the world for many years, and you would be surprised how many people disagree with you. Personally, I can see how some atheists might in honesty, see the wording of the Australian Promise as a way in, but I am not one of them. 
I think is is a way in for many Australians who simply do not care less about religion, for or against. Again, I am not one of them. I think, like Richard Dawkins but less strongly, that belief in God is both false and often dangerous. --Bduke (Discussion) 06:45, 30 December 2010 (UTC) ## Invitation to join 10 Years Of Wikipedia in Melbourne Hi Brain, I am writing to you to invite 10 years celebration program in Melbourne. Earlier I wrote to you at publicofficerwikimedia.org.au. As I havent heard back I am writing here. I was wondering will you be there? Can you please write me back at [email protected]. Regards Arif. —Preceding unsigned comment added by 128.250.22.238 (talk) 03:02, 8 January 2011 (UTC) I replied to your e-mail. Hopefully we can post details of a meetup tonight or tomorrow morning. --Bduke (Discussion) 21:14, 8 January 2011 (UTC) hello brother i am a President's Scout of Bangladesh Scouts, i made the changes to the page because it will make the page nice, some of my friends suggesteed me to make the changes, & i am a authenticated person of Bangladesh Scouts --Tahmidazuwad (talk) 21:55, 18 January 2011 (UTC) ## thnx thnx bduke brother for the nice suggestion, i will follow you, & would you plz describe the steps, how i can license my images that i wanted to upload in wiki??? :D --Tahmidazuwad (talk) 03:23, 19 January 2011 (UTC) ## Help would you plz license it by yourself??? i am trying but i can not fix it — Preceding unsigned comment added by Tahmidazuwad (talkcontribs) 13:35, 19 January 2011 (UTC) ## Olave B-P Thank you for tweaking my efforts  ;-) RobinClay (talk) 14:11, 30 January 2011 (UTC) ## ArbCom Your name has been mentioned in recent evidence for an arbitration case filed on 2010-11-18. You were not originally named as a party, but I am sending this notice proforma to editors named in evidence, before the workshop period closes. If you wish to do so, enter your statement and any other material you wish to submit to the Arbitration Committee at Wikipedia:Arbitration/Requests/Case/Longevity/Workshop#General discussion, or elsewhere on that page or the case's four talk pages. Additionally, the following resources may be of use— Thanks, JJB 21:08, 3 February 2011 (UTC) ## Chemistry I have added a detailed list of pros and cons on the chemistry talk page why I think the picture with the reaction of water is more appropriate for the page than that of the flasks. I hope to get some feedback from you and the other editor who reverted the first version of the new figure. Next time I add a new figure, I will give a longer explanation up front. Thanks for the suggestion to discuss this on the discussion page. I'm looking forward to a good exchange, and am interested to hear was everyone has to say on this matter. --Theislikerice (talk) 22:31, 6 February 2011 (UTC) ## Old Fooians Hi Bduke Thanks for your suggested compromise at Wikipedia:Categories for discussion/Log/2011 January 24#Old_Edwardians. I'm glad to see that the discussion has now been closed as "rename to People educated at Foo". Thanks for being the one to settle this. I think that it would be now be appropriate to do as you suggested, and have a followup nomination for the others. I undertook at CFD to do that, but before going ahead I wanted to ask for your thoughts on my idea for how to go about it. My first thought is to leave aside the 9 original public schools, as defined by the Public Schools Act 1868. 
Some editors may feel that all of those schools are sufficiently well-known to justify retaining the the "Old Fooians" format, even if it is replaced elsewhere. So I think that they should be treated as a special case. Personally, the only "Old Fooian" term I'm wholly persuaded on keeping is the Old Etonians, but I may be persuadable on some or all of the others ... so I suggest leaving them all aside until the others are done (and possibly omitting them entirely). My second thought is to group the remainder in batches of
space she inhabits, then she knows that her space is Euclidean, and that propositions such as the Pythagorean theorem are physically valid in her universe. But the diagram in f/1 illustrating the proof of the Pythagorean theorem in Euclid's Elements (proposition I.47) is equally valid if the page is rolled onto a cylinder, 2, or formed into a wavy corrugated shape, 3. These types of curvature, which can be achieved without tearing or crumpling the surface, are not real to her. They are simply side-effects of visualizing her two-dimensional universe as if it were embedded in a hypothetical third dimension --- which doesn't exist in any sense that is empirically verifiable to her. Of the curved surfaces in figure f, only the sphere, 4, has curvature that she can measure; the diagram can't be plastered onto the sphere without folding or cutting and pasting. So the observation of curvature doesn't imply the existence of extra dimensions, nor does embedding a space in a higher-dimensional one so that it looks curvy always mean that there will be any curvature detectable from within the lower-dimensional space. g / An artificial horizon. h / 1. A ray of light is emitted upward from the floor of the elevator. The elevator accelerates upward. 2. By the time the light is detected at the ceiling, the elevator has changed its velocity, so the light is detected with a Doppler shift. i / Pound and Rebka at the top and bottom of the tower. j / The earth is flat --- locally. k / Spacetime is locally flat. ### 7.4.2 The equivalence principle #### Universality of free-fall Although light rays and gyroscopes seem to agree that space is curved in a gravitational field, it's always conceivable that we could find something else that would disagree. For example, suppose that there is a new and improved ray called the $$\text{StraightRay}^\text{TM}$$. The StraightRay is like a light ray, but when we construct a triangle out of StraightRays, we always get the Euclidean result for the sum of the angles. We would then have to throw away general relativity's whole idea of describing gravity in terms of curvature. One good way of making a StraightRay would be if we had a supply of some kind of exotic matter --- call it $$\text{FloatyStuff}^\text{TM}$$ --- that had the ordinary amount of inertia, but was completely unaffected by gravity. We could then shoot a stream of FloatyStuff particles out of a nozzle at nearly the speed of light and make a StraightRay. Normally when we release a material object in a gravitational field, it experiences a force $$mg$$, and then by Newton's second law its acceleration is $$a=F/m=mg/m=g$$. The $$m$$'s cancel, which is the reason that everything falls with the same acceleration (in the absence of other forces such as air resistance). The universality of this behavior is what allows us to interpret the gravity geometrically in general relativity. For example, the Gravity Probe B gyroscopes were made out of quartz, but if they had been made out of something else, it wouldn't have mattered. But if we had access to some FloatyStuff, the geometrical picture of gravity would fail, because the “$$m$$” that described its susceptibility to gravity would be a different “$$m$$” than the one describing its inertia. The question of the existence or nonexistence of such forms of matter turns out to be related to the question of what kinds of motion are relative. Let's say that alien gangsters land in a flying saucer, kidnap you out of your back yard, konk you on the head, and take you away. 
When you regain consciousness, you're locked up in a sealed cabin in their spaceship. You pull your keychain out of your pocket and release it, and you observe that it accelerates toward the floor with an acceleration that seems quite a bit slower than what you're used to on earth, perhaps a third of a gee. There are two possible explanations for this. One is that the aliens have taken you to some other planet, maybe Mars, where the strength of gravity is a third of what we have on earth. The other is that your keychain didn't really accelerate at all: you're still inside the flying saucer, which is accelerating at a third of a gee, so that it was really the deck that accelerated up and hit the keys. There is absolutely no way to tell which of these two scenarios is actually the case --- unless you happen to have a chunk of FloatyStuff in your other pocket. If you release the FloatyStuff and it hovers above the deck, then you're on another planet and experiencing genuine gravity; your keychain responded to the gravity, but the FloatyStuff didn't. But if you release the FloatyStuff and see it hit the deck, then the flying saucer is accelerating through outer space. The nonexistence of FloatyStuff in our universe is called the equivalence principle. If the equivalence principle holds, then an acceleration (such as the acceleration of the flying saucer) is always equivalent to a gravitational field, and no observation can ever tell the difference without reference to something external. (And suppose you did have some external reference point --- how would you know whether it was accelerating?) ##### Example 25: The artificial horizon The pilot of an airplane cannot always easily tell which way is up. The horizon may not be level simply because the ground has an actual slope, and in any case the horizon may not be visible if the weather is foggy. One might imagine that the problem could be solved simply by hanging a pendulum and observing which way it pointed, but by the equivalence principle the pendulum cannot tell the difference between a gravitational field and an acceleration of the aircraft relative to the ground --- nor can any other accelerometer, such as the pilot's inner ear. For example, when the plane is turning to the right, accelerometers will be tricked into believing that “down” is down and to the left. To get around this problem, airplanes use a device called an artificial horizon, which is essentially a gyroscope. The gyroscope has to be initialized when the plane is known to be oriented in a horizontal plane. No gyroscope is perfect, so over time it will drift. For this reason the instrument also contains an accelerometer, and the gyroscope is always forced into agreement with the accelerometer's average output over the preceding several minutes. If the plane is flown in circles for several minutes, the artificial horizon will be fooled into indicating that the wrong direction is vertical. #### Gravitational Doppler shifts and time dilation An interesting application of the equivalence principle is the explanation of gravitational time dilation. As described on p. 384, experiments show that a clock at the top of a mountain runs faster than one down at its foot. To calculate this effect, we make use of the fact that the gravitational field in the area around the mountain is equivalent to an acceleration. Suppose we're in an elevator accelerating upward with acceleration $$a$$, and we shoot a ray of light from the floor up toward the ceiling, at height $$h$$. 
The time $$\Delta t$$ it takes the light ray to get to the ceiling is about $$h/c$$, and by the time the light ray reaches the ceiling, the elevator has sped up by $$v=a\Delta t=ah/c$$, so we'll see a red-shift in the ray's frequency. Since $$v$$ is small compared to $$c$$, we don't need to use the fancy Doppler shift equation from subsection 7.2.8; we can just approximate the Doppler shift factor as $$1-v/c\approx 1-ah/c^2$$. By the equivalence principle, we should expect that if a ray of light starts out low down and then rises up
MAP configuration, suggesting the possibility of performing updates faster than recomputing from scratch. In this paper we present an algorithm for efficiently performing such updates under arbitrary changes to the model. Our algorithm is within a logarithmic factor of the optimal and is asymptotically never slower than re-computing from-scratch: if a modification to the model requires $m$ updates to the MAP configuration of $n$ random variables, then our algorithm requires $O(m\log{(n/m)})$ time; re-computing from scratch requires $O(n)$ time. We evaluate the practical effectiveness of our algorithm by considering two problems in genomic signal processing, CpG region segmentation and protein sidechain packing, where a MAP configuration must be repeatedly updated. Our results show significant speedups over recomputing from scratch. [ BibTex ] | [ PDF ] A Low Density Lattice Decoder via Non-parametric Belief Propagation Bickson, Ihler, Avissar, Dolev The recent work of Sommer, Feder and Shalvi presented a new family of codes called low density lattice codes (LDLC) that can be decoded efficiently and approach the capacity of the AWGN channel. A linear time iterative decoding scheme which is based on a message-passing formulation on a factor graph is given. In the current work we report our theoretical findings regarding the relation between the LDLC decoder and belief propagation. We show that the LDLC decoder is an instance of non-parametric belief propagation and further connect it to the Gaussian belief propagation algorithm. Our new results enable borrowing knowledge from the non-parametric and Gaussian belief propagation domains into the LDLC domain. Specifically, we give more general convergence conditions for convergence of the LDLC decoder (under the same assumptions of the original LDLC convergence analysis). We discuss how to extend the LDLC decoder from Latin square to full rank, non-square matrices. We propose an efficient construction of sparse generator matrix and its matching decoder. We report preliminary experimental results which show our decoder has comparable symbol to error rate compared to the original LDLC decoder. [ BibTex ] | [ PDF ] Bounding Sample Errors in Approximate Distributed Latent Dirichlet Allocation Ihler, Newman Latent Dirichlet allocation (LDA) is a popular algorithm for discovering structure in large collections of text or other data. Although its complexity is linear in the data size, its use on increasingly massive collections has created considerable interest in parallel implementations. Approximate distributed'' LDA, or AD-LDA, approximates the popular collapsed Gibbs sampling algorithm for LDA models while running on a distributed architecture. Although this algorithm often appears to perform well in practice, its quality is not well understood or easily assessed. In this work, we provide some theoretical justification of the algorithm, and modify AD-LDA to track an error bound on its performance. Specifically, we upper-bound the probability of making a sampling error at each step of the algorithm (compared to an exact, sequential Gibbs sampler), given the samples drawn thus far. We show empirically that our bound is sufficiently tight to give a meaningful and intuitive measure of approximation error in AD-LDA, allowing the user to understand the trade-off between accuracy and efficiency. 
[ BibTex ] | [ PDF ] Particle-Based Variational Inference for Continuous Systems Ihler, Frank, Smyth Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete variables have not as yet been incorporated. In this context we extend a recently proposed particle-based belief propagation algorithm to provide a general framework for adapting discrete message-passing algorithms to inference in continuous systems. The resulting algorithms behave similarly to their purely discrete counterparts, extending the benefits of these more advanced inference techniques to the continuous domain. [ BibTex ] | [ PDF ] Bayesian detection of non-sinusoidal periodic patterns in circadian expression data Chudova, Ihler, Lin, Andersen, Smyth Motivation: Cyclical biological processes such as cell division and circadian regulation produce coordinated periodic expression of thousands of genes. Identification of such genes and their expression patterns is a crucial step in discovering underlying regulatory mechanisms. Existing computational methods are biased toward discovering genes that follow sine-wave patterns. Results: We present an analysis of variance (ANOVA) periodicity detector and its Bayesian extension that can be used to discover periodic transcripts of arbitrary shapes from replicated gene expression profiles. The models are applicable when the profiles are collected at comparable time points for at least two cycles. We provide an empirical Bayes procedure for estimating parameters of the prior distributions and derive closed-form expressions for the posterior probability of periodicity, enabling efficient computation. The model is applied to two datasets profiling circadian regulation in murine liver and skeletal muscle, revealing a substantial number of previously undetected non-sinusoidal periodic transcripts in each. We also apply quantitative real-time PCR to several highly ranked non-sinusoidal transcripts in liver tissue found by the model, providing independent evidence of circadian regulation of these genes. Availability: MATLAB software for estimating prior distributions and performing inference is available for download from http://www.datalab.uci.edu/resources/periodicity/. Contact: [email protected] [ BibTex ] | [ Link ] Estimating Replicate Time-Shifts Using Gaussian Process Regression Liu, Lin, Anderson, Smyth, Ihler Motivation: Time-course gene expression datasets provide important insights into dynamic aspects of biological processes, such as circadian rhythms, cell cycle and organ development. In a typical microarray time-course experiment, measurements are obtained at each time point from multiple replicate samples. Accurately recovering the gene expression patterns from experimental observations is made challenging by both measurement noise and variation among replicates' rates of development. Prior work on this topic has focused on inference of expression patterns assuming that the replicate times are synchronized. 
We develop a statistical approach that simultaneously infers both (i) the underlying (hidden) expression profile for each gene, as well as (ii) the biological time for each individual replicate. Our approach is based on Gaussian process regression (GPR) combined with a probabilistic model that accounts for uncertainty about the biological development time of each replicate. Results: We apply GPR with uncertain measurement times to a microarray dataset of mRNA expression for the hair-growth cycle in mouse back skin, predicting both profile shapes and biological times for each replicate. The predicted time shifts show high consistency with independently obtained morphological estimates of relative development. We also show that the method systematically reduces prediction error on out-of-sample data, significantly reducing the mean squared error in a cross-validation study. Availability: Matlab code for GPR with uncertain time shifts is available at http://sli.ics.uci.edu/Code/GPRTimeshift/ Contact: [email protected] [ BibTex ] | [ Link ] Learning with Blocks: Composite Likelihood and Contrastive Divergence Asuncion, Liu, Ihler, Smyth Composite likelihood methods provide a wide spectrum of computationally efficient techniques for statistical tasks such as parameter estimation and model selection. In this paper, we present a formal connection between the optimization of composite likelihoods and the well-known contrastive divergence algorithm. In particular, we show that composite likelihoods can be stochastically optimized by performing a variant of contrastive divergence with random-scan blocked Gibbs sampling. By using higher-order composite likelihoods, our proposed learning framework makes it possible to trade off computation time for increased accuracy. Furthermore, one can choose composite likelihood blocks that match the model's dependence structure, making the optimization of higher-order composite likelihoods computationally efficient. We empirically analyze the performance of blocked contrastive divergence on various models, including visible Boltzmann machines, conditional random fields, and exponential random graph models, and we demonstrate that using higher-order blocks improves both the accuracy of parameter estimates and the rate of convergence. [ BibTex ] | [ PDF ] Particle Filtered MCMC-MLE with Connections to Contrastive Divergence Asuncion, Liu,
1.3 Inner product spaces and hilbert spaces Page 1 / 1 Review of inner products and inner product spaces. Inner products We have defined distances and norms to measure whether two signals are different from each other and to measure the “size” of a signal. However, it is possible for two pairs of signals with the same norms and distance to exhibit different behavior - an example of this contrast is to pick a pair of orthogonal signals and a pair of non-orthogonal signals, as shown in [link] . To obtain a new metric that distinguishes between orthogonal and non-orthogonal we use the inner product , which provides us with a new metric of “similarity”. Definition 1 An inner product for a vector space $\left(X,R,+,·\right)$ is a function $⟨·,·⟩:X×X\to R$ , sometimes denoted $\left(·|·\right)$ , with the following properties: for all $x,y,z\in X$ and $a\in R$ , 1. $⟨x,y⟩$ = $\overline{⟨y,x⟩}$ (complex conjugate property), 2. $⟨x+y,z⟩$ = $⟨x,z⟩+⟨y,z⟩$ (distributive property), 3. $⟨\alpha x,y⟩$ = $\alpha ⟨x,y⟩$ (scaling property), 4. $⟨x,x⟩\ge 0$ and $⟨x,x⟩=0$ if and only if $x=0$ . A vector space with an inner product is called an inner product space or a pre-Hilbert space. It is worth pointing out that properties (2-3) say that the inner product is linear, albeit only on the first input. However, if $R=\mathbb{R}$ , then the properties (2-3) hold for both inputs and the inner product is linear on both inputs. Just as every norm induces a distance, every inner product induces a norm: ${||x||}_{i}=\sqrt{⟨x,x⟩}$ . Hilbert spaces Definition 2 An inner product space that is complete under the metric induced by the induced norm is called a Hilbert space . Example 1 The following are examples of inner product spaces: 1. $X={\mathbb{R}}^{n}$ with the inner product $〈x,y〉={\sum }_{i=1}^{n}{x}_{i}{y}_{i}={y}^{T}x$ . The corresponding induced norm is given by ${||x||}_{i}=\sqrt{〈x,x〉}=\sqrt{{\sum }_{i=1}^{n}{x}_{i}^{2}}={||x||}_{2}$ , i.e., the ${\ell }_{2}$ norm. Since $\left({\mathbb{R}}^{n},\parallel ·{\parallel }_{2}\right)$ is complete, then it is a Hilbert space. 2. $X=C\left[T\right]$ with inner product $〈x,y〉={\int }_{T}x\left(t\right)y\left(t\right)dt$ . The corresponding induced norm is ${||x||}_{i}=\sqrt{{\int }_{T}x{\left(t\right)}^{2}dt}={||x||}_{2}$ , i.e., the ${L}_{2}$ norm. 3. If we allow for $X=C\left[T\right]$ to be complex-valued, then the inner product is defined by $〈x,y〉={\int }_{T}x\left(t\right)\overline{y\left(t\right)}dt$ , and the corresponding induced norm is ${||x||}_{i}=\sqrt{{\int }_{T}x\left(t\right)\phantom{\rule{3.33333pt}{0ex}}\overline{x\left(t\right)}dt}=\sqrt{{\int }_{T}{|x\left(t\right)|}^{2}dt}={||x||}_{2}$ . 4. $X={\mathbb{C}}^{n}$ with inner product $〈x,y〉={\sum }_{i=1}^{n}{x}_{i}\overline{{y}_{i}}={y}^{H}x$ ; here, ${x}^{H}$ denotes the Hermitian of $x$ . The corresponding induced norm is ${||x||}_{i}=\sqrt{{\sum }_{i=1}^{n}{|{x}_{i}|}^{2}}={||x||}_{2}$ . Theorem 1 (Cauchy-Schwarz Inequality) Assume $X$ is an inner product space. For each $x,y\in X$ , we have that $|〈x,y〉|\le {||x||}_{i}{||y||}_{i}$ , with equality if ( $i$ ) $y=ax$ for some $a\in R$ ; ( $ii$ ) $x=0$ ; or ( $iii$ ) $y=0$ . Proof: We consider two separate cases. • if $y=0$ then $⟨x,y⟩=\overline{⟨y,x⟩}=\overline{⟨0·y,x⟩}=\overline{0}\overline{⟨y,x⟩}=0⟨x,y⟩=0={\parallel x\parallel }_{i}{\parallel y\parallel }_{i}$ . The proof is similar if $x=0$ . • If $x,y\ne 0$ then $0\le ⟨x-ay,x-ay⟩=⟨x,x⟩-a⟨y,x⟩-\overline{a}⟨x,y⟩+a\overline{a}⟨y,y⟩$ , with equality if $x-ay=0$ , i.e., $x=ay$ for some $a\in R$ . 
Now set $a=\frac{⟨x,y⟩}{⟨y,y⟩}$, and so $\overline{a}=\frac{⟨y,x⟩}{⟨y,y⟩}$. We then have

$0 \le ⟨x,x⟩-\frac{⟨x,y⟩}{⟨y,y⟩}⟨y,x⟩-\frac{⟨y,x⟩}{⟨y,y⟩}⟨x,y⟩+\frac{⟨x,y⟩}{⟨y,y⟩}\frac{⟨y,x⟩}{⟨y,y⟩}⟨y,y⟩ \le ⟨x,x⟩-\frac{\overline{⟨x,y⟩}\,⟨x,y⟩}{{∥y∥}^{2}}={∥x∥}^{2}-\frac{{|⟨x,y⟩|}^{2}}{{∥y∥}^{2}}.$

This implies $\frac{{|⟨x,y⟩|}^{2}}{{∥y∥}^{2}}\le {∥x∥}^{2}$, and so since all quantities involved are positive we have $|⟨x,y⟩|\le ∥x∥·∥y∥$.

Properties of inner product spaces

In the previous lecture we discussed norms induced by inner products but failed to prove that they are valid norms. Most properties are easy to check; below, we check the triangle inequality for the induced norm.

Lemma 1 If ${∥x∥}_{i}=\sqrt{⟨x,x⟩}$, then ${∥x+y∥}_{i}\le {∥x∥}_{i}+{∥y∥}_{i}$.

From the definition of the induced norm,

${∥x+y∥}_{i}^{2}=⟨x+y,x+y⟩=⟨x,x⟩+⟨x,y⟩+⟨y,x⟩+⟨y,y⟩={∥x∥}_{i}^{2}+⟨x,y⟩+\overline{⟨x,y⟩}+{∥y∥}_{i}^{2}={∥x∥}_{i}^{2}+2\,\mathrm{real}\left(⟨x,y⟩\right)+{∥y∥}_{i}^{2}.$

At this point, we can upper bound the real part of the inner product by its magnitude: $\mathrm{real}\left(⟨x,y⟩\right)\le |⟨x,y⟩|$. Thus, we obtain

${∥x+y∥}_{i}^{2}\le {∥x∥}_{i}^{2}+2|⟨x,y⟩|+{∥y∥}_{i}^{2}\le {∥x∥}_{i}^{2}+2{∥x∥}_{i}{∥y∥}_{i}+{∥y∥}_{i}^{2}\le {\left({∥x∥}_{i}+{∥y∥}_{i}\right)}^{2},$

where the second inequality is due to the Cauchy-Schwarz inequality. Thus we have shown that ${∥x+y∥}_{i}\le {∥x∥}_{i}+{∥y∥}_{i}$.

Here's an interesting (and easy to prove) fact about inner products:

Lemma 2 If $⟨x,y⟩=0$ for all $x\in X$ then $y=0$.

Proof: Pick $x=y$, and so $⟨y,y⟩=0$. Due to the properties of an inner product, this implies that $y=0$.

Earlier, we considered whether all distances are induced by norms (and found a counterexample). We can ask the same question here: are all norms induced by inner products? The following theorem helps us check for this property.

Theorem 2 (Parallelogram Law) If a norm $∥·∥$ is induced by an inner product, then ${∥x+y∥}^{2}+{∥x-y∥}^{2}=2\left({∥x∥}^{2}+{∥y∥}^{2}\right)$ for all $x,y\in X$. This theorem allows us to rule out norms that cannot be induced.

Proof: For an induced norm we have ${∥x∥}^{2}=⟨x,x⟩$. Therefore,

${∥x+y∥}^{2}+{∥x-y∥}^{2}=⟨x+y,x+y⟩+⟨x-y,x-y⟩=⟨x,x⟩+⟨x,y⟩+⟨y,x⟩+⟨y,y⟩+⟨x,x⟩-⟨x,y⟩-⟨y,x⟩+⟨y,y⟩=2⟨x,x⟩+2⟨y,y⟩=2\left({∥x∥}^{2}+{∥y∥}^{2}\right).$

Example 2 Consider the normed space $\left(C\left[T\right],{L}_{\infty }\right)$, and recall that ${∥x∥}_{\infty }={sup}_{t\in T}|x\left(t\right)|$. If this norm is induced, then the Parallelogram law would hold. If not, then we can find a counterexample. In particular, let $T=\left[0,2\pi \right]$, $x\left(t\right)=1$, and $y\left(t\right)=cos\left(t\right)$. Then, we want to check if ${∥x+y∥}^{2}+{∥x-y∥}^{2}=2\left({∥x∥}^{2}+{∥y∥}^{2}\right)$.
We compute:

${∥x∥}_{\infty }=1, \qquad {∥y∥}_{\infty }=1,$

${∥x+y∥}_{\infty }=∥1+cos\left(t\right)∥=\underset{t\in T}{sup}|1+cos\left(t\right)|=1+1=2,$

${∥x-y∥}_{\infty }=∥1-cos\left(t\right)∥=\underset{t\in T}{sup}|1-cos\left(t\right)|=1-\left(-1\right)=2.$

Plugging into the two sides of the Parallelogram law, the left side is ${2}^{2}+{2}^{2}=8$ while the right side is $2\left({1}^{2}+{1}^{2}\right)=4$. Since $8\ne 4$, the Parallelogram law does not hold. Thus, the ${L}_{\infty }$ norm is not an induced norm.
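A quick numerical sanity check of this counterexample (a hedged sketch: the supremum over $\left[0,2\pi \right]$ is only approximated on a finite grid):

```python
import numpy as np

# Approximate the sup norm on [0, 2*pi] by sampling on a fine grid.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x = np.ones_like(t)   # x(t) = 1
y = np.cos(t)         # y(t) = cos(t)

sup_norm = lambda f: np.max(np.abs(f))
lhs = sup_norm(x + y) ** 2 + sup_norm(x - y) ** 2   # ||x+y||^2 + ||x-y||^2
rhs = 2.0 * (sup_norm(x) ** 2 + sup_norm(y) ** 2)   # 2(||x||^2 + ||y||^2)
print(lhs, rhs)  # approximately 8.0 and 4.0, so the parallelogram law fails
```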
0 & 1 \end{smallmatrix} \right) \left( \begin{smallmatrix} u & 0 \\ 0 & 1 \end{smallmatrix} \right) \mapsto \left( \begin{smallmatrix} 1 & r \\ 0 & 1 \end{smallmatrix} \right) \left( \begin{smallmatrix} 1 & 0 \\ 0 & u^{-1} \end{smallmatrix} \right) \in \Aff_-(R) \] is an isomorphism. The equalities on the finiteness length follow from the fact that $\phi$ is a quasi-isometry invariant~\cite[Corollary~9]{AlonsoFPnQI}. \end{proof} \begin{lem} \label{BorelGLSL} For any commutative ring $R$ with unity, the Borel subgroups $\mathbf{B}_n(R) \leq \GL_n(R)$ and $\mathbf{B}_n^\circ(R) \leq \SL_n(R)$ have the same finiteness length, which in turn is no greater than $\phi(\mathbf{B}_2^{\circ}(R))$. \end{lem} \begin{proof} Though stated for arbitrary rings, the proof of the lemma is essentially Bux's proof in the $S$-arithmetic case in positive characteristic~\cite[Remark~3.6]{Bux04}. Again if $\mathbb{G}_m(R)$ is \emph{not} finitely generated, then $\phi(\mathbf{B}_n(R)) = \phi(\mathbf{B}_n^\circ(R)) = \phi(\mathbf{B}_2^{\circ}(R)) = 0$, so that we go back to our standing assumption that $\mathbb{G}_m(R)$ is finitely generated. Consider the central subgroups $\mathbf{Z}_n(R) \leq \mathbf{B}_n(R)$ and $\mathbf{Z}_n^\circ(R) \leq \mathbf{B}_n^\circ(R)$ given by \[ \mathbf{Z}_n(R) = \set{\Diag(u,\ldots,u) \mid u \in R^\times} = \set{u\mathbf{1}_n \mid u \in R^\times} \cong \mathbb{G}_m(R) \] \[ \mbox{and } \, \, \, \mathbf{Z}_n^\circ(R) = \mathbf{Z}_n(R) \cap \mathbf{B}_n^\circ(R) = \set{u\mathbf{1}_n \mid u \in R^\times \mbox{ and } u^n = 1}, \] respectively. Using the determinant map and passing to projective groups we obtain the following commutative diagram of short exact sequences. \begin{center} \begin{tikzpicture} \node (Bn) {$\mathbf{B}_n(R)$}; \node (Bn0) [left=of Bn] {$\mathbf{B}_n^\circ(R)$}; \node (R*) [right=of Bn] {$\mathbb{G}_m(R)$}; \node (Z0) [above=of Bn0] {$\mathbf{Z}_n^\circ(R)$}; \node (Z) [above=of Bn] {$\mathbf{Z}_n(R)$}; \node (R*n) [above=of R*] {$\mathrm{pow}_n(\mathbb{G}_m(R))$}; \node (PBn0) [below=of Bn0] {$\mathbb{P}\mathbf{B}_n^\circ(R)$}; \node (PBn) [below=of Bn] {$\mathbb{P}\mathbf{B}_n(R)$}; \node (finite) [below=of R*] {$\frac{\mathbb{G}_m(R)}{\mathrm{pow}_n(\mathbb{G}_m(R))}$}; \draw[->>] (Bn) to node [auto] {det} (R*); \draw[->>] (Bn) to (PBn); \draw[->>] (R*) to (finite); \draw[->>] (PBn) to (finite); \draw[->>] (Z) to (R*n); \draw[->>] (Bn0) to (PBn0); \draw[arrows = {Hooks[right]->}] (R*n) to (R*); \draw[arrows = {Hooks[right]->}] (PBn0) to (PBn); \draw[arrows = {Hooks[right]->}] (Z0) to (Z); \draw[arrows = {Hooks[right]->}] (Z0) to (Bn0); \draw[arrows = {Hooks[right]->}] (Z) to (Bn); \draw[arrows = {Hooks[right]->}] (Bn0) to (Bn); \end{tikzpicture} \end{center} In the above, $\mathrm{pow}_n(u\mathbf{1}_n) = u^n$. Since $\mathbb{G}_m(R)$ is finitely generated abelian, we have that the groups of the top row and right-most column have finiteness lengths equal to $\infty$, whence by Lemma~\bref{obviousboundsonphi} we get \[ \phi(\mathbb{P}\mathbf{B}_n^\circ(R)) = \phi(\mathbf{B}_n^\circ(R)) \leq \phi(\mathbf{B}_n(R)) = \phi(\mathbb{P}\mathbf{B}_n(R)) \geq \phi(\mathbb{P}\mathbf{B}_n^\circ(R)). \] Since the group $\mathbb{G}_m(R)/\mathrm{pow}_n(\mathbb{G}_m(R))$ of the bottom right corner is a (finitely generated) torsion abelian group, it is finite, from which $\phi(\mathbb{P}\mathbf{B}_n(R)) = \phi(\mathbb{P}\mathbf{B}_n^\circ(R))$ follows, thus yielding the first claim of the lemma. 
Finally, any $\mathbf{B}_n(R)$ retracts onto $\mathbf{B}_2(R)$ via the map \begin{center} \begin{tikzpicture} \node (Bn) {$ \mathbf{B}_n(R) = \left(\begin{smallmatrix} * & * & * & \cdots & * \\ 0 & * & * & \ddots & \vdots \\ 0 & 0 & * & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & * \\ 0 & \cdots & \cdots & 0 & * \end{smallmatrix}\right) $}; \node (B2) [right=of Bn] {$ \left(\begin{smallmatrix} * & * & 0 & \cdots & 0 \\ 0 & * & 0 & \ddots & \vdots \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & 1 \end{smallmatrix}\right) \cong \mathbf{B}_2(R)$, }; \draw[->>] (Bn) to (B2); \end{tikzpicture} \end{center} which yields the second claim. \end{proof} To prove the desired inequality $\phi(\mathfrak{X}(R) \rtimes \mathcal{H}(R)) \leq \phi(\mathbf{B}_2^{\circ}(R))$, we shall use well-known matrix representations of classical groups. We warm-up by considering the simpler case where the given classical group $\mathcal{G}$ containing $\mathfrak{X} \rtimes \mathcal{H}$ is the general linear group itself, which will set the tune for the remaining cases. (Recall that $\mathcal{H}$ is a maximal torus of $\mathcal{G}$.) \begin{pps} \label{RTGLn} Theorem~\bref{apendice} holds if $\mathcal{G} = \GL_n$. \end{pps} \begin{proof} Here we take a matrix representation of $\mathcal{G} = \GL_n$ such that the given soluble subgroup $\mathfrak{X} \rtimes \mathcal{H}$ is upper triangular. In this case, the maximal torus $\mathcal{H}$ is the subgroup of diagonal matrices of $\GL_n$, i.e. \[ \mathcal{H}(R) = \mathbf{D}_n(R) = \prod\limits_{i=1}^n D_i(R) \leq \GL_n(R), \] and $\mathfrak{X}$ is identified with a subgroup of elementary matrices in a single fixed position, say $ij$ with $i < j$. That is, \[ \mathfrak{X}(R) = \mathbf{E}_{ij}(R) = \spans{\set{e_{ij}(r) \mid r \in R}} \leq \GL_n(R). \] Recall that the action of the torus $\mathcal{H}(R) = \mathbf{D}_n(R)$ on the unipotent root subgroup $\mathfrak{X}(R) = \mathbf{E}_{ij}(R)$ is given by the diagonal relations~\bbref{SteinbergRelGLn}. But such relations also imply the decomposition \begin{align*} \mathbf{E}_{ij}(R) \rtimes \mathbf{D}_n(R) & = \spans{\mathbf{E}_{ij}(R), D_i(R), D_j(R)} \times \prod\limits_{i \neq k \neq j} D_k(R) \\ & \cong \mathbf{B}_2(R) \times \mathbb{G}_m(R)^{n-2} \end{align*} because all diagonal subgroups $D_k(R)$ with $k \neq i,j$ act trivially on the elementary matrices $e_{ij}(r)$. Since we are assuming $\mathbb{G}_m(R)$ to be finitely generated, it follows from Lemmata~\doublebref{obviousboundsonphi}{obphi4} and~\bref{BorelGLSL} that \[ \phi(\mathfrak{X}(R) \rtimes \mathcal{H}(R)) = \min\set{\phi(\mathbf{B}_2(R)), \phi(\mathbb{G}_m(R)^{n-2})} = \phi(\mathbf{B}_2^{\circ}(R)). \] \end{proof} It remains to investigate the situation where the classical group $\mathcal{G}$ in the statement of Theorem~\bref{apendice} is a universal Chevalley--Demazure group scheme. Write $\mathcal{G} = \mathcal{G}_\Phi^{{\rm sc}}$, with underlying root system $\Phi$ associated to the given maximal torus $\mathcal{H} \leq \mathcal{G}_\Phi^{{\rm sc}}$ and with a fixed set of simple roots ${\Delta} \subset \Phi$. One has \[ \mathcal{H}(R) = \prod\limits_{\alpha \in \Delta} \mathcal{H}_\alpha(R), \] and $\mathfrak{X}$ is the unipotent root subgroup associated to some (positive) root $\eta \in \Phi^+$, that is, \[ \mathfrak{X}(R) = \mathfrak{X}_\eta(R) = \spans{\,\set{x_\eta(r) \mid r \in R}\,}. \] The proof proceeds by a case-by-case analysis on $\Phi$ and $\eta$. 
Instead of diving straight into all possible cases, some obvious reductions can be done. \begin{lem} \label{obviousreductionforRT} If Theorem~\bref{apendice} holds whenever $\mathcal{G}$ is a universal Chevalley--Demazure group scheme $\mathcal{G}_\Phi^{{\rm sc}}$ of rank at most four and $\mathfrak{X} = \mathfrak{X}_\eta$ with $\eta \in \Phi^+$ simple, then it holds for any universal Chevalley--Demazure group scheme. \end{lem} \begin{proof} Write $\mathfrak{X}(R) = \mathfrak{X}_\eta(R)$ and $\mathcal{H}(R) = \prod_{\alpha \in \Delta} \mathcal{H}_\alpha(R)$ as above. By~\bbref{SteinbergReflection} we can find an element $w$ in the Weyl group $W$ of $\Phi$ and a corresponding element $\omega \in \mathcal{G}_\Phi^{{\rm sc}}(R)$ such that $w(\eta) \in \Phi^+$ is a simple root and \[ \omega (\mathfrak{X}_\eta(R) \rtimes \mathcal{H}(R)) \omega^{-1} \cong \mathfrak{X}_{w(\eta)}(R) \rtimes \mathcal{H}(R). \] (The conjugation above takes place in the overgroup $\mathcal{G}_\Phi^{{\rm sc}}(R)$.) We may thus assume $\eta \in \Phi^+$ to be simple. From the Steinberg relations~\bbref{SteinbergRel} we have that \begin{align*} \mathfrak{X}(R) \rtimes \mathcal{H}(R) = \left( \mathfrak{X}_\eta(R) \rtimes \left( \underset{\spans{\eta, \alpha} \neq 0}{\prod\limits_{\alpha \in \Delta}} \mathcal{H}_\alpha(R) \right) \right) \times \underset{\spans{\eta, \beta} = 0}{\prod\limits_{\beta \in \Delta}} \mathcal{H}_\beta(R), \end{align*} yielding $\phi(\mathfrak{X}(R) \rtimes \mathcal{H}(R)) = \phi(\mathfrak{X}_\eta(R) \rtimes \mathcal{H}^\circ(R))$ by Lemma~\doublebref{obviousboundsonphi}{obphi4}, where \[ \mathcal{H}^\circ(R) = \underset{\spans{\eta, \alpha} \neq 0}{\prod\limits_{\alpha \in \Delta}} \mathcal{H}_\alpha(R). \] Inspecting all possible Dynkin diagrams, it follows that the number of simple roots $\alpha \in \Delta$ for which $\spans{\eta, \alpha} \neq 0$ is at most four. The lemma follows. \end{proof} Thus, in view of Proposition~\bref{RTGLn} and Lemma~\bref{obviousreductionforRT}, the proof of Theorem~\bref{apendice} will be complete once we establish the following. \begin{pps} \label{RTcasebycase} Theorem~\bref{apendice} holds whenever $\mathcal{G}$ is a universal Chevalley--Demazure group scheme $\mathcal{G}_\Phi^{{\rm sc}}$ with \[ \Phi \in \set{\text{A}_1, \text{A}_2, \text{C}_2, \text{G}_2, \text{A}_3, \text{B}_3, \text{C}_3, \text{D}_4} \] and $\mathfrak{X} \rtimes \mathcal{H}$ is of the form \[ \mathfrak{X} \rtimes \mathcal{H} = \mathfrak{X}_\eta \rtimes \left( \underset{\spans{\eta, \alpha} \neq 0}{\prod\limits_{\alpha \in \Delta}} \mathcal{H}_\alpha \right) \mbox{ with } \eta \in \Phi^+ \mbox{ simple.} \] \end{pps} \begin{proof} The idea of the proof is quite simple. In each case, we find a matrix group $G(\Phi,\eta,R)$ satisfying $\phi(G(\Phi,\eta,R)) = \phi(\mathbf{B}_2^{\circ}(R))$ and which fits into a short exact sequence \[ \mathfrak{X}_\eta(R) \rtimes \mathcal{H}(R) \hookrightarrow G(\Phi,\eta,R) \twoheadrightarrow Q(\Phi,\eta,R) \] where $Q(\Phi,\eta,R)$ is finitely generated abelian. In fact, $G(\Phi,\eta,R)$ can often be taken to be $\mathfrak{X}_\eta(R) \rtimes \mathcal{H}(R)$ itself so that $Q(\Phi,\eta,R)$ is trivial in many cases. The proposition then follows from Lemma~\doublebref{obviousboundsonphi}{obphi1}. To construct the matrix groups $G(\Phi,\eta,R)$ above, we mostly use Ree's matrix representations of classical groups~\cite{Ree} as worked out by Carter in~\cite{Carter}.
(Recall that the case of Type $\text{B}_n$ was cleared by Dieudonn\'e~\cite{Dieudonne} after being left open in Ree's paper.) In the exceptional case $\text{G}_2$ we follow Seligman's identification from~\cite{SeligmanLieAlgII}. We remark that Seligman's numbering of indices agrees with Carter's for $\text{G}_2$ as a subalgebra of $\text{B}_3$. \underline{Type A:}
the class file with the KVM, it has to be preverified using the preverify command that is included in the CLDC reference implementation. To preverify the class file and write the preverified version to the same directory as the original source code, use the following command:

preverify -classpath %CLDC_PATH%\common\api\classes;tmpclasses -d . ora.ch2.HelloWorld

The -classpath command-line option indicates the directories in which the preverify command should look for class files, both the core Java libraries and the class file to be preverified, while the -d option is used to control where the preverified class file will be written. The directory names supplied with the -classpath option should be separated by semicolons on the Windows platform, colons in the case of Linux or Solaris. Notice that the compiler requires a source filename, but preverify needs a fully qualified Java class name (with its parts separated by periods) instead. In the case of an application that consists of more than one class file, all class files must be preverified, although not necessarily at the same time. There are two ways to arrange for preverify to operate on more than one class file at a time. The most obvious way is to list all of the classes on the command line:

preverify -classpath %CLDC_PATH%\common\api\classes;tmpclasses -d . ora.ch2.HelloWorld ora.ch2.Help

Alternatively, if you supply one or more directory names on the command line, preverify recursively searches them and processes every class file and each ZIP and JAR file that it finds:

preverify -classpath %CLDC_PATH%\common\api\classes -d . tmpclasses

Notice that in this case, there was no need to include tmpclasses in the -classpath argument because its presence is inferred from the fact that it is the directory to be searched. The complete set of command-line options recognized by the preverify command can be found in Chapter 8. Finally, you can run the example using the kvm command:

kvm -classpath . ora.ch2.HelloWorld

which produces some very familiar output:

Hello, KVM world

Notice that the -classpath option identified only the directory search path needed to find the class file for ora.ch2.HelloWorld. There is no need to specify where the core libraries are located, because the KVM knows where to find them.[3]

2.2 The CLDC Class Libraries

CLDC addresses a wide range of platforms that do not have sufficient memory resources to support the full range of packages and classes provided by J2SE. Because CLDC is a configuration rather than a profile, it cannot have any optional features. Therefore, the packages and classes that it specifies must have a small enough footprint that they can be hosted by devices that meet only the minimum requirements of the CLDC specification. The CLDC class library is very small -- it is composed of a package containing functionality that is specific to J2ME (called javax.microedition.io), along with a selection of classes from the following packages in the core J2SE platform:[4]

• java.io
• java.lang
• java.util

All J2ME configurations and profiles include packages or classes from J2SE. When J2ME incorporates software interfaces from J2SE, it must follow several rules:

• The names of the packages or classes must be the same, wherever possible. It would not be acceptable, for example, to completely reimplement the java.lang package in a package called javax.microedition.lang if the API in the java.lang package can be used.
• The semantics of classes and methods that are carried over into J2ME must be identical to those with the same name in J2SE.
• It is not possible to add public or protected fields or methods to a class that is shared between J2SE and J2ME.

Because of these rules, J2ME packages and classes will always be a subset of the packages and classes of the same name in J2SE, and the J2ME behavior will not be surprising to developers familiar with J2SE. Furthermore, J2ME configurations and profiles are not allowed to add extra functionality in packages and classes that they share with J2SE, so upward compatibility from J2ME to J2SE is preserved. However, it is permissible to exclude from J2ME those fields, methods, and classes that are deprecated in J2SE, and this has been done by the Java Community Process expert group responsible for the CLDC specification.

[3] In fact, the core libraries are built into the KVM using a technique known as "ROMizing," which will be covered in Section 2.4.1, later in this chapter.
[4] Among other things that have been omitted due to resource constraints, CLDC does not include any support for internationalization of applications and the formatting of dates and numbers according to locale-specific conventions. If you need to write an application that is locale-sensitive, you will need to do all the hard work yourself.

You'll find complete information on which classes from J2SE are included in CLDC, and how this set compares to other J2ME configurations and profiles, in Chapter 10. Detailed information on the individual classes can be found in the reference chapters in Part II of this book. The following sections describe the most important aspects of each of the CLDC packages that distinguish them from their counterparts in J2SE.

2.2.1 The java.lang Package

The CLDC java.lang package has only half of the classes of its J2SE counterpart, and some classes that are included are not complete implementations. The major points of interest are covered in the following sections.

2.2.1.1 The Object class

The CLDC java.lang.Object class has no finalize( ) method because CLDC virtual machines do not implement finalization. Furthermore, the clone( ) method has been removed along with the java.lang.Cloneable interface. There is, therefore, no generic way to clone an object in a CLDC VM.

2.2.1.2 Number-related classes

As noted earlier, floating point operations are not supported by the CLDC VM and, as a consequence, the J2SE java.lang.Float and java.lang.Double classes are not part of the core library set. The other number classes (Byte, Integer, Long, and Short) are provided, but their J2SE base class, java.lang.Number, is not included. The numeric classes are, therefore, derived from Object instead of Number. Another difference worthy of note is that the java.lang.Comparable interface does not exist in CLDC, so CLDC numbers cannot be directly compared in the same way that their J2SE counterparts are.

2.2.1.3 Reflective features

The exclusion of all VM support for reflection means that all methods in java.lang.Class that are connected with this feature have been removed. It is still possible, however, to perform limited operations on classes whose types are not known at compile time by using the forName( ) and newInstance( ) methods.

2.2.1.4 System properties

The CLDC profile defines only a very small set of system properties that does not include any of those available with J2SE.
The properties that an implementation is required to provide are listed in Table 2-2.[5]

[5] Note that, at the time of writing, there is no consistency in the way that the default encoding is represented. The KVM returns the default encoding as ISO8859_1, which is the value required in the CLDC specification document, whereas the MIDP reference implementation returns ISO-8859-1.

Table 2-2. System Properties Defined by CLDC

microedition.configuration -- The name of the J2ME configuration that the platform supports, together with its version number. Example: CLDC-1.0
microedition.encoding -- The default character encoding that the device supports. Devices are not required to provide any extra encodings, but vendors are free to do so. There is, however, no way to find out which encodings are available. Example: ISO8859_1
microedition.platform -- The name of the platform or device. The default KVM implementation returns the value null for this property. Example: J2ME
microedition.profiles -- The J2ME profiles that the device supports, separated by spaces.
that would accommodate a dedicated camera server. Quote: Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it. Cleaning up/formalizing code: Rana pointed out that any code that messes with channel values must return them to the original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to GitHub, I will make sure to push any code that I might have missed to GitHub. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system. 14706   Thu Jun 27 20:48:22 2019  Milind  Update  Cameras  Convolutional neural networks for beam tracking And finally, a network is trained! Result summary (TL;DR :-P): No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to GitHub. More details of the experiment: Aim: 1. To train a network to check that training occurs and get a feel for what the learning might be like. 2. To set up the necessary framework to perform multiple experiments and record results in a manner facilitating comparison. 3. To track beam spot motion. What I did: 1. Set up a network that learns a framewise mapping as described here (see the sketch below). 2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames. 3. Hyperparameters: Attachment #1. 4. Did no tuning of hyperparameters. 5. Compiled and fit the models and saved the results. What I saw: 1. Attachment #2: data fed to the network after pre-processing - median blur + crop. 2. Attachment #3: learning curves. 3. Attachment #4: true and predicted motion. Nothing great. What I think is going wrong: 1. No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments. 2. Too little data. 3. Maybe wrong architecture. Well, what now? 1. Tune hyperparameters (try to get the network to overfit on the data and then test on that. We'll then know for sure that all we probably need is more data?) 2. Currently the network has around 200k parameters. Maybe reduce that. 3. Set up a network that takes as input (one example corresponding to one forward pass) a bunch of frames and predicts a vector of position values that can be used as continuous data.
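For concreteness, here is a minimal sketch (not the actual train.py, which lives on GitHub) of the framewise approach described above: median-blur and crop each frame, then regress the beam-spot position with a small CNN. The 19-pixel median filter, ReLU activations, 64-unit dense layer, 0.8 dropout, 1e-4 learning rate, batch size of 32 and 50 epochs follow the hyperparameters quoted in Attachment #1; the crop window, the number of convolution blocks and the filter counts are assumptions.

import numpy as np
from scipy.ndimage import median_filter
from tensorflow.keras import layers, models, optimizers

def preprocess(frame, crop=((100, 356), (100, 356)), filter_size=19):
    # Median-blur one grayscale frame and crop an assumed 256x256 window around the spot.
    blurred = median_filter(frame, size=filter_size)
    (r0, r1), (c0, c1) = crop
    return blurred[r0:r1, c0:c1].astype("float32") / 255.0

def build_framewise_model(input_shape=(256, 256, 1)):
    # One frame in, predicted (x, y) beam-spot position out.
    # Keras's default glorot_uniform initializer matches the quoted "xavier" choice.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(8, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(4),
        layers.Conv2D(16, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(4),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),   # dense_layer_units: 64
        layers.Dropout(0.8),                   # assuming dropout_probability is the drop fraction
        layers.Dense(2),                       # (x, y) position treated as a regression target
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4), loss="mse")
    return model

# Usage, mirroring the 0.9/0.1 split of the 1791 frames mentioned above:
# frames: array of shape (1791, 256, 256, 1); targets: array of shape (1791, 2)
# model = build_framewise_model()
# model.fit(frames, targets, validation_split=0.1, batch_size=32, epochs=50)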
Quote: I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory-based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the framewise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well. Quote: Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it.

Experiment file: train.py
batch_size: 32
dropout_probability: 0.8
eta: 0.0001
filter_size: 19
filter_type: median
initializer: xavier
num_epochs: 50
activation_function: relu
dense_layer_units: 64
... 10 more lines ...

Attachment 2: frame0.pdf Attachment 3: Learning_curves.png Attachment 4: Motion.png

14726   Thu Jul 4 18:19:08 2019  Milind  Update  Cameras  Convolutional neural networks for beam tracking The quoted elog has figures which indicate that the network did not learn (train or generalize) on the used data. This is a scary thing as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen no matter how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details. Why is the fit so perfect at the start and bad later? Well, that's because the first 90% of the test data is the training data I overfit to, and the latter is the validation data that the network has not generalized well to. Quote: And finally, a network is trained! Result summary (TL;DR :-P): No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to GitHub. More details of the experiment: Aim: 1. To train a network to check that training occurs and get a feel for what the learning might be like. 2. To set up the necessary framework to perform multiple experiments and record results in a manner facilitating comparison. 3. To track beam spot motion. What I did: 1. Set up a network that learns a framewise mapping as described here. 2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames. 3. Hyperparameters: Attachment #1. 4. Did no tuning of hyperparameters. 5. Compiled and fit the models and saved the results. What I saw: 1. Attachment #2: data fed to the network after pre-processing - median blur + crop. 2. Attachment #3: learning curves. 3. Attachment #4: true and predicted motion. Nothing great. What I think is going wrong: 1. No hyperparameter tuning.
This was only a first pass but is being reported as it will form the basis of all future experiments. 2. Too little data. 3. Maybe wrong architecture. Well, what now? 1. Tune hyperparameters (try to get the network to overfit on the data and then test on that. We'll then know for sure that all we probably need is more data?) 2. Currently the network has around 200k parameters. Maybe reduce that. 3. Set up a network that takes as input (one example corresponding to one forward pass) a bunch of frames and predicts a vector of position values that can be used as continuous data. Quote: I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory-based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the framewise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well. Quote: Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired
simple cases where data satisfy conditions of anonymity, but when a diversity of values is present, the complexity of the solutions increases exponentially. As RDF data can also be represented as a graph, graph anonymization approaches have been explored in this work. The simplicity of the graph structure assumption makes the current approaches inadequate for the Semantic Web, where heterogeneous nodes and relations are present. Some criteria of anonymization, such as k-degree, can be adapted to the Semantic Web, but the solutions to satisfy these criteria have to be modified according to the complexity of the RDF structure. Most of the works on the anonymization of RDF documents, databases, and graphs assume that the classification of the data required to satisfy the conditions of anonymity is provided by an expert user. However, the scenario of the Semantic Web complicates the task of classification, since it is difficult to understand the detailed characteristics of external datasets and to anticipate all the background knowledge possessed by adversaries. Table 4 shows our analysis in this regard. Note that none of the works on database and graph anonymization satisfies the criterion of complexity of data (heterogeneous nodes and relations). Moreover, the classification of the data is mainly provided by the proposals, and there is no information about how it was performed. We assume that the process to classify the data has been manual. Thus, a new anonymization approach able to cope with all requirements is needed to provide appropriate protection of sensitive information for the Semantic Web. Before describing how our approach addresses these requirements, the following section introduces some common terminologies and definitions of anonymization in the context of RDF. ## 4 Terminologies and Definitions For the Semantic Web, RDF is the common format to describe resources, which are abstractions of entities (documents, persons, companies, etc.) of the real world. RDF uses triples in the form of (subject, predicate, object) expressions, also named statements, to provide relationships among resources. RDF triples are composed of the following elements:
• An IRI, which is an extension of the Uniform Resource Identifier (URI) scheme to a much wider repertoire of characters from the Universal Character Set (Unicode/ISO 10646), including Chinese, Japanese, and Korean character sets [12].
• A Blank Node, representing a local identifier used in some concrete RDF syntaxes or RDF store implementations. A blank node can be associated with an identifier (rdf:nodeID) to be referenced in the local document, which is generated manually or automatically.
• A Literal Node, representing values such as strings, numbers, and dates. According to the definition in [9], it consists of two or three parts:
• A lexical form, being a Unicode string, which should be in Normal Form C to ensure that equivalent strings have a unique binary representation.
• A datatype IRI, being an IRI identifying a datatype that determines how the lexical form maps to an object value.
• A non-empty language tag as defined by “Tags for Identifying Languages” [1], if and only if the datatype IRI is http://www.w3.org/1999/02/22-rdf-syntax-ns#langString.
Table 5. Description of sets
Set I: a set of IRIs, defined as $I = \{i_1, i_2, ..., i_l\} \mid \forall\, i_i \in I$ is an IRI.
Set L: a set of literal nodes, defined as $L = \{l_1, l_2, ..., l_m\} \mid \forall\, l_i \in L$ is a literal node.
Set BN: a set of blank nodes, defined as $BN = \{bn_1, bn_2, ..., bn_n\} \mid \forall\, bn_i \in BN$ is a blank node.
Table 5 describes the sets of RDF elements that we use in our approach description. After defining the sets of RDF elements, we formally describe a triple in Definition 1. Definition 1 Triple (t): A triple, denoted as t, is defined as an atomic structure consisting of a 3-tuple with a Subject (s), a Predicate (p), and an Object (o), denoted as t : <s, p, o>, where:
• $s \in I \cup BN$ represents the subject to be described, which can be an IRI or a blank node;
• $p \in I$ is a predicate, defined as an IRI of the form namespace_prefix:predicate_name, where namespace_prefix is a local identifier of the IRI in which the predicate (predicate_name) is defined. The predicate (p) is also known as the property of the triple;
• $o \in I \cup BN \cup L$ describes the object, which can be an IRI, a blank node, or a literal.
From our motivating scenario, we can observe several triples with different RDF resources, properties, and literals:
• t1 : <genid:S1,rdf:type,dbo:School>
• t2 : <genid:S1,rdfs:label,"Hartlepool College of Further Education">
• t3 : <genid:S1,prop:latitude,1.4545>
• t4 : <genid:S1,prop:longitude,0.40244>
A set of triples defines an RDF document, obtained by encoding the triples using a predefined serialization format that complies with the W3C RDF standards, such as RDF/XML, Turtle, N3, etc. According to the structure of triples, an RDF document can be represented as an RDF graph, since the structure allows node-edge-node relations. An RDF graph is defined in Definition 2. Definition 2 RDF Graph (G): The RDF graph of an RDF document d is denoted as $G_d(N, E)$, where each triple $t_i$ from d is represented as a node-edge-node link. Therefore, G nodes (N), denoted as $n_i$, represent subjects and objects, and G edges (E), denoted as $e_j$, represent the corresponding predicates: $n_i \in \bigcup_i (t_i.s \cup t_i.o)$ and $e_j \in \bigcup_i t_i.p$ [49]. The following subsection presents the formal concepts used in this work. ### 4.1 Problem Definition As we show in the motivating scenario, there are cases in which sensitive information can be disclosed through the data published from different sources on the Web (due to data intersection). Thus, the data to be published, denoted as D, should be protected beforehand, in order to avoid the disclosure or production of sensitive information. The available information on the Web is called background knowledge. It can be provided automatically or semi-automatically by the expert user and can contain simple or complex resources (e.g., a single RDF resource, an RDF graph, text files). The background knowledge is formally defined in Definition 3. Definition 3 Background Knowledge (BK): It is a set of IRIs, considered as nodes and denoted as BK: $\{n_1, n_2, ..., n_i \mid \forall\, n_i$ is an IRI$\}$. In this work, we assume that the intersection between D and BK can disclose or produce sensitive information, hence identifiers and quasi-identifiers appear in D due to the connections among its subjects and objects. We rename both concepts to keys, defined in Definition 4, since they allow the disclosure of sensitive information. Definition 4 Keys (K): Keys are identifiers and quasi-identifiers, denoted as K. We formally define our assumption concerning the intersection between the D and BK datasets in Assumption 1.
Assumption 1 Key Detection (Intersection) (IN): The intersection between a set of triples T and a set of IRIs I is defined as the set of nodes (subjects and objects of triples) that belong to the RDF graph of T ($G_T$), denoted as IN, where each node of IN has a similar one in I. The similarity between the two nodes is measured by a similarity function (simFunc), whose value is equal to or greater than an established threshold.
$IN: T \sqcap I = \{\, n_i \in G_T \mid sim(n_i \in T, n_j \in I, \alpha, \beta, \gamma) \ge threshold \,\}$
Where:
• $\sqcap$ is an operator that defines the intersection between triples and IRIs;
• $n_i$ is a subject or object that belongs to T;
• $n_j$ is an IRI that belongs to I;
• sim is the similarity function defined in Definition 5.
The similarity function between two nodes is defined in Definition 5. Definition 5 Similarity function (simFunc): The similarity between two nodes is defined as a float value, denoted as simFunc, which takes into account three different aspects of the nodes: (i) syntactic; (ii) semantic; and (iii) context analysis, combining the three component similarities weighted by $\alpha$, $\beta$, and $\gamma$, respectively, where:
• $n_i \in I \cup BN \cup L$ and $n_j \in I$;
• Syntactic similarity is a function which considers the syntactic aspect of the nodes, whose values are in [0, 1];
• Semantic similarity is a function which considers the semantic aspect of the nodes, whose values are in [0, 1];
• Context similarity is a function which considers the incoming and outgoing relations of the
the steel-making industry, the automobile shops, the telephone, even to the new, scientific, highly developed forms of agriculture. Few of them find their way to the railroad. This is one of the most alarming symptoms of the great sick man of American business—his apparent utter inability to draw fresh, red blood to his veins.[6] A few of the roads—a very few indeed—have made distinct efforts to build up a personnel for future years by intelligent educational means. The Southern Pacific and the Union Pacific have made interesting studies and permanent efforts along these lines. But most of the railroads realize that it is the wage question—the long, hard road to a decent pay envelope in their service, as compared with the much shorter pathways in other lines of American industry—that is their chief obstacle in this phase of their railroad problem. It has been suggested, and with wisdom, that the railroad should begin to make a more careful study and analysis of its entire labor situation than it has ever before attempted. Today it is giving careful, scientific, detailed attention to every other phase of its great problems. One road today has twenty-seven scientific observers—well trained and schooled to their work—making a careful survey of its territory, with a view to developing its largest traffic possibilities. And some day a railroad is to begin making an audit of its labor—to discover for itself in exact fact and figures, the cost of living for a workman in Richmond or South Bend or Butte or San Bernardino. Upon that it will begin to plat its minimum wage-increase. Suppose the railroad was to begin with this absolute cost of living as a foundation factor. It would quickly add to it the hazard of the particular form of labor in which its employee was engaged expressed in dollars and cents—a factor easily figured out by any insurance actuary. To this again would be added a certain definite sum which might best be expressed, perhaps, as the employee’s profit from his work; a sum which, in ordinary cases at least, would or should represent the railroad’s steady contribution to his savings-bank account. To these three fundamental factors there would probably have to be added a fourth—the bonus which the railroad was compelled to offer in a competitive labor market for either a man or a type of men which it felt that it very much needed in its service. Only upon some such definite basis as this can a railroad’s pay-roll ever be made scientific and economic—and therefore permanent. An instant ago and I was speaking of bonuses. The very word had, until recently, a strange sound in railroad ears. The best section foreman on a line may receive a cash prize for his well-maintained stretch of track; I should like to hear of a station agent like Blinks who knows that his well-planned and persistent effort to build up the freight and passenger business at his station, is to be rewarded by a definite contribution from the pay-chest of the railroad which employs him. Up to very recently there apparently has not been a single railroad which has taken up this question of bonus payments for extra services given. To the abounding credit of the Atchison, Topeka, and Santa Fé Railway and its president, Edward Payson Ripley, let it be said that they have just agreed to pay the greater proportion of their employees receiving less than $2,000 a year a bonus of ten per cent of the year’s salary for 1916—a payment amounting all told to $2,750,000.
The employees so benefited must have been employed by the Santa Fé for at least two years and they must not be what is called “contract labor.” By that the railroad means chiefly the men of the four great brotherhoods whose services are protected by very exact and definite agreements or contracts. The men of the brotherhoods are hardly in a position to expect or to demand a bonus of any sort. And it also is worthy of record that practically every union man, big or little, has placed himself on record against bonus plans of every sort. I hope that the example of the Santa Fé is to be followed by the other railroads of the country.[7] It is stimulating and encouraging; it shows that the big sick man of American business apparently is not beyond hope of recovery. For, in my own mind, the bonus system is, beyond a doubt, the eventual solution of the whole involved question of pay as it exists today and will continue to exist in the minds of both employer and employee. Our progressive and healthy forms of big industry of the United States have long since come to this bonus plan of paying their employees. The advances made by the steel companies and other forms of manufacturing enterprise, by great merchandising concerns, both wholesale and retail, and by many of the public utility companies, including certain traction systems, are fairly well known. It is a step that, when once taken, is never retraced. The bonus may be paid in various ways—in cash or in the opportunity to subscribe either at par or at a preferred figure, to the company’s stock or bonds. But there is little variation as to the results. And the workmen who benefit directly by these bonus plans become and remain quite as enthusiastic over them as the men who employ them and whose benefit, of necessity, is indirect. A good many railroaders have said that we have reached and long since passed the point of efficiency by increasing our standard of car and train sizes. Mr. Emerson is not new in that deduction. But he puts the case so clearly in regard to the confusing double basis in the pay of the trainmen—the vexed point that is before the Supreme Court of the United States as this book is being completed, because the Adamson so-called eight-hour day omitted the mileage factor, to the eternal annoyance of those same trainmen—that I cannot forbear quoting his exact words: Piece rates to trainmen should be abolished. The work of trainmen should be classified. There should be short hours and correspondingly high pay for men working under great strain. There should be heavy penalties attached for overtime, although it does not follow that the man who puts in the overtime should receive the penalty. Society wants him to protest against overtime, because it may be both dangerous to the public and detrimental to the worker. The worker should not be bribed to encourage it. It is evident that pay by the hour with penalties for overtime would encourage lighter and faster trains. Lighter and faster trains would increase the roads’ capacity as well as car and locomotive mileage. Capital expenses would drop. The savings made would be available to increase wages and to pay higher bills for material and to pay better dividends. Beyond this there is little more to be said—at least pending the decision of the highest court in the land.
But no matter how the Supreme Court may find in this vexatious matter, the fact remains that the union man in railroad employ will continue to be paid upon this complicated and unfit double method of reckoning—clumsy, totally inadequate (built up through the years by men who preferred compromise), and complicating an intelligent and definite solution of a real problem. Some day, some railroader is going to solve the question; and, in my own humble opinion, a genuine solution, worked from the human as well as the purely economic angle, is going to rank with the bonus and other indications of an advanced interest on the part of railroad executives
_{1}\cdots \mu _{\ell }}$ must be orthogonal to $u^{\mu }$ which implies that it can only be constructed from combinations of elementary projection operators, $% \Delta ^{\mu \nu}$. This already constrains the rank of the tensor, $% \ell + m+m^{\prime } $, to be an even number. Finally, it must satisfy the following property: \begin{equation} \Delta _{\mu _{1}\cdots \mu _{\ell }}^{\mu _{1}^{\prime }\cdots \mu _{\ell }^{\prime }}\Delta _{\alpha _{1}^{\prime }\cdots \alpha _{m}^{\prime }}^{\alpha _{1}\cdots \alpha _{m}}\Delta _{\beta _{1}^{\prime }\cdots \beta _{m^{\prime }}^{\prime }}^{\beta _{1}\cdots \beta _{m^{\prime }}}\left( \mathcal{N}_{rnn^{\prime }}\right) _{\alpha _{1}\cdots \alpha _{m}\beta _{1}\cdots \beta _{m^{\prime }}}^{\mu _{1}\cdots \mu _{\ell }}=\left( \mathcal{N}_{rnn^{\prime }}\right) _{\alpha _{1}^{\prime }\cdots \alpha _{m}^{\prime }\beta _{1}^{\prime }\cdots \beta _{m^{\prime }}^{\prime }}^{\mu _{1}^{\prime }\cdots \mu _{\ell }^{\prime }}. \label{great property} \end{equation} For our purposes it is sufficient to calculate terms that are of second order in inverse Reynolds number, i.e., the terms $\mathcal{R}$, $\mathcal{R}% ^{\mu }$, and $\mathcal{R}^{\mu \nu }$. Therefore, we only need to consider the cases $\ell =0$, $\ell =1$, and $\ell =2$. Since the actual deduction of the nonlinear collision integrals is complicated, this task is relegated to Appendix \ref{collision_tensors} and here we shall only give the final results. The scalar nonlinear collision integral from Eq.\ (\ref{NonLin_collint}) is given by \begin{align} N_{r-1}&\equiv \sum_{m^{\prime }=0}^{\infty }\sum_{m=0}^{m^{\prime }}\sum_{n=0}^{N_{m}}\sum_{n^{\prime }=0}^{N_{m^{\prime }}}\left( \mathcal{N}% _{rnn^{\prime }}\right) _{\alpha _{1}\cdots \alpha _{m}\beta _{1}\cdots \beta _{m^{\prime }}}\rho _{n}^{\alpha _{1}\cdots \alpha _{m}}\rho _{n^{\prime }}^{\beta _{1}\cdots \beta _{m^{\prime }}} \notag \\ &=\sum_{n=0}^{N_{0}}\sum_{n^{\prime }=0}^{N_{0}}\mathcal{C}_{rnn^{\prime }}^{0\left( 0,0\right) }\rho _{n}\rho_{n^{\prime }} + \sum_{m=1}^{\infty}\sum_{n=0}^{N_{m}}\sum_{n^{\prime }=0}^{N_{m}}\mathcal{C}% _{rnn^{\prime }}^{0\left( m,m\right) }\rho _{n}^{\alpha _{1}\cdots \alpha _{m}}\rho _{n^{\prime },\alpha _{1}\cdots \alpha _{m}}, \label{Nr_scalar} \end{align}% where $\mathcal{C}_{rnn^{\prime }}^{0\left( m,m\right) }$ is the special case $\ell =0$ of a more general coefficient \begin{align} \mathcal{C}_{rnn^{\prime }}^{\ell \left( m,m+\ell \right) }& =\frac{1}{% \left( 2m+2\ell +1\right) \nu }\int_{f}E_{\mathbf{k}}^{r-1}k^{\left\langle \mu _{1}\right. }\cdots k^{\left. \mu _{\ell }\right\rangle } \notag \\ & \times \left[ \mathcal{H}_{\mathbf{p}n}^{\left( m\right) }\,\mathcal{H}_{% \mathbf{p}^{\prime }n^{\prime }}^{\left( m+\ell \right) }\,p^{\left\langle \nu _{1}\right. }\cdots p^{\left. \nu _{m}\right\rangle }\,p_{\left\langle \mu _{1}\right. }^{\prime }\cdots p_{\mu _{\ell }}^{\prime }\,p_{\nu _{1}}^{\prime }\cdots p_{\left. \nu _{m}\right\rangle }^{\prime }\right. \notag \\ & \left. +\left( 1-\delta _{m,m+\ell}\right) \mathcal{H}_{\mathbf{p}^{\prime }n}^{\left( m\right) }\,\mathcal{H}_{\mathbf{p}n^{\prime }}^{\left( m+\ell \right) }\,p^{\prime \left\langle \nu _{1}\right. }\cdots p^{\prime \left. \nu _{m}\right\rangle }\,p_{\left\langle \mu _{1}\right. }\cdots p_{\mu _{\ell }}\,p_{\nu _{1}}\cdots p_{\left. \nu _{m}\right\rangle }\right. \notag \\ & \left. 
-\mathcal{H}_{\mathbf{k}n}^{\left( m\right) }\,\mathcal{H}_{\mathbf{% k}^{\prime }n^{\prime }}^{\left( m+\ell \right) }\,k^{\left\langle \nu _{1}\right. }\cdots k^{\left. \nu _{m}\right\rangle }\,k_{\left\langle \mu _{1}\right. }^{\prime }\cdots k_{\mu _{\ell }}^{\prime }\,k_{\nu _{1}}^{\prime }\cdots k_{\left. \nu _{m}\right\rangle }^{\prime }\right. \notag \\ & \left. -\left( 1-\delta _{m,m+\ell}\right) \mathcal{H}_{\mathbf{k}^{\prime }n}^{\left( m\right) }\,\mathcal{H}_{\mathbf{k}n^{\prime }}^{\left( m+\ell \right) }\,k^{\prime \left\langle \nu _{1}\right. }\cdots k^{\prime \left. \nu _{m}\right\rangle }\,k_{\left\langle \mu _{1}\right. }\cdots k_{\mu _{\ell }}k_{\nu _{1}}\cdots k_{\left. \nu _{m}\right\rangle }\right] . \label{C_lm} \end{align} Similarly, the nonlinear collision term for $\ell =1$ becomes,% \begin{align} N_{r-1}^{\mu }&\equiv \sum_{m^{\prime }=0}^{\infty }\sum_{m=0}^{m^{\prime }}\sum_{n=0}^{N_{m}}\sum_{n^{\prime }=0}^{N_{m^{\prime }}}\left( \mathcal{N}% _{rnn^{\prime }}\right) _{\alpha _{1}\cdots \alpha _{m}\beta _{1}\cdots \beta _{m^{\prime }}}^{\mu }\rho _{n}^{\alpha _{1}\cdots \alpha _{m}}\rho _{n^{\prime }}^{\beta _{1}\cdots \beta _{m^{\prime }}} \notag \\ &=\sum_{n=0}^{N_{0}}\sum_{n^{\prime }=0}^{N_{1}}\mathcal{C}_{rnn^{\prime }}^{1\left( 0,1\right) }\rho _{n}\rho_{n^{\prime }}^{\mu } + \sum_{m=1}^{\infty }\sum_{n=0}^{N_{m}}\sum_{n^{\prime }=0}^{N_{m+1}}\mathcal{% C}_{rnn^{\prime }}^{1\left( m,m+1\right) }\rho _{n}^{\alpha _{1}\cdots \alpha _{m}}\rho _{n^{\prime },\alpha _{1}\cdots \alpha _{m}}^{\mu }. \label{Nr_vector} \end{align}% where the coefficient $\mathcal{C}_{rnn^{\prime }}^{1\left( m,m+1\right) }$ is the $\ell =1$ case of the general coefficient\ $\mathcal{C}_{rnn^{\prime }}^{\ell \left( m,m+\ell \right) }$ \ introduced in Eq.\ (\ref{C_lm}). Finally, the rank-2 tensor terms are obtained taking $\ell =2$, \begin{align} N_{r-1}^{\mu \nu }& \equiv \sum_{m^{\prime }=0}^{\infty }\sum_{m=0}^{m^{\prime }}\sum_{n=0}^{N_{m}}\sum_{n^{\prime }=0}^{N_{m^{\prime }}}\left( \mathcal{N}_{rnn^{\prime }}\right) _{\alpha _{1}\cdots \alpha _{m}\beta _{1}\cdots \beta _{m^{\prime }}}^{\mu \nu }\rho _{n}^{\alpha _{1}\cdots \alpha _{m}}\rho _{n^{\prime }}^{\beta _{1}\cdots \beta _{m^{\prime }}} \notag \\ & =\sum_{m=0}^{\infty }\sum_{n=0}^{N_{m+2}}\sum_{n^{\prime }=0}^{N_{m}}% \mathcal{C}_{rnn^{\prime }}^{2\left( m,m+2\right) }\rho _{n}^{\alpha _{1}\cdots \alpha _{m}}\rho _{n^{\prime },\alpha _{1}\cdots \alpha _{m}}^{\mu \nu } \notag \\ & + \sum_{n=0}^{N_{1}}\sum_{n^{\prime}=0}^{N_{1}} \mathcal{D}_{rnn^{\prime }}^{2\left( 11\right) }\rho_{n}^{\left\langle \mu \right. }\rho _{n^{\prime }}^{\left. \nu \right\rangle } +\sum_{m=2}^{\infty }\sum_{n=0}^{N_{m}}\sum_{n^{\prime }=0}^{N_{m}}\mathcal{D}_{rnn^{\prime }}^{2\left( mm\right) }\rho _{n}^{\alpha _{2}\cdots \alpha _{m}\left\langle \mu \right. }\rho _{n^{\prime },\alpha _{2}\cdots \alpha _{m}}^{\left. \nu \right\rangle }, \label{Nr_tensor} \end{align}% where $\mathcal{C}_{rnn^{\prime }}^{2\left( m,m+2\right) }$ can be calculated from Eq.\ (\ref{C_lm}) and we introduced another coefficient, \begin{eqnarray} \mathcal{D}_{rnn^{\prime }}^{2\left( mm\right) } &=&\frac{1}{d^{\left( m\right) }\nu }\int_{f}E_{\mathbf{k}}^{r-1}k^{\left\langle \mu \right. }k^{\left. \nu \right\rangle } \notag \\ &\times &\left( \mathcal{H}_{\mathbf{p}n}^{\left( m\right) }\,\mathcal{H}_{% \mathbf{p}^{\prime }n^{\prime }}^{\left( m\right) }\,p_{\left\langle \mu \right. }p^{\beta _{q+1}}\cdots p^{\left. \beta _{m}\right\rangle }\,p_{\left\langle \nu \right. 
}^{\prime }p_{\beta _{q+1}}^{\prime }\cdots p_{\left. \beta _{m}\right\rangle }^{\prime }-\mathcal{H}_{\mathbf{k}% n}^{\left( m\right) }\,\mathcal{H}_{\mathbf{k}^{\prime }n^{\prime }}^{\left( m\right) }\,k_{\left\langle \mu \right. }k^{\beta _{q+1}}\cdots k^{\left. \beta _{m}\right\rangle }\,k_{\left\langle \nu \right. }^{\prime }k_{\beta _{q+1}}\cdots k_{\left. \beta _{m}\right\rangle }^{\prime }\right) . \label{DDDDD} \end{eqnarray} The normalization $d^{\left( m\right) }$ is complicated and is discussed in Appendix \ref{collision_tensors} together with other details of the derivation of the nonlinear collision term. \section{Transport coefficients in the 14--moment approximation} \label{BTE_14moment} In this section we calculate the previously introduced coefficients $% \mathcal{A}_{rn}^{\left( \ell \right) }$, $\mathcal{C}_{rnn^{\prime }}^{0\left( mm\right) }$, $\mathcal{C}_{rnn^{\prime }}^{1\left( m,m+1\right) }$,$\ \mathcal{C}_{rnn^{\prime }}^{2\left( m,m+2\right) }$, and $\mathcal{D}% _{rnn^{\prime }}^{2\left( m,m\right) }$ in the 14--moment approximation. As shown in Refs.\ \cite{Denicol:2012cn,Denicol:2012es}, this corresponds to the truncation $N_{0}=2,\,N_{1}=1,\,N_{2}=0$. This implies that the following irreducible moments appear: $\rho _{0}=-3\Pi /m^{2}$, $\rho _{1}=0$% , $\rho _{2}=0$, $\rho _{0}^{\mu }=n^{\mu }$, $\rho _{1}^{\mu }=0$, and $% \rho _{0}^{\mu \nu }=\pi ^{\mu \nu }$. As one can see, they are uniquely related to the dissipative quantities. Before proceeding and for the sake of later convenience, we re-express the coefficients $\mathcal{H}_{\mathbf{k}n}^{\left( \ell \right) } $ using Eqs.\ (\ref{H_kn}) and (\ref{P_kn}) as \begin{equation} \mathcal{H}_{\mathbf{k}n}^{\left( \ell \right) }\equiv \frac{W^{\left( \ell \right) }}{\ell !}\sum_{k=n}^{N_{\ell }}\sum_{r=0}^{k}a_{kr}^{(\ell )}a_{kn}^{(\ell )}E_{\mathbf{k}}^{r}=\sum_{r=n}^{N_{\ell }}A_{rn}^{\left( \ell \right) }E_{\mathbf{k}}^{r}+\sum_{r=0}^{n-1}A_{nr}^{\left( \ell \right) }E_{\mathbf{k}}^{r}, \label{H_kn_expanded} \end{equation}% where \begin{eqnarray} A_{rn}^{(\ell )} &=&\frac{W^{\left( \ell \right) }}{\ell !}% \sum_{k=r}^{N_{\ell }}a_{kr}^{(\ell )}a_{kn}^{(\ell )}. \label{A_l_rn} \end{eqnarray}% Note that, for $n=0$, the second sum in Eq.\ (\ref% {H_kn_expanded}) identically vanishes, which greatly simplifies the calculation of the collision integral. Furthermore, from the definition of the irreducible moments and using Eqs.\ (% \ref{expansion1}) -- (\ref{P_kn}) together with the orthogonality condition (% \ref{orthogonality1}) we obtain the following general result, \begin{eqnarray} \rho _{r}^{\mu _{1}\cdots \mu _{\ell }} &\equiv &\frac{\ell !}{\left( 2\ell +1\right) !!}\sum_{n=0}^{N_{\ell }}\rho _{n}^{\mu _{1}\cdots \mu _{\ell }}\int dKE_{\mathbf{k}}^{r}\left( \Delta ^{\alpha \beta }k_{\alpha }k_{\beta }\right) ^{\ell }\mathcal{H}_{\mathbf{k}n}^{\left( \ell \right) }f_{0\mathbf{% k}}\tilde{f}_{0\mathbf{k}} \notag \\ &=&\left( -1\right) ^{\ell }\ell !\sum_{n=0}^{N_{\ell }}\rho _{n}^{\mu _{1}\cdots \mu _{\ell }}\left( \sum_{r^{\prime }=n}^{N_{\ell }}A_{r^{\prime }n}^{\left( \ell \right) }J_{r+r^{\prime }+2\ell ,\ell }+\sum_{r^{\prime }=0}^{n-1}A_{n r^{\prime }}^{\left( \ell \right) }J_{r+r^{\prime }+2\ell ,\ell }\right) . 
\end{eqnarray}% where we used Eq.\ (\ref{H_kn_expanded}) in the last step.\textbf{\ }% Therefore, truncating the above general result in the 14--moment approximation we obtain \begin{eqnarray} \rho _{r} &\equiv &\gamma _{r}^{\Pi }\rho _{0}=-\frac{3}{m^{2}}\left( A_{00}^{\left( 0\right) }J_{r,0}+A_{10}^{\left( 0\right) }J_{r+1,0}+A_{20}^{\left( 0\right) }J_{r+2,0}\right) \Pi , \label{rho_r_recursive} \\ \rho _{r}^{\mu } &\equiv &\gamma _{r}^{n}\rho _{0}^{\mu }=-\left( A_{00}^{\left( 1\right) }J_{r+2,1}+A_{10}^{\left( 1\right) }J_{r+3,1}\right) n^{\mu }, \label{rho_mu_r_recursive} \\ \rho _{r}^{\mu \nu } &\equiv &\gamma _{r}^{\pi }\rho _{0}^{\mu \nu }=\left( 2A_{00}^{\left( 2\right) }J_{r+4,2}\right) \pi ^{\mu \nu }, \label{rho_mu_nu_r_recursive} \end{eqnarray}% where for $r=0$ we obviously have $\gamma _{0}^{\Pi }=\gamma _{0}^{n}=\gamma _{0}^{\pi }=1$. The coefficients $A_{20}^{\left( 0\right) }$, $% A_{10}^{\left( 1\right) }$, $A_{00}^{\left( 2\right) }$, as well as $% A_{00}^{\left( 0\right) }$, $A_{10}^{\left( 0\right) }$, $A_{20}^{\left( 0\right) }$ are calculated from Eq.\ (\ref{A_l_rn}) and listed in Appendix % \ref{exp_coefficients}. These linear relations between the moments are the main result of the 14--moment approximation, which was also obtained in Ref.\ \cite{Denicol:2012es}. It is straightforward to show using Eqs.\ (\ref{Arn_tensor}), (\ref{A_rn}), and (\ref{H_kn_expanded}) that the $\mathcal{A}_{rn}^{\left( \ell \right) }$ coefficients of the linear collision term can be expressed in terms of $% A_{rn}^{(\ell)}$. For $\ell=0$ where, in the 14--moment approximation, $% N_{0}=2$, the coefficient is \begin{align} \mathcal{A}_{r0}^{\left( 0\right) }& \equiv A_{20}^{\left( 0\right) }\frac{1% }{\nu }\int_{f}E_{\mathbf{k}}^{r-1}\left( E_{\mathbf{p}}^{2} +E_{\mathbf{p}% ^{\prime }}^{2} - E_{\mathbf{k}}^{2}-E_{\mathbf{k}^{\prime }}^{2}\right) =A_{20}^{\left( 0\right) }X_{\left( r-3\right) }^{\mu \nu \alpha \beta }u_{\mu }u_{\nu }u_{\alpha }u_{\beta }, \label{A0_r0} \end{align}% where the integrals proportional to $A_{00}^{\left( 0\right) }\int_{f} \left( 1+1 - 1-1\right) =0$ and $A_{10}^{\left( 0\right) }\int_{f}\left( E_{% \mathbf{p}}+E_{\mathbf{p}^{\prime }} - E_{\mathbf{k}}-E_{\mathbf{k}^{\prime }}\right) =0$ vanish due to particle number and energy conservation in binary collisions. Here, we introduced the following rank-4 tensor \begin{equation} X_{\left( r\right) }^{\mu \nu \alpha \beta }=\frac{1}{\nu }\int_{f}E_{% \mathbf{k}}^{r}k^{\mu }k^{\nu }\left( p^{\alpha }p^{\beta }+p^{\prime \alpha }p^{\prime \beta } - k^{\alpha }k^{\beta } - k^{\prime \alpha }k^{\prime \beta}\right) , \label{X_4rank_tensor} \end{equation}% which is symmetric upon the interchange of indices $\left( \mu ,\nu \right) $ and $\left( \alpha ,\beta \right) $, i.e., $X_{\left( r\right) }^{\mu \nu \alpha \beta }=X_{\left( r\right) }^{\left( \mu \nu \right) \left( \alpha \beta \right) }$, and it is also traceless in the latter indices, $X_{\left( r\right) }^{\mu \nu \alpha \beta }g_{\alpha \beta }=0$. 
Similarly, for $\ell =1$ we have \begin{align} \mathcal{A}_{r0}^{\left( 1\right) }& \equiv A_{10}^{\left( 1\right) }\frac{1% }{3\nu }\int_{f}E_{\mathbf{k}}^{r-1}k^{\left\langle \mu \right\rangle }\left( E_{\mathbf{p}}p_{\left\langle \mu \right\rangle }+E_{\mathbf{p}% ^{\prime }}\,p_{\left\langle \mu \right\rangle }^{\prime }-E_{\mathbf{k}% }\,k_{\left\langle \mu \right\rangle }-E_{\mathbf{k}^{\prime }}\,k_{\left\langle \mu \right\rangle }^{\prime }\right) =A_{10}^{\left( 1\right) }\frac{1}{3}X_{\left( r-2\right) }^{\mu \nu \alpha \beta }u_{\left( \mu \right. }\Delta _{\left. \nu \right) \left( \alpha \right. }u_{\left. \beta \right) }, \label{A1_r0} \end{align}% where $A_{00}^{\left( 1\right) }\int_{f}\left( p_{\left\langle \mu \right\rangle }+p_{\left\langle \mu \right\rangle }^{\prime }-k_{\left\langle \mu \right\rangle }-k_{\left\langle \mu \right\rangle }^{\prime }\right) =0$ vanishes due to 3-momentum conservation. Finally, for $\ell =2$ we obtain \begin{align} \mathcal{A}_{r0}^{\left( 2\right) }& \equiv A_{00}^{\left( 2\right) }\frac{1% }{5\nu }\int_{f}E_{\mathbf{k}}^{r-1}k^{\left\langle \mu \right. }k^{\left. \nu \right\rangle }\left( p_{\left\langle \mu \right. }p_{\left. \nu \right\rangle }+p_{\left\langle \mu \right. }^{\prime }p_{\left. \nu \right\rangle
($d_{IU}$) and the source and the destination ($d_{SU}$) are $d_{SI}=50$ $m$, $d_{IU}=15$ $m$ and $d_{SU}=\sqrt{d_{SI}^2+d_{IU}^2}\approx52.2$ $m$, respectively. According to \cite{M.Cui2019(WCL),E.Bjornson2014(TIT)}, the other parameters are set {\color{black}in Table I. Based on Table I}, the power attenuation coefficients of channel $\mathbf{h}_{IU}$ (or $\mathbf{h}_{RU}$), $\mathbf{h}_{SI}$ (or $\mathbf{h}_{SR}$) and $h_{SU}$ are derived by $\sqrt{\mu_{IU}}=\sqrt{\zeta_0(d_0/d_{IU})^{\alpha_{IU}}}$, $\sqrt{\mu_{SI}}=\sqrt{\zeta_0(d_0/d_{SI})^{\alpha_{SI}}}$ and $\sqrt{\mu_{SU}}=\sqrt{\zeta_0(d_0/d_{SU})^{\alpha_{SU}}}$ \cite{M.Cui2019(WCL)}. \begin{table} \renewcommand{\arraystretch}{1.3} \caption{{\color{black}Parameter configurations.}} \label{Table_Parameters} \centering {\color{black} \begin{small} \begin{tabular}{ccc} \hline Parameters & Definitions & Values\\ \hline Amplitude Reflection Coefficient & $\alpha$ & $1$\\ Signal Power & $P$ & $20$ dBm\\ Receiver Noise Power & $\sigma_w^2$ & $-80$ dBm\\ Path Loss & $\zeta_0$ & $-20$ dB\\ Reference Distance & $d_0$ & $1$ m\\ Path Loss Exponents & $\alpha_{IU}=\alpha_{SI}=\alpha_{SU}$ & $3$\\ Phase Shift in $\mathbf{h}_{IU}$ & $\varphi_{IU,i}$ & Random in $[0,2\pi]$\\ Phase Shift in $\mathbf{h}_{SI}$ & $\varphi_{SI,i}$ & Random in $[0,2\pi]$\\ Phase Shift in $h_{SU}$ & $\varphi_{SU}$ & $\frac{\pi}{4}$\\ Proportionality Coefficients of Distortion Noises & $\kappa_t=\kappa_r$ & $0.05^2$\\ Oscillator Quality & $\delta$ & $1.58\times 10^{-4}$\\ \hline \end{tabular} \end{small} } \end{table} During the comparisons with DF relay, $d_{SI}$, $d_{IU}$ and $d_{SU}$ are also regarded as the distances between the source and the DF relay, the DF relay and the destination, and the source and the destination, respectively, which still adhere to $d_{SU}=\sqrt{d_{SI}^2+d_{IU}^2}$. The proportionality coefficients can be changed for diverse observations, but still satisfy $\kappa_t=\kappa_r$. {\color{black}\subsection{Numerical Illustrations for \textbf{Theorem 1} and \textbf{Lemma 1}}} {\color{black} For further discussing and validating the theoretical analysis in Section III}, we carry out the simulations via the following steps: \textit{B-Step 1}: We calculate $\overline{R_{HWI}}(N)$ in (\ref{eq2-17}) {\color{black}and $\gamma_{HWI}(N)$ in (\ref{Utility Expression})}, and record the results with HWI from {\color{black} $N=1$ to $N=5000$}. \textit{B-Step 2}: We calculate $R(N)$ in (\ref{eq2-11}) {\color{black}and $\gamma(N)$ in (\ref{Utility Expression 2})}, and record the results without HWI from {\color{black} $N=1$ to $N=5000$}. \textit{B-Step 3}: {\color{black}We calculate the rate gap $\delta_R(N)$ in (\ref{eq2-18}) and the utility gap $\delta_{\gamma}(N)$ in (\ref{Utility Degradation}), and record the results from $N=1$ to $N=5000$.} \textit{B-Step 4}: We calculate and record the numerical results of $R_{HWI}(N)$ in (\ref{eq2-16}) from {\color{black} $N=1$ to $N=5000$}. Due to the randomness of the phase errors generated by the IRS, the ACR is averaged on 1000 Monte Carlo trials {\color{black}every 500 points}. The average ACRs and {\color{black}IRS utilities} as functions of $N$ from $N=1$ to $N=5000$ are described in Figure \ref{Fig-For_IRS_ACR_and_Utility}. It is indicated that: 1) the experimental results fit well with the theoretical ones from $N=1$ to $N=5000$, which verifies the tightness of (\ref{eq2-17}). 
2) The average ACR with HWI is lower and increases more slowly than that without HWI, and the rate gap {\color{black}widens} as $N$ grows. {\color{black}This phenomenon implies that when $N$ grows, the HWI accumulates and begets more severe ACR degradation. 3) When $N$ becomes pretty large, the ACR with HWI verges on $\log_2\left(1+\frac{1}{\kappa_t+\kappa_r}\right)=7.6511$, which testifies the correctness of (\ref{R_HWI Upper Bound}). 4) The IRS utility with HWI is lower than that without HWI, which demonstrates that the HWI reduces the IRS utility as well. Besides, both the IRS utility and the utility gap descend as $N$ grows, which reveals that the influence of the HWI on the IRS utility becomes slighter when $N$ is larger.} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=3.2in]{For_Average_ACR-eps-converted-to.pdf}} \label{For_Average_ACR} \subfloat[]{\includegraphics[width=3.2in]{For_Utility-eps-converted-to.pdf}} \label{For_Utility} \hfil \caption{{\color{black} Average ACRs and IRS utilities as functions of $N$ with or without HWI. (a) Average ACRs with respect to $N$, the curves marked with "$\square$", "$\bigcirc$", "$\bigtriangledown$" and "$*$", represent the results obtained in \textit{B-Step 1} to \textit{B-Step 4}, respectively. (b) IRS utilities with respect to $N$, the curves marked with "$\square$", "$\bigcirc$" and "$\bigtriangledown$", represent the results obtained in \textit{B-Step 1} to \textit{B-Step 3}, respectively.}} \label{Fig-For_IRS_ACR_and_Utility} \end{figure*} $\ $ {\color{black} \subsection{Phase Shift Optimization}} {\color{black}For giving insights into the phase shift optimization approach in Section IV, we carry out the simulations through the following steps: \textit{C-Step 1}: We solve (P6) by adopting CVX Toolbox with SDPT3 Solver, and obtain the maximum average SNR from the solution of the OBF in (\ref{eq2-44}a). Based on this solution, we calculate and record the ACRs at $N=1,13,25,37$. \textit{C-Step 2}: We solve (P6) and obtain the optimized matrix $\mathbf{Y}$ and variable $\widetilde{\mu}$. Next, we extract the $\bm{\theta}^T$ in the $(N+1)$-th row of $\mathbf{X}=\widetilde{\mu}^{-1}\mathbf{Y}$. Then, we utilize $\bm{\theta}^T$ to reconstruct $\mathbf{X}$ according to (\ref{eq2-26}) and $\mathbf{Y}$ according to $\mathbf{Y}=\widetilde{\mu}\mathbf{X}$, and denote the reconstructed $\mathbf{X}$ and $\mathbf{Y}$ by $\mathbf{X}_r$ and $\mathbf{Y}_r$, respectively. Finally, we substitute $\mathbf{Y}_r$ into the OBF in (\ref{eq2-44}a) and obtain the average SNR, based on which we calculate and record the ACRs at $N=1,13,25,37$. \textit{C-Step 3}: Based on the extracted $\bm{\theta}^T$, we obtain the optimized IRS phase shift matrix $\mathbf{\Phi}$ according to $\mathbf{\Phi}=diag(\bm{\theta}^T)$. Then, we substitute $\mathbf{\Phi}$ into (\ref{original ACR with HWI}) and obtain the ACRs with HWI, which are averaged on 1000 Monte Carlo trials at $N=1,13,25,37$. \begin{figure}[!t] \includegraphics[width=3.2in]{For_Optimization-eps-converted-to.pdf} \centering \caption{{\color{black}Average ACRs as functions of $N$ with HWI. The curves marked with "$\Diamond$", "+" and "$\bigcirc$" represent the results obtained in \textit{C-Step 1} to \textit{C-Step 3}, respectively. The curves marked with "$\square$" and "$*$" are copied from Figure \ref{Fig-For_IRS_ACR_and_Utility} (a) for comparisons.}} \label{Fig-Optimization} \end{figure} The average ACRs as functions of $N$ with HWI are depicted in Figure \ref{Fig-Optimization}. 
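For concreteness, the following minimal Python sketch (an illustration added here, not the simulation code used in this work) mirrors the extraction-and-reconstruction logic of \textit{C-Step 2}, assuming the standard rank-one lifting $\mathbf{X}=\mathbf{v}\mathbf{v}^H$ with $\mathbf{v}=[\bm{\theta};1]$; the exact structure prescribed by (\ref{eq2-26}) is not reproduced here, so this form and the helper name are assumptions.
\begin{verbatim}
import numpy as np

def extract_and_rebuild(Y, mu):
    # C-Step 2 sketch: form X = mu^{-1} Y, read theta^T off the (N+1)-th row,
    # project onto unit-modulus phase shifts, then rebuild rank-one X_r, Y_r
    # (assumes the lifting X = v v^H with v = [theta; 1], up to conjugation).
    X = Y / mu
    theta = X[-1, :-1]                 # last row, first N entries
    theta = theta / np.abs(theta)      # enforce |theta_i| = 1
    v = np.append(theta, 1.0)
    X_r = np.outer(v, v.conj())        # reconstructed rank-one matrix
    Y_r = mu * X_r                     # rank(Y_r) = 1 by construction
    return theta, X_r, Y_r
\end{verbatim}
When the SDP solution is (numerically) rank-one, the rebuilt $\mathbf{Y}_r$ reproduces $\mathbf{Y}$, consistent with the coincidence of the \textit{C-Step 1} and \textit{C-Step 2} curves reported below.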
Results in Figure \ref{Fig-Optimization} show that: 1) the curves obtained in \textit{C-Step 1} and \textit{C-Step 2} coincide, indicating that $\mathbf{Y}_r=\mathbf{Y}$. Moreover, we calculate the rank of $\mathbf{Y}_r$ and obtain $rank(\mathbf{Y}_r)=1$. Because $\mathbf{Y}_r$ is constructed by $\bm{\theta}^T$ in the $(N+1)$-th row of $\mathbf{X}=\widetilde{\mu}^{-1}\mathbf{Y}$ in the solution, $\bm{\theta}^T$ is testified to be the optimal IRS phase shift vector. 2) The curves obtained in \textit{C-Step 1} and \textit{C-Step 3} coincide, confirming that the mathematical derivations for $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ in (\ref{eq2-35}) are correct. 3) The average ACRs with the optimized IRS phase shifts exceed the average ACRs with $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$, demonstrating that $\theta_i=-\left(\varphi_{IU,i}+\varphi_{SI,i}\right)$ is not the optimal phase shift as it does not take $h_{SU}$ into account.} $\ $ {\color{black} \subsection{Discussions on Channel Estimation Errors and Residual Phase Noises}} {\color{black}Because most IRS-aided communication systems suffer from channel estimation errors, and the optimized IRS phase shifts may generally be affected by residual phase noises, as narrated at the end of Section IV, we will probe into the influence of the two factors on the optimization performance. The channel estimation errors are set to be additive complex variables according to Eq. (2) in \cite{J.Zhang2020(CL)}, which follow the zero-mean complex Gaussian distribution with the variance of $\sigma_w^2$. More detailed information about the CSI uncertainty models and simulation parameters can be found in \cite{J.Zhang2020(CL)}. The residual phase noises $\theta_{pi}$ in $\bm{\theta}_p$, for $i=1,2,...,N$, are also set to be uniformly distributed on $[-\pi/2,\pi/2]$. \begin{figure}[!t] \includegraphics[width=3.2in]{For_Optimization_Other_Source_HWI-eps-converted-to.pdf} \centering \caption{{\color{black} Influences of the channel estimation errors and residual phase noises on the optimization results. The ACRs are derived by substituting $\mathbf{\Phi}=diag(\bm{\theta}^T)$ or $\mathbf{\Phi}=diag(\bm{\theta}^T\odot\bm{\theta}_p^T)$ into (\ref{original ACR with HWI}) and are averaged on 1000 Monte Carlo trials. "Imperfect CSI" means that there are channel estimation errors, while "Perfect CSI" represents the opposite.}} \label{Fig-For_Optimization_Other_Source_HWI} \end{figure} In the simulations, for investigating the average ACR with channel estimation errors, we first adopt the CSI with errors to construct $\mathbb{E}_{\mathbf{\Theta}_E}\left[\mathbf{\Xi}\right]$ and solve (P6), and then substitute $\mathbf{\Phi}=diag(\bm{\theta}^T)$ into (\ref{original ACR with HWI}) which contains the actual CSI. For investigating the average ACR with residual phase noises, we first solve (P6) and exert the influence of $\bm{\theta}_p$ on $\bm{\theta}^T$ by constructing $\bm{\theta}^T\odot\bm{\theta}_p^T$, and then substitute $\mathbf{\Phi}=diag(\bm{\theta}^T\odot\bm{\theta}_p^T)$ into (\ref{original ACR with HWI}). Figure \ref{Fig-For_Optimization_Other_Source_HWI} depicts the influences of the channel estimation errors and residual phase noises on the optimization results. It is demonstrated that: 1) both the channel estimation errors and the residual phase noises reduce the average ACR and degrade the optimization performance. 
2) The residual phase noises impose a more serious negative impact on the performance than the channel estimation errors, indicating that real-world hardware imperfections, synchronization offsets and limited estimation accuracy are key factors affecting the optimization performance. } $\ $ \subsection{Comparisons with DF Relay} In order to {\color{black}validate} the theoretical {\color{black}analysis} in Section V, we {\color{black}will} numerically compare the ACRs {\color{black}and the utilities} for the IRS-aided and the conventional multiple-antenna DF relay assisted wireless communication systems in the presence of HWI. {\color{black} Following Section V, we will compare the performances by varying $N$ and $P$. $\ $ \textit{1) Comparisons by varying $N$}: \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=3.2in]{For_N_ACR_Comparison_DF-eps-converted-to.pdf}} \label{Fig-For_N_ACR_Comparison_DF} \subfloat[]{\includegraphics[width=3.2in]{For_N_Utility_Comparison_DF-eps-converted-to.pdf}} \label{Fig-For_N_Utility_Comparison_DF} \hfil \caption{{\color{black}Comparisons with DF relay by varying
case for $m=3$, hence $n=7$ and $n'=6$. The Hasse diagrams of the posets $(\mathcal{P}_Q, \leq)$ and $(\mathcal{P}_{Q'}, \leq)$ are shown in Figure~\ref{fig:alt_Hasse_67}. Those nodes contained in $\mathcal{P}_Q$ but not in $\mathcal{P}_{Q'}$ are shaded gray above the downward diagonal. The elements in $\mathcal{F}$ are highlighted in blue below the downward diagonal and those of $\mathcal{F}'$ in red above the downward diagonal. \begin{figure} \centering \resizebox{.6\linewidth}{!}{ \begin{tikzpicture} \node[draw] (1) at (1,0) {$[1]$}; \node[draw] (3) at (4,0) {$[3]$}; \node[draw] (5) at (7,0) {$[5]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=white, diagonal from left to right, draw] (7) at (10,0) {$[7]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (1-2) at (0,2) {$[1,2]$}; \node[draw] (1-3) at (2.2,2) {$[1,3]$}; \node[draw] (3-4) at (3.3,2) {$[3,4]$}; \node[draw] (2-3) at (4.4,2) {$[2,3]$}; \node[draw] (3-5) at (5.5,2) {$[3,5]$}; \node[draw] (5-6) at (6.6,2) {$[5,6]$}; \node[draw] (4-5) at (7.7,2) {$[4,5]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=white, diagonal from left to right, draw] (5-7) at (8.8,2) {$[5,7]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=blue!30, diagonal from left to right, draw] (6-7) at (11,2) {$[6,7]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (1-4) at (1,4) {$[1,4]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (2-4) at (2.1,4) {$[2,4]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=white, diagonal from left to right, draw] (1-5) at (3.85,4) {$[1,5]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=white, diagonal from left to right, draw] (2-5) at (4.95,4) {$[2,5]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=white, diagonal from left to right, draw] (3-6) at (6.05,4) {$[3,6]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=white, diagonal from left to right, draw] (3-7) at (7.15,4) {$[3,7]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (4-6) at (8.9,4) {$[4,6]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=blue!30, diagonal from left to right, draw] (4-7) at (10,4) {$[4,7]$}; \node[rectangle with diagonal fill, diagonal top color=white, diagonal bottom color=blue!30, diagonal from left to right, draw] (1-6) at (3,6) {$[1,6]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=blue!30, diagonal from left to right, draw] (1-7) at (5.5,8) {$[1,7]$}; \node[rectangle with diagonal fill, diagonal top color=white, diagonal bottom color=blue!30, diagonal from left to right, draw] (2-6) at (5.5,6) {$[2,6]$}; \node[rectangle with diagonal fill, diagonal top color=black!20, diagonal bottom color=blue!30, diagonal from left to right, draw] (2-7) at (8,6) {$[2,7]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (2) at (2.5,-1) {$[2]$}; \node[rectangle with diagonal fill, diagonal top 
color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (4) at (5.5,-1) {$[4]$}; \node[rectangle with diagonal fill, diagonal top color=red!50, diagonal bottom color=blue!30, diagonal from left to right, draw] (6) at (8.5,-1) {$[6]$}; \draw[-angle 90,relative, shorten >=5pt, shorten <=5pt] (1) to (1-2); \draw[-angle 90,relative, shorten >=5pt, shorten <=5pt] (1) to (1-3); \draw[-angle 90,relative, shorten >=5pt, shorten <=5pt] (3) to (1-3); \draw[-angle 90,relative, shorten >=5pt, shorten <=5pt] (3) to (3-4); \draw[-angle 90,relative, shorten >=5pt, shorten <=5pt] (3) to (2-3); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3) to (3-5); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5) to (3-5); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5) to (5-6); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5) to (4-5); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5) to (5-7); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (7) to (5-7); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (7) to (6-7); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (1-3) to (1-4); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (1-3) to (1-5); \draw[fill=white,draw=white] (2.66,2.57) circle [radius=0.075]; \draw[fill=white,draw=white] (2.85,2.775) circle [radius=0.075]; \draw[fill=white,draw=white] (3.125,3.1) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-4) to (1-4); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-4) to (2-4); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (2-3) to (2-4); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (2-3) to (2-5); \draw[fill=white,draw=white] (4.675,3) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-5) to (1-5); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-5) to (2-5); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-5) to (3-6); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-5) to (3-7); \draw[fill=white,draw=white] (6.325,3) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5-6) to (3-6); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5-6) to (4-6); \draw[fill=white,draw=white] (7.9,3.1) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (4-5) to (4-6); \draw[fill=white,draw=white] (8.175,2.75) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (4-5) to (4-7); \draw[fill=white,draw=white] (8.35,2.575) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5-7) to (3-7); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (5-7) to (4-7); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (1-5) to (1-6); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (1-5) to (1-7); \draw[fill=white,draw=white] (4.35,5.25) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=2pt] (2-5.north) to (2-6); \draw[fill=white,draw=white] (5.125,4.8) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (2-5.north) to (2-7); \draw[fill=white,draw=white] (5.5,4.625) circle [radius=0.075]; \draw[fill=white,draw=white] (5.875,4.8) circle [radius=0.075]; \draw[fill=white,draw=white] (6.65,5.25) circle [radius=0.075]; \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-6.north) to (1-6); \draw[-angle 90,relative,shorten >=5pt,shorten <=2pt] (3-6.north) to (2-6); \draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-7) to (1-7); 
\draw[-angle 90,relative,shorten >=5pt,shorten <=5pt] (3-7) to (2-7); \end{tikzpicture}} \caption{Hasse diagrams of alternating orientations of $A_6$ and $A_7$}\label{fig:alt_Hasse_67} \end{figure} \end{ex} \begin{rem} For the only case where the alternating orientation of this section coincides with the simple zigzag of Subsection~\ref{subseq:zigzag}, the maximum antichains of Theorem~\ref{thm:simplezigzag} and Theorem~\ref{thm:alternating} are identical. \end{rem} \begin{rem} The cases considered in this section should be extended to arbitrary orientations of quivers of type $A_n$, and even to the types $D_n$ and $E_{6,7,8}$. Computational experiments suggest that the combinatorics of general Dynkin cases are much more intricate than what we have previously seen. \end{rem} \section{Sperner theorems for subrepresentation posets in type \texorpdfstring{$A$}{A}} \subsection{A Sperner theorem for subrepresentation posets in type \texorpdfstring{$A_2$}{A2}} \label{sub:SpernerLinearSub} Let $V$ be a representation of a quiver $Q=(Q_0,Q_1)$ over a field $k$. We denote by $\mathcal{P}_V$ the set of all subrepresentations $U\subseteq V$. Assume that $U_1,U_2\in\mathcal{P}_V$. We say that $U_1\leq U_2$ if and only if $U_1$ is a subrepresentation of $U_2$. In this way, $(\mathcal{P}_V,\leq)$ becomes a partially ordered set. \begin{prop} \label{prop:grading} If every simple representation of $Q$ is $1$-dimensional, then the dimension function \linebreak$U\mapsto\operatorname{dim}_k(U)$ defines a grading of the subrepresentation poset $(\mathcal{P}_V,\leq)$. \end{prop} \begin{proof} The only minimal element in $\mathcal{P}_V$ is the zero representation, which satisfies $\operatorname{dim}(0)=0$. Suppose that $U_2\in\mathcal{P}_V$ covers $U_1\in\mathcal{P}_V$, i.\,e. $U_1\leq U_2$ and there does not exist an element $U_3\in\mathcal{P}_V$ such that $U_1<U_3<U_2$. The third isomorphism theorem implies that the quotient representation $U_2/U_1$ is nonzero and that there does not exist a representation $0\subsetneqq U\subsetneqq U_2/U_1$. Thus $U_2/U_1$ is simple and the assumption implies $\operatorname{dim}_k(U_2)-\operatorname{dim}_k(U_1)=1$. \end{proof} Note that the condition of the previous proposition is satisfied if the quiver $Q$ does not contain oriented cycles, which we assume from now on. Additionally, we assume that $k=\mathbb{F}_q$ is a finite field with $q$ elements. For every natural number $n\in\mathbb{N}$ we define the \textit{Gaussian integer} as $[n]_q=(q^n-1)/(q-1)$. Note that $[n]_q=1+q+q^2+\ldots+q^{n-1}$ can be simplified to a polynomial in $q$ which specializes to $n$ when we plug in $q=1$. Moreover, $[n]_q$ is equal to the number of $1$-dimensional vector subspaces of $\mathbb{F}_q^n$. The polynomial $[n]_q!=[1]_q\cdot[2]_q\cdot\ldots\cdot[n]_q$ is called \textit{Gaussian factorial}. For a natural number $0\leq d\leq n$ the polynomial $\binom{n}{d}_q=[n]_q!/([d]_q!\cdot[n-d]_q!)$ is called \textit{Gaussian binomial coefficient}. Note that $\binom{n}{d}_q$ is equal to the number of $d$-dimensional vector subspaces of $\mathbb{F}_q^n$. Likewise, the number of $(d+1)$-dimensional $k$-vector spaces $U$ such that $k^d\subseteq U\subseteq k^n$ is equal to $[n-d]_q$. Dually, the number of $(n-1)$-dimensional vector spaces $U$ such that $k^d\subseteq U\subseteq k^n$ is also equal to $[n-d]_q$.
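These counts are easy to check numerically. The following short Python sketch (an illustration added here, not part of the original text) implements $[n]_q$, $[n]_q!$ and $\binom{n}{d}_q$ exactly as defined above and verifies a few small cases.
\begin{verbatim}
def gaussian_int(n, q):
    # [n]_q = 1 + q + ... + q^(n-1) = (q^n - 1)/(q - 1)
    return sum(q**i for i in range(n))

def gaussian_factorial(n, q):
    # [n]_q! = [1]_q [2]_q ... [n]_q
    result = 1
    for m in range(1, n + 1):
        result *= gaussian_int(m, q)
    return result

def gaussian_binomial(n, d, q):
    # number of d-dimensional subspaces of F_q^n
    return gaussian_factorial(n, q) // (gaussian_factorial(d, q)
                                        * gaussian_factorial(n - d, q))

assert gaussian_int(5, 1) == 5           # [n]_q specializes to n at q = 1
assert gaussian_binomial(4, 2, 1) == 6   # ordinary binomial coefficient
assert gaussian_binomial(4, 2, 2) == 35  # 2-dimensional subspaces of F_2^4
assert gaussian_int(3, 2) == 7           # [n-d]_q for n=4, d=1: planes containing a fixed line
\end{verbatim}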
\begin{theorem} If $Q=(1\to 2)$ is the quiver of type $A_2$ and $V=P_1^a$ is a direct sum of copies of the indecomposable, projective representation $P_1$, then the subrepresentation poset $(\mathcal{P}_V,\leq)$ is Sperner. \end{theorem} \begin{proof} Note that the rank of the poset is equal to $\operatorname{dim}_k(V)=2a$. For brevity we write $\mathcal{P}_{i,V}$ instead of $(\mathcal{P}_{V})_{i}$ for $0\leq i\leq 2a$. A subrepresentation $X\subseteq V$ is given by two vector spaces $0\subseteq X_1\subseteq X_2\subseteq k^a$. We apply Stanley's Theorem \ref{thm:Stanley}. For
\section*{Introduction and Notations} Let $K$ be a field of characteristic $0$. Let $\overline{K}$ be an algebraic closure of $K$ and $G_K=\Gal(\overline{K}/K)$ its absolute Galois group. Let $E/K$ be an elliptic curve, with point at infinity $\mathcal{O}$. For a positive integer $n$, we consider the Galois representation $\rho_{E,n}$ corresponding to the action of $G_K$ on $E[n]$, the group of the $n$-torsion points of $E$, that is \[\rho_{E,n}:G_K\to\Aut(E[n])\simeq\mathrm{GL}_2(\mathbb{Z}/n\mathbb{Z}),\] following \cite[III.7]{AEC}. The kernel of the Galois representation $\rho_{E,n}$ is the set of $\sigma\in G_K$ such that $\sigma$ acts trivially on the $n$-torsion points, in other words, $\sigma$ fixes the extension of $K$ generated by the coordinates of the $n$-torsion points, denoted by $K(E[n])$. So $\mathrm{ker}\rho_{E,n}=\Gal(\overline{K}/K(E[n]))$, and it is a normal subgroup of $G_K$. Therefore, the extension $K(E[n])/K$ is Galois and $\Gal(K(E[n])/K)\simeq\mathrm{Im}\rho_{E,n}$. Now that $\rho_{E,n}(G_K)$ is realized as Galois group over $K$, we would like to have a polynomial whose Galois group is $\rho_{E,n}(G_K)$, \emph{i.e.} its roots generate $K(E[n])/K$. Reverter and Vila found such a polynomial for $n$ prime, and $\rho_{E,n}(G_K)$ surjective. Their theorem is the starting point of this article: \begin{theorem*}\label{Reverter-Vila}(\cite[Theorem 2.1]{Rvil}) Let $K$ be a field of characteristic $0$. Let $E$ be an elliptic curve over $K$, given by Weierstrass equation $E:y^2=f(x)$, and let $\ell$ be an odd prime. Suppose that the Galois representation \[\rho_{E,\ell}:G_K\to\Aut(E[\ell])\simeq\mathrm{GL}_2(\mathbb{F}_\ell)\]is surjective and let $P\in E[\ell]\setminus \{\mathcal{O}\}$. Then \begin{enumerate} \item The $\ell$-division polynomial is irreducible and its Galois group over $K$ is isomorphic to $\rho_{E,\ell}(G_K)/\{\pm\operatorname{\mathrm{id}}\}$. \item The characteristic polynomial $\chi_{E,\ell}$ of the multiplication by $x(P)+y(P)$ in $K(P):=K(x(P),y(P))$ is irreducible with Galois group isomorphic to $\rho_{E,\ell}(G_K)$. \end{enumerate} \end{theorem*} This theorem gives a polynomial realizing $\mathrm{GL}_2(\mathbb{Z}/\ell\mathbb{Z})$ as Galois group over $K$, for $\ell$ an odd prime. In this article, we generalized Reverter and Vila's result for any equation for $E$, any image of $\rho_{E,n}$, with $n$ an integer not necessary prime, and we consider a broader choice of functions in $K(E)$ than $x+y$. Since $n$ is not necessarily prime, in Subsection~\ref{primitive division polynomials} we define the $n$-th primitive division polynomial $\widetilde{\psi}_n$, which is a factor of the $n$-th division polynomial corresponding to the points of exact order $n$. The main result of the first section of this article is the following theorem. \begin{theorem*}(Theorems~\ref{galois psi},~\ref{pol réalisant im rho} and~\ref{transitive case}) Let $E$ be an elliptic curve over $K$ with Weierstrass equation $w_E(x,y)=0$. Let $u\in K(E)$ with degree $1$ in $x$ and $y$ such that $K(u,[-1]^*u)=K(x,y)$, and $n\geq3$. \begin{enumerate} \item The $n$-th primitive division polynomial $\widetilde{\psi}_n$ has Galois group isomorphic to $\rho_{E,n}(G_K)/\{\pm\operatorname{\mathrm{id}}\}$. If the action of $G_K$ on the points of order $n$ is transitive, then $\widetilde{\psi}_n$ is irreducible. \item The characteristic polynomial $\chi_{u,n}$ of multiplication by $u$ in the ring $K[X,Y]/(w_E,\widetilde{\psi}_n)$ has Galois group isomorphic to $\rho_{E,n}(G_K)$. 
Moreover, $\chi_{u,n}$ is irreducible if and only if $G_K$ acts transitively on the points of order $n$. \end{enumerate} \end{theorem*} The degree of $\chi_{u,n}$ is $2\deg\widetilde{\psi}_n$, which is less than or equal to $n^2-1$. So this theorem gives a way to construct polynomials of high degree with known Galois groups, which are subgroups of $\mathrm{GL}_2(\mathbb{Z}/n\mathbb{Z})$, for an arbitrary integer $n$. In Section~\ref{polynomial realizing im rho}, we prove this theorem step by step, together with a detailed study of the case $n=3$ in Subsection~\ref{case n=3}. Section~\ref{valuation} focuses on the arithmetic properties of the coefficients of $\chi_{u,n}$. First, we give an optimal lower bound for their valuations in Proposition~\ref{minimum valuation}, and then discuss the computation of $\chi_{u,n}$; see Remark~\ref{two ways to calculate}. Section~\ref{examples} contains some examples, one of which, Example~\ref{ex serre curves}, is based on a family of Serre curves given by \cite{Dserrecurves}. In this article we will use the following notations: for a polynomial $f$ in $K[X]$, we denote by $K(f)\subset\overline{K}$ the splitting field of $f$ over $K$, that is the extension of $K$ generated by the roots of $f$, and by $\Gal(f)$ the Galois group of $K(f)/K$. For $u\in K(E)$ and $A$ a subgroup of $E(\overline{K})$, we denote by $u(A)$ the set of the $u(P)$ such that $P\in A$. \section*{Acknowledgement} I thank my supervisors Samuele Anni and David Kohel. \section{Polynomial realizing $\rho_{E,n}(G_K)$}\label{polynomial realizing im rho} Let $E/K$ be an elliptic curve. For a positive integer $n$, we define \[E_n:=\{P\in E(\overline{K}) \text{ of order }n\}\subset E[n].\] \subsection{Primitive division polynomials}\label{primitive division polynomials} Let $(\psi_n)$ be the family of the $n$-division polynomials of $E$, for $n\in\mathbb{N}$; for a reference see \cite[Exercise 3.7]{AEC}. Studying the points of order $n$, where $n$ is not necessarily prime, naturally leads to the following definition: \begin{definition}Let $E$ be an elliptic curve and $(\psi_n)$ the family of its division polynomials. We define the \emph{primitive division polynomial} $(\widetilde{\psi}_n)$ recursively by\begin{equation*}\psi_n=\underset{m\mid n}{\prod}\widetilde{\psi}_m.\end{equation*} \end{definition} \begin{remark}\label{pol primitif et pol div} Clearly $\psi_1=\widetilde{\psi}_1=1$ and, for $p$ prime and $k\geq1$, we have $\widetilde{\psi}_{p^k}=\frac{\psi_{p^k}}{\psi_{p^{k-1}}}$. In particular, $\widetilde{\psi}_p=\psi_p$ for $p$ prime. \end{remark} \begin{prop}\label{action sur ordre n}If $n\neq2$, the polynomial $\widetilde{\psi}_n$ is in $K[X]$ and its roots are the elements of $x(E_n)$. Moreover, if $n$ is a power of a prime $p$ then the leading coefficient of $\widetilde{\psi}_n$ is $p$. Otherwise, the polynomial $\widetilde{\psi}_n$ is monic. \end{prop} \begin{proof} For $n$ odd, respectively even, the polynomial $\psi_n$, respectively $\frac{\psi_n}{\psi_2}$, is univariate with roots the elements of $x(E[n])$, respectively $x(E[n])\setminus x(E[2])$; see \cite[Exercise 3.7.(a) and (d)]{AEC}. Then, by induction, for $n\geq3$, the polynomial $\widetilde{\psi}_n$ is univariate with roots the elements of $x(E_n)$. Now, the absolute Galois group $G_K$ acts on the $n$-torsion points, therefore on the points of order $n$. Indeed, let $P$ be a point of order $n$ and $\sigma$ in $G_K$. Suppose that $\sigma(P)$ has order $m<n$.
Then, the point $P=\sigma^{-1}(\sigma(P))$ belongs to $E[m]$, which is a contradiction. Therefore, the factorisation \[\psi_n=\underset{m\mid n}{\prod}\widetilde{\psi}_m\]is defined over $K$. For a polynomial $f$, let $c(f)$ be its leading coefficient. We have \[c(\psi_n^2)=\underset{m\mid n}{\prod}c(\widetilde{\psi}_m^2).\] So, using that $c(\psi_n^2)=n^2$ for all $n$ and Remark~\ref{pol primitif et pol div}, we conclude, by induction, that $c(\widetilde{\psi}_{p^k})=p$ for $p$ prime and $k$ a positive integer, and so that $c(\widetilde{\psi}_n)=1$ if $n$ is not a prime power. \end{proof} \begin{remark} Since we know the degree of $\psi_m^2$ for all $m$, we can compute the degree of $\widetilde{\psi}_n$, as a polynomial in $x$, by induction. For example, if $n$ is the product of two distinct primes $p$ and $q$, then $\widetilde{\psi}_n$ has degree $(n^2-p^2-q^2+1)/2$. \end{remark} The polynomials $\psi_n$ and $\psi_2\psi_n$ are in $K[x]$ when $n$ is odd and even respectively, and $\psi_2$ is not. In particular, $K(\psi_n)$ is well-defined when $n$ is odd, but not when $n$ is even. \begin{definition} For $n$ an even integer, we define $K(\psi_n):=K(\psi_2\psi_n)$. \end{definition} \begin{lemma}For $n$ a positive integer, we have \[K(\psi_n)=K(x(E[n]))=K(x(E_n))=K(\widetilde{\psi}_n).\] \end{lemma} \begin{proof} The last equality is given by Proposition~\ref{action sur ordre n}. By Proposition~\ref{action sur ordre n} and the factorization of $\psi_n$, the roots of $\psi_n$, for $n$ odd, are the elements of $x(E[n])$. The same is true for $\psi_2\psi_n$ for $n$ even, noting that $x(E[2])$ are the roots of $\psi_2^2$, see \cite[Exercise 3.7.(d)]{AEC}. Then we have the first equality. Finally, for the second equality, since $E_n\subset E[n]$ and $\widetilde{\psi}_n$ divides $\psi_n$, we obviously have $K(\widetilde{\psi}_n)\subset K(\psi_n)$. For the reverse inclusion, let $x(P)\in K(x(E[n]))$ with $P$ of order $m\mid n$. Then $n=km$ for some $k$, and $P=kQ$ for some point $Q$ of order $n$. From \cite[Exercise 3.7.(d)]{AEC}, \[x(P)=x(kQ)=\frac{\phi_k(x(Q))}{\psi_k^2(x(Q))},\]where $\phi_k\in K[X]$. So $x(P)\in K(x(E_n))=K(\widetilde{\psi}_n)$. \end{proof} In fact, we do not need these formulas to prove that $K(x(E[n]))=K(x(E_n))$: see Lemma~\ref{égalité extension torsion et ordre} for a proof using only Galois theory.
\end{proof} \begin{lemma}\label{égalité extension torsion et ordre}For $n\geq2$, we have $K(x(E_n))=K(x(E[n]))$. \end{lemma} \begin{proof} The inclusion $K(x(E_n))\subset K(x(E[n]))$ is obvious. Then we have \[\Gal(\overline{K}/K(x(E[n])))<\Gal(\overline{K}/K(x(E_n))).\]Now,
with iron L absorption in the ionization range from Fe xx–xxiv. Alternatively, associating this broad absorption line with a single feature, but with a smaller blueshift, would require an identification with the Fe xxiv at  keV. However, this would imply a single very broad line seen in isolation, without a strong contribution from other Fe L lines, e.g. Fe xxiii at  keV and lower ionization ions. Furthermore, as we will discuss later in Section 5, modeling this absorption profile with a grid of photoionized absorption spectra prefers the solution with a fast and highly ionized wind, noting that part of the large velocity width of the broad trough could be explained with a blend of these lines. Finally a narrower absorption line is also apparent at 846 eV in OBS. CD, and although it is much weaker than the broad 1.2 keV profile, it is also independently confirmed in the MOS (see Section 4.3). While the identification of this isolated line is uncertain, we note that if it is tentatively associated to absorption from O viii Ly (at  keV), its outflow velocity would be consistent with that found at iron K, at . The appearance of complex soft X-ray absorption structure near 1 keV in OBS E is perhaps not unexpected, especially as this RGS sequence appears the most absorbed and at the lowest flux of all the 2013–2014 XMM–Newton data sets. As shown in Figure 5, two deep absorption lines (at  keV and  keV) and an emission line at  keV are formally required by the data (see Table 2). Indeed, the addition of the two Gaussian absorption lines results in the fit statistic improving by for . Assuming that the absorption line profiles have the same width, then the two lines are found to be resolved with a common width of  eV (or  km s). One likely identification of the emission/absorption line pair in OBS. E at 913 eV and 1016 eV may be with Ne ix (at keV), which would then require the absorption line to be blueshifted with respect to the emission line component. Alternatively, if the absorption at 1016 eV is separately associated with Ne x Ly (at  keV), then this would require little or no velocity shift. However, the second absorption line at 1166 eV only requires a blueshift if it is associated with Ne x Lyman-, alternatively it could be associated to Fe xxiv without requiring a blueshift. Thus the precise identification of these 1 keV absorption lines are difficult to determine on an ad-hoc basis from fitting simple Gaussian profiles. Their most likely origin will be discussed further when we present the self consistent photoionization modeling of the OBS. E spectrum in Section 5.3. Nonetheless, regardless of their possible identification, the detection of the broad absorption profiles in the soft X-ray band in both OBS. E and OBS. CD may suggest the presence of a new absorption zone with somewhat lower velocity and/or ionization compared to the well-known ionized wind established at iron K. ### 4.2 Comparison to the 2001 and 2007 XMM–Newton Observations Following the results of the 2013–2014 observations, we re-analyzed the RGS spectra collected during the past XMM–Newton observations of PDS 456, when the quasar was observed in the two extreme states: a highly obscured one (2001; Reeves et al. 2003) and an unobscured state (2007; Behar et al. 2010). For the continuum model we again adopted the best fit found with the analysis of the mean RGS spectrum, allowing the of the Galactic absorption to vary as well as allowing the continuum parameters to adjust. 
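As a rough illustration of how the outflow velocities quoted in this section follow from such line identifications (for instance, pairing the 846 eV feature with O viii Lyα as suggested above), the sketch below converts an observed line energy and a candidate rest energy into a relativistic outflow velocity. The rest energy of ≈0.654 keV and the assumption that both energies are quoted in the quasar rest frame are ours, not values taken from the text.

```python
def outflow_beta(e_obs_keV, e_rest_keV):
    # Relativistic Doppler shift for an absorber moving toward the observer:
    #   E_obs = E_rest * sqrt((1 + beta) / (1 - beta))
    r = (e_obs_keV / e_rest_keV) ** 2
    return (r - 1.0) / (r + 1.0)

# Illustrative numbers only: the 846 eV line against an assumed O VIII Ly-alpha
# rest energy of ~0.654 keV (both treated as quasar rest-frame energies).
print(f"v/c ~ {outflow_beta(0.846, 0.654):.2f}")  # about 0.25 under these assumptions
```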
For the 2001 observation we included an additional neutral partial covering absorber in the continuum model; indeed, during this observation, even though the intrinsic flux of PDS 456 is relatively high (see Table 1), the AGN appeared heavily obscured, with the presence of strong spectral curvature over the 1–10 keV band (Reeves et al. 2003). This curvature is especially evident in the EPIC MOS data (see Section 4.3). Without this absorber the derived photon index, although poorly constrained, is extremely hard (), and its extrapolation above 2 keV lies well above the EPIC spectra. On the other hand, the 2007 spectrum required a steeper photon index (), indicating a lack of intrinsic absorption. The residuals for the 2001 and 2007 RGS data to the baseline continuum model are shown in the two lower panels of Figure 4. Similarly to the OBS. E spectrum during the 2013–2014 campaign, the 2001 observation tracks the presence of the highly ionized wind. The presence of absorption near 1 keV in the 2001 observation was first noted in this data set by Reeves et al. (2003). A highly significant broad absorption profile is apparent in the residuals at eV, with an equivalent width of  eV, while the improvement in the fit statistic upon adding this absorption line is (see Table 2). The profile is resolved with a width of  eV (or  km s), as per the similar broad profiles in OBS. E. As discussed above, one plausible identification is with mildly blueshifted Ne x Ly (again notwithstanding any possible contribution from iron L absorption). At face value this suggests the presence of a lower velocity zone of the wind, which is investigated further in Section 5. In contrast to the 2001 spectrum, the 2007 observations show little evidence for intrinsic absorption and appear to be featureless (see also Behar et al. 2010). ### 4.3 Consistency check with EPIC-MOS soft X-ray spectra In this section we perform a consistency check on the lines detected in the RGS with the EPIC MOS spectra. Indeed, given the strength and breadth of the absorption features detected in the RGS spectra, we may expect them to be detectable at the MOS CCD spectral resolution. We thus considered the combined MOS 1+2 spectra for each observation and, while we fitted only the 0.5–2 keV energy range where the soft X-ray features occur, we also checked that the best fit models provide a good representation of the overall X-ray continuum to higher energies. The continuum model found from the analysis of the RGS spectra was adopted, but again allowing the parameters to adjust. As noted above, the 2001 MOS observation also shows pronounced curvature between 1.2–2.0 keV, which is apparent in the residuals of this spectrum in Figure 6, and was accounted for by including a neutral partial covering absorber. The residuals of the five MOS spectra (OBS. B, CD, E, 2001 and 2007) to the best fit continuum models are shown in Figure 6. Several absorption profiles are clearly visible (which are labelled on each of the corresponding spectral panels), and they are generally in good agreement with the features present in the RGS in Figure 4. In particular, the broad absorption profiles present during OBS. CD and OBS. E are confirmed at high significance, whereby both the broad profile at   keV (OBS. CD) and the complex absorption structure between 0.9–1.2 keV (OBS. E) emerge at high signal to noise in the MOS data. 
Likewise the broad absorption trough detected at 1.06 keV in the 2001 dataset, is also confirmed in the MOS spectra, with self consistent parameters. On the other hand, the featureless nature of the 2007 spectrum is also confirmed in the MOS, with no obvious residuals present in the spectrum. The results of the Gaussian line fitting to the MOS spectra are subsequently summarized in Table 3. As a further check of the consistency between the RGS and MOS detections of the absorption lines, we generated and overlaid confidence contours for each of the lines that are independently detected in each of the RGS and MOS spectra. As an illustration we show two of the examples from the OBS. E and 2001 sequences in Figure 7. The upper panel shows the 68%, 90% and 99% confidence contours (for 2 parameters of interest) of line energy against flux for the  keV absorption line detected in the OBS. E spectra. The contours show the close agreement between the two detectors, with the absorption line confirmed in
full vein not in $\mathbb{B}$ must have a vertex in common with a fat vein $\mathcal{V}^f_y$ corresponding to one of the veins $\mathcal{V}_y$ of $\mathbb{B}$. Let $u_{x,y}$ be the lowest vertical coordinate and $w_{x,y}$ the highest vertical coordinate of vertices in $\mathcal{V}^f_y \cap C_x$. We define $\mathcal{S}_0 = \{v_{x,y}\in V(G_{[j,j+k-1]}) : x \in [j,j+k-1], y < u_{x,1} \}$, $\mathcal{S}_b = \{v_{x,y}\in V(G_{[j,j+k-1]}) : x \in [j,j+k-1], y > w_{x,b} \}$, and for $y=1,\dots,b-1$ we define: \[ \mathcal{S}_i = \{v_{x,y}\in V(G_{[j,j+k-1]}) : x \in [j,j+k-1], w_{x,i} < y < u_{x,i+1} \}\] This gives us $b+1$ \emph{slices} $\{\mathcal{S}_0, \mathcal{S}_1, \cdots, \mathcal{S}_b \}$. We partition the vertices in the fat veins and the slices into sets which have similar neighbourhoods, which will facilitate the division of $G$ into panels. We colour the vertices of $G_{[j,j+k-1]}$ so that each slice has green/pink vertices to the left and red vertices to the right of the partition, and each fat vein has blue vertices (if any) to the left and yellow vertices to the right. Examples of vertex colourings are shown in Figure \ref{fig-col1}. Colour the vertices of each slice $\mathcal{S}_i$ as follows: \begin{itemize} \item Colour any vertices in the left-hand column green. Now colour green any remaining vertices in the slice that are connected to one of the green left-hand column vertices by a part vein that does not have a vertex in common with any of the fat veins corresponding to the full veins in $\mathbb{B}$. \item Locate the column $t$ of the right-most green vertex in the slice. If there are no green vertices set $t=s=j$. If $t>j$ then choose $s$ in the range $j \le s < t$ such that $s$ is the highest column index for which $\alpha_{s}=2$. If there are no columns before $t$ for which $\alpha_{s}=2$ then set $s=j$. Colour pink any vertices in the slice (not already coloured) in columns $j$ to $s$ which are below a vertex already coloured green. \item Colour any remaining vertices in the slice red. \end{itemize} Note that no vertex in the right-hand column can be green because if there was such a vertex then this would contradict the fact that there can be no full veins other than those which have a vertex in common with one of the fat veins corresponding to the full veins in $\mathbb{B}$. Furthermore, no vertex in the right hand column can be pink as this would contradict the fact that every pink vertex must lie below a green vertex in the same slice. Colour the vertices of each fat vein $\mathcal{V}^f_i$ as follows: \begin{itemize} \item Let $s$ be the column as defined above for the slice immediately above the fat vein. If $s=j$ colour the whole fat vein yellow. If $s>j$ colour vertices of the fat vein in columns $j$ to $s$ blue and the rest of the vertices in the fat vein yellow. 
\end{itemize} \begin{figure} \centering \begin{tikzpicture}[yscale=0.6,xscale=0.6, vertex4/.style={circle,draw=white,minimum size=6,fill=white},] \foreach \x/\y in {1/4,1/15,1/22,2/4,2/21,2/22,3/3, 3/20,3/21} \node[vertex,blue!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {2/15,3/12,3/13,3/14, 4/3,4/11,4/18,4/19,5/3,5/9,5/10,5/11,5/18, 6/3,6/9,6/18,7/1,7/5,7/2,7/3,7/6,7/7,7/8,7/9,7/16,7/18} \node[vertex,yellow!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {1/3,1/5,1/9,1/12,1/17,1/18,1/23,1/24, 2/1,2/7,2/16,2/23,3/6,3/7,3/22,4/4,4/21} \node[vertex,green!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {2/19,3/8,3/17,3/24, 4/1,4/8,4/14,4/24,5/5,5/6,5/7,5/13,5/14,5/22,5/23,5/24, 6/4,6/12,6/11,6/13,6/14,6/21,7/20,7/21,7/24} \node[vertex,red!70!black] (\x-\y) at (\x,\y) {}; { \draw[very thick,blue] (1-4)--(2-4)--(3-3)--(4-3) --(5-3)--(6,3)--(7-1); \draw[very thick,blue] (1-15)--(2-15)--(3-12)--(4-11) --(5-9)--(6-9)--(7-5); \draw[very thick,blue] (1-22)--(2-21)--(3-20)--(4-18) --(5-18)--(6-18)--(7-16); \draw[blue] (7-3)--(6-3)--(7-2); \draw[blue] (2-15)--(3-14)--(4-11)--(3-13)--(2-15); \draw[blue] (4-11)--(5-11)--(6-9)--(5-10)--(4-11); \draw[blue] (7-9)--(6-9)--(7-8);. \draw[blue] (7-7)--(6-9)--(7-6); \draw[blue] (1-22)--(2-22)--(3-21)--(4-18); \draw[blue] (2-22)--(3-20)--(4-19); \draw[blue] (2-21)--(3-21)--(4-19)--(5-18); \draw[blue] (6-18)--(7-18);, \draw[green] (1-24)--(2-23)--(3-22)--(4-21); \draw[green] (1-23)--(2-23); \draw[green] (1-18)--(2-16)--(1-17); \draw[green] (1-12)--(2-7)--(1-9); \draw[green] (2-7)--(3-7)--(4-4)--(3-6)--(2-7); \draw[green] (1-3)--(2-1); } \node (99)[label={below:Example $1$}] [vertex4] at (4,1) {}; \begin{scope}[shift={(8,0)}] \foreach \x/\y in {1/4,1/15,1/22,2/4,2/21,2/22,3/4, 3/20,3/21} \node[vertex,blue!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {2/15,3/12,3/13,3/14, 4/3,4/4,4/9,4/10,4/11,4/18,4/19,5/3,5/9,5/10,5/11,5/18,5/19, 6/3,6/4,6/9,6/11,6/18,6/19,7/3,7/4,7/9,7/11,7/18,7/19} \node[vertex,yellow!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {1/3,1/5,1/9,1/12,1/17,1/18,1/23,1/24, 2/1,2/7,2/16,2/23,3/6,3/7,3/22,4/5,4/21, 5/5,5/21,6/5,6/21} \node[vertex,green!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {2/19,3/8,3/17,3/24, 4/1,4/8,4/14,4/24, 5/6,5/7,5/13,5/14,5/23,5/24, 6/12,6/13,6/14,6/24, 7/1,7/2,7/6,7/7,7/8,7/14,7/16,7/24} \node[vertex,red!70!black] (\x-\y) at (\x,\y) {}; { \draw[very thick,blue] (1-4)--(2-4)--(3-4)--(4-3) --(5-3)--(6,3)--(7-3); \draw[very thick,blue] (1-15)--(2-15)--(3-12)--(4-9) --(5-9)--(6-9)--(7-9); \draw[very thick,blue] (1-22)--(2-21)--(3-20)--(4-18) --(5-18)--(6-18)--(7-18); \draw[blue] (2-15)--(3-14)--(4-11)--(3-13)--(2-15); \draw[blue] (7-9)--(6-9); \draw[blue] (1-22)--(2-22)--(3-21)--(4-18); \draw[blue] (2-22)--(3-20)--(4-19); \draw[blue] (2-21)--(3-21)--(4-19)--(5-19)-- (6-19)--(7-19); \draw[blue] (3-14)--(4-10)--(3-13)--(4-9); \draw[blue] (4-11)--(3-12)--(4-10); \draw[blue] (4-9)--(3-14); \draw[blue] (3-4)--(4-4); \draw[blue] (4-10)--(5-10); \draw[blue] (4-11)--(5-11)--(6-11)--(7-11); \draw[blue] (6-4)--(7-4); \draw[green] (1-24)--(2-23)--(3-22)--(4-21) --(5-21)--(6-21); \draw[green] (1-23)--(2-23); \draw[green] (1-18)--(2-16)--(1-17); \draw[green] (1-12)--(2-7)--(1-9); \draw[green] (2-7)--(3-7)--(4-5)--(3-6)--(2-7); \draw[green] (4-5)--(5-5)--(6-5); \draw[green] (1-3)--(2-1); } \node (99)[label={below:Example $2$}] [vertex4] at (4,1) {}; \end{scope} \begin{scope}[shift={(16,-8)}] \foreach \x/\y in {1/24,2/23,3/21,4/12, 5/12,6/12,7/12,8/12,9/8, 10/5} \node[vertex,blue!70!black] (\x-\y) at (\x,\y) 
{}; \foreach \x/\y in {2/24,3/22,3/23, 4/13,4/14,4/15,4/16,4/17,4/18,4/19,4/20,4/21, 5/14,5/15,5/18,5/19,5/20, 6/13,6/14,6/15,6/18,6/19,7/15,7/16,7/19,7/21, 8/15,8/19,8/21} \node[vertex,blue!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {9/8,9/9,9/10,9/11,9/12, 10/5,10/6,10/7,10/8} \node[vertex,yellow!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {1/36,2/35,3/33,4/30, 5/30,6/30,7/30,8/30,9/19} \node[vertex,green!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {1/35,2/33,3/31,4/28,4/27,4/26,4/25, 5/27,6/27,7/27,8/27,9/16} \node[vertex,green!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {5/23,5/26,6/29,6/26,6/25,6/24,6/23,7/29, 7/26,7/25,7/24,7/23,8/24,8/23} \node[vertex,pink!70!black] (\x-\y) at (\x,\y) {}; \foreach \x/\y in {3/36,4/35,5/35,6/35,7/35,8/35,9/34} \node[vertex,red!70!black] (\x-\y) at (\x,\y) {}; { \draw[very thick,blue] (1-24)--(2-23)--(3-21)--(4-12) --(5-12)--(6,12)--(7-12)--(8-12)--(9-8)--(10-5); \draw[blue] (1-24)--(2-24)--(3-23)--(4-21); \draw[blue] (2-23)--(3-23)--(4-20)--(5-20); \draw[blue] (2-24)--(3-22)--(4-21); \draw[blue] (2-24)--(3-21)--(4-21); \draw[blue] (2-23)--(3-22)--(4-20); \draw[blue] (3-23)--(4-19)--(5-19)--(6-19)--(7-19)--(8-19); \draw[blue] (3-23)--(4-18);\draw[blue] (3-23)--(4-17); \draw[blue] (3-23)--(4-16);\draw[blue] (3-23)--(4-15); \draw[blue] (3-23)--(4-14);\draw[blue] (3-23)--(4-13); \draw[blue] (3-22)--(4-19); \draw[blue] (3-22)--(4-18);\draw[blue] (3-22)--(4-17); \draw[blue] (3-22)--(4-16);\draw[blue] (3-22)--(4-15); \draw[blue] (3-22)--(4-14);\draw[blue] (3-22)--(4-13); \draw[blue] (3-21)--(4-20);\draw[blue] (3-21)--(4-19); \draw[blue] (3-21)--(4-18);\draw[blue] (3-21)--(4-17); \draw[blue] (3-21)--(4-16);\draw[blue] (3-21)--(4-15); \draw[blue] (3-21)--(4-14);\draw[blue] (3-21)--(4-13); \draw[blue] (4-18)--(5-18)--(6-18); \draw[blue] (4-15)--(5-15)--(6-15)--(7-15)--(8-15); \draw[blue] (4-14)--(5-14)--(6-14); \draw[blue] (8-19)--(9-12);\draw[blue] (8-19)--(9-11); \draw[blue] (8-19)--(9-10);\draw[blue] (8-19)--(9-9); \draw[blue] (8-19)--(9-8); \draw[blue] (8-21)--(9-12);\draw[blue] (8-21)--(9-11); \draw[blue] (8-21)--(9-10);\draw[blue] (8-21)--(9-9); \draw[blue] (8-21)--(9-8); \draw[blue] (8-15)--(9-12);\draw[blue] (8-15)--(9-11); \draw[blue] (8-15)--(9-10);\draw[blue] (8-15)--(9-9); \draw[blue] (8-15)--(9-8); \draw[blue] (8-12)--(9-12);\draw[blue] (8-12)--(9-11); \draw[blue] (8-12)--(9-10);\draw[blue] (8-12)--(9-9); \draw[blue] (9-12)--(10-8);\draw[blue] (9-12)--(10-7); \draw[blue] (9-12)--(10-6);\draw[blue] (9-12)--(10-5); \draw[blue] (9-11)--(10-8);\draw[blue] (9-11)--(10-7); \draw[blue] (9-11)--(10-6);\draw[blue] (9-11)--(10-5); \draw[blue] (9-10)--(10-8);\draw[blue] (9-10)--(10-7); \draw[blue] (9-10)--(10-6);\draw[blue] (9-10)--(10-5); \draw[blue] (9-9)--(10-8);\draw[blue] (9-9)--(10-7); \draw[blue] (9-9)--(10-6);\draw[blue] (9-9)--(10-5); \draw[blue] (9-8)--(10-8);\draw[blue] (9-8)--(10-7); \draw[blue] (9-8)--(10-6); \draw[green] (1-36)--(2-35)--(3-33)--(4-30) --(5-30)--(6,30)--(7-30)--(8-30); \draw[green] (1-36)--(2-33)--(1-35)--(2-35); \draw[green] (2-35)--(3-31)--(2-33)--(3-33); \draw[green] (3-33)--(4-28)--(3-31)--(4-30); \draw[green] (3-33)--(4-27)--(3-31)--(4-26)--(3-33)-- (4-25)--(3-31); \draw[green] (4-27)--(5-27)--(6-27)--(7-27)--(8-27); \draw[green] (9-16)--(8-30)--(9-19); \draw[green] (9-16)--(8-27)--(9-19); \draw[pink] (5-26)--(6-26)--(7-26); \draw[pink] (6-25)--(7-25); \draw[pink] (6-29)--(7-29); \draw[pink] (6-24)--(7-24)--(8-24); \draw[pink] (5-23)--(6-23)--(7-23)--(8-23); } \node (99)[label={below:Example 
$3$}] [vertex4] at (5,9) {}; \end{scope} \end{tikzpicture}\par \caption{Examples of vein and slice colouring -- a $222222$, a $222000$ and a $222000022$ factor, with vertices coloured blue, green, pink, red and yellow as described. The only edges shown are the veins (bold blue), other edges in the fat veins (blue), part veins that start on the left column but do not reach the right column (green) and related pink rows.} \label{fig-col1} \end{figure} When we create a clique-width expression we will be particularly interested in the edges between the blue and green/pink vertices to the left and the red and yellow vertices to the right. \begin{prop}\label{red} Let $v$ be a red vertex in column $x$ and slice $\mathcal{S}_i$. If $u$ is a blue, green or pink vertex in column $x-1$ then \[uv \in E(G) \text{ if and only if } \alpha_{x-1}=2 \text{ and } u \in \mathcal{V}^f_{i+1} \cup \mathcal{S}_{i+1} \cup \cdots \cup \mathcal{V}^f_{b} \cup \mathcal{S}_b. \] Similarly, if $u$ is a blue, green or pink vertex in column $x+1$ then \[uv \in E(G) \text{ if and only if } \alpha_{x}=2 \text{ and } u \in \mathcal{S}_{0} \cup \mathcal{V}^f_{1} \cup \mathcal{S}_{1} \cup \cdots \cup \mathcal{V}^f_{i} \cup \mathcal{S}_{i}. \] \end{prop} \begin{proof} Note that as $u$ and $v$ are in consecutive columns we need only consider $\alpha$-edges. If $u$ is green in column $x-1$ of $\mathcal{S}_i$ then red $v$ in column $x$ of $\mathcal{S}_i$ cannot be adjacent to $u$ as this would place red $v$ on a green part-vein which is a contradiction. Likewise, if $u$ is green in column $x+1$ of $\mathcal{S}_i$ then red $v$ in column $x$ of $\mathcal{S}_i$ must be adjacent to $u$ since if it was not adjacent to such a green vertex in the same slice then this implies the existence of a green vertex above the red vertex in the same column which contradicts the colouring rule to colour pink any vertex in columns $j$ to $s$ below a vertex coloured green. The other adjacencies are straightforward. \end{proof} \begin{prop}\label{yellow} Let $v$ be a yellow vertex in column $x$ and fat vein $\mathcal{V}^f_i$. If $u$ is a blue, green or pink vertex in column $x-1$ then \[uv \in E(G) \text{ if and only if } \alpha_{x-1}=2 \text{ and } u \in \mathcal{V}^f_{i} \cup \mathcal{S}_{i} \cup \cdots \cup \mathcal{V}^f_{b} \cup \mathcal{S}_b. \] Similarly, if $u$ is a blue, green or pink vertex in column $x+1$ then \[uv \in E(G) \text{ if and only if } \alpha_{x}=2 \text{ and } u \in \mathcal{S}_{0} \cup \mathcal{V}^f_{1} \cup \mathcal{S}_{1} \cup \cdots \cup \mathcal{V}^f_{i-1} \cup \mathcal{S}_{i-1}. \] \end{prop} \begin{proof} Note that as $u$ and $v$ are in consecutive columns we need only consider $\alpha$-edges. If $u$ is blue in column $x-1$ of $\mathcal{V}^f_i$ then yellow $v$ in column $x$ of $\mathcal{V}^f_i$ must be adjacent to $u$ from the definition of a fat vein. Equally, from the colouring definition for a fat vein there cannot be a blue vertex in column $x+1$ of $\mathcal{V}^f_i$ if there is a yellow vertex in column $x$ of $\mathcal{V}^f_i$. The other adjacencies are straightforward. \end{proof} Having established these propositions, as the pink and green vertices in a particular slice and column have the same adjacencies to the red and yellow vertices, we now combine the green and pink sets and simply refer to them all as \emph{green}. \subsection{Extending \texorpdfstring{$\alpha$}{alpha} to the \texorpdfstring{$4$}{4}-letter alphabet} Our analysis so far has been based on $\alpha$ being a word from the alphabet $\{0,2\}$. 
We now use the following lemma to extend our colouring to the case where $\alpha$ is a
distinct mutant group types from Pitest~\cite{pitest}. This is important to determine whether the generation, selection or identification of commit-relevant mutants can be improved by focusing on specific mutant types. \textit{What is the prevalence of mutant types among (subsuming) commit-relevant mutants?} \autoref{fig:RQ4-test_mutants_operators} illustrates the prevalence of mutant types among commit-relevant mutants. Our evaluation results show that some mutant types are highly prevalent, such as \textit{Unary Operator Insertion Mutators (UOIMutators)}, \textit{Relational Operator Replacement Mutator (RORMutators)} and \textit{Constant Replacement Mutator (CRCRMutators)}. \revise{ On one hand, UOIMutators inject a unary operator (increment or decrement) on a variable, which may affect the values of local variables, arrays, fields, and parameters~\cite{pitest}, while RORMutators replace a relational operator with another one, e.g., ``$<$'' with ``$>$'' or ``$<=$'' with ``$<$''. On the other hand, CRCRMutators mutate inline constants. For further details about the mutant types, the table of constants and other mutation operators can be found in the official PiTest documentation\footnote{http://pitest.org/quickstart/mutators/}.} Specifically, 50.77\% of the commit-relevant mutants are of one of these three mutant types. This is mainly related to the fact that these three mutation operators produced the majority (54.5\%) of the mutants considered in our study. Precisely, \autoref{fig:RQ4-ratio_relevant_mutants_operators} shows that the \emph{distribution} of commit-relevant mutants is clearly \emph{uniform} per mutant type. That is, in general, between 20\% and 30\% of the mutants for each type turn out to be commit-relevant. This indicates that the mutant type does not increase or reduce the chances of a mutant being commit-relevant. The outliers of \autoref{fig:RQ4-ratio_relevant_mutants_operators}, corresponding to mutant types \revise{Bitwise Operator Mutator} (\textit{OBBNMutators}) and \revise{Invert Negatives Mutator} (\textit{InvertNegsMutat}), are due to the low number of mutants for these types: 13 out of 81 (16\%) mutants are commit-relevant in the case of the \textit{OBBNMutators} mutant type, while 3 out of 5 (60\%) mutants are commit-relevant for the \textit{InvertNegsMutat} mutant type. \revise{In particular, OBBNMutators mutates (i.e., reverses) bitwise ``AND'' (\&) and ``OR'' ($|$) operators, while the InvertNegsMutat operator inverts the negation of integers and floating-point numbers.} Similarly, Figures \ref{fig:RQ4-test_sub_mutants_operators} and \ref{fig:RQ4-ratio_sub_mutants_operators} show that the ratio of subsuming commit-relevant mutants per mutant type follows a uniform distribution as well. Typically, between 5-7\% of the mutants per mutant type turn out to be subsuming commit-relevant. The outlier of \autoref{fig:RQ4-ratio_sub_mutants_operators} corresponds to the \textit{InvertNegsMutat} mutant type, where none of the 3 commit-relevant mutants identified for this mutant type are subsuming (because mutants of a different mutant type subsume them). \begin{result} The distribution of (subsuming) commit-relevant mutants per mutant type is uniform. Typically, between 20-30\% (5-7\%) of the mutants per mutant type are (subsuming) commit-relevant.
\end{result} \smallskip\noindent \subsection{\RQ5: Effectiveness of Commit-relevant Mutants Selection} This section simulates a mutation testing scenario where the tester selects a mutant for analysis for which a test to kill it is developed. Note that a test case that is designed to kill a mutant may collaterally kill other mutants. Consequently, opening a space to examine the effectiveness of the test suites developed when guided by different mutant selection strategies. Accordingly, this study compares the following mutant selection strategies: “random mutants selection,” “mutants within a change,” and (subsuming) commit-relevant mutants. We measure their effectiveness in terms of the \textit{Relevant Mutation Score} (RMS) and \textit{Minimal-Relevant Mutation Score} (RMS*), which intuitively measures the number of (subsuming) commit-relevant mutants killed by the different test suites. Specifically, we investigate the extent to which selecting and killing each aforementioned mutant types improves the test suite quality, in terms of the number of (subsuming) commit-relevant mutants killed by the test suite. Then we pose the question: \textit{How many (subsuming) commit-relevant mutants are killed if a developer or test generator selects and kills random mutants or only mutants within a change?} \begin{table}[] \caption{Comparative Effectiveness of selecting and killing (subsuming) commit-relevant mutants in comparison to ``\revise{all mutants}'' and ``mutants within a change'' by observing RMS (Relevant Mutation Score) and RMS* (Subsuming Relevant Mutation Score)} \resizebox{\textwidth}{!}{ \begin{tabular}{c|cccccccccc|cccccccccc|} \cline{2-21} & \multicolumn{10}{c|}{\textbf{RMS}} & \multicolumn{10}{c|}{\textbf{RMS*}} \\ \hline \multicolumn{1}{|c|}{\textbf{Selection Strategy/Interval}} & \textit{\textbf{2}} & \textit{\textbf{4}} & \textit{\textbf{6}} & \textit{\textbf{8}} & \textit{\textbf{10}} & \textit{\textbf{12}} & \textit{\textbf{14}} & \textit{\textbf{16}} & \textit{\textbf{18}} & \textit{\textbf{20}} & \textit{\textbf{2}} & \textit{\textbf{4}} & \textit{\textbf{6}} & \textit{\textbf{8}} & \textit{\textbf{10}} & \textit{\textbf{12}} & \textit{\textbf{14}} & \textit{\textbf{16}} & \textit{\textbf{18}} & \textit{\textbf{20}} \\ \hline \multicolumn{1}{|c|}{\textbf{Random}} & 46.67 & 70.59 & 82.42 & 88.10 & 91.95 & 95.26 & 95.74 & 96.85 & 97.50 & 98.05 & 11.11 & 35.00 & 54.17 & 66.13 & 75.00 & 81.25 & 85.71 & 88.24 & 90.89 & 92.76 \\ \cline{1-1} \multicolumn{1}{|c|}{\textbf{Within a change}} & 46.48 & 59.52 & 65.91 & 67.48 & 68.42 & 69.47 & 69.96 & 70.18 & 70.40 & 71.09 & 11.95 & 25.00 & 28.95 & 32.31 & 33.33 & 33.67 & 34.38 & 34.88 & 35.23 & 35.29 \\\cline{1-1} \multicolumn{1}{|c|}{\textbf{Commit-Relevant}} & 75.00 & 95.05 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 40.74 & 83.72 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\ \cline{1-1} \multicolumn{1}{|c|}{\textbf{Subsuming Commit-Relevant}} & 80.00 & 98.51 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 65.75 & 95.35 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\ \hline \end{tabular} } \label{tab:RQ5-median-developer_simulation} \end{table} \begin{figure*}[bt!] 
\centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figures/Box_plot_Simulation_relevant_ms_v3.pdf} \caption{Relevant Mutants Progression} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figures/Box_plot_Simulation_minimal_relevant_ms_v3.pdf} \caption{Subsuming Relevant Mutants Progression} \end{subfigure} \caption{ Comparative Effectiveness of selecting and killing (subsuming) commit-relevant mutants in comparison to ``random mutants'' and ``mutants within a change''} \label{fig:RQ5-developer_simulation} \end{figure*} \autoref{tab:RQ5-median-developer_simulation} and \autoref{fig:RQ5-developer_simulation} demonstrate how the effectiveness of the developed test suites progresses when we analyze up to 20 mutants from the different mutant pools. We observed that when the same number of mutants is selected from the different pools, better effectiveness is reached by test suites developed for killing (subsuming) commit-relevant mutants. For instance, a test suite designed to kill six (6) selected (subsuming) commit-relevant mutants will achieve 100\% of RMS and RMS*. However, a test suite designed to kill six randomly selected mutants will achieve 82.42\% RMS and 54.17\% RMS*, while a test suite that kills six mutants within a change will achieve 65.91\% RMS and 28.95\% RMS*. More precisely, even after selecting 20 mutants, neither random selection \revise{from all mutants} nor selection of mutants within a change achieved 100\% of RMS and RMS*. This result demonstrates the significant advantage achieved by selecting (subsuming) commit-relevant mutants. Moreover, we observed that random selection \revise{from all mutants} is up to 1.6 times more effective than selecting mutants within a change. For instance, selecting 20 random mutants achieves 98.05\% RMS and 92.76\% RMS*, while selecting 20 mutants within a change only achieves 71.09\% RMS and 35.29\% RMS*. This result demonstrates the importance of selecting mutants \emph{outside} developers' committed changes. \begin{result} Selecting and killing (subsuming) commit-relevant mutants led to more effective test suites. They significantly reduced the number of mutants requiring analysis compared to random mutant selection and selecting mutants within a change. \end{result} \smallskip\noindent \subsection{\RQ6: Test Executions} In this section, we study the \emph{efficiency} of the different mutant sets in terms of the number of \emph{test executions} \revise{required to run the tests resulting from the analysis of 2-20 mutants. We thus approximate the \revise{computational} demands involved when using all mutants, relevant mutants, (subsuming) relevant mutants, and mutants located within commit changes. } \revise{\autoref{fig:RQ6-developer_simulation} illustrates the number of test executions required by the test suites \revise{derived from the analysis of 2-20 mutants}. We found that the analysis of commit-relevant mutants significantly reduces the number of required test executions by \revise{4.28} times on average over different intervals of analysed mutants (and \revise{16} times when using subsuming commit-relevant mutants) in comparison to the \revise{test executions required when analysing all mutants}. For instance, users will need to perform \revise{601} test executions when deriving tests based on the \revise{analysis of 2} mutants, from the set of all mutants, compared to
John Preskill. How did SQuInT get here? Its origin stems from the history of Quantum Information Science (QIS) itself. I joined the faculty at UNM in 1995. Those were heady times, on the heels of Shor’s algorithm and new developments in quantum information theory, which occurred at inflationary speeds. Simultaneously, Bose Einstein Condensation had just been observed. These two developments caused a revolution in quantum optics and AMO-physics, from which SQuInT was founded. I, together with my colleague and now 20-year academic partner, Prof. Poul Jessen at the College of Optical Science, University of Arizona, focused on “optical lattices,” a brand new idea at that time, and the subject of Poul’s PhD thesis. In Poul’s dissertation, he demonstrated that the motion of laser-cooled atoms, trapped at the antinodes of standing waves, was quantized. This quantum motion was reminiscent of that seen in atomic ions in Paul traps, and we set out to exploit this in optical lattices. Indeed, a hot development of the 1990s was the ability to engineer nonclassical states of motion of ions, leveraging off of the analogy with the Jaynes-Cummings model of cavity QED. As a side note, this capability was at the heart of the 1995 proposal by Ignacio Cirac & Peter Zoller for ion trap quantum computing and the immediate demonstration by Chris Monroe & Dave Wineland of the first CNOT gate. Given these connections, in 1997 I organized a small workshop at UNM entitled Quantum Control of Atomic Motion, which brought together neutral atom trappers, ion trappers, and quantum opticians. Among the participants were Rainer Blatt, Hideo Mabuchi, Hersch Rabitz, and Dave Wineland. Hersch’s presence was a new dimension, as we began to understand that the tools of quantum optimal control, previously developed mostly in the context of NMR and in physical chemistry, would be important for quantum control of atoms. The meeting was repeated in 1998, as Quantum Control of Atomic Motion II. By that time quantum computing was fully taking hold in the community. Chris Monroe presented his logic gate results and we presented the first ideas for quantum computing in optical lattices. The attendees decided we should broaden the scope of the meeting to Quantum Information Science and Technology. Hideo Mabuchi corresponded with Ike Chuang, who was at IBM-Almaden in San Jose, California at the time. Ike, of course, was at the center of the QI revolution and in December 1998 assembled a meeting of some of the key players, including Carl Caves, Richard Cleve, Chris Fuchs, Paul Kwiat, Poul Jessen, Hideo Mabuchi, David Meyer, Chris Monroe, John Preskill, Lu Sham, and Birgitta Whaley. SQuInT Founders Meeting, IBM Almaden, San Jose CA, December 1998 And thus SQuInT was born. The first meeting was held in 1999 (SQuInT99) in Albuquerque, New Mexico at a budget hotel known as the Holiday Inn “Mountain View.” Mostly we had a view of the nearby truck stop. But the meeting was of the highest quality. Our first session was chaired by Dave Wineland. The speakers were Serge Haroche, Jeff Kimble, and Hideo Mabuchi. I’d say we were on the right track! First Annual SQuInT Workshop, February 1999, Albuquerque NM At this first meeting we voted on the SQuInT logo, created by Jon Dowling. Here’s the backstory. Alice and Bob Kokopelli, the Hopi fertility deities, play their flutes to the dreamcatcher. What has the dreamcatcher caught? Part of the circuit diagram for quantum teleportation, of course!
At the time, SQuInT was envisioned to be a regional network. As QIS was a new field, the plan was to facilitate collaborations and exchange of information given the local strength in the southwestern United States. Some of the key nodes of the SQuInT Network at the time included Caltech, IBM-Almaden, Los Alamos, NIST Boulder, UA, UCB, UCSB, UCSD, and UNM. SQuInT took as its mission two key objectives: (1) building a network where the interdisciplinary subject matter of QIS would grow through direct interactions of theoretical and experimental physicists and computer scientists, as well as chemists, engineers, and mathematicians; (2) providing training for students, postdocs, and others who were entering a newly emerging discipline. In line with goal (2), the Annual SQuInT Workshop has been a forum friendly to young scientists, where students and postdocs give talks alongside senior leaders in the field, and where new networks and collaborations can build. In addition, students organized “summer retreats,” which essentially served as summer schools, since there were few courses in QIS at that time. After its initial founding, SQuInT grew and the Annual Workshop traveled amongst the node institutions. By the fourth meeting, we had grown to over 75 participants. After its establishment in 2007, CQuIC became the official administrative home of SQuInT. The Annual Meeting alternates between New Mexico and one of the Node Institutions, of which there are now 30 across the United States and some international. These Nodes include universities, national laboratories, and industry, the latter of which has an increasing presence given the rapid developments in QI technologies (SQuInTNodes). SQuInT Node institutions serve on the SQuInT Steering Committee, are the core participants in SQuInT, and can act as local hosts of the Annual Workshop. Last year’s meeting took place in Berkeley, CA with over 200 participants. Seventeenth Annual SQuInT Meeting, February 2015, Berkeley CA After 17 years serving as the Chief SQuInT Coordinator (plus 2 years of proto-SQuInT organization), I am proud to hand over the reins to Prof. Akimasa Miyake. SQuInT remains true to its goals of training, education, and growth of an interdisciplinary subject. Under Akimasa’s organization, we have a top-notch program, and I look forward to attending SQuInT as a participant! # BTZ black holes for #BlackHoleFriday Yesterday was a special day. And no, I’m not referring to #BlackFriday — but rather to #BlackHoleFriday. I just learned that NASA spawned this social media campaign three years ago. The timing of this year’s Black Hole Friday is particularly special because we are exactly 100 years + 2 days after Einstein published his field equations of general relativity (GR). When Einstein introduced his equations he only had an exact solution describing “flat space.” These equations are notoriously difficult to solve, so their introduction sent out a call-to-arms to mathematically-minded physicists and physically-minded mathematicians, who scrambled to find new solutions. If I had to guess, Karl Schwarzschild probably wasn’t sleeping much exactly a century ago. Not only was he deployed to the Russian Front as a soldier in the German Army, but a little more than one month after Einstein introduced his equations, Schwarzschild was the first to find another solution. His solution describes the curvature of spacetime outside of a spherically symmetric mass.
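For the record (the formula below is standard textbook material, not something from the original post): in Schwarzschild coordinates the solution’s line element reads ds² = −(1 − 2GM/(c²r)) c²dt² + (1 − 2GM/(c²r))⁻¹ dr² + r²(dθ² + sin²θ dφ²), and the critical size that makes a mass “compact enough” is the Schwarzschild radius r_s = 2GM/c².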
It has the incredible property that if the spherical mass is compact enough, then spacetime will be so strongly curved that nothing will be able to escape (at least from the perspective of GR; we believe that there are corrections to this when you add quantum mechanics to the mix). Schwarzschild’s solution took black holes from the realm of clever thought experiments to the status of being a testable prediction about how Nature behaves. It’s worth mentioning that between 1916 and 1918 Reissner and Nordstrom generalized Schwarzschild’s solution to one which also has electric charge. Kerr found a solution in 1963 which describes a spinning black hole, and this was generalized by Newman et al. in 1965 to a solution which includes both spin (angular momentum) and electric charge. These solutions are symmetric about their spin axis. It’s also worth mentioning that we can write sensible equations which describe small perturbations around these solutions. And that’s pretty much all that we’ve got in terms of exact solutions which are physically relevant to the 3+1 dimensional spacetime that we live in (it takes three spatial coordinates to specify a meeting location and another +1 to specify the time). This is the setting that’s closest to our everyday experiences, and these solutions are the jumping-off points for trying to understand the role that black holes play in astrophysics. As I already mentioned,
point contact (left in Fig.~\ref{Fig:6.1}) has the transparency close to unity and the other contact (right in Fig.~\ref{Fig:6.1}) has a very small conductance $G_R \ll e^2/\pi\hbar$. This case can be realized experimentally by a corresponding adjustment of the voltages on the gates forming point contacts\cite{MarcusPrivate}. Moreover, it follows from the scaling arguments\cite{Furusaki95} that the asymmetry of contacts is a relevant perturbation. In the case $\Delta = 0$, the strongly asymmetric limit corresponds to the fixed point of a system with an infinitesimally small initial asymmetry. Therefore we can expect that at $\Delta/E_C\ll 1$, this limit adequately describes dots with a finite initial degree of asymmetry. \subsection{General formalism} \label{sec:6a} For calculation of such tunneling conductance, we have to modify derivation of Secs.~\ref{sec:3a} and \ref{sec:3b} in order to take into account the tunneling between the dot and the second lead. In comparison with Hamiltonian (\ref{Hamiltonian}), the total Hamiltonian of the system acquires two additional terms, \begin{equation} \hat{H}=\hat{H}_F + \hat{H}_C+\hat{H}_M +\hat{H}_T. \label{eq:6.1} \end{equation} Here $\hat{H}_F$ describes the electron motion in the dot and in the left lead and is given by (\ref{HF}), interaction Hamiltonian $\hat{H}_C$ is given by Eq.~(\ref{Hc}), and $ \hat{H}_M$ is the Hamiltonian of free electrons in the right lead \begin{equation} \hat{H}_M=\sum_p\xi_p \hat{a}^\dagger_p \hat{a}_p. \label{eq:6.2} \end{equation} Tunneling Hamiltonian $\hat{H}_T$ describes the weak coupling between the right lead and the dot, \begin{equation} \hat{H}_T=v_t \hat{\psi}^\dagger ( {\bf r}_t) \sum_p \hat{a}_p + h.c. \, , \label{eq:6.3} \end{equation} where ${\bf r}_t$ is the coordinate of the tunneling contact, $\hat{\psi}^\dagger({\bf r})$ is the electron wavefunction in the lead, and $v_t$ is the coupling constant which will be later related to the tunneling conductance of the contact $G_R$. Because $G_R \ll e^2/(2\pi\hbar)$, we can consider the tunneling current $I$ as the function of applied voltage $V$ in the second order of perturbation theory in tunneling Hamiltonian (\ref{eq:6.3}). This gives us the standard result\cite{Mahan} \begin{equation} I(eV) =i\left[ J\left(i\Omega_n\to eV+i0\right)- J\left(i\Omega_n\to eV-i0\right) \right], \label{eq:6.4} \end{equation} where $\Omega_n=2\pi Tn$ is the bosonic Matsubara frequency, and Matsubara current $J$ is defined as \begin{equation} J(i\Omega_n)=ev_t^2\nu \int_0^\beta d\tau e^{-i\Omega_n\tau} {\cal G}_M(\tau)\Pi (\tau). \label{eq:6.5} \end{equation} Here $\nu$ is the one-electron density of states per unit area and per one spin in the dot, ${\cal G}_M$ is the Green function of the electrons in the leads, \begin{equation} {\cal G}_M \equiv -\sum_{p_1,p_2}\langle T_\tau \hat{a}_{p_1}(\tau)\hat{a}_{p_2}^\dagger(0)\rangle =\nu_M \frac{\pi T}{\sin\pi T\tau }, \label{eq:6.6} \end{equation} with $\nu_M$ being the one-electron density of states per one spin in the lead, and function $\Pi(\tau)$ is given by \begin{equation} \Pi(\tau) = \nu^{-1} \langle T_\tau \bar{\psi}\left(\tau;{\bf r}_t\right) {\psi}\left(0; {\bf r}_t\right) \rangle. \label{eq:6.7} \end{equation} Averages in Eqs.~(\ref{eq:6.6}) and (\ref{eq:6.7}) are performed with respect to the equilibrium distribution of the system without tunneling. We choose to introduce $\nu$ into Eq.~(\ref{eq:6.4}) and into definition (\ref{eq:6.6}) to make function $\Pi (\tau )$ dimensionless. 
In the absence of the interaction, $E_C =0$, propagator $\Pi(\tau)$ is nothing but the Green function of non-interacting system; its ensemble average has the form analogous to Eq.~(\ref{eq:6.6}), \begin{equation} \left.\overline{\Pi(\tau)}\right|_{E_C=0} = \frac{\pi T}{\sin\pi T\tau }. \label{eq:6.8} \end{equation} Then, substitution of Eqs.~(\ref{eq:6.6}) and (\ref{eq:6.8}) into Eq.~(\ref{eq:6.5}) and analytic continuation (\ref{eq:6.4}) gives the tunneling current $I=s G_RV$ ($s$ is the spin degeneracy), where the tunneling conductance of the contact per one spin is \begin{equation} G_R = \frac{2\pi e^2}{\hbar}v_t^2\nu_M\nu . \label{eq:6.9} \end{equation} With the help of Eq.~(\ref{eq:6.9}) we can rewrite Eq.~(\ref{eq:6.5}) in terms of the bare conductance of the point contact: \begin{equation} J(i\Omega_n)=\frac{G_R}{2\pi e} \int_0^\beta d\tau \frac{\pi T e^{-i\Omega_n\tau} } {\sin\pi T\tau } \Pi (\tau). \label{eq:6.10} \end{equation} As we will see below, function $\Pi(\tau)$ can be analytically continued from the real axis to the complex plane, so that the result is analytic in a strip $0 < {\rm Re }\,\tau < \beta$, and has branch cuts along ${\rm Re}\,\tau=0,\beta$ lines. It allows one to deform the contour of integration as shown in Fig.~\ref{Fig:6.10}, and to obtain \begin{eqnarray} J(i\Omega_n)&=&\frac{G_R T}{2 e} \int_{-\infty}^\infty \!dt e^{\Omega_nt} \left[\theta (-\Omega_n) \theta(t)- \theta (\Omega_n) \theta(-t) \right]\times \nonumber\\ && \left(\frac{\Pi (it+0) } {\sinh[\pi T(t-i0)]}- \frac{\Pi (it-0) } {\sinh[\pi T(t+i0)] } \right). \label{eq:6.100} \end{eqnarray} Now the analytic continuation (\ref{eq:6.4}) can be performed, because the periodicity of the Matsubara Green functions was already taken into account. This gives \begin{eqnarray} I(eV)&=&i\frac{G_R T}{2 e} \int_{-\infty}^\infty \!dt e^{-ieVt} \times \nonumber\\ && \left[ \frac{\Pi (it+0) } {\sinh[\pi T(t-i0)]} -\frac{\Pi (it-0) } {\sinh[\pi T(t+i0)]} \right]. \label{eq:6.101} \end{eqnarray} Next, we use the analyticity of $\Pi(\tau)$ in the strip $0<{\rm Re}\,\tau <\beta$, and shift the integration variable $t \to t-i\beta/2$ in the first term in brackets in Eq.~(\ref{eq:6.101}), and $t \to t+i\beta/2$ in the second term. Bearing in mind that $\Pi(\tau)=-\Pi(\tau +\beta )$, we find \begin{equation} I=\left(T \sinh \frac{eV}{2T}\right) G_R \int_{-\infty}^\infty dt e^{-ieVt} \frac{\Pi \left(it+\frac{\beta}{2}\right)}{\cosh \pi T t}. \label{eq:6.102} \end{equation} Linear conductance $G$ is therefore given by \begin{equation} G= G_R \int_{-\infty}^\infty dt \frac{\Pi \left(it+\frac{\beta}{2}\right)}{2\cosh \pi T t}. \label{eq:6.103} \end{equation} \narrowtext{ \begin{figure}[h] \vspace{-0.5cm} \epsfxsize=6.7cm \hspace*{0.5cm} \epsfbox{contour2.eps} \vspace{0.3 cm} \caption{The integration contour used in the evaluation of the conductance, see Eq.~(\protect\ref{eq:6.10}) for (a) $\Omega_n <0$, and (b) for $\Omega_n > 0$. Branch cuts of the analytic continuation of $\Pi (\tau )$ are shown by thick lines.} \label{Fig:6.10} \end{figure} } Let us turn now to the actual calculation of the function $\Pi(\tau)$. It was shown in Ref.~\onlinecite{Furusaki95} that the interaction drastically affects the form of the function (\ref{eq:6.7}), however, some contributions were not taken into account. Our purpose is to construct an effective action theory, similar to that of Sec.~\ref{sec:3}, for calculation of $\Pi (\tau)$. Once again, we wish to get rid of the fermionic degrees of freedom of the dot. 
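As an elementary consistency check of Eq.~(\ref{eq:6.103}), note that in the non-interacting limit $E_C=0$ Eq.~(\ref{eq:6.8}) gives $\Pi(it+\beta/2)=\pi T/\cosh \pi T t$, so that the integral in Eq.~(\ref{eq:6.103}) equals unity and $G=G_R$, in agreement with the definition of the bare tunneling conductance per spin. The snippet below is purely illustrative (it is not part of the original derivation, and the value of $T$ is an arbitrary choice); it confirms the integral numerically:
\begin{verbatim}
# Numerical check of Eq. (6.103) for E_C = 0, where
# Pi(i t + beta/2) = pi T / cosh(pi T t) and hence G = G_R.
import numpy as np
from scipy.integrate import quad

T = 0.37  # arbitrary temperature; the result is T-independent
integrand = lambda t: (np.pi * T / np.cosh(np.pi * T * t)) / (2.0 * np.cosh(np.pi * T * t))
value, _ = quad(integrand, -100.0, 100.0)  # tails beyond |t| = 100 are negligible
print(value)  # ~1.0, i.e. G = G_R
\end{verbatim}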
Similar to Eq.~(\ref{Q1}), it is convenient to rewrite charge operator in terms of the variables of the channel. However, here we have to keep in mind the fact that the tunneling events described by operators ${\psi}^\dagger\left({\bf r}_t\right)$ and ${\psi}\left({\bf r}_t\right)$ change the charge in the system by $+e$ and $-e$. It can be taken into account\cite{Furusaki95} by introducing three additional operators: Hermitian operator $\hat{n}$, and unitary operators $\hat{F}, \hat{F}^\dagger$ with the following commutation relations: \begin{equation} \left[\hat{n},\hat{F}^\dagger\right]=\hat{F}^\dagger. \label{eq:6.11} \end{equation} We can definitely choose Hilbert subspace in a way such that operator $\hat{n}$ has integer eigenvalues. Finally, these operators commute with all the fermionic degrees of freedom. Then, we can change the definition of the charge operator [cf. Eq.~(\ref{Q1})] to \begin{equation} \frac{\hat{Q}}{e}= - \int_{\mbox{channel}} d{\bf r}\psi^\dagger\psi+\hat{n}, \label{eq:6.12} \end{equation} and rewrite Eq.~(\ref{eq:6.7}) as \begin{equation} \Pi(\tau) = \nu^{-1} \langle T_\tau \hat{\bar{F}}(\tau )\bar{\psi}\left(\tau;{\bf r}_t\right) \hat{{F}}(0){\psi}\left(0; {\bf r}_t\right) \rangle. \label{eq:6.13} \end{equation} It is easy to see from Eqs.~(\ref{eq:6.12}) and (\ref{eq:6.11}) that operators $\hat{F}^\dagger, \hat{F}$ in Eq.~(\ref{eq:6.13}) change the charge by $+e$ and $-e$ respectively, in accordance with the initial definition of charge. After this manipulation, the Hamiltonian of the system and correlation function (\ref{eq:6.13}) become quadratic in the fermionic operators of the dot, so that part of the system can be integrated out. We use the identity similar to Eq.~(\ref{transformation}): \wide{m}{ \begin{eqnarray} {\rm Tr}_2&&\left\{e^{-\beta \hat{H}_F}T_\tau \bar{\psi}_2(\tau)\psi_2(0)\right\}= e^{-\beta \hat{H}_1 }{\rm Tr}_2 \left[ e^{-\beta \hat{H}_2} T_\tau \bar{\psi}_2(\tau)\psi_2(0) e^{-\int_0^\beta d\tau \hat{H}_{12}(\tau)} \right]= \label{eq:6.14}\\ &&e^{-\beta \Omega_2} \langle T_\tau \bar{\psi}_2(\tau)\psi_2(0)\rangle_2 e^{-\beta \hat{H}_1 } T_\tau e^{ \frac{1}{2}\int_0^\beta \!\! d\tau _1\int_0^\beta\!\! d\tau _2 \langle\hat{H}_{12}(\tau_1)\hat{H}_{12}(\tau_2)\rangle_2}+\nonumber \\ && e^{-\beta \Omega_2} e^{-\beta \hat{H}_1 }\!\int_0^\beta d\tau _3\int_0^\beta\! d\tau _4 T_\tau\ \langle T_\tau \bar{\psi}_2(\tau)\hat{H}_{12}(\tau_3)\rangle_2 \langle T_\tau {\psi}_2(0) \hat{H}_{12}(\tau_3))\rangle_2 e^{ \frac{1}{2}\int_0^\beta \!\! d\tau _1\int_0^\beta\!\! d\tau _2 \langle\hat{H}_{12}(\tau_1)\hat{H}_{12}(\tau_2)\rangle_2}, \nonumber \end{eqnarray} } \noindent where all $\psi^\dagger_2, \psi_2$ are the fermionic operators of the dot, and the rest of the notation is the same as that in Eq.~(\ref{transformation}). The calculation of the product $\langle\hat{H}_{12}(\tau_1)\hat{H}_{12}(\tau_2)\rangle_2$ was performed in Sec.~\ref{sec:3a}, see Eq.~(\ref{A1}), and all the steps leading to the derivation of the effective action (\ref{action}) can be repeated here. Calculation of the remaining operator products can be performed along the lines of Appendix~\ref{ap:1}. 
This yields \begin{eqnarray} \langle T_\tau\psi (\tau_1; {\bf r}_t) \hat{H}_{12}(\tau_2)\rangle_2 = -\psi (\tau_2,0)R^*(\tau_1-\tau_2), \label{eq:6.15}\\ \langle T_\tau\bar{\psi} (\tau_1; {\bf r}_t) \hat{H}_{12}(\tau_2)\rangle_2 = -\bar{\psi} (\tau_2,0)R(\tau_2-\tau_1), \nonumber \end{eqnarray} where, similar to Eq.~(\ref{A1}), $\psi(\tau; x)=e^{\tau \hat{H}_1}\psi( x)e^{-\tau \hat{H}_1}$ are the one dimensional fermionic operators of the channel in the interaction representation, $\bar{\psi}(\tau)=\psi^\dagger (-\tau )$. Kernel $R$ describes the motion of an electron from the tunnel contact to the entrance of the single mode channel, and it is given by \begin{equation} R(\tau)= \frac{1}{2m} \int dy \phi({y}) \partial_{x} {{\cal G}}(\tau; {\bf r},{\bf r}_t). \label{eq:6.16} \end{equation} Here, ${\cal G}$ is the exact Matsubara Green function of the closed dot subjected to the zero boundary condition. The wave function $\phi(y)$ describes the transverse motion in the single-mode channel, and the coordinates $x$ in the derivative of the Green function ${\cal G}$ is set to $+0$. Kernel $R(\tau)$ is the random quantity with the zero averages. In the universal regime, products of retarded $R^R(t)$ and advanced $R^A(t)$ counterparts of $R(\tau)$ entering into the Lehmann representation (\ref{Lehman}) have the following non-vanishing averages: \begin{mathletters} \label{eq:6.17} \begin{eqnarray} &&\frac{1}{\nu}\langle R^R_{H_1}(t_1)R^A_{H_2}(t_2)\rangle=\Delta {v_F}\delta (t_1+t_2)\theta (t_1) e^{-\frac{t_1}{\tau_H^C}}, \label{eq:6.17a} \\ &&\frac{1}{\nu}\langle R^R_{H_1}(t_1)\left[R^R_{H_2}(t_2)\right]^*\rangle= \nonumber\\ && \quad\quad \Delta{v_F} \delta (t_1-t_2)\theta (t_1) e^{-\frac{t_1}{\tau_H^D}} \label{eq:6.17b}, \end{eqnarray} where the decay times $\tau_H^{C,D}$ associated with applied magnetic fields $H_{1,2}$ are given by Eq.~(\ref{tauH}). All the higher momenta can be found by using the Wick theorem\cite{Footnote1}. Deriving Eq.~(\ref{eq:6.17b}) we use Eqs.~(\ref{eq:3.21}), (\ref{universal}), and the identity ${\cal G}^R(t;r_1,r_2) = \left[{\cal G}^A(-t;r_2,r_1)\right]^*$. \end{mathletters} To complete the derivation of the effective theory, we use Eqs.~(\ref{eq:6.14}) and (\ref{eq:6.15}), introduce left and right moving fermions similarly to Sec.~(\ref{sec:3a}), and thus obtain the effective action representation for $\Pi(\tau)$ from Eq.~(\ref{eq:6.13}): \begin{mathletters} \label{eq:6.18} \begin{eqnarray} &&\Pi (\tau) = \Pi_{in}(\tau) + \Pi_{el}(\tau); \label{eq:6.18a} \\ &&\Pi_{in}=-\frac{{\cal G}(- \tau; {\bf r}_t, {\bf r}_t ) } {\nu \langle T_\tau e^{-\hat{S}}\rangle} \langle T_\tau e^{-\hat{S}} \hat{\bar{F}}(\tau)\hat{F}(0) \rangle;\label{eq:6.18b}\\ &&\Pi_{el}\!=\frac{1} {\nu\langle T_\tau e^{-\hat{S}}\rangle}\int_0^\beta \!\! d\tau_1d\tau_2 R(\tau_1-\tau) R^*(-\tau_2)\times \label{eq:6.18c}\\ &&\langle T_\tau e^{-\hat{S}} \hat{\bar{F}}(\tau)\hat{F}(0) \left[\bar{\psi}_L(\tau_1)+\bar{\psi}_R(\tau_1)\right] \left[\psi_L (\tau_2) + \psi_R (\tau_2)\right] \rangle. \nonumber \end{eqnarray} Here the averaging is performed with respect to the Hamiltonian \end{mathletters} \begin{eqnarray} \hat{H}_0 &=& iv_F\int_{-\infty}^\infty dx \left\{\psi_L^\dagger\partial_x\psi_L - \psi_R^\dagger\partial_x\psi_R \right\} \label{eq:6.19} \\ &&+ \frac{E_C}{2}\left(\int_{-\infty}^0dx :\psi_L^\dagger\psi_L +\psi^\dagger_R\psi_R:+{\cal N}-\hat{n}\right)^2, \nonumber \end{eqnarray} and action
# Abstract Algebra/Group Theory/Products and Free Groups During the preliminary sections we introduced two important constructions on sets: the direct product and the disjoint union. In this section we will construct the analogous constructions for groups. ## Product Groups Definition 1: Let $G$ and $H$ be groups. Then we can define a group structure on the direct product $G\times H$ of the sets $G$ and $H$ as follows. Let $(g_1,h_1),(g_2,h_2)\in G\times H$. Then we define the multiplication componentwise: $(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2)$. This structure is called the direct product of $G$ and $H$. Remark 2: The product group is a group, with identity $(e_G,e_H)$ and inverses $(g,h)^{-1}=(g^{-1},h^{-1})$. The order of $G\times H$ is $|G\times H|=|G||H|$. Theorem 3: Let $G$ and $H$ be groups. Then we have homomorphisms $\pi_1\,:\, G\times H \rightarrow G$ and $\pi_2\,:\, G\times H \rightarrow H$ such that $\pi_1(g,h)=g$ and $\pi_2(g,h)=h$ for all $(g,h)\in G\times H$. These are called the projections on the first and second factor, respectively. Proof: The projections are homomorphisms since the multiplication on $G\times H$ is defined componentwise. Corollary 4: Let $G$ and $H$ be groups. Then $\frac{G\times H}{H}\approx G$ and $\frac{G\times H}{G}\approx H$. Proof: This follows immediately from applying the first isomorphism theorem to the projections in Theorem 3 and using that $G\times\{e_H\}\approx G$ and $\{e_G\}\times H\approx H$. Theorem 5: Let $G$ and $H$ be groups. Then $G\times\{e_H\}$ and $\{e_G\}\times H$ are normal subgroups of $G\times H$. Proof: We prove the theorem for $G\times\{e_H\}$. The case for $\{e_G\}\times H$ is similar. Let $g,g^\prime\in G$ and $h\in H$. Then $(g,h)(g^\prime,e_H)(g,h)^{-1}=(gg^\prime g^{-1},he_Hh^{-1})=(gg^\prime g^{-1},e_H)\in G\times\{e_H\}$. Commutative diagram showing the universal property satisfied by the direct product. We stated that this is an analogous construction to the direct product of sets. By that we mean that it satisfies the same universal property as the direct product. Indeed, to be called a "product", a construction should satisfy this universal property. Theorem 6: Let $G$ and $H$ be groups. If $K$ is a group with homomorphisms $\phi_1\,:\, K\rightarrow G$ and $\phi_2\,:\, K\rightarrow H$, then there exists a unique homomorphism $u\,:\,K\rightarrow G\times H$ such that $\phi_1=\pi_1\circ u$ and $\phi_2=\pi_2\circ u$. Proof: By the construction of the direct product, $u\,:\, K\rightarrow G\times H$ is a homomorphism if and only if $\pi_1\circ u$ and $\pi_2\circ u$ are homomorphisms. Thus $u\,:\, K\rightarrow G\times H$ defined by $u(k)=(\phi_1(k),\phi_2(k))$ is one homomorphism satisfying the theorem, proving existence. By the commutativity condition this is the only such homomorphism, proving uniqueness. ### Products of Cyclic Groups Theorem 7: The order of an element $(a,b)\in\mathbb{Z}_m\times\mathbb{Z}_n$ is $|(a,b)|=\mathrm{lcm}(|a|,|b|)$. Proof: The order of $(a,b)$ is the lowest positive number $c$ such that $(a,b)^c=(ac,bc)=(0,0)$, that is, the smallest $c$ such that $ac=rm$ and $bc=sn$ for integers $r,s$. It follows that $c$ is divisible by both $|a|$ and $|b|$ and is the smallest such number. This is the definition of the least common multiple. Theorem 8: $\mathbb{Z}_m\times \mathbb{Z}_n$ is isomorphic to $\mathbb{Z}_{mn}$ if and only if $m$ and $n$ are relatively prime. Proof: We begin with the forward implication. Assume $\mathbb{Z}_m\times\mathbb{Z}_n\approx \mathbb{Z}_{mn}$.
Then $\mathbb{Z}_m\times\mathbb{Z}_n$ is cyclic, and so there must exist an element with order $mn$. By Theorem 7 there must then exist a generator $(a,b)\neq (0,0)$ in $\mathbb{Z}_m\times\mathbb{Z}_n$ such that $\mathrm{lcm}(|a|,|b|)=mn$. Since each factor of the generator must generate its group, this implies $\mathrm{lcm}(m,n)=mn$, and so $\gcd(m,n)=1$, meaning that $m$ and $n$ are relatively prime. Now assume that $m$ and $n$ are relatively prime and that we have generators $a$ of $\mathbb{Z}_m$ and $b$ of $\mathbb{Z}_n$. Then since $\gcd(m,n)=1$, we have $\mathrm{lcm}(m,n)=mn$ and so $|(a,b)|=mn$. This implies that $(a,b)$ generates $\mathbb{Z}_m\times\mathbb{Z}_n$, which must then be isomorphic to a cyclic group of order $mn$, in particular $\mathbb{Z}_{mn}$. Theorem 9 (Characterization of finite abelian groups): Let $G$ be a finite abelian group. Then there exist prime numbers $p_1,...,p_n$ and positive integers $r_1,...,r_n$, unique up to order, such that $G\approx \mathbb{Z}_{p_1^{r_1}}\times ... \times \mathbb{Z}_{p_n^{r_n}}$ Proof: A proof of this theorem is currently beyond our reach. However, we will address it during the chapter on modules. ### Subdirect Products and Fibered Products Definition 10: A subdirect product of two groups $G$ and $H$ is a proper subgroup $K$ of $G\times H$ such that the projection homomorphisms are surjective. That is, $\pi_1(K)=G$ and $\pi_2(K)=H$. Example 11: Let $G$ be a group. Then the diagonal $\Delta=\{(g,g)\mid g\in G\}\subseteq G\times G$ is a subdirect product of $G$ with itself. Definition 12: Let $G$, $H$ and $Q$ be groups, and let the homomorphisms $\phi\,:\, G\rightarrow Q$ and $\psi\,:\, H\rightarrow Q$ be epimorphisms. The fiber product of $G$ and $H$ over $Q$, denoted $G\times_Q H$, is the subgroup of $G\times H$ given by $G\times_Q H=\{(g,h)\in G\times H\mid \phi(g)=\psi(h)\}$. In this subsection, we will prove the equivalence between subdirect products and fiber products. Specifically, every subdirect product is a fiber product and vice versa. For this we need Goursat's lemma. Theorem 13 (Goursat's lemma): Let $G$ and $G^\prime$ be groups, and $H\subseteq G\times G^\prime$ a subdirect product of $G$ and $G^\prime$. Now let $N=\ker \,\pi_2$ and $N^\prime=\ker\,\pi_1$. Then $N$ can be identified with a normal subgroup of $G$, and $N^\prime$ with a normal subgroup of $G^\prime$, and the image of $H$ when projecting on $G/N\times G^\prime /N^\prime$ is the graph of an isomorphism $G/N\approx G^\prime/N^\prime$. Proof: ### Semidirect Products More on the automorphism groups of finite abelian groups. Some results require the theory of group actions and ring theory, which are developed in a later section. http://arxiv.org/pdf/math/0605185v1.pdf ## Free Groups In order to properly define the free group, and thereafter the free product, we need some preliminary definitions. Definition 10: Let $A$ be a set. Then a word of elements in $A$ is a finite sequence $a_1a_2...a_n$ of elements of $A$, where the positive integer $n$ is the word length. Definition 11: Let $x=a_1...a_n$ and $y=a_{n+1}...a_{n+k}$ be two words of elements in $A$. Define the concatenation of the two words as the word $xy=a_1...a_na_{n+1}...a_{n+k}$. Now, we want to make a group consisting of the words of a given set $A$, and we want this group to be the most general group of this kind. However, if we are to use the concatenation operation, which is the only obvious operation on two words, we are immediately faced with a problem: deciding when two words are equal.
According to the above, the length of a product is the sum of the lengths of the factors. In other words, the length cannot decrease. Thus, a word of length $n$ multiplied by its inverse has length at least $n$, while the identity word, which is the empty word, has length $0$. The solution is an algorithm to reduce words into irreducible ones. These terms are defined below. Definition 12: Let $A$ be any set. Define the set $W(A)$ as the set of words of powers of elements of $A$. That is, if $a_1,...,a_n\in A$ and $r_1,...,r_n\in\mathbb{Z}$, then $a_1^{r_1}...a_n^{r_n}\in W(A)$. Definition 13: Let $x=a_1^{r_1}...a_n^{r_n}\in W(A)$. Then we define a reduction of $x$ as follows. Scan the word from the left until the first pair of indices $j,j+1$ such that $a_j=a_{j+1}$ is encountered, if such a pair exists. Then replace $a_j^{r_j}a_{j+1}^{r_{j+1}}$ with $a_j^{r_j+r_{j+1}}$. Thus, the resulting word is $x_{(1)}=a_1^{r_1}...a_{j-1}^{r_{j-1}}a_j^{r_j+r_{j+1}}a_{j+2}^{r_{j+2}}...a_n^{r_n}$. If no such pair exists, then $x=x_{(1)}$ and the word is called irreducible. It should be obvious that if $x\in W(A)$ has length $n$, then $x_{(n)}$ will be irreducible. The details of the proof are left to the reader. Definition 14: Define the free group $F(A)$ on a set $A$ as follows. For each word $x\in W(A)$ of length $n$, let the reduced word $x_{(n)}\in F(A)$. Thus $F(A)\subseteq W(A)$ is the subset of irreducible words. As for the binary operation on $F(A)$, if $x,y\in F(A)$ have lengths $n$ and $m$ respectively, define $x*y$ as the completely reduced concatenation $(xy)_{(n+m)}$. Theorem 15: $F(A)$ is a group. Proof: Example 16: We will consider free groups on 1 and 2 letters. Let $A_1=\{a\}$ and $A_2=\{a,b\}$. Then $F(A_1)=\{a^n\mid n\in \mathbb{Z}\}$ with $a^na^m=a^{n+m}$. $F(A_2)=\{\prod_{i=1}^n a_{i}b_{i}\mid a_{i}\in F(\{a\}),b_{i}\in F(\{b\})\}$ such that $a_i\neq e$ for any $i>1$ and $b_i\neq e$ for any $i<n$. Example product: $(a^2b^{-3}a)(a^{-1}ba)=a^2b^{-3}aa^{-1}ba=a^2b^{-3}ba=a^2b^{-2}a$.
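To make the reduction procedure concrete, here is a small illustrative Python sketch (not part of the wikibook; the representation of a word as a list of (letter, exponent) pairs is my own choice). Instead of literally iterating the single-step reduction of Definition 13, it performs all the merges in one left-to-right pass with a stack, which yields the same fully reduced word and additionally discards factors whose exponent cancels to zero; it reproduces the product from Example 16.

```python
def reduce_word(word):
    """Fully reduce a word given as a list of (letter, exponent) pairs."""
    out = []
    for letter, exp in word:
        if out and out[-1][0] == letter:
            merged = out[-1][1] + exp   # merge equal adjacent letters
            out.pop()
            if merged != 0:
                out.append((letter, merged))
        elif exp != 0:
            out.append((letter, exp))
    return out

def multiply(x, y):
    """Product in the free group: concatenate, then reduce."""
    return reduce_word(x + y)

x = [('a', 2), ('b', -3), ('a', 1)]   # a^2 b^-3 a
y = [('a', -1), ('b', 1), ('a', 1)]   # a^-1 b a
print(multiply(x, y))                 # [('a', 2), ('b', -2), ('a', 1)], i.e. a^2 b^-2 a
```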
(Q\Psi ) \label{ppsi1eq} \end{equation} or, equivalently, after multiplication by $E-H_{PP}$, \begin{equation} \left(E-H_{PP}-H_{PQ}\frac{1}{E-H_{QQ}}H_{QP}\right)(P\Psi) = 0 , \label{eq:eqsim} \end{equation} namely the equation obeyed by the component of $\Psi$ in the $P$-space. A more general procedure is however required should a solution of the equation \begin{equation} (E - H_{PP} ) \Psi_0 = 0 \label{eqpsi0} \end{equation} exists. In such a case, instead of eqs.~(\ref{ppsieq}) and (\ref{qpsieq}), one has \begin{equation} (P\Psi ) = \Psi_0 + \frac{1}{E - H_{PP}} H_{PQ} (Q\Psi ) \end{equation} and \begin{equation} (Q\Psi ) =\frac{1}{E - H_{QQ} - W_{QQ}} H_{QP} \Psi_0 \equiv \frac{1}{e_Q} H_{QP} \Psi_0 , \end{equation} where \begin{equation} W_{QQ} = H_{QP} \frac{1}{E - H_{PP}} H_{PQ} \label{wqqdef} \end{equation} and \begin{equation} e_{Q} = E - H_{QQ} - W_{QQ} . \label{eqdef} \end{equation} The above equations allow then one to recast (\ref{sist1}) as follows \begin{equation} (E - H_{PP} ) (P\Psi )= H_{PQ} \frac{1}{e_Q} H_{QP} \Psi_0 \label{sist1psi0} \end{equation} and to express $\Psi_0$ according to \begin{equation} \Psi_0 = \frac{1}{1 + \frac{\strut\displaystyle 1}{\strut\displaystyle E - H_{PP}} H_{PQ} \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q} H_{QP}} (P\Psi ) . \label{psi0eq} \end{equation} The combination of the two above equations leads in turn to \begin{equation} (E - H_{PP} ) (P\Psi )= H_{PQ} \frac{1}{e_Q} H_{QP} \frac{1}{1 + \frac{\strut\displaystyle 1}{\strut\displaystyle E - H_{PP}} H_{PQ} \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q} H_{QP}}(P\Psi ) , \label{sist1exp} \end{equation} where no trace is left of $\Psi_0$. Now using the operator identity \begin{equation} B \frac{1}{1 + CAB} = \frac{1}{1 + BCA} B , \label{opident} \end{equation} with the identifications \begin{equation} A = \frac{1}{e_Q} , \qquad B = H_{QP} ,\qquad C = \frac{1}{E - H_{PP}} H_{PQ} , \label{abcdef} \end{equation} one obtains \begin{eqnarray} (E - H_{PP} ) (P\Psi ) & = & H_{PQ} \frac{1}{e_Q} \frac{1}{1 + H_{QP} \frac{\strut\displaystyle 1}{\strut\displaystyle E - H_{PP}} H_{PQ} \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q} } H_{QP} (P\Psi ) \nonumber \\ & = & H_{PQ} \frac{1}{e_Q} \frac{1}{1 + W_{QQ} \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q}} H_{QP} (P\Psi ) . \label{sist1exp1} \end{eqnarray} The obvious identity \begin{equation} \frac{1}{e_Q} \frac{1}{1 + W_{QQ} \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q}} = \frac{1}{\left ( \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q}\right )^{-1} + W_{QQ} } \label{1eqid} \end{equation} leads finally to \begin{equation} \left (E - H_{PP} - H_{PQ} \frac{1}{\left ( \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q}\right )^{-1} + W_{QQ}} H_{QP}\right ) (P\Psi ) = 0 , \label{fesheqa} \end{equation} which is entirely equivalent to (\ref{eq:eqsim}). It is important to note that the equation obeyed by $P\Psi$, being associated with the intricate many-body operator \begin{equation} {\cal H} = H_{PP} + H_{PQ} \frac{1}{\left ( \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q}\right )^{-1} + W_{QQ}} H_{QP} , \label{calh} \end{equation} is not an eigenvalue equation in the usual sense. Indeed, the dependence upon $E$, the exact ground state energy of the system, is non-linear, since the latter appears also in the propagator in the $Q$-space in (\ref{fesheqa}). 
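The operator identity (\ref{opident}) is the key algebraic step leading from Eq.~(\ref{sist1exp}) to Eq.~(\ref{sist1exp1}). Since it holds for arbitrary operators whenever the inverses exist, it can be checked numerically on random finite-dimensional blocks; the short sketch below is purely illustrative (the dimensions of the $P$- and $Q$-blocks and all variable names are arbitrary choices, not part of the original formalism).
\begin{verbatim}
# Numerical check of the identity B (1 + CAB)^{-1} = (1 + BCA)^{-1} B,
# with A acting in the Q-space, B mapping P -> Q, and C mapping Q -> P,
# as in the identifications A = 1/e_Q, B = H_QP, C = (E - H_PP)^{-1} H_PQ.
import numpy as np

rng = np.random.default_rng(0)
p, q = 4, 6                       # arbitrary dimensions of the P- and Q-blocks
A = rng.standard_normal((q, q))   # stand-in for 1/e_Q
B = rng.standard_normal((q, p))   # stand-in for H_QP
C = rng.standard_normal((p, q))   # stand-in for (E - H_PP)^{-1} H_PQ

lhs = B @ np.linalg.inv(np.eye(p) + C @ A @ B)
rhs = np.linalg.inv(np.eye(q) + B @ C @ A) @ B
print(np.allclose(lhs, rhs))      # True
\end{verbatim}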
\section{ Averaging upon the energy } \label{sec:energy-average} The partition of the Hilbert space into a $P$ and a $Q$ sector should be performed in conformity to the principle of including into the $P$-space the wave functions with the simplest structure, namely those with a smooth spatial dependence, and in the $Q$-space the wave functions of greater intricacy. The actual $\Psi$ of the system should of course be viewed as a linear superposition of components with all the possible degrees of complexity. The problem then is: how to account for the average impact on the $P$-space of the $Q$-space wave functions, ignoring the detailed behaviour of the latter? To deal with this question an averaging procedure should first be prescribed. For this purpose we treat the energy $E$ as a variable upon which both the wave function and the matrix elements depend. Clearly, the components of the wave function varying most rapidly with $E$ lie in the $Q$-space: thus performing an average over the energy amounts to smoothing out the behaviour of $Q\Psi$ which can then be taken into account in the determination of the mean field. The latter rules the gentle physics taking place in the $P$-space. However, by replacing $(Q\Psi)$ with $\langle Q\Psi\rangle$ we also change at the same time $(P\Psi)$ into $\langle P\Psi\rangle$ and $E$ into, say, $\bar{E}_0$ (the angular brackets meaning, of course, energy averaging). The system equivalent to the Schroedinger equation will now accordingly read \begin{equation} \langle P\Psi \rangle = \tilde \Psi_0 + \frac{1}{{\bar E_0} - H_{PP}} H_{PQ} \langle Q\Psi \rangle , \label{avppsieq} \end{equation} where \begin{equation} \left ( {\bar E_0} - H_{PP} \right ) \tilde \Psi_0 = 0 \label{phi0eq} \end{equation} and \begin{equation} \langle Q\Psi \rangle= \langle \frac{1}{e_Q}\rangle H_{QP} \tilde \Psi_0 . \label{avqpsieq} \end{equation} Inserting (\ref{avqpsieq}) into (\ref{avppsieq}) yields \begin{equation} \left ( {\bar E_0} - H_{PP} \right ) \langle P\Psi \rangle = H_{PQ} \langle \frac{1}{e_Q}\rangle H_{QP} \tilde \Psi_0 . \label{avppsiphi0} \end{equation} Following then the same steps as before the equation \begin{equation} \left ( {\bar E_0} - H_{PP} - H_{PQ} \frac{1}{{\langle \frac{\strut\displaystyle 1}{\strut\displaystyle e_Q} \rangle}^{-1} + W_{QQ}} H_{QP} \right ) \langle P\Psi \rangle \equiv \left ( {\bar E_0} - {\bar {\cal H}}\right ) \langle P\Psi \rangle = 0 \label{avfesheq} \end{equation} is obtained: it clearly corresponds to equation (\ref{fesheqa}) when a suitable energy average has been performed. We should now face the problem of specifying the energy average. For this purpose a smoothing function $\rho (E , {\bar E_0} )$ with the property \begin{equation} \int \rho (E , {\bar E_0} ) dE = 1 \label{intnew} \end{equation} should be introduced. Then averaging a function $f(E)$ means to perform the integral \begin{equation} \langle f \rangle = \int \rho (E , {\bar E_0} ) f(E) dE \label{avef} \end{equation} which must be a real quantity since we are dealing with bound states. A smoothing function obeying the above conditions is \begin{equation} \rho (E , \bar E_0 ) = \frac{1}{2\pi \sy{i}} \frac{1}{E-(\bar E_0-\epsilon)} . \label{rhoreal} \end{equation} In this case the integration in eq.~(\ref{avef}) is performed along a path coinciding with the real axis $Re E$ with a small semi--circle described positively about the singularity $({\bar E_0} - \epsilon)$. 
If $f(E)$ is bounded at infinity sufficiently strongly, the Cauchy's integral formula can be applied so that the condition of eq.~(\ref{intnew}) is satisfied and eq.~(\ref{avef}) becomes \begin{equation} \langle f \rangle = f ({\bar E_0} -\epsilon) . \label{avefeps} \end{equation} In the present approach $\epsilon$ is an empirical parameter essentially measuring the range over which the average is taken. In other words, for a sufficiently large value the energy of the states within $\epsilon$ can be neglected and it is in this sense that an average is performed. At this point the system corresponding to the original Schroedinger equation can be cast into the form (see also Kawai, Kerman and McVoy \cite{kaw75}) \begin{eqnarray} (E - \bar {\cal H} ) (P\Psi ) & = & V_{PQ} (Q\Psi ) \label{avsista} \\ (E - H_{QQ} ) (Q\Psi ) & = & V_{QP} (P\Psi ) \label{avsistb} \end{eqnarray} where \begin{equation} \bar {\cal H} = H_{PP} + V_{PQ} V_{QP} \frac{1}{\bar E_0 -\epsilon - E} \label{hbarv} \end{equation} and \begin{eqnarray} V_{PQ} & = & H_{PQ} \sqrt{\frac{\bar E_0 - \epsilon -E} {\bar E_0 - \epsilon - H_{QQ}} } , \label{vpqdefr} \\ V_{QP} & = & \sqrt{\frac{\bar E_0 - \epsilon -E} {\bar E_0 - \epsilon - H_{QQ}} } H_{QP} . \label{vqpdefr} \end{eqnarray} Note that eqs.~(\ref{avsista}) and (\ref{avsistb}), while having the same structure as the original eqs.~(\ref{sist1}) and (\ref{sist2}), display a potential, coupling $(P\Psi)$ and $(Q\Psi)$, which is $V_{PQ}$ rather than $H_{PQ}$. The strength of the coupling is thus considerably reduced by roughly $(\epsilon / H_{QQ})^{\frac{1}{2}}$ which is much less than one. This in turn entails that $H_{PP}$ and $\bar {\cal H}$ are of the same order of magnitude. Now the spectral decomposition of the operator $1/( E - \bar {\cal H})$ can be performed in terms of eigenfunctions of the hermitian operator $\bar{\cal H}$, namely \begin{equation} \bar {\cal H} \Phi_n = \bar E_n \Phi_n . \label{hbareig} \end{equation} It reads \begin{eqnarray} (P\Psi ) & = & \frac{1}{E - \bar {\cal H}} V_{PQ} (Q\Psi ) = \sum_{n} \frac{| \Phi_n \rangle \langle \Phi_n |}{E - \bar E_n} V_{PQ} (Q\Psi ) \nonumber\\ & = & | \Phi_0 \rangle \frac{\langle \Phi_0 | V_{PQ} | Q\Psi \rangle}{E - \bar E_0} + \left ( \frac {1}{E - \bar {\cal H}}\right )^\prime V_{PQ} (Q\Psi ) , \label{spectd} \end{eqnarray} the prime on $(1/( E - \bar {\cal H}))$ signifying that the lowest eigenfunction $\Phi_0$ is to be excluded. From eq.~(\ref{spectd}) it follows \begin{equation} \langle \Phi_0 | P\Psi \rangle = \frac{\langle \Phi_0 | V_{PQ} | Q\Psi \rangle}{E - \bar E_0} . \label{normc} \end{equation} To obtain $(Q\Psi )$ we return to eqs.~(\ref{avsista}) and (\ref{avsistb}) and get \begin{equation} (Q\Psi ) = \frac{1}{E - h_{QQ}} V_{QP} |\Phi_0\rangle \langle \Phi_0 | P\Psi \rangle , \label{qpsiphi0} \end{equation} where \begin{equation} h_{QQ} = H_{QQ} + \bar W_{QQ} \label{shqq} \end{equation} and \begin{equation} \bar W_{QQ} \equiv V_{QP} \left ( \frac {1}{E - \bar {\cal H}}\right )^\prime V_{PQ} . \label{wbardef} \end{equation} Inserting eq.~(\ref{qpsiphi0}) into eq.~(\ref{normc}) one finally obtains \begin{equation} E - \bar E_0 = \langle \Phi_0 | V_{PQ} \frac{1}{E - h_{QQ}} V_{QP} | \Phi_0 \rangle , \label{deltae} \end{equation} where the right hand side represents the correction to the mean field energy $\bar{E}_0$. Remarkably, the quantity $\langle \Phi_0 | P\Psi \rangle$ does not appear in eq.~(\ref{deltae}), the reason
seem to be confusing "polygon" (which is 2-dimensional) with "polyhedron" (which is 3-dimensional), or possibly "circle" with "sphere"; I find it hard to figure out whether the question's asking for a 2D or 3D solution. With respect to input, it's generally best to leave it flexible unless doing so would be exploitable. Also, unless you made the image yourself, you need to give credit to the image creator (and verify that the copyright requirements on the image are suitable). – user62131 Jun 2 '17 at 14:58 • @ais523 Thanks for these suggestions. In fact, I was thinking about taking the 3d concept and applying it to 2d. That would translate to "fill the polygon with N circles (disks?) so that these circles occupy as much surface as possible without them being more than 10% out of the polygon". – z3r0 Jun 2 '17 at 18:13 • @ais523 Concerning the victory condition, I don't know how to measure the optimality of an output, so I guess it'd have to be a code golf. Finally, regarding the image, I didn't produce the image I use; however, I'm part of the community the image originates from and I'm pretty sure there would be no problems at all. If necessary, I'll add my own or will credit the user (I only know his forum nickname). – z3r0 Jun 2 '17 at 18:17 # Fairly Cut a Ham Sandwich in Half In this challenge we consider a discrete version of the ham sandwich theorem. In our case the theorem says: Given two sets of points in a plane, there is a line that simultaneously bisects both sets. So given two disjoint sets of distinct integral points in the plane, your task is to find a line of the form a*x + b*y = c that bisects these two sets, and to output the integers a,b,c. • Both (strict) half planes have to contain the same number of points per set. • The line can contain input points; these are then not counted to either of the sides (e.g. when a set contains an odd number of points, or all points are on one line.) • The line is not necessarily unique. • The mentioned representation of a given line is unique up to an integral multiple (e.g. (m*a)*x + (m*b)*y = (m*c) represents the same line as above), but you do not have to output the fully reduced form. ### Examples As we said above, the output is not necessarily unique, so the presented outputs here are just examples. All the outputs are given in the form a,b,c. Input: [(1,2),(1,4),(2,1),(2,3),(3,2),(3,4),(4,1),(4,3)] [(1,1),(1,3),(2,2),(2,4),(3,1),(3,3),(4,2),(4,4)] Output: 410,640,2625 (410x + 640y = 2625) 1,1,5 (x+y=5) 0,2,5 (2y=5) The three outputs above are all valid for the same input. Input: [(1,1),(2,2)] [(1,2),(2,1)] Output: 2,5,10 (2x+5y = 10) (more to be added) • The best solution to this as written is probably brute force (or even randomized brute force, i.e. keep picking random lines until one of them works). You should probably either explicitly allow that, or else design a rule to disallow it. – user62131 Jun 6 '17 at 23:26 # Collatz Bearings Everyone knows the Collatz conjecture. It says that this function, when repeatedly applied to a positive non-zero integer, will reach one. There are many ways to visualise this. Inspired by this post, with the original source of this method here, this is how we will do it: Start at 1, with a northward bearing. The next numbers will be one unit (of any consistent size) away, and x degrees clockwise (+x) if the number is even, and x degrees anticlockwise (-x) if it is odd. An example of this can be seen here (though it starts with an eastward bearing). It uses a few hundred random starting points and goes backwards.
But it's probably easier to build it backwards. Here is a graph-like visualisation, showing the first 8 levels: There can be collisions. Your task is to take two numbers, which would correspond to 2 nodes on that tree, and return the bearing of the second node to the first node. ## Input 3 numbers. Positive integer a, positive integer b, and angle x, in any unit you desire. b > a > 0, x is the angle of separation, in its simplest form (mod 360 for degrees, mod 2pi for radians, or having the upper half be negative if you wish.) a and b are guaranteed to be in different places (e.g., you won't have a = 20 and b = 21.) ## Output Using the method described above, the bearing of b from a in the graph, in the units of the input angle. ## Scoring This is code-golf, so the shortest program in bytes wins. ### Note If the Collatz Conjecture is eventually proven wrong, you do not need to take inputs where repeated application does not reach 1. • I don't understand what we're supposed to do. If we start at 1 then we'll go 1 -> 4 -> 2 -> 1 so we should draw a chain LLRLLRLLR.... The supplied image doesn't look like that chain. Jun 5 '17 at 14:55 • 1 is the starting point. Then you go clockwise up to 2. It's a really zoomed out image, I'll post a zoomed in one with numbers Jun 5 '17 at 14:56 • I think what's missing is something to say that the iteration is backwards. At least, that's how I can make sense of the graph: "It uses a few hundred random starting points" still confuses the issue. "The bearing of the second angle to the first angle" is also rather confusing: judging by the Output section I think it should be node rather than angle. Jun 5 '17 at 22:26 # Efficiently find the median ## Background Computer scientists have spent a long time looking into ways of sorting data faster. One of the known discoveries is that if you can use the actual values of the data, sorting can be faster than if you can only compare them. Finding the median of a list is a similar operation to sorting it (you can trivially implement it via sorting the list, then taking the element in the middle). However, if you're in an environment where you can only compare list elements (as opposed to looking at the elements directly), this is not typically the fastest way to find the median, as sorts are hurt more badly by the comparison restriction than finding the nth item is. What's the fastest way? Well, finding that is what this challenge is about. (However, you may want to read this Wikipedia article to get some ideas of the approaches that are typically used. I can't guarantee that the algorithm given there is the best, though, especially on a problem of the limited size given here.) ## The task Write a program that finds the index of the median element within a list of 31 elements. However, the program may not take the list as input, and may not inspect its values directly. Rather, the program may only make comparisons to determine which of two indexes corresponds to the larger element, via calling a separate comparison function. (In other words, your program deals entirely with list indexes, not list values.) In order to avoid solutions that brute-force their way through all possible algorithms, you must be able to run at least one worst-case input (i.e. an input that takes the maximum possible number of comparisons), using a comparison function which simply compares two array elements, in under 10 minutes on some computer you have access to.
(Solutions which do not use brute force to find an algorithm are unlikely to get anywhere near this time bound.) ## Clarifications • You may assume that all the list elements are distinct, i.e. the comparison function will always specify that one of the elements is larger, no matter how they're compared. • You may choose the format in which the comparison function provides output, but there must only be two possible outputs (meaning "item at first
## Section 8.2 Another look at distributing apples or folders A recurring problem so far in this book has been to consider problems that ask about distributing indistinguishable objects (say apples) to distinct entities (say children). We started in Chapter 2 by asking how many ways there were to distribute $$40$$ apples to $$5$$ children so that each child is guaranteed to get at least one apple and saw that the answer was $$C(39,4)\text{.}$$ We even saw how to restrict the situation so that one of the children was limited and could receive at most $$10$$ apples. In Chapter 7, we learned how to extend the restrictions so that more than one child had restrictions on the number of apples allowed by taking advantage of the Principle of Inclusion-Exclusion. Before moving on to see how generating functions can allow us to get even more creative with our restrictions, let's take a moment to see how generating functions would allow us to solve the most basic problem at hand. ### Example 8.4. We already know that the number of ways to distribute $$n$$ apples to $$5$$ children so that each child gets at least one apple is $$C(n-1,4)\text{,}$$ but it will be instructive to see how we can derive this result using generating functions. Let's start with an even simpler problem: how many ways are there to distribute $$n$$ apples to one child so that each child receives at least one apple? Well, this isn't too hard, there's only one way to do it—give all the apples to the lucky kid! Thus the sequence that enumerates the number of ways to do this is $$\{a_n\colon n\geq 1\}$$ with $$a_n=1$$ for all $$n\geq 1\text{.}$$ Then the generating function for this sequence is \begin{equation*} x+x^2+x^3+\cdots = x(1+x+x^2+x^3+\cdots) = \frac{x}{1-x}. \end{equation*} How can we get from this fact to the question of five children? Notice what happens when we multiply \begin{equation*} (x+x^2+\cdots)(x+x^2+\cdots)(x+x^2+\cdots)(x+x^2+\cdots) (x+x^2+\cdots). \end{equation*} To see what this product represents, first consider how many ways we can get an $$x^6\text{.}$$ We could use the $$x^2$$ from the first factor and $$x$$ from each of the other four, or $$x^2$$ from the second factor and $$x$$ from each of the other four, etc., meaning that the coefficient on $$x^6$$ is $$5 = C(5,4)\text{.}$$ More generally, what's the coefficient on $$x^n$$ in the product? In the expansion, we get an $$x^n$$ for every product of the form $$x^{k_1}x^{k_2}x^{k_3}x^{k_4}x^{k_5}$$ where $$k_1+k_2+k_3+k_4+k_5 = n\text{.}$$ Returning to the general question here, we're really dealing with distributing $$n$$ apples to $$5$$ children, and since $$k_i> 0$$ for $$i=1,2,\dots,5\text{,}$$ we also have the guarantee that each child receives at least one apple, so the product of five copies of the generating function for one child gives the generating function for five children. Let's pretend for a minute that we didn't know that the coefficients must be $$C(n-1,4)\text{.}$$ How could we figure out the coefficients just from the generating function? The generating function we're interested in is $$x^5/(1-x)^5\text{,}$$ which you should be able to pretty quickly see satisfies \begin{align*} \frac{x^5}{(1-x)^5} \amp = \frac{x^5}{4!}\frac{d^4}{dx^4}\left(\frac{1}{1-x}\right) = \frac{x^5}{4!}\sum_{n=0}^\infty n(n-1)(n-2)(n-3)x^{n-4}\\ \amp =\sum_{n=0}^\infty \frac{n(n-1)(n-2)(n-3)}{4!}x^{n+1} = \sum_{n=0}^\infty \binom{n}{4}x^{n+1}. \end{align*} The coefficient on $$x^n$$ in this series is $$C(n-1,4)\text{,}$$ just as we expected.
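The conclusion of Example 8.4 is easy to verify by machine. The short Python sketch below is only an illustration (the helper poly_mul, the truncation bound MAX_DEG, and all names are choices made here, not part of the text): it multiplies five truncated copies of $$x+x^2+x^3+\cdots$$ and checks that the coefficient on $$x^n$$ agrees with $$C(n-1,4)\text{.}$$

```python
from math import comb

def poly_mul(p, q, max_deg):
    """Multiply two coefficient lists (index = degree), truncating above max_deg."""
    r = [0] * (max_deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j > max_deg:
                break
            r[i + j] += a * b
    return r

MAX_DEG = 20
one_child = [0] + [1] * MAX_DEG          # x + x^2 + ... + x^20: at least one apple
five_children = [1] + [0] * MAX_DEG      # start from the constant polynomial 1
for _ in range(5):
    five_children = poly_mul(five_children, one_child, MAX_DEG)

for n in range(5, MAX_DEG + 1):
    assert five_children[n] == comb(n - 1, 4)
print(five_children[6], comb(5, 4))      # 5 5, matching the x^6 count above
```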
We could revisit an example from Chapter 7 to see that if we wanted to limit a child to receive at most $$4$$ apples, we would use $$(x+x^2+x^3+x^4)$$ as its generating function instead of $$x/(1-x)\text{,}$$ but rather than belabor that here, let's try something a bit more exotic. ### Example 8.5. A grocery store is preparing holiday fruit baskets for sale. Each fruit basket will have $$20$$ pieces of fruit in it, chosen from apples, pears, oranges, and grapefruit. How many different ways can such a basket be prepared if there must be at least one apple in a basket, a basket cannot contain more than three pears, and the number of oranges must be a multiple of four? Solution. In order to get at the number of baskets consisting of $$20$$ pieces of fruit, let's solve the more general problem where each basket has $$n$$ pieces of fruit. Our method is simple: find the generating function for how to do this with each type of fruit individually and then multiply them. As in the previous example, the product will contain the term $$x^n$$ for every way of assembling a basket of $$n$$ pieces of fruit subject to our restrictions. The apple generating function is $$x/(1-x)\text{,}$$ since we only want positive powers of $$x$$ (corresponding to ensuring at least one apple). The generating function for pears is $$(1+x+x^2+x^3)\text{,}$$ since we can have only zero, one, two, or three pears in a basket. For oranges we have $$1/(1-x^4) = 1+x^4+x^8+\cdots\text{,}$$ and the unrestricted grapefruit give us a factor of $$1/(1-x)\text{.}$$ Multiplying, we have \begin{equation*} \frac{x}{1-x} (1+x+x^2+x^3) \frac{1}{1-x^4} \frac{1}{1-x} = \frac{x}{(1-x)^2(1-x^4)} (1+x+x^2+x^3). \end{equation*} Now we want to make use of the fact that $$(1+x+x^2+x^3) =(1-x^4)/(1-x)$$ (by (8.1.1)) to see that our generating function is \begin{align*} \frac{x}{(1-x)^3} \amp= \frac{x}{2}\sum_{n=0}^\infty n(n-1)x^{n-2} = \sum_{n=0}^\infty\frac{n(n-1)}{2} x^{n-1} \\ \amp=\sum_{n=0}^\infty\binom{n}{2} x^{n-1} = \sum_{n=0}^\infty\binom{n+1}{2} x^n. \end{align*} Thus, there are $$C(n+1,2)$$ possible fruit baskets containing $$n$$ pieces of fruit, meaning that the answer to the question we originally asked is $$C(21,2) = 210\text{.}$$ The compact form of the solution to Example 8.5 suggests that perhaps there is a way to come up with this answer without the use of generating functions. Thinking about such an approach would be a good way to solidify your understanding of a variety of the enumerative topics we have already covered. ### Example 8.6. Find the number of integer solutions to the equation \begin{equation*} x_1 + x_2 + x_3 = n \end{equation*} ($$n\geq 0$$ an integer) with $$x_1 \geq 0$$ even, $$x_2\geq 0\text{,}$$ and $$0\leq x_3\leq 2\text{.}$$ Solution. Again, we want to look at the generating function we would have if each variable existed individually and take their product. For $$x_1\text{,}$$ we get a factor of $$1/(1-x^2)\text{;}$$ for $$x_2\text{,}$$ we have $$1/(1-x)\text{;}$$ and for $$x_3$$ our factor is $$(1+x+x^2)\text{.}$$ Therefore, the generating function for the number of solutions to the equation above is \begin{equation*} \frac{1+x+x^2}{(1-x)(1-x^2)} = \frac{1+x+x^2}{(1+x)(1-x)^2}. \end{equation*} In calculus, when we wanted to integrate a rational function of this form, we would use the method of partial fractions to write it as a sum of “simpler” rational functions whose antiderivatives we recognized.
Here, our technique is the same, as we can readily recognize the formal power series for many rational functions. Our goal is to write \begin{equation*} \frac{1+x+x^2}{(1+x)(1-x)^2} = \frac{A}{1+x} + \frac{B}{1-x} + \frac{C}{(1-x)^2} \end{equation*} for appropriate constants $$A\text{,}$$ $$B\text{,}$$ and $$C\text{.}$$ To find the constants, we clear the denominators, giving \begin{equation*} 1+x+x^2 = A(1-x)^2 + B(1-x^2) + C(1+x). \end{equation*} Equating coefficients on terms of equal degree, we have: \begin{align*} 1 \amp = A+B+C\\ 1 \amp = -2A + C\\ 1 \amp = A - B \end{align*} Solving the system, we find $$A=1/4\text{,}$$ $$B=-3/4\text{,}$$ and $$C=3/2\text{.}$$ Therefore, our generating function is \begin{align*} \amp\frac{1}{4}\frac{1}{1+x} -\frac{3}{4} \frac{1}{1-x} +\frac{3}{2} \frac{1}{(1-x)^2}\\ =\amp \frac{1}{4}\sum_{n=0}^\infty (-1)^n x^n - \frac{3}{4} \sum_{n=0}^\infty x^n + \frac{3}{2}\sum_{n=0}^\infty n x^{n-1}\text{.} \end{align*} The solution to our question is thus the coefficient on $$x^n$$ in the above generating function, which is \begin{equation*} \frac{(-1)^n}{4} - \frac{3}{4} + \frac{3(n+1)}{2}, \end{equation*} a surprising answer that would not be too easy to come up with via other methods!

The invocation of partial fractions in Example 8.6 is powerful, but solving the necessary system of equations and then hoping that the resulting formal power series have expansions we immediately recognize can be a challenge. If Example 8.6 had not asked about the general case with $$n$$ on the right-hand side of the equation but instead asked specifically about $$n=30\text{,}$$ you might be wondering if it would just be faster to write some Python code to generate all the solutions or more interesting to huddle up and devise some clever strategy to count them. Fortunately, technology can help us out when working with generating functions. In SageMath, we can use the series() method to get the power series expansion of a given function.
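As an illustration, the sketch below is our own check written with SymPy (SageMath's series() method behaves analogously): it expands the generating function from Example 8.6, verifies the closed form we just derived, and evaluates the specific case $$n=30\text{.}$$

```python
from sympy import symbols, series, Rational

x = symbols('x')
f = (1 + x + x**2) / ((1 - x) * (1 - x**2))

# Power series expansion up to (but not including) x^31.
expansion = series(f, x, 0, 31).removeO()

def coeff_formula(n):
    # Closed form derived above: (-1)^n/4 - 3/4 + 3(n+1)/2.
    return Rational((-1)**n, 4) - Rational(3, 4) + Rational(3 * (n + 1), 2)

for n in range(31):
    assert expansion.coeff(x, n) == coeff_formula(n)

print("number of solutions for n = 30:", coeff_formula(30))  # prints 46
```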
harder instance of this environment, which will be presented in the next section. Three designs which were found with a high fitness by SG-MOrph are presented in Figure \ref{Appendix::Fig::multiped::optimal}. \begin{figure*}[h!] \centering \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/best/legged/legged_0_cut.png} \caption{4 DoF} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/best/legged/legged_1_cut.png} \caption{5 DoF} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/best/legged/legged_2_cut.png} \caption{5 DoF} \end{subfigure}% \caption{Selection of optimized morphologies found for Multi-Ped.} \label{Appendix::Fig::multiped::optimal} \end{figure*} \begin{figure*}[h!] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_legged/legged_morph_0.png} \caption{Morphology 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_legged/legged_morph_1.png} \caption{Morphology 2} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_legged/legged_morph_2.png} \caption{Morphology 3} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_legged/legged_morph_3.png} \caption{Morphology 4} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_legged/legged_morph_4.png} \caption{Morphology 5} \end{subfigure}% \caption{Performance of SG-MOrph on the Multi-Ped environment. Graphs show the average and standard deviation of the best performance seen in 100 episodes for each morphology-design combination, computed from five experiments. The first 10 morphologies/designs belong to the fixed initial design pool (Fig. \ref{Appendix::Fig::multiped::Initial}), thereafter 25 design optimization and 25 random design selections are performed. Figures a-e show the performance of SG-MOrph split into the five different morphological structures considered.} \label{Appendix::Fig::multiped::morphdata} \end{figure*} \begin{figure*}[h!] \centering \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_0_design_0_cut.png} \caption{Morph. 1, Design 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_0_design_1_cut.png} \caption{Morph. 1, Design 2} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_1_design_0_cut.png} \caption{Morph. 2, Design 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_1_design_1_cut.png} \caption{Morph. 2, Design 2} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_2_design_0_cut.png} \caption{Morph. 3, Design 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_2_design_1_cut.png} \caption{Morph. 
3, Design 2} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_3_design_0_cut.png} \caption{Morph. 4, Design 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_3_design_1_cut.png} \caption{Morph. 4, Design 2} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_4_design_0_cut.png} \caption{Morph. 5, Design 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/initial/legged/morph_4_design_1_cut.png} \caption{Morph. 5, Design 2} \end{subfigure}
\caption{Initial design pool for Multi-Ped consisting of five different morphologies, i.e.~graph structures, ranging from four to six degrees-of-freedom.}
\label{Appendix::Fig::multiped::Initial}
\end{figure*}
\clearpage
\subsection{Environment 4: Multi-Ped+Stairs}
This environment is based on the previously introduced Multi-Ped environment, using the same morphological structures for optimization as well as the same reward functions and state spaces. Unlike the Multi-Ped environment, in which the agent traverses flat terrain, the task in this environment is to navigate an uneven terrain with staircases as shown in Figure \ref{Appendix::Fig::stairs::env}. The agent has to navigate this terrain in a blind manner and does not receive any additional information, such as the location or current height of the closest step. In fact, we can see in Figure \ref{Appendix::Fig::stairs::all} that the performance of the random design selection baseline is much worse than in the Multi-Ped task, indicating that the set of optimal design parameters is much smaller and unlikely to be found by uniform sampling strategies. However, we find that on most morphologies the gap between GNN policies and MLP policies widens, with the GNN policies performing better, indicating room for improvement of SG-MOrph. This will be discussed further in one of the following sections. Finally, Figure \ref{Appendix::Fig::stairs::optimal} shows a selection of designs found to be optimal by SG-MOrph.
\begin{figure*}[] \centering \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/best/stairs/stairs_0_cut.png} \caption{4 DoF} \end{subfigure}% ~ \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/best/stairs/stairs_1_cut.png} \caption{5 DoF} \end{subfigure}% ~ \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{fig2/designs/best/stairs/stairs_2_cut.png} \caption{5 DoF} \end{subfigure}%
\caption{Selection of optimized morphologies found for Multi-Ped+Stairs.}
\label{Appendix::Fig::stairs::optimal}
\end{figure*}
\begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{fig2/designs/initial/stairs/scene_cut.png} \caption{Simulation environment for Multi-Ped+Stairs.} \label{Appendix::Fig::stairs::env} \end{figure}%
\begin{figure*}[h!]
\centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_stairs/stairs_morph_0.png} \caption{Morphology 1} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_stairs/stairs_morph_1.png} \caption{Morphology 2} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_stairs/stairs_morph_2.png} \caption{Morphology 3} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_stairs/stairs_morph_3.png} \caption{Morphology 4} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_stairs/stairs_morph_4.png} \caption{Morphology 5} \end{subfigure}%
\caption{Performance of SG-MOrph on the Multi-Ped+Stairs environment. Graphs show the average and standard deviation of the best performance seen in 100 episodes for each morphology-design combination, computed from five experiments. The first 10 morphologies/designs belong to the fixed initial design pool (Fig. \ref{Appendix::Fig::multiped::Initial}), thereafter 25 design optimization and 25 random design selections are performed. Figures a-e show the performance of SG-MOrph split into the five different morphological structures considered.}
\label{Appendix::Fig::stairs::all}
\end{figure*}
\subsection{Visualizing the Performance of the Design Space}
One inherent property of the proposed framework is that the introduced objective function (Eq.\ \ref{Eq::mopt}) can be used to visualize the current belief of the graph neural networks about the performance of design variables in a given morphology/graph structure. Figure \ref{Appendix::Fig::HC::latents} shows a latent representation of the estimated performance of design variables for morphologies one and five in the HalfCheetah environment. We can see that the performance estimations for design variables are not uniform; the proposed objective function gives rise to complex structures and predictions in the design space. While this visualization property is not exploited in this paper, it opens the door for better integration of human designers in the automatic design adaptation process.
\begin{figure}[h!] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_hc/hc_morph_0_65.png} \caption{HalfCheetah Morph. 1} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{fig2/figures_hc/hc_morph_4_65.png} \caption{HalfCheetah Morph. 5} \end{subfigure}%
\caption{Visualization of the estimated performance of continuous design variables for graph structures 1 and 5 in HalfCheetah after 64 morphology-design evaluations. The high-dimensional design parameter space was projected onto a two-dimensional latent space using Principal Component Analysis (PCA) on all designs evaluated on a specific morphology. The visualizations show only the valid range of design parameters. Estimated performance was computed by using the proposed objective function in Eq.\ \ref{Eq::mopt}.
} \label{Appendix::Fig::HC::latents} \end{figure}
\subsection{Limitations, Dirty Laundry and Future Work}
We see the main contribution of this paper in providing a first stepping stone for an effective framework combining classic MLP networks with graph neural networks for the fast and data-efficient co-adaptation of robotic designs and behaviour, suitable for direct real-world application. Nevertheless, based on the experiments conducted we were able to identify potential avenues for future improvements of the proposed algorithm. Experiments conducted on the two Multi-Ped environments show that while the proposed transfer-learning approach is able to jump-start MLP networks on a new design, it can be outperformed by GNN policies on certain environments. Our hypothesis is that the GNN actor is able to benefit in these harder environments from receiving training data from earlier successful prototypes, while the MLP actor is only being trained on data collected from the current prototype after the transfer-learning phase. A potential avenue to increase the performance of MLP policies further is to allow for the continuous flow of information and knowledge from GNN to MLP networks. This could be achieved, for example, by using loss functions combining both MLP and GNN critics, or by placing a prior on the MLP policies forcing them to stay in the vicinity of the GNN policy in the parameter or action space. Such approaches would still allow for the high-frequency evaluation of policies on robots, while using slower GNN networks only during the off-line training on an external computer. Furthermore, while we selected fixed pools of morphological structures similar to the experiments conducted in \cite{huang2020one}, of which we optimize the design parameters, thus giving an infinite number of potential design prototypes, an obvious next step is to allow for arbitrary graph structures. This would allow for the use of evolutionary algorithms mutating and adapting graph structures and could be performed without the need for extra simulations or real world experiments. Where this optional pool-adaptation could be performed is indicated in Algorithm 1 and Figure \ref{Appendix::Fig::Overview}. We think that the main obstacle for future developments in this problem area is the lack of available training environments which allow for the adaptation of discrete and continuous morphological and design parameters alike in an accessible and usable manner. Some prior work started to tackle this problem and provide open-source environments accessible to the research community. For example, Huang et al.\ introduce in \cite{huang2020one} a set of environments, each having a number of different agents with varying degrees-of-freedom, but without the ability to influence and change the physical parameters of agents. We aim to provide with this paper a first version of \textit{morphsim}, a simulation environment which allows for the easy adaptation of design parameters on a number of morphologies with an OpenAI-Gym-like interface and providing automatically generated graph
& \CheckmarkBold & \XSolidBrush \\ CLCRN & CondLocalConv & GRU & \CheckmarkBold & \CheckmarkBold \\ \bottomrule \end{tabular} }%
\caption{Comparison of different spatio-temporal methods. \textit{`Spatial'} and \textit{`Temporal'} represent the spatial convolution and temporal dynamics modules. If the spatial kernel is predefined, it is not \textit{`learnable'}. Only our method is established for a \textit{`continuous'} spatial domain, from which meteorological signals are usually sampled.}\label{tab:modelcomparision} \vspace{-0.4cm}\end{table}
\paragraph{Datasets.} The datasets used for performance evaluation are provided in \textbf{WeatherBench} \cite{rasp2020weatherbench}, with 2048 nodes on the earth sphere. We choose four hour-wise weather forecasting tasks including temperature, cloud cover, humidity and surface wind component, the units of which are $\mathrm{K}$, $\%\times 10^{-1}$, $\%\times 10$, $\mathrm{ms}^{-1}$ respectively. We truncate the temporal scale from Jan.1, 2010 to Dec.31, 2018, and set the input time length as 12 and the forecasting length as 12 for the four datasets.
\paragraph{Metrics.} We compare CLCRN with other methods by deploying three widely used metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) to measure the performance of predictive models.
\paragraph{Protocol.} Seven representative methods are set up, which can be classified into attention-based methods \cite{rozemberczki2021pytorch}: STGCN \cite{yu2018STGCN}, MSTGCN, ASTGCN \cite{Guo2019ASTGCN} and recurrent-based methods: TGCN \cite{Zhao2020TGCN}, GCGRU \cite{seo2016gcgru}, DCRNN \cite{li2018diffusion}, AGCRN \cite{bai2020adaptive}. Note that the spatial dependency in AGCRN is based on the product of learnable nodes' embeddings, which is called 'Node Similarity'. The comparison of these methods and ours is given in Table.~\ref{tab:modelcomparision}. All the models are trained with MAE as the target function and optimized by the Adam optimizer for a maximum of 100 epochs. The hyper-parameters are chosen through careful tuning on the validation set (see Appendix D1 for more details). The reported results of mean and standard deviation are obtained through five experiments under different random seeds.
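For reference, the three metrics follow their standard definitions; the short NumPy sketch below is our own illustration (including the small eps guard against division by zero) and is not the evaluation code used for the experiments.
\begin{verbatim}
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root Mean Square Error
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred, eps=1e-8):
    # Mean Absolute Percentage Error (in percent)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))
\end{verbatim}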
\subsection{5.2 Performance comparison} \begin{table*}[htb] \centering \resizebox{2.15\columnwidth}{!}{ \begin{tabular}{c|ccccccccc|r} \toprule Datasets & Metrics & TGCN & STGCN & MSTGCN & ASTGCN & GCGRU & DCRNN & AGCRN & CLCRN & Improvements \\ \midrule \multirow{2}{*}{Temperature} & MAE & 3.8638±0.0970 & 4.3525±1.0442 & 1.2199±0.0058 & 1.4896±0.0130 & 1.3256±0.1499 & 1.3232±0.0864 & \underline{1.2551±0.0080} & \textbf{1.1688±0.0457} & 7.2001\% \\ & RMSE & 5.8554±0.1432 & 6.8600±1.1233 & 1.9203±0.0093 & 2.4622±0.0023 & 2.1721±0.1945 & 2.1874±0.1227 & \underline{1.9314±0.0219} & \textbf{1.8825±0.1509} & 2.5318\% \\ \midrule \multirow{2}{*}{Cloud cover} & MAE & 2.3934±0.0216 & 2.0197±0.0392 & 1.8732±0.0010 & 1.9936±0.0002 & \underline{1.5925±0.0023} & 1.5938±0.0021 & 1.7501±0.1467 & \textbf{1.4906±0.0037} & 6.3987\% \\ & RMSE & 3.6512±0.0223 & 2.9542±0.0542 & 2.8629±0.0073 & 2.9576±0.0007 & 2.5576±0.0116 & \underline{2.5412±0.0044} & 2.7585±0.1694 & \textbf{2.4559±0.0027} & 3.3567\% \\ \midrule \multirow{2}{*}{Humidity} & MAE & 1.4700±0.0295 & 0.7975±0.2378 & 0.6093±0.0012 & 0.7288±0.0229 & \underline{0.5007±0.0002} & 0.5046±0.0011 & 0.5759±0.1632 & \textbf{0.4531±0.0065} & 9.5067\% \\ & RMSE & 2.1066±0.0551 & 1.1109±0.2913 & 0.8684±0.0019 & 1.0471±0.0402 & \underline{0.7891±0.0006} & 0.7956±0.0033 & 0.8549±0.2025 & \textbf{0.7078±0.0146} & 10.3028\% \\ \midrule \multirow{2}{*}{Wind} & MAE & 4.1747±0.0324 & 3.6477±0.0000 & 1.9440±0.0150 & 2.0889±0.0006 & \underline{1.4116±0.0057} & 1.4321±0.0019 & 2.4194±0.1149 & \textbf{1.3260±0.0483} & 6.0640\% \\ & RMSE & 5.6730±0.0412 & 4.8146±0.0003 & 2.9111±0.0292 & 3.1356±0.0012 & \underline{2.2931±0.0047} & 2.3364±0.0055 & 3.4171±0.1127 & \textbf{2.1292±0.0733} & 7.1475\% \\ \bottomrule \end{tabular} } \caption{MAE and RMSE comparison in forecasting length of $12\mathrm{h}$. Results with \underline{underlines} are the best performance achieved by baselines, and results with \textbf{bold} are the overall best. Comparisons in other lengths and metrics are shown Appendix D2. }\label{tab:comparison}\vspace{-0.3cm} \end{table*} Because MAPE is of great difference among methods and hard to agree on an order of magnitude, we show it in Appendix D2. \begin{figure}[ht] \centering \subfigure[MAE on Temperature]{ \includegraphics[width=0.47\linewidth]{length_comparison_temperature_mae.png}}\hspace{1mm} \subfigure[MAE on Cloud cover]{ \includegraphics[width=0.47\linewidth]{length_comparison_cloud_cover_mae.png}}\\\vspace{-0.2cm} \addtocounter{figure}{-1} \end{figure} \begin{figure} \subfigure[MAE on Humidity]{ \includegraphics[width=0.47\linewidth]{length_comparison_humidity_mae.png}}\hspace{1mm} \subfigure[MAE on Wind]{ \includegraphics[width=0.47\linewidth]{length_comparison_component_of_wind_mae.png}}\vspace{-0.2cm} \caption{To avoid complicated and verbose plots, we choose the top three methods for MAE comparison in different forecasting length. }\label{fig:comparison}\vspace{-0.3cm} \end{figure} From Table.~\ref{tab:comparison} and Fig.~\ref{fig:comparison}, it can be conclude that (1) The recurrent-based methods outperform the attention-based, except that in Temperature dataset, MSTGCN works well. (2) Our method further improves recurrent-based methods in weather prediction with a significant margin. (3) Because most of the compared methods are established for traffic forecasting, they demonstrate a significant decrease in performance for meteorological tasks, such as TGCN and STGCN. The differences of the two tasks are analyzed by Sec. 4.4. 
The `performance convergence' phenomenon on Temperature is explained in Appendix D2. \subsection{5.3. Visualization of local patterns} \begin{figure}[H] \centering \includegraphics[width=2.7in]{patternchange.png} \caption{Changes of local kernels $\chi(\bm{\mathrm{x}}^{i'};\bm{\mathrm{x}}_{i})$ for uniformly-spaced $\bm{\mathrm{x}}_{i}$ obtained by trained CLCRN according to Humidity dataset.}\label{fig:patternchange} \end{figure} The proposed convolution kernel aims to imitate the meteorological local patterns. For this, we give visualization of conditional local kernels to further explore the local patterns obtained from trained models. We choose a line from the south-west to the north-east in USA, and sample points as center nodes uniformly on the line. As shown in Fig.~\ref{fig:patternchange}, the kernels conditional on center nodes show the smoothness property, and the patterns obtained from Humidity datasets demonstrate obvious directionality - nodes from the north-west and south-east impact the centers most. However, the kernel is over-smoothing - The change is very little although the center nodes vary a lot, which will be one of our future research issues. \subsection{5.4. Framework choice: CNN or RNN?} As concluded in (1) in performance comparison, recurrent-based methods usually outperform the attention-based in our evaluation. For the latter one, classical CNNs are usually used for intra-sequence temporal modeling. Here we further establish the CLCSTN by embedding our convolution layer into the framework of MSTGCN, as the attention-based version of CLCRN, to compare the cons and pros of the two frameworks. \begin{table}[htb] \resizebox{1.05\columnwidth}{!}{ \begin{tabular}{c|c|ccccc} \toprule & Lengths & Metrics & Temperature & Cloud cover & Humidity & Wind \\ \midrule \multirow{6}{*}{\rotatebox{90}{CLCSTN}} & \multirow{2}{*}{3h} & MAE & 1.1622±0.2773 & 1.5673±0.0050 & 0.4710±0.0423 & 1.2262±0.0072 \\ & & RMSE & 1.9097±0.5892 & 2.4798±0.0105 & 0.6765±0.0596 & 1.8085±0.0163 \\ \cline{2-7} & \multirow{2}{*}{6h} & MAE & 1.2516±0.2762 & 1.6461±0.0052 & 0.5125±0.0401 & 1.3582±0.0070 \\ & & RMSE & 2.0216±0.5409 & 2.5814±0.0106 & 0.7330±0.0553 & 1.9985±0.0168 \\ \cline{2-7} & \multirow{2}{*}{12h} & MAE & 1.3325±0.2204 & 1.7483±0.0044 & 0.5691±0.0385 & 1.5727±0.0035 \\ & & RMSE & 2.1239±0.3949 & 2.7101±0.0090 & 0.8104±0.0519 & 2.3058±0.0102 \\ \midrule \multirow{6}{*}{\rotatebox{90}{CLCRN}} & \multirow{2}{*}{3h} & MAE & 0.3902±0.0345 & 0.9225±0.0011 & 0.1953±0.0015 & 0.5233±0.0177 \\ & & RMSE & 0.6840±0.0488 & 1.6428±0.0020 & 0.3307±0.0037 & 0.9055±0.0246 \\ \cline{2-7} & \multirow{2}{*}{6h} & MAE & 0.7050±0.0402 & 1.1996±0.0023 & 0.3107±0.0035 & 0.8492±0.0265 \\ & & RMSE & 1.2408±0.1098 & 2.0611±0.0048 & 0.5114±0.0088 & 1.4296±0.0411 \\ \cline{2-7} & \multirow{2}{*}{12h} & MAE & 1.1688±0.0457 & 1.4906±0.0037 & 0.4531±0.0065 & 1.3260±0.0483 \\ & & RMSE & 1.8825±0.1509 & 2.4559±0.0027 & 0.7078±0.0146 & 2.1292±0.0733\\ \bottomrule \end{tabular}\vspace{-0.3cm} }\caption{MAE and RMSE comparison in different forecasting length of CLCSTN and CLCRN.}\label{tab:clcstncom} \end{table} From Table.~\ref{tab:clcstncom}, it is shown that the CLCRN outperforms CLCSTN in all evaluations. Besides, it is noted that the significant gap between two methods is in short term prediction rather than long term. We conjecture that the attention-based framework gives smoother prediction, while the other one can fit extremely non-stationary time series with great oscillation. 
Empirical studies given in Fig.~\ref{fig:humidity} show that the former framework tends to fit low-frequency signals, but struggles to fit short-term fluctuations. In long term, the influence of fitting deviation is weakened, so the performance gap is reduced. In this case, the fact that the learning curve of the former one is much smoother (Fig.~\ref{fig:learningcurve}) can be explained as well. The unstable learning curve is actually a common problem of all the recurrent-based models, which is another future research issue of ours. \begin{figure}[htb] \centering \includegraphics[width=1.0\linewidth]{vis_signal.png}\vspace{-0.2cm} \caption{Predictions on Humidity, where the filled intervals show steep slopes and drastic fluctuations. (Appendix D3 in detail) }\label{fig:humidity} \vspace{-0.3cm} \end{figure}\vspace{-0.7cm} \begin{figure}[H] \includegraphics[width=1\linewidth]{loss_clcrn.png}\vspace{-0.3cm} \caption{Learning curve of the two methods on Humidity.} \label{fig:learningcurve} \end{figure} \subsection{5.5. Advantages of horizon maps} \begin{table}[htb]\vspace{-0.3cm} \resizebox{1.05\columnwidth}{!}{ \begin{tabular}{l|ccccc} \toprule Methods & Metrics & Temperature & Cloud cover & Humidity & Wind \\ \midrule CLCRN\_log & MAE & 1.2638±0.1554 & 1.5599±0.0019 & 0.4663±0.0082 & 1.3958±0.0120 \\ & RMSE & 2.0848±0.1719 & 2.5171±0.0255 & 0.7341±0.0151 & 2.2659±0.0211 \\ \midrule CLCRN\_hor & MAE & 1.1688±0.0457
# If 2 different representatives are to be selected at random

Manager (Manchester UK) | 08 Jan 2010

If 2 different representatives are to be selected at random from a group of 10 employees and if p is the probability that both representatives selected will be women, is p > 1/2 ?

(1) More than 1/2 of the 10 employees are women.
(2) The probability that both representatives selected will be men is less than 1/10.

OPEN DISCUSSION OF THIS QUESTION IS HERE: https://gmatclub.com/forum/if-2-differe ... 68280.html

Math Expert | 27 Feb 2012

The probability of choosing 2 women out of 10 people is $$\frac{w}{10}*\frac{w-1}{9}$$ and this should be $$>1/2$$. So we have $$\frac{w}{10}*\frac{w-1}{9}>\frac{1}{2}$$ --> $$w(w-1)>45$$, and this is true only when $$w>7$$ (w = # of women, $$<=10$$). So basically the question asks: is $$w>7$$?

(1) More than 1/2 of the 10 employees are women --> $$w>5$$, not sufficient.

(2) The probability that both representatives selected will be men is less than 1/10 --> $$\frac{10-w}{10}*\frac{10-w-1}{9}<\frac{1}{10}$$ --> $$(10-w)(9-w)<9$$ --> $$w>6$$, not sufficient.

(1)+(2) $$w>5$$ and $$w>6$$: $$w$$ can be 7, answer NO, or more than 7, answer YES. Not sufficient.

You can use combinations to solve as well:

$$C^2_w$$ = # of selections of 2 women out of $$w$$ employees; $$C^2_{10}$$ = total # of selections of 2 representatives out of 10 employees. The question is $$\frac{C^2_w}{C^2_{10}}>\frac{1}{2}$$ --> $$\frac{\frac{w(w-1)}{2}}{45}>\frac{1}{2}$$ --> $$w(w-1)>45$$ --> $$w>7$$?

(1) More than 1/2 of the 10 employees are women --> $$w>5$$, not sufficient.

(2) The probability that both representatives selected will be men is less than 1/10 --> $$C^2_{(10-w)}$$ = # of selections of 2 men out of $$10-w=m$$ employees --> $$\frac{C^2_{(10-w)}}{C^2_{10}}<\frac{1}{10}$$ --> $$\frac{\frac{(10-w)(10-w-1)}{2}}{45}<\frac{1}{10}$$ --> $$(10-w)(9-w)<9$$ --> $$w>6$$, not sufficient.

(1)+(2) $$w>5$$ and $$w>6$$: $$w$$ can be 7, answer NO, or more than 7, answer YES. Not sufficient.

Hope it's clear.
##### General Discussion

Manager (sevenplus) | 23 Jun 2010

Bunuel wrote:
(2) $$\frac{10-w}{10}*\frac{10-w-1}{9}<\frac{1}{10}$$ --> $$(10-w)(9-w)<9$$ --> $$w>6$$, not sufficient

Hi Bunuel, how did you get w > 6? I tried solving it by moving the 9 to the left side and expanding: $$(w-10)(w-9)-9<0$$, i.e. $$w^2-19w+81<0$$. If we solve this quadratic we don't get a whole number for w.

Math Expert | 23 Jun 2010

As you correctly noted, $$w$$ must be an integer (as $$w$$ represents the number of women). Now, substituting: if $$w=6$$, then $$(10-w)(9-w)=(10-6)(9-6)=12>9$$, but if $$w>6$$, for instance 7, then $$(10-w)(9-w)=(10-7)(9-7)=6<9$$. So, $$w>6$$.

Hope it's clear.
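As a sanity check on the algebra above (our addition, not part of the original thread), a short brute-force enumeration over every possible number of women w confirms the case analysis:

```python
from fractions import Fraction

for w in range(11):
    p = Fraction(w * (w - 1), 90)          # P(both representatives are women)
    q = Fraction((10 - w) * (9 - w), 90)   # P(both representatives are men)
    print(w, p > Fraction(1, 2), q < Fraction(1, 10))

# p > 1/2 only for w >= 8, and q < 1/10 only for w >= 7, so statements
# (1) w > 5 and (2) w > 6 together still allow w = 7 (p <= 1/2) or
# w >= 8 (p > 1/2): not sufficient, i.e. answer choice E.
```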
Intern (wrldcabhishek) | 11 Jul 2010

Hi, since from the basic data we find that w > 7, and statement 2 says that at least 2 men are there out of 10 (it states that the probability of choosing 2 men is less than 1/10, i.e. at least 2 men are there), so combining w > 7 and two men, the only option left is w = 8. Hence the answer is B.
\section{Experiments} \subsection{Action Classification} Table.~\ref{tab:action_class} shows action classification performance of our approach in comaprison with other state-of-the-arts in THUMOS14 and ActivityNet1.2 dataset. We use classification mean average precision (mAP) for evaluation. We see that the classification performance of our approach is very competitive with the SOTAs, specially in THUMOS14 we achieve 7.2\% mAP improvement over 3C-Net~\cite{narayan20193cnet}. We also achieve very competitive performance in ActivityNet dataset. Although our approach has not been designed for video action recognition task, it's high performance in action classification reveals the robustness of our method. \begin{table*}[tbp] \centering \begin{tabular}{c|c|c} \hline Methods & THUMOS14 & ActivityNet1.2 \\ \hline iDT+FV~\cite{wang2013action} & 63.1 & 66.5 \\ C3D~\cite{c3d} & - & 74.1 \\ TSN~\cite{Wang2016TemporalSN} & 67.7 & 88.8 \\ W-TALC~\cite{wtalc} & 85.6 & \bf93.2 \\ 3C-Net~\cite{narayan20193cnet} & 86.9 & 92.4 \\ Ours & \bf94.1 & 90.3 \\ \hline \end{tabular} \caption{Action Classification performance of our method with state-of-the-arts methods on THUMOS14 and ActivityNet1.2 dataset in terms of classification mAP. } \label{tab:action_class} \end{table*} \subsection{Detailed Performance on ActivityNet1.2} Table.~\ref{tab:anet_result_details} shows detailed performance of our approach on ActivityNet1.2 dataset in terms of localization mAP for different IoU thresholds. \begin{table*}[htbp] \fontsize{8}{10}\selectfont \centering \begin{tabular}{c|c||r r r r r r r r r r | r} \hline \multirow{2}{*}{\bf Supervision} & \multirow{2}{*}{\bf Method} & \multicolumn{11}{c}{\bf IoU}\\ && \bf0.5 & \bf0.55 & \bf0.6 & \bf0.65 & \bf0.7 & \bf0.75 & \bf0.8 & \bf0.85 & \bf0.9 & \bf0.95 & \bf{AVG} \\ \hline \hline \bf Full & SSN \cite{ssn} & 41.3 & 38.8 & 35.9 & 32.9 & 30.4 & 27.0 & 22.2 & 18.2 & 13.2 & 6.1 & 26.6 \\ \hline \multirow{11}{*}{\bf Weak} & UntrimmedNets \cite{wang2017untrimmednets} & 7.4 & 6.1 & 5.2 & 4.5 & 3.9 & 3.2 & 2.5 & 1.8 & 1.2 & 0.7 & 3.6 \\ & AutoLoc \cite{shou2018autoloc} & 27.3 & 24.9 & 22.5 & 19.9 & 17.5 & 15.1 & 13.0 & 10.0 & 6.8 & 3.3 & 16.0 \\ & W-TALC \cite{wtalc} & 37.0 & 33.5 & 30.4 & 25.7 & 14.6 & 12.7 & 10.0 & 7.0 & 4.2 & 1.5 & 18.0 \\ & TSM~\cite{yu2019weaktsm} & 28.3 & 26.0 & 23.6 & 21.2 & 18.9 & 17.0 & 14.0 & 11.1 & 7.5 & 3.5 & 17.1 \\ & 3C-Net~\cite{narayan20193cnet} & 35.4 & - & - & - & 22.9 & - & - & - & 8.5 & - & 21.1 \\ & CleanNet~\cite{liu2019weaklycleannet} & 37.1 & 33.4 & 29.9 & 26.7 & 23.4 & 20.3 & 17.2 & 13.9 & 9.2 & 5.0 & 21.6 \\ & Liu \emph{et~al}~\cite{liu2019completeness} & 36.8 & - & - & - & - & 22.0 & - & - & - & 5.6 & 22.4 \\ & Islam \emph{et~al}~\cite{islam2020weakly} & 35.2 & - & - & - & 16.3 & - & - & - & - & - & - \\ & BaS-Net~\cite{lee2020backgroundbasnet} & 34.5 & - & - & - & - & 22.5 & - & - & - & 4.9 & 22.2 \\ & DGAM~\cite{shi2020weaklysupervisedgam} & \bf41.0 & 37.5 & 33.5 & 30.1 & 26.9 & 23.5 & 19.8 & 15.5 & \bf10.8 & \bf5.3 & 24.4\\ & \bf{Ours} & \bf41.0 & \bf37.9 & \bf34.6 & \bf31.3 & \bf28.1 & \bf24.8 & \bf21.1 & \bf16.0 & \bf10.8 & \bf5.3 & \bf25.1 \\ \hline \end{tabular} \caption{Comparison of our algorithm with other state-of-the-art methods on the ActivityNet1.2 validation set for temporal action localization.} \label{tab:anet_result_details} \end{table*} \section{More Ablation} Fir.~\ref{fig:ablation_more} shows ablation studies on the hyper-parameters $\alpha$, $\beta$, and drop threshold $\gamma$ on THUMOS14 dataset. 
AVG mAP is the mean mAP value over IoU thresholds 0.1 to 0.7 in increments of 0.1. Fig.~\ref{fig:abl_alpha} shows the performance for different weights on the sparsity loss. Without the sparsity loss, the model hardly learns any localization. As $\alpha$ increases, localization performance increases as well, and we get the best score for $\alpha=0.8$. Fig.~\ref{fig:abl_beta} reveals the performance improvement for different weights on the guide loss. We empirically find that $\beta=0.8$ gives the best performance. In Fig.~\ref{fig:abl_drop}, we see the mAP performance for different values of the dropping threshold $\gamma$. Fig.~\ref{fig:abl_seg_th} and Fig.~\ref{fig:abl_seg_anet} show the effect of video length during training for THUMOS14 and ActivityNet respectively. Note that THUMOS14 contains denser videos with a large number of activities per video. Hence we observe that the performance increases with larger video length for THUMOS14, whereas ActivityNet performs best with 80 segments. Also, note that the number of segments is chosen randomly only during training. We use all segments during evaluation.
\begin{figure*}[!htbp] \begin{center} \begin{subfigure}{0.3\textwidth} \includegraphics[width=0.9\linewidth]{figures/ablation/alpha.pdf} \caption{} \label{fig:abl_alpha} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=0.9\linewidth]{figures/ablation/beta.pdf} \caption{} \label{fig:abl_beta} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=0.9\linewidth]{figures/ablation/gamma.pdf} \caption{} \label{fig:abl_drop} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=0.9\linewidth]{figures/ablation/number_of_segments_THUMOS.pdf} \caption{} \label{fig:abl_seg_th} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=0.9\linewidth]{figures/ablation/number_of_segments_ActivityNet.pdf} \caption{} \label{fig:abl_seg_anet} \end{subfigure} \end{center}
\caption{(a) Ablation on the weight of sparsity loss. (b) Ablation on the weight of guide loss. (c) Ablation on the drop-threshold for dropping snippets in the HAD module. (d) and (e) Ablation on the number of segments for a video during training.}
\label{fig:ablation_more}
\end{figure*}
\section{More Qualitative Examples}
We show more qualitative examples in Fig.~\ref{fig:qual_more}. In Fig.~\ref{fig:444polevaule}, there are several occurrences of the Pole Vault activity, and our method can capture most of them. We show some failure examples in Fig.~\ref{fig:188highjump} and Fig.~\ref{fig:85diving}. In Fig.~\ref{fig:188highjump}, our model erroneously captures some activities as high jump. In those erroneous segments, we observe that the person tends to begin a high jump activity but refrains at the end without completing the full action. The same goes for Fig.~\ref{fig:85diving}. Previous WTAL approaches~\cite{islam2020weakly, wtalc} have also shown similar issues as an inherent limitation of WTAL methods. Because of the weakly-supervised nature, we infer that some errors related to incomplete activities are inevitable.
\begin{figure*}[htbp] \label{fig:qual} \begin{center} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{figures/visual/444PoleVault.pdf} \caption{Pole Vault} \label{fig:444polevaule} \end{subfigure} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{figures/visual/188HighJump.pdf} \caption{High Jump} \label{fig:188highjump} \end{subfigure} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{figures/visual/85Diving.pdf} \caption{Diving} \label{fig:85diving} \end{subfigure} \end{center} \caption{Qualitative results on THUMOS14. The horizontal axis denotes time. On the vertical axis, we sequentially plot the ground truth, our predicted localization, and our prediction score. (b) and (c) represent failure examples of our approach. } \label{fig:qual_more} \end{figure*} \section{Introduction} \label{sec:intro} Temporal action localization refers to the task of predicting the start and end times of all action instances in a video. There has been remarkable progress in fully-supervised temporal action localization \cite{Tran2018ACL, ssn, frcnn, lin2018bsn, xu2019gtad}. However, annotating the precise temporal ranges of all action instances in a video dataset is expensive, time-consuming, and error-prone. On the contrary, weakly supervised temporal action localization (WTAL) can greatly simplify the data collection and annotation cost. \begin{figure}[!ht] \centering \includegraphics[width=0.47\textwidth]{figures/introduction.pdf} \caption{The existing MIL framework does not necessarily capture the full extent of an action instance. In this example of a diving activity. (a) shows the ground-truth localization, and (b) shows the prediction from an MIL-based WTAL framework. The MIL framework only captures the most discriminative part of the diving activity, ignoring the beginning and ending parts of the full action.} \label{fig:intro} \vspace{-0.4cm} \end{figure} WTAL aims at localizing and classifying all action instances in a video given only video-level category label during training stage. Most existing WTAL methods rely on the multiple instance learning (MIL) paradigm \cite{wtalc, liu2019completeness, islam2020weakly}. In this paradigm, a video consists of several snippets; snippet-level class scores, commonly known as Class Activation Sequences (CAS), are calculated and then temporally pooled to obtain video-level class scores. The action proposals are generated by thresholding the snippet-level class scores. However, this framework has a major issue: it does not necessarily capture the full extent of an action instance. As training is performed to minimize the video-level classification loss, the network predicts higher CAS values for the discriminative parts of actions, ignoring the less discriminative parts. For example, an action might consist of several sub-actions \cite{hou2017real}. In the MIL paradigm, only a particular sub-action might be detected, ignoring the other parts of the action. An illustrative example of a diving activity is presented in Figure \ref{fig:intro}. We observe that only the most discriminative location of the full diving activity is captured by the MIL framework. Capturing only the most distinctive part of an action is sufficient to produce a high video-level classification accuracy, but
photoexcitation region. The X$^+$(v=19) levels are identified as optimal "transfer states" by having the longest lifetimes among the weakly bound states at room and liquid-nitrogen temperatures. Once the ions are moved into the trap, the population could be transferred to different science states by adiabatic passage using microwave radiation. We also determined effective lifetimes at 400~K, which are shown in Fig.~\ref{fig:lifetime} and listed in Table~\ref{table:tau_400K}, for the case of elevated temperatures in the trap region, which might be caused by rf heating. Compared to room temperature, the lifetimes are reduced by about a factor of two. The effect of $g/u$-mixing experienced mainly by the X$^+$\,$(19,0)$ and X$^+$\,$(19,1)$ levels was found to reduce their fluorescence lifetime by 30\% to about 1000~s \citeref{moss1993b}. In this work, we found that the effect of $g/u$-mixing is negligible when the interaction with BBR is included: the effective lifetime of X$^+$\,$(19,1)$ is reduced by only 210~$\mu$s, about 0.011\%, and the effective lifetime of X$^+$\,$(19,0)$ is reduced by merely 60~$\mu$s, about 0.001\%. Other compelling findings of this study are the following: (i) The lifetime of A$^+$\,$(0)$ does not significantly vary with temperature and barely doubles when going from 293~K to 77~K. This is because the state has a vanishing PDCS well before the peak of the BBS and the contribution from dissociation mostly comes from the GHz region, where the BBS does not decrease as drastically with temperature as in the THz region (see Fig.\,\ref{fig:spectra}). This may set a limit on the interaction time of experiments relying on this state; (ii) The states of X$^+$\,$(\leq 18)$ cannot decay via electric dipole transitions and their lifetimes from electric quadrupole transitions are on the order of days; nonetheless, as a consequence of BBR, they have an effective lifetime even shorter than the higher vibrational levels; (iii) For an initial population of $10^3$ ions in X$^+$\,$(19)$ levels (see Fig.\,\ref{fig:bbr}), only one ion is expected to reach A$^+$\,$(0)$ by the time this state reaches its maximum population, while most of the total population is transferred to the continuum. This means that the high purity of state-selectively prepared molecular ion ensembles is conserved, enabling a high signal-to-noise ratio. Experiments with trapped \Htp\, ions relying on weaker two-photon or electric-quadrupole transitions can make use of more strongly bound X$^+$~states, for which BBR induced photodissociation is irrelevant. Several such studies are currently being pursued: in one case \Htp\, ions are produced in the vibrational ground state X$^+$$(v=0,N)$ by photoionization inside a rf trap \cite{schmidt2020a}, whereas in the other case a single \Htp\, ion is injected into a Penning trap, which allows the rovibrational state to be determined before driving a spectroscopic transition \cite{tu2021a}.
\begin{acknowledgments} \noindent MB acknowledges NWO for a VENI grant (VI.Veni.202.140). \end{acknowledgments}
\section{Introduction}
The hydrogen molecular ion (HMI) is the simplest molecular system and can be used to test quantum electrodynamics and to determine fundamental constants by comparing experimental and theoretical transition frequencies \cite{Alighanbari2020a,patra2020a}.
While ab initio theory reached a level of $10^{-11}$ relative accuracy for \Htp\, and \HDp~\cite{Korobov2014b, Korobov2017a}, a comparable experimental accuracy has only been achieved for the \HDp~isotopologue \cite{koelemeij2007a,Alighanbari2020a,patra2020a}. In these experiments, rovibrationally-cold \HDp~ions are held in a radio-frequency trap and sympathetically cooled to reach the Lamb-Dicke regime in order to suppress Doppler effects during the spectroscopic interrogation. Cooling the rovibrational degrees of freedom of the hot \HDp~ions produced by electron bombardment relies on spontaneous emission; a process that is only possible in the heteronuclear isotopologues, because of the electric dipole moment originating from the mass and charge asymmetry \cite{Bunker1974a}. The absence of that very dipole moment for the homonuclear isotopologues has far reaching consequences regarding the ion production, as well as the multitude of strong transitions available for spectroscopic studies: (i) with radiative lifetimes of the order of weeks \cite{peek1979a}, any rovibrational state distribution produced during electron bombardment will be conserved, drastically reducing the number of available ions participating in a single spectroscopic transition, and (ii) strong electric-dipole allowed transitions exist only between different electronic states. Carrington and coworkers succeeded in measuring rovibronic transitions between weakly bound states of the ground (X$^+ ~ ^2\Sigma_g^+$) and first excited (A$^+ ~ ^2\Sigma_u^+$) electronic states of \Htp\, just below the H(1s) + H$^+$ dissociation threshold \cite{carrington1989a,carrington1989b,carrington1993a,carrington1993b,carrington1993c}. A fast ion beam with high current was used to compensate for the small number of ions per quantum state and having the transitions frequencies in the microwave range helped limiting the Doppler broadening. To avoid interaction broadening, the ion beam was directed through a sufficiently long microwave waveguide and the observation of forward and back-reflected modes allowed the cancellation of the first-order Doppler shifts. This resulted in a FWHM of 0.6~MHz and an absolute accuracy of the order of 0.5~MHz (relative accuracy $10^{-5}$). In a new generation of measurements we aim at improving the accuracy of the transition frequencies between the weakly bound states, which have been found to show an enhanced sensitivity on the proton-to-electron mass ratio \cite{augustovivcova2014a}, by employing a similar ion trap setup as used in \citeref{patra2020a}. The suppression of Doppler-related effects and the careful control of magnetic fields over a small trap volume have been shown to allow to reach Hertz-level accuracy \cite{menasian1974a}. This leaves the question of how to increase the population in individual quantum levels of the HMI, preferably creating the ions selectively in a single weakly bound rovibronic level. Such a state-selected ion generation can be achieved using mass-analyzed threshold ionization (MATI) \cite{zhu1991a}: this involves photoexcitation with a laser, slightly red-detuned from the targeted level in the ion $E^+(v', N')$, which will lead via direct ionization and auto-ionization to ions in lower lying states $E^+(v \ne v', N \ne N')$ as well as high-$n$ Rydberg states converging to the $E^+(v', N')$ threshold. 
The \emph{prompt} ions in unwanted states can be spatially separated from the neutral molecules in the Rydberg states, which are subsequently field-ionized to selectively provide ions in the state $E^+(v', N')$ for subsequent experiments \cite{mackenzie1994a}. In order to apply MATI for the efficient production of the weakly bound states in the vicinity of the dissociation threshold, a multi-step excitation pathway has to be employed to gradually increase the bond length from $\sim 1.4\,a_0$ (X$(v=0, N=1)$) to $\sim 23.4\,a_0$ (X$^+$\,$(v=19, N=1)$), as shown in the photoelectron spectroscopic studies by Beyer and Merkt for H$_2$, HD and D$_2$ \cite{Beyer2016a,beyer2018a,beyer2022a}. The crucial ingredient for the excitation of weakly bound ions, starting from the vibronic ground state of neutral molecular hydrogen, was the use of the long-range $\mathrm{H}\mathrm{\bar H}$ and $\mathrm{B}\mathrm{\bar B}$ intermediate states, first observed and characterized by W. Ubachs and his coworkers \cite{reinhold1997a,Reinhold1999a,deLange2001a}. These states belong to the class of \emph{ion-pair states} \cite{reinhold2005a} and are characterized by large bond lengths and mixed electronic character (regarding the orbital-angular momentum $\ell$ in a single-center description), allowing for the efficient generation of weakly bound molecular ions with $N=0-10$. Previous theoretical studies on the radiative lifetimes of the weakly bound states have shown lifetimes in excess of hundreds of seconds \cite{peek1979a,moss1993a}, even when including the effect of ortho-para or $g/u$-mixing due to the hyperfine structure \cite{bunker2000a}, making them very attractive for precision measurements. However, neither of these studies addressed the effect of photodissociation and state-redistribution induced by black-body radiation (BBR). The peak of the black-body spectrum (BBS) at room temperature is located at around $17$~THz, overlapping with the electric-dipole spectrum of the weakly bound states and supporting the possibility of having a perceptible effect on the lifetimes and the state-distribution of the selectively prepared ions. In the following, we present a theoretical study on the effective lifetimes of HMI in weakly bound states, taking the effects of BBR and $g/u$-mixing into account. Section \ref{sec:theory} shows the calculation of the Einstein coefficients for bound-bound and bound-continuum transitions, required to solve the rate equations for the HMI. Effective lifetimes and the time evolution of the state distributions are shown in section \ref{sec:results} and in section
(B-n: BLEU-n)} \label{diff_cl} \end{table}
\begin{table}[!tbp] \centering \begin{tabular}{c|cccc} \hline & B-1 & B-2 & B-3 & B-4 \\ \hline Batch & 50.69 & 37.93 & 30.09 & 24.79 \\ Vocab & 51.20 & 38.33 & 30.45 & 25.14 \\ Vocab-Sent & \textbf{51.57} & \textbf{38.81} & \textbf{30.91} & \textbf{25.48} \\ \hline \end{tabular}
\caption{BLEU scores with different sampling strategies of negative samples on the PHOENIX14T test set.} \label{diff_s} \end{table}
\subsection{Comparison Results}
In Table \ref{table2}, we compare ConSLT with several methods for SLT on the PHOENIX14T dataset. Our results are averaged over 10 runs with different random seeds. \textbf{RNN-SLT} \cite{camgoz2018neural} only adopts full-frame features from Re-sign. \textbf{SL-Transf} \cite{camgoz2020sign} models utilize pre-trained features from CNN-LSTM-HMM and jointly learn sign language recognition and translation. \textbf{PiSLTRc} \cite{xie2021pisltrc} uses position-informed temporal convolution based on \textbf{SL-Transf}. \textbf{SignBT} \cite{zhou2021improving} uses Sign Back-Translation for data augmentation. \textbf{MCT} \cite{camgoz2020multi} and \textbf{STMC-T} \cite{zhou2021spatial} are evaluated under the multi-cue setting. \textbf{STMC-Transf} \cite{yin-read-2020-better} uses a vanilla Transformer for SL translation with STMC gloss. Our method can substantially improve translation quality for SLT, largely outperforming the previous state-of-the-art. Specifically, compared with \textbf{SignBT}, ConSLT achieves competitive results on the PHOENIX-2014-T dev set even though \textbf{SignBT} utilizes external monolingual data through sign back translation. To our surprise, ConSLT outperforms SignBT by 1.16 BLEU-4 on the PHOENIX-2014-T test set. Moreover, ConSLT gains an improvement of 1.83 BLEU-4 on the PHOENIX14 test set compared with the end-to-end spatial-temporal multi-cue network \textbf{STMC-T}, and 1.48 BLEU-4 compared with \textbf{STMC-Transf}. The experiments demonstrate that introducing contrastive learning can bring notable performance gains for SLT.
\section{Analysis}
\begin{figure}[!tbp] \centering \begin{tikzpicture}[baseline] \centering \begin{axis}[ every axis/.append style={line width=0.8pt}, xlabel={The number of negative samples}, yticklabel style={/pgf/number format/precision=2,/pgf/number format/fixed zerofill}, ylabel={BLEU-4}, enlargelimits=0.05, xmin=100, xmax=500, ymin=24.5, ymax=25.5, xtick={100,200,300,400,500}, ytick={24.50,24.75,25.00,25.25,25.50}, legend pos=north west, ymajorgrids=true, xmajorgrids=true, grid style=dashed, ] \addplot[ color=red, mark=triangle, ] coordinates { (100,24.72) (200,25.17) (300,25.28) (400,25.27) (500,25.48) }; \end{axis} \end{tikzpicture}
\caption{The effect of different numbers of negative samples on the PHOENIX14T test set.} \label{diff_num} \end{figure}
\begin{figure*}[!tbp] \centering \subfigure[STMC-Transformer]{ \includegraphics[scale=0.42]{stmc} } ~~~~~~ \subfigure[ConSLT]{ \includegraphics[scale=0.42]{ours} }
\caption{Visualization of token embeddings in (a) STMC-Transformer, (b) ConSLT trained on PHOENIX14T.} \label{visual} \end{figure*}
\subsection{Ablation Study}
We perform an ablation study to investigate the influence of contrastive learning for SLT. First, following SimCSE \cite{gao2021simcse}, we train a model with a sentence-level contrastive learning framework. More specifically, we pass the sign gloss to the vanilla Transformer \cite{vaswani2017attention} and obtain two sentence representations as “positive pairs”.
We treat the other sentence representations within a mini-batch as negative examples. To make the comparisons fair, we use the cosine function and the KL-divergence, respectively, to calculate the distance between positive and negative pairs in the ablation experiments. We also set the number of negative samples to 500. For notation, \textbf{w/o CL} means no contrastive learning, \textbf{S-CL} means sentence-level contrastive learning, and \textbf{T-CL} means our token-level contrastive learning. As shown in Table \ref{diff_cl}, the different contrastive learning methods all bring significant improvements to SLT. Furthermore, ConSLT gains an improvement of +0.41 BLEU compared with sentence-level contrastive learning. This further demonstrates that ConSLT is more suitable for SLT, as our approach obtains fine-grained token representations. As also illustrated in Table \ref{diff_cl}, ConSLT with KL-divergence as the contrastive distance metric performs better than with the cosine function. We speculate that the reason is that learning similar representations via KL-divergence is harder than via the cosine function, which leads to better performance in SLT.
\subsection{Sampling strategies for negative samples}
We further conduct comparative experiments to analyze the effectiveness of the negative-sample sampling strategy in contrastive learning; we explore three strategies (a minimal implementation sketch of the \textbf{Vocab-Sent} strategy is given after the qualitative examples below). For notation, \textbf{Batch} denotes randomly sampling negative samples for each token from a mini-batch. \textbf{Vocab} denotes randomly sampling negative samples for each token from the vocabulary of spoken language sentences. \textbf{Vocab-Sent} denotes randomly sampling negative samples for each token from the vocabulary, excluding tokens of the current sentence. Table \ref{diff_s} presents the results. \textbf{Vocab} substantially outperforms \textbf{Batch}. This gap arises because the tokens in the vocabulary are more diverse than those in a mini-batch. Moreover, \textbf{Vocab-Sent} performs better than the other methods. This indicates that tokens in the same sentence are semantically related through their shared context, much as a masked language model predicts a masked token from its contextual information; pushing apart different tokens from the same sentence would weaken this relationship, so they should not be used as negatives. The comparative experiments also demonstrate that ConSLT can obtain better token representations for SLT.
\subsection{The number of negative samples}
The number of negative samples may affect translation performance, so we further test our model with different numbers of negative samples. We vary $K$ in $\{100,200,300,400,500\}$ to see the difference. We report the results in Figure \ref{diff_num}, from which we observe that the performance improves steadily as the number of negative samples increases. This result supports the insight of \citet{chen2020simple} that more negative samples benefit contrastive learning. Note that increasing $K$ to 600 triggers out-of-memory errors on the GPU.
\begin{table*}[!t]
\scriptsize
\centering
\begin{tabular}{|ll|}
\hline
REF & morgen reichen die temperaturen von zweiundzwanzig grad an der ostsee bis zweiunddreißig grad am oberrhein. \\
 & (tomorrow the temperatures will range from twenty-two degrees on the baltic sea to thirty-two degrees on the upper rhine.)
\\ STMC-Transf & morgen temperaturen von zweiundzwanzig grad an der ostsee bis zweiunddreißig grad am oberrhein. \\ & (tomorrow temperatures from twenty-two degrees on the baltic sea to thirty-two degrees on the upper rhine.) \\ OUR & morgen \textbf{reichen die} temperaturen von zweiundzwanzig grad an der ostsee bis zweiunddreißig grad am oberrhein. \\ & (tomorrow the temperatures will range from twenty-two degrees on the baltic sea to thirty-two degrees on the upper rhine.) \\\hline \hline REF & im westen und nordwesten fallen einzelne schauer. \\ & (individual showers fall in the west and north-west.) \\ STMC-Transf & im westen und nordwesten gibt es einzelne schauer. \\ & (in the west and northwest there are a few showers.) \\ OUR & im westen und nordwesten fallen \textbf{nur} einzelne schauer. \\ & (in the west and northwest there are only a few showers.) \\ \hline \hline REF & und zum wochenende wird es dann sogar wieder ein bisschen kälter. \\ & (and at the weekend it even gets a little colder again.) \\ STMC-Transf & zum wochenende wird es dann auch wieder kälter. \\ & (at the weekend it gets colder again.) \\ OUR & \textbf{und} zum wochenende wird es dann auch wieder kälter. \\ & (and at the weekend it gets colder again.) \\ \hline \hline REF & der wind weht schwach bis mäßig am meer auch frisch. \\ & (the wind blows weak to moderate at the sea also fresh.) \\ STMC-Transf & schwacher bis mäßiger an der see teilweise frischer wind. \\ & (weak to moderate at the lake partly fresh wind.) \\ OUR & der wind weht schwach bis mäßig \textbf{an den küsten} auch frisch. \\ & (the wind blows weak to moderate on the coasts also fresh.) \\ \hline \hline REF & und nun die wettervorhersage für morgen freitag den achten oktober. \\ & (and now the weather forecast for tomorrow, friday, october eighth.) \\ STMC-Transf & und nun die wettervorhersage für morgen freitag den achtundzwanzigsten oktober. \\ & (and now the weather forecast for tomorrow, friday, october twenty-eighth.) \\ OUR & und nun die wettervorhersage für morgen freitag den \textbf{achten oktober}. \\ & (and now the weather forecast for tomorrow, friday, october eighth.) \\ \hline \hline REF & am tag von schleswig holstein bis nach vorpommern und zunächst auch in brandenburg gebietsweise länger andauernder regen. \\ & (On the day from Schleswig-Holstein to Western Pomerania and initially also in Brandenburg, there was prolonged rain in some areas.) \\ STMC-Transf & herrlicher sonnenschein in schleswig holstein in brandenburg in brandenburg bis morgen früh. \\ & (wonderful sunshine in schleswig holstein in brandenburg in brandenburg until tomorrow morning.) \\ OUR & also von schleswig holstein bis nach nordrhein westfalen in sachsen und brandenburg wird es morgen den ganzen tag bei \textbf{dauerregen}. \\ & (so from schleswig holstein to north rhine westphalia in saxony and brandenburg it will rain all day tomorrow.) \\ \hline \end{tabular} \caption{Qualitative comparison of STMC-Transformer and Our method on PHEONIX-2014T. REF refers to the
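For concreteness, here is a minimal sketch of the \textbf{Vocab-Sent} negative-sampling strategy analyzed above. It is not the released ConSLT implementation; the identifiers (\texttt{vocab\_ids}, \texttt{sentence\_token\_ids}) and the default $K=500$ are illustrative assumptions.

```python
import random

def sample_negatives(token_id, sentence_token_ids, vocab_ids, k=500, rng=random):
    """Vocab-Sent sampling: draw k negative token ids from the target-language
    vocabulary, excluding every token occurring in the current sentence."""
    excluded = set(sentence_token_ids) | {token_id}
    candidates = [t for t in vocab_ids if t not in excluded]
    return rng.sample(candidates, k)

# Toy usage with a vocabulary of 3000 token ids.
vocab_ids = list(range(3000))
sentence = [12, 57, 900, 41]
negatives = sample_negatives(57, sentence, vocab_ids, k=500)
```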
Markov model for host prediction. On the other hand, PHP \cite{lu2021prokaryotic} utilizes the $k$-mer frequency, which can reflect the codon usage patterns shared by the viruses and the hosts \cite{gouy1982codon, carbone2008codon}. DeepHost \cite{DeepHost} and PHIAF \cite{PHIAF} also utilize $k$-mer-based features to train a convolutional neural network for host prediction. Boeckaerts et al. build learning models using features extracted from receptor-binding proteins (RBPs) for host prediction \cite{boeckaerts2021predicting}. However, it is not trivial to annotate RBPs in all viruses. The authors only collected RBPs related to 9 hosts, and thus the tool can only predict a very limited set of host species. HostG \cite{shang2021detecting} utilizes the protein clusters shared between viruses and prokaryotes to create a knowledge graph and trains a graph convolutional network for prediction. Although its prediction accuracy is high, it can only predict hosts at the genus level. The best host prediction performance at the species level is reported by VHM-net \cite{wang2020network}, which incorporates multiple features between viruses and prokaryotes, such as CRISPRs, the output score of WIsH, and BLASTN alignments. By combining these features, VHM-net utilizes a Markov random field framework to predict whether a virus infects a target prokaryote. Nevertheless, the accuracy at the species level is only 43\%.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.8\linewidth]{model.jpg}
\caption{The key components of CHERRY. A) The multimodal knowledge graph. Triangles represent prokaryotic nodes and circles represent virus nodes. Different colors represent different taxonomic labels of the prokaryotes. \uppercase\expandafter{\romannumeral1}-\uppercase\expandafter{\romannumeral3} illustrate graph convolution using neighbors of increasing orders. B) The graph convolutional encoder of CHERRY. C) The decoder of CHERRY.}
\label{fig:model}
\end{figure*}
\subsection{Overview}
In this work, we develop a new method, CHERRY, which can predict the hosts' taxa (phylum to species) for newly identified viruses. First, we construct a multimodal graph that incorporates multiple types of interactions, including protein organization information between viruses, the sequence similarity between viruses and prokaryotes, and the CRISPR signals (Fig. \ref{fig:model} A). In addition, we use the $k$-mer frequency as the node feature to enhance the learning ability. Second, rather than directly using these features for prediction, we design an encoder-decoder structure to learn the best embedding for the input sequences and predict the interactions between viruses and prokaryotes. The graph convolutional encoder (Fig. \ref{fig:model} B) utilizes the topological structure of the multimodal graph, and thus features from both training and testing sequences can be incorporated when embedding new node features. Then a link prediction decoder (Fig. \ref{fig:model} C) is adopted to estimate how likely a given virus-prokaryote pair is to represent a real infection. Unlike many existing tools, CHERRY can be flexibly used in two scenarios: it can take either query viruses or prokaryotes as input. For input viruses, it predicts their hosts; for input prokaryotes, it predicts the viruses infecting them. Another feature behind the high accuracy of CHERRY is the construction of the negative training set. The dataset for training is highly imbalanced, with the real host as the positive data and all other prokaryotes as negative data.
We address this issue carefully using negative sampling \cite{mikolov2013distributed}. Instead of using a random subset of the negative set for training the model, we apply end-to-end optimization and negative sampling to automatically learn the hard cases during training. To demonstrate the reliability of our method, we rigorously tested CHERRY on multiple datasets, including the RefSeq dataset, simulated short contigs, and metagenomic datasets. We compared CHERRY with WIsH, PHP, HoPhage, VPF-Class, RaFAH, HostG, vHULK, PHIST, DeepHost, PHIAF, and VHM-net, whose brief descriptions can be found in Table \ref{tab:compare}. The results show that CHERRY competes favorably against the state-of-the-art tools and yields a 37\% improvement at the species level.
\section{Method}
We formulate host prediction as a link prediction problem \cite{al2006link} on a multimodal graph, which encodes virus-virus and virus-prokaryote relationships. To be specific, these relationships can be represented by a knowledge graph $G = (V, E)$ with nodes $v_i \in V$, where $i=1,2,\ldots,N$. An edge between $v_i$ and $v_j$ is denoted as a tuple $(v_i, v_j) \in E$. There are two kinds of nodes in the graph: viral nodes $p_i \in P$ and prokaryotic nodes $h_i \in H$ ($P \cup H = V$). The link prediction task can then be defined as: \textbf{given} a viral node $p_i$ and a prokaryotic node $h_i$, \textbf{what is the probability} of $p_i$ and $h_i$ having a link (infection)? In the following sections, we first describe how we construct the multimodal graph and then introduce the encoder-decoder structure of CHERRY.
\subsection{Construction of the knowledge graph $G$} \label{sec:graph}
To utilize the features from both training and testing samples, we construct a multimodal graph $G$ by connecting viruses and prokaryotes in both the reference database and the test set. This multimodal graph is composed of protein organizations, sequence similarity, and CRISPR-based similarity. Each node in the graph encodes a $k$-mer frequency feature computed from its DNA sequence. According to the type of connection, the edges in the knowledge graph can be divided into virus-virus connections and virus-prokaryote connections.
\paragraph{Virus-virus connections:} We utilize the protein organizations to measure the similarity of biological functions between viruses. Intuitively, if two viruses share similar protein organizations, they are more likely to infect the same host. First, we construct protein clusters using the Markov clustering algorithm (MCL) on all viral proteins. For reference viral genomes, the proteins are downloaded from NCBI RefSeq. For query contigs, we use Prodigal \cite{hyatt2010prodigal} to conduct gene finding and protein translation. Then, we employ MCL to cluster proteins with $\mathrm{inflation}=2.0$ based on the DIAMOND BLASTP \cite{buchfink2015fast} comparisons (E-value \textless 1e-5). Second, we follow \cite{bolduc2017vcontact, shang2021bacteriophage} and use Eq.~\ref{edge1} to estimate the probability that two viruses $X$ and $Y$ share at least $c$ common protein clusters, assuming that each protein cluster has the same chance of being chosen; $x$ and $y$ are the numbers of proteins in $X$ and $Y$, respectively. Because Eq.~\ref{edge1} computes the background probability under the null hypothesis that viruses $X$ and $Y$ do not share a common host, we reject this hypothesis when $P(\ge c)$ is smaller than a cutoff. Finally, only pairs with $P(\ge c)$ smaller than $\tau_1$ form virus-virus connections (Eq.~\ref{edge2}).
\begin{equation} \label{edge1}
P(\ge c) = \sum_{i=c}^{\min(x,y)} \frac{\binom{x}{i}\binom{n-x}{y-i}}{\binom{n}{y}}
\end{equation}
\begin{equation} \label{edge2}
\text{virus-virus} = \begin{cases} 1 & \text{ if } P(\ge c) < \tau_1 \\ 0 & \text{ otherwise } \end{cases}
\end{equation}
\paragraph{Virus-prokaryote connections:} We apply the sequence similarity between viral and prokaryotic sequences to define the virus-prokaryote connections. There are two kinds of sequence similarity that can be employed: CRISPR-based similarity and general local similarity. Some prokaryotes integrate viral DNA fragments into their own genomes as spacers in CRISPR arrays \cite{dutilh2014highly, roux2016ecogenomics}. Therefore, many existing tools have used CRISPRs as a main feature for host prediction \cite{edwards2016computational, Nathan2020vhmnet}. In our method, the CRISPR Recognition Tool \cite{bland8p} is applied to capture potential CRISPRs from prokaryotes. If a viral sequence shares a similar region with a CRISPR spacer, we connect this viral node to the prokaryotic node. However, only a limited number of CRISPRs can be found. We thus also use BLASTN to measure the sequence similarity between viral and prokaryotic sequences. Viruses can mobilize host genetic material and incorporate it into their own genomes; occasionally, these genes bring an evolutionary advantage and the viruses preserve them \cite{edwards2016computational}. For all viruses $p_i \in P$ and prokaryotes $h_i \in H$, we run BLASTN for each pair $(p_i, h_i)$. Only pairs with a BLASTN E-value smaller than $\tau_2$ form virus-prokaryote connections. In addition, because we have known virus-prokaryote connections from the public dataset, we connect viruses with their known hosts regardless of their alignment E-values. Finally, the edges $(p_i, h_i) \in E$ can be formulated as Eq.~\ref{edge3}. If a CRISPR-based edge and a BLASTN-based edge overlap, we only create one edge between the virus and the prokaryote.
\begin{equation} \label{edge3}
\text{virus-prokaryote} = \begin{cases} 1 & \text{ if $\exists$ a CRISPR match or the BLASTN E-value} < \tau_2 \\ 0 & \text{ otherwise } \end{cases}
\end{equation}
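For reference, Eq.~\ref{edge1} is simply the survival function of a hypergeometric distribution, so it can be evaluated directly with SciPy. The sketch below is illustrative rather than the released CHERRY code; in particular, \texttt{n\_total} is our reading of $n$ in the formula (the total number of protein clusters), and the cutoff value is an arbitrary placeholder.

```python
from scipy.stats import hypergeom

def shared_cluster_pvalue(x, y, c, n_total):
    """P(>= c) of Eq. (edge1): chance that two viruses with x and y proteins
    share at least c protein clusters at random, out of n_total clusters."""
    # hypergeom(M, n, N).pmf(i) = C(n, i) * C(M - n, N - i) / C(M, N),
    # so with M = n_total, n = x, N = y the survival function at c-1 gives P(>= c).
    return hypergeom(n_total, x, y).sf(c - 1)

def virus_virus_edge(x, y, c, n_total, tau1=1e-5):
    """Eq. (edge2): connect two viruses only if the overlap is unlikely by chance."""
    return 1 if shared_cluster_pvalue(x, y, c, n_total) < tau1 else 0
```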
#!/usr/bin/env python """ Module test_integrators - Contains the unit tests for the PoissonIntegrator and ImperfectIntegrator class. :History: 25 Aug 2010: Created 03 Sep 2010: Test the new leak function. 27 Sep 2010: Python environment for windows verified. 16 Nov 2010: Set the seed of the random number generator, for more predictable and repeatable tests. 05 Oct 2011: Enhanced the saturation test to check extreme input. 13 Nov 2012: Major restructuring of the package folder. Import statements updated. 23 Apr 2014: Changed the terminology to match common MIRI usage (e.g. an "integration" includes an interval between resets, not a fragment of that interval). Removed redundant methods. ImperfectIntegrator class added to separate effects due to other than Poisson statistics. 17 Jun 2014: Removed the old "preflash" latency implementation. 08 Sep 2015: Make compatible with Python 3. 04 Dec 2015: Renamed from poisson_integrator to integrators. 12 Feb 2016: Added tests for extremely small floating point flux values. 17 Feb 2017: The Poisson noise calculation no longer includes a ratchet, so successive values can go down as well as up. Test readouts converted to signed integers so that negative differences can be managed. 05 May 2017: Corrected permission for nosetests. 13 Dec 2017: Added flux and noise tests. 04 Jan 2017: Check the flux is correct when noise is turned on and when there is a non-zero pedestal. Also check the flux is correct when zeropoint drift and latency effects are included. @author: Steven Beard (UKATC) """ # This module is now converted to Python 3. #import os, sys import copy import unittest import numpy as np from miri.simulators.integrators import PoissonIntegrator, ImperfectIntegrator class TestPoissonIntegrator(unittest.TestCase): def setUp(self): # Create a very simple 3 x 3 Poisson integrator object self.integrator = PoissonIntegrator(3, 3, verbose=0) self.integrator.set_seed(42) def tearDown(self): # Tidy up del self.integrator def test_creation(self): # Check for exceptions when creating bad PoissonIntegrator objects. # The dimensions of the object must be positive. self.assertRaises(ValueError, PoissonIntegrator, -1, -1, verbose=0) def test_description(self): # Test that the querying and description functions work. # For the test to pass these only need to run without error. title = self.integrator.get_title() self.assertIsNotNone(title) self.assertIsNotNone(self.integrator.nperiods) self.assertIsNotNone(self.integrator.readings) self.assertIsNotNone(self.integrator.exposure_time) # The shape should be 3x3, as created. self.assertEqual(self.integrator.shape[0], 3) self.assertEqual(self.integrator.shape[1], 3) counts = self.integrator.get_counts() del counts descr = self.integrator.__str__() self.assertIsNotNone(descr) def test_integrate(self): # Test that integrating on a flux works as expected. flux = [[1.0, 2.0, 3.0], \ [4.0, 5.0, 6.0], \ [7.0, 8.0, 9.0]] self.integrator.reset() # Long integrations self.integrator.integrate(flux, 100.0) readout1 = self.integrator.readout().astype(np.int32) self.integrator.integrate(flux, 100.0) readout2 = self.integrator.readout().astype(np.int32) # Short integration self.integrator.integrate(flux, 1.0) readout3 = self.integrator.readout().astype(np.int32) # Even a zero-length integration is valid. self.integrator.integrate(flux, 0.0) readout4 = self.integrator.readout().astype(np.int32) # The readout for the long integration must increase. 
diff1 = readout2 - readout1 self.assertTrue(diff1.min() >= 0) # The final exposure time must be the sum of all the # integration times. expected = 100.0 + 100.0 + 1.0 + 0.0 actual = self.integrator.exposure_time self.assertAlmostEqual(expected, actual) # An integration with no flux, or a wait, does not # increases the exposure time. before = self.integrator.exposure_time self.integrator.integrate(None, 100.0) after = self.integrator.exposure_time self.assertAlmostEqual(before, after) before = self.integrator.exposure_time self.integrator.wait(100.0) after = self.integrator.exposure_time self.assertAlmostEqual(before, after) # A perfect reset should restore the readings to zero self.integrator.reset() readout6 = self.integrator.readout() self.assertTrue(np.all(readout6 == 0)) def test_flux(self): # Test that, when noise is switched off, the output flux is the # input illumination multiplied by the exposure time, at least # to within one electron. Defining a pedestal level should make # no difference. # Create a very simple 3 x 3 Poisson integrator object with # Poisson noise turned off. intnonoise = PoissonIntegrator(3, 3, simulate_poisson_noise=False, verbose=0) intnonoise.set_pedestal(8000 * np.ones([3, 3])) exptime = 100.0 for fluxlevel in (1.234, 12.345, 123.456, 1234.56): flux = fluxlevel * np.ones((3,3), dtype=np.float32) intnonoise.reset() intnonoise.integrate(flux, exptime) readout1 = intnonoise.readout().astype(np.int32) intnonoise.integrate(flux, exptime) readout2 = intnonoise.readout().astype(np.int32) difference = readout2 - readout1 # The flux difference must be less than 1 electron. deviation = difference.mean() - (fluxlevel * exptime) self.assertLess(abs(deviation), 1.0) del intnonoise # Check that the flux level is also approximately # correct when noise is turned on self.integrator.set_pedestal(8000 * np.ones([3, 3])) exptime = 100.0 for fluxlevel in (1.234, 12.345, 123.456, 1234.56): flux = fluxlevel * np.ones((3,3), dtype=np.float32) self.integrator.reset() self.integrator.integrate(flux, exptime) readout1 = self.integrator.readout().astype(np.int32) self.integrator.integrate(flux, exptime) readout2 = self.integrator.readout().astype(np.int32) difference = readout2 - readout1 # The flux difference must be less than 5%. deviation = difference.mean() - (fluxlevel * exptime) self.assertLess(abs(deviation), difference.mean()/20.0) def test_noise(self): # Test that the Poisson noise level is approximately the # square root of the input signal (Bug 439). # The test is done using a 1024x1024 integrator to # eliminate the small number statistics. intnoise = PoissonIntegrator(1024, 1024, verbose=0) intnoise.set_seed(42) exptime = 10.0 # Try 2 groups with varying flux levels. for fluxlevel in (1.234, 12.345, 123.456, 1234.56): flux = fluxlevel * np.ones((1024,1024), dtype=np.float32) intnoise.reset() readout1 = intnoise.readout().astype(np.int32) intnoise.integrate(flux, exptime) readout2 = intnoise.readout().astype(np.int32) difference = readout2 - readout1 noise = difference.std() # The following comparison can only be approximate, since the # noise should approximate the square root of expected count, # not the actual readings. Make sure it is within 1%. deviation = difference.mean() - (noise * noise) self.assertLess(abs(deviation), difference.mean()/100.0) # Try 10 groups with a fixed flux level. 
fluxlevel = 4.0 flux = fluxlevel * np.ones((1024,1024), dtype=np.float32) intnoise.reset() last_readout = intnoise.readout().astype(np.int32) first_readout = copy.deepcopy(last_readout) for group in range(1, 10): intnoise.integrate(flux, exptime) this_readout = intnoise.readout().astype(np.int32) difference = this_readout - last_readout noise = difference.std() # Make sure the noise is within 1% of expectation. deviation = difference.mean() - (noise * noise) # print("Mean=",difference.mean(), "Noise^2=", (noise*noise)) # print("Compare", abs(deviation), "with", difference.mean()/100.0) self.assertLess(abs(deviation), difference.mean()/100.0) last_readout = this_readout # Check the noise level between the last and first reading. difference = this_readout - first_readout noise = difference.std() # Make sure the noise is within 1% of expectation. deviation = difference.mean() - (noise * noise) # print("Mean=",difference.mean(), "Noise^2=", (noise*noise)) # print("Compare", abs(deviation), "with", difference.mean()/100.0) self.assertLess(abs(deviation), difference.mean()/100.0) del intnoise def test_saturation(self): # If a bucket size is defined, the integrator will saturate # when the count reaches or exceeds this bucket size. saturated = 10 flux = [[100.0, 200.0, 300.0], \ [400.0, 500.0, 600.0], \ [700.0, 800.0, 900.0]] test = PoissonIntegrator(3, 3, bucket_size=saturated, verbose=0) test.reset() test.integrate(flux, 1000.0) readout = test.readout() self.assertTrue(np.all(readout == saturated)) del test # If a bucket size is not defined, the integrator must still # be able to cope with extreme input without overflowing. flux = [[1.0e9, 2.0e9, 3.0e9], \ [4.0e9, 5.0e9, 6.0e9], \ [7.0e9, 8.0e9, 9.0e9]] test = PoissonIntegrator(3, 3, bucket_size=None, verbose=0) test.reset() test.integrate(flux, 1000.0) readout = test.readout() # Overflow will cause spurious negative values. self.assertTrue(np.all(readout >= 0)) del test def test_extreme(self): # Check that the integration and readout functions can accept # extremely small floating point values without raising an # exception (Bug 16). self.integrator.reset() flux = [[1.0e-9, 2.0e-9, -3.0e9], \ [4.0e-9, 5.0e-15, 6.0e-9], \ [7.0e-9, -8.0e-4, -9.0e-15]] self.integrator.integrate(flux, 1.0) readout = self.integrator.readout(nsamples=4) # Check there are no negative values. self.assertTrue(np.all(readout >= 0)) # Check that the functions can safely accept a stream of random # positive and negative values. for rep in range(0,10): randflux = np.random.randn( 3, 3 ) - 0.5 self.integrator.integrate(randflux, 10.0) readout = self.integrator.readout(nsamples=4) # Check there are no negative values. self.assertTrue(np.all(readout >= 0)) def test_wrong_shape(self): # Attempting to integrate on a flux array of the wrong shape # should raise an exception. flux = [1.0, 3.0, 5.0] self.assertRaises(TypeError, self.integrator.integrate, flux, 1.0) def test_negative_time(self): # Attempting to integrate with a negative integration time # should raise an exception. flux = [[1.0, 2.0, 3.0], \ [4.0, 5.0, 6.0], \ [7.0, 8.0, 9.0]] self.assertRaises(ValueError, self.integrator.integrate, flux, -1.0) class TestImperfectIntegrator(unittest.TestCase): def setUp(self): # Create a very simple 3 x 3 imperfect Poisson integrator object self.integrator = ImperfectIntegrator(3, 3, verbose=0) self.integrator.set_seed(42) def tearDown(self): # Tidy up del self.integrator def test_creation(self): # Check for exceptions when creating bad ImperfectIntegrator objects. 
# TBD pass def test_description(self): # Test that the querying and description functions work. # For the test to pass these only need to run without
## 20 May 2011 ### Automatic forcing of promises in Klink I had meant to explain also about auto-forcing in $let' or $define!', but for some reason I didn't. So I'm adding this now. Background: In Kernel, combiners like $let' and $define!' destructure values. That is, they define not just one thing, but an arbitrarily detailed tree of definiendums. So when a value doesn't match the tree of definiendums, or only partly matches, but the part that doesn't match is a promise, Klink forces the promise and tries the match again. Unlike argobject destructuring, this doesn't check type. ### Automatic forcing of promises in Klink As I was coding EMSIP in Kernel, I realized that I was spending entirely too much time and testing to manage the forcing of promises. I needed to use promises. In particular, some sexps had to have the capability of operating on what followed them, if only to quote the next sexp. But I couldn't expect every item to read its tail before operating. That would make me always read an entire list before acting. This isn't just inefficient, it is inconsistent with the design as an object port, from which objects can be extracted one by one. So what I have now is automatic forcing of promises. This occurs in two destructuring situations. I've coded one, and I'm about to code the other. ### Operatives' typespecs Background: For a while now, Klink has been checking types before it calls any built-in operative. This operation can check an argobject piece by piece against a typespec piece by piece. That destructures it treewise to arguments that fill an array that is exactly the arguments for to the C call. It's very satisfactory. Now when an argobject doesn't match a typespec, but the argobject is a promise, the destructure operation arranges for the promise to be forced. After that comes the tricky part. While the destructuring was all in C, it could just return, having filled the target array. But now it has to also reschedule another version of itself, possibly nested, and reschedule the C operation it was working towards. All fairly tricky, but by using the chain combiners and their support, and by passing destructure suitable arguments, I was able to make it work. ### Defining As in $let' or $define!'. I'm about to code this part. I expect it to be along similar lines to the above, but simpler (famous last words). ## Status I haven't pushed this branch to the repo yet because I've written only one of the two parts, the destructuring. That part passes the entire test suite. I haven't yet tried it with EMSIP to see if it solves the problem. ## Does it lose anything? ISTM this does not sacrifice anything, other than the {design, coding, testing, debugging} effort I've spent on it. ### Functionality It subtracts no functionality. force' is still available for those situations when manual control is needed. ### Restraint The opposite side of functionality. Does this sacrifice the ability to refrain from an action? No. In every circumstance that a promise is forced, the alternative would be an immediate error. There could never have been a capacity to do the same thing except refraining from forcing. But does it sacrifice the ability to make other code refrain from an action? No, the other code could have just called force' at the same points. ### Exposure Does this expose whether a promise has been forced? No, not in any way that wasn't already there. Of course one can deduce that a promise has been forced from the fact that an operation has been done that must force that promise. 
That's always been the case. ### Code size The init.krn code is actually slightly smaller with this. The C code grew, but largely in a way that it would have had to grow anyways. ## FAIrchy diagram I wrote yesterday about FAIrchy, my notion that combines FAI and futarchy. Here is an i* diagram that somewhat captures the system and its rationale. It's far from perfect, but captures a lot of what I was talking about. Many details are left out, especially for peripheral roles, ## Some comments on this diagram technically I felt like I needed another type of i* goal-node to represent measurable decision-market goal components, which are goal-like but are unlike both hard and soft i* goals. Similarly I wanted a link-type that links these to measurement tasks. I used the dependency link, which seemed closest to what I want, but it's not precisely right. There's some line-crossing. Dia's implementation of i* makes that inevitable for a large diagram. ## FAIrchy1 In this blog post I'm revisiting a comment I made on overcomingbias2. I observed that Eliezer Yudkowsky's Friendly Artificial Intelligence (FAI) and futarchy have something in common, that they are both critically dependent on a utility function that has about the same requirements. The requirements are basically: • Society-wide • Captures the panorama of human interests • Future-proof • Secure against loophole-finding ## Background: The utility function Though the utility functions for FAI and futarchy have the same requirements, thinking about them has developed very differently. The FAI (Singularity Institute) idea seems to be that earlier AIs would think up the right utility function. But there's no way to test that the AI got it right or even got it reasonable. In contrast, in talking about futarchy it's been clear that a pre-determined utility function is needed. So much more thought has gone into it from the futarchy side. In all modesty, I have to take a lot of the credit for that myself. However, I credit Robin Hanson with originally proposing using GDP3. GDP as such won't work, of course, but it is at least pointed in the right general direction. My thinking about the utility function is more than can be easily summed up here. But to give you a general flavor of it, the problem isn't defining the utility function itself, it's designing a secure, measurable proxy for it. Now I think it should comprise: • Physical metrics (health, death, etc) • Economic metrics • Satisfaction surveys. • To be taken in physical circumstances similar to secret-ballot voting, with similar measures against vote-selling, coercion, and so forth. • Ask about overall satisfaction, so nothing falls into the cracks between the categories. • Phrase it to compare satisfaction across time intervals, rather than attempting an absolute measure. • Compare multiple overlapping intervals, for robustness. • Existential metrics • Metrics of the security of the other metrics. • Citizen's proxy metrics. Citizens could pre-commit part of their measured satisfaction metric according to any specific other metric they chose. • This is powerful: • It neatly handles personal identity issues such as mind uploading and last wills. • It lets individuals who favor a different blend of utility components effect that blend in their own case. • May provide a level of control when we transition from physical-body-based life to whatever life will be in the distant future. • All in all, it puts stronger control in individual hands. • But it's also dangerous. 
There must be no way to compel anyone to proxy in a particular way. • Proxied metrics should be silently revocable. Citizens should be encouraged, if they were coerced, to revoke and report. • It should be impossible to confirm that a citizen has made a certain proxy. • Citizens should not be able to proxy all of their satisfaction metric. • (Not directly a utility component) Advisory markets • Measure the effectiveness of various possible proxies • Intended to help citizens deploy proxies effectively. • Parameterized on facets of individual circumstance so individuals may easily adapt them to their situations and tastes. • These markets' own utility function is based on satisfaction surveys. This isn't future-proof, of course. For instance, the part about physical circumstances won't still work in 100 years. It is, however, something that an AI could learn from and learn with. ## Background: Clippy and the box problem One common worry about FAI is when the FAI
# 70 761 exam details

Consider an abrupt PN junction (at T = 300 K) shown in the figure.

Network solution methods: Nodal and mesh analysis; Network theorems: superposition, Thevenin and Norton’s, maximum power transfer; Wye‐Delta transformation; Steady state sinusoidal analysis using phasors; Time domain analysis of simple linear circuits; Solution of network equations using Laplace transform; Frequency domain analysis of RLC circuits; Linear 2‐port network parameters: driving point and transfer functions; State equations for networks.

If EC is the lowest energy level of the conduction band, EV is the highest energy level of the valence band and EF is the Fermi level, which one of the following represents the energy band diagram for the biased N-type semiconductor? The donor doping concentration ND and the mobility of electrons μn are 10^16 cm^-3 and 1000 cm^2 V^-1 s^-1, respectively.

The built-in potential of an abrupt p-n junction is 0.75 V. If its junction capacitance (CJ) at a reverse bias (VR) of 1.25 V is 5 pF, the value of CJ (in pF) when VR = 7.25 V is _________. (A short worked solution is given further below.)

As shown, a uniformly doped silicon (Si) bar of length L = 0.1 µm with a donor concentration ND = 10^16 cm^-3 is illuminated at x = 0 such that electron and hole pairs are generated at the rate GL = GLO (1 - x/L), 0 ≤ x ≤ L, where GLO = 10^17 cm^-3 s^-1. The hole lifetime is 10^-4 s, the electronic charge q = 1.6 × 10^-19 C, the hole diffusion coefficient DP = 1000 cm^2/s, and low-level injection conditions prevail.

The main aim is to strengthen the skill and knowledge of students in order to develop them as good electronic engineers. The focus throughout the course lies on the applications of these technologies. Dimitrijev - Semiconductor Devices - Oxford 4. JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD, II Year B.Tech. Here the EC6202 EDC Syllabus notes download link is provided and students can download the EC6202 Syllabus and Lecture Notes and make use of it. Electronics Circuits II Syllabus for ECE 4th Semester EC2251.

The slope of the ID vs. VGS curve of an n-channel MOSFET in the linear regime is 10^-3 Ω^-1 at VDS = 0.1 V. For the same device, neglecting channel length modulation, the slope of the ID vs. VGS curve (in A/V) under the saturation regime is approximately _________.

Introduction to Embedded Systems book by Shibu K V 3. Covers operational amplifiers, diode circuits, circuit characteristics of bipolar and MOS transistors, MOS and bipolar digital circuits, and simulation software. If the base width in a bipolar junction transistor is doubled, which one of the following statements will be TRUE?

Note: This syllabus is the same for BME, CSE, ECE, EEE, EIE, Electronics and Computer Engineering, ETM, ICE, IT/CST, Mechanical Engineering (Mechatronics). Along with the GATE 2021 Electronics and Communication Engineering syllabus, candidates are also advised to check the GATE 2021 exam pattern for effective exam preparation. Devise an effective preparation strategy for GATE 2021 with the Electronics & Communication Engineering (EC) Syllabus.

4. Compared to a p-n junction with NA = ND = 10^14/cm^3, which one of the following statements is TRUE for a p-n junction with NA = ND = 10^20/cm^3?

Electrostatics; Maxwell’s equations: differential and integral forms and their interpretation, boundary conditions, wave equation, Poynting vector; Plane waves and properties: reflection and refraction, polarization, phase and group velocity, propagation through various media, skin depth; Transmission lines: equations, characteristic impedance, impedance matching, impedance transformation, S-parameters, Smith chart.
Assuming a linearly decaying steady-state excess hole concentration that goes to 0 at x = L, the magnitude of the diffusion current density at x = L/2, in A/cm^2, is __________.

Field Theory Contact Hours/Week Cr.

Number systems; Combinatorial circuits: Boolean algebra, minimization of functions using Boolean identities and Karnaugh map, logic gates and their static CMOS implementations, arithmetic circuits, code converters, multiplexers, decoders and PLAs; Sequential circuits: latches and flip‐flops, counters, shift‐registers and finite state machines; Data converters: sample and hold circuits, ADCs and DACs; Semiconductor memories: ROM, SRAM, DRAM; 8-bit microprocessor (8085): architecture, programming, memory and I/O interfacing.

AIM. MODULE 4. Match each device in Group I with its characteristic property in Group II. A BJT is biased in forward active mode. EC6202 Notes Syllabus all 5 units notes are uploaded here.

The semiconductor has a uniform electron concentration of n = 1 × 10^16 cm^-3 and electronic charge q = 1.6 × 10^-19 C. If a bias of 5 V is applied across a 1 µm region of this semiconductor, the resulting current density in this region, in kA/cm^2, is ____________.

ECE 333: Electronics I Spring 2020 Catalog: Introduction to electronic devices and the basic circuits. The value of the resistance of the voltage-controlled resistor (in Ω) is _____.

The GATE 2020 syllabus for ECE PDF covers a total of 8 sections: Engineering Mathematics; Networks, Signals and Systems; Electronic Devices; Analog Circuits; Digital Circuits; Control Systems; Communications and Electromagnetics.

For the MOSFET M1 shown in the figure, assume W/L = 2, VDD = 2.0 V, μnCox = 100 μA/V^2 and VTH = 0.5 V. The transistor M1 switches from the saturation region to the linear region when Vin (in volts) is __________. Note that $V_{GS}$ for M2 must be $>$ 1.0 V. The voltage (in volts, accurate to two decimal places) at $V_x$ is _______.

The width of the depletion region is W and the electric field variation in the x-direction is E(x).

Continuous-time signals: Fourier series and Fourier transform representations, sampling theorem and applications; Discrete-time signals: discrete-time Fourier transform (DTFT), DFT, FFT, Z-transform, interpolation of discrete-time signals; LTI systems: definition and properties, causality, stability, impulse response, convolution, poles and zeros, parallel and cascade structure, frequency response, group delay, phase delay, digital filter design techniques.

Thomas L. Floyd, “Electronic devices”, Conventional current version, Pearson Prentice Hall, … Assume kT/q = 25 mV. ECN-203 Signals and Systems : DCC 4 : 17. The built-in potential and the depletion width of the diode under thermal equilibrium conditions, respectively, are. Bell - Electronic Devices and Circuits - Oxford 3. Anna University EC6202 Electronic Devices and Circuits Syllabus Notes 2 marks with answers is provided below.
Assuming that the reverse bias voltage is much larger than the built-in potentials of the diodes, the ratio C2/C1 of their reverse-bias capacitances for the same applied reverse bias is _________.

Complex Analysis: Analytic functions, Cauchy's integral theorem, Cauchy's integral formula; Taylor's and Laurent's series, residue theorem.

The transistor is of width 1 μm. ECE - I Sem L T/P/D C 4 -/-/- 4.

Circuit Theory and Devices (CTD): This course intends to develop problem-solving skills and an understanding of circuit theory through the application of techniques and principles of electrical circuit analysis to common circuit problems. The course deals with the op-amp, the diode, the bipolar junction transistor, and the field-effect transistor.

The measured transconductance gm of an NMOS transistor operating in the linear region is plotted against the gate voltage VG at constant drain voltage VD. The current in an enhancement-mode NMOS transistor biased in saturation mode was measured to be 1 mA at a drain-source voltage of 5 V. When the drain-source voltage was increased to 6 V while keeping the gate-source voltage the same, the drain current increased to 1.02 mA.

Computers as Components - principles of Embedded computer system design, Wayne Wolf, Elsevier.

The charge of an electron is 1.6 × 10^-19 C. The resistivity of the sample (in Ω-cm) is _______. The slope of the
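As a quick worked example for the junction-capacitance question above (this solution is not part of the original notes, and it assumes the standard abrupt-junction dependence CJ ∝ (Vbi + VR)^(-1/2)):

CJ(7.25 V) = CJ(1.25 V) × sqrt((0.75 + 1.25) / (0.75 + 7.25)) = 5 pF × sqrt(2/8) = 5 pF × 0.5 = 2.5 pF.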
discover how to derive the massless quantum $AdS_2$ $R$-matrix from the Yangian universal $R$-matrix. The paper is structured as follows: in section \ref{s2} we recall the notion of Hopf-algebra twists; in section \ref{s3} we recall the massless $AdS_3$ $S$-matrix; in section \ref{s4} we derive the factorising twist from the universal $R$-matrix of the $\alg{gl}(1|1)$ Yangian, discuss how the coproducts transform after we (un)do the twist, and tackle the three-site problem for the monodromy matrix; in section \ref{s5} we show how to recover a simple scalar product \cite{JuanMiguelAle} using the twist; in section \ref{specu} we speculate on the general case; in section \ref{s7} we make a connection with the free-fermion realisation \cite{Dublin}; in section \ref{s8} we make an extension to mixed-flux backgrounds; in section \ref{s9} we treat the $AdS_2$ case and show how to use the Yangian universal $R$-matrix to derive the massless $R$-matrix, along with a rather unusual version of the factorising twist. We then make some Conclusions and Acknowledgments. \section{Hopf algebra twists\label{s2}} We briefly recall here the notion of Hopf algebra twist \cite{Chari:1994pz}. Given a quasi-triangular Hopf algebra $\cal{A}$, where we denote by $\mathfrak{1}$ the unit with respect to the multiplication, with $\Delta$ the coproduct, with $\epsilon$ the counit and with $\cal{R}$ the universal $R$-matrix, we can obtain another quasi-triangular Hopf algebra by {\it twisting} it. This means that, if we can find an invertible element ${\cal{F}} \in {\cal{A}\otimes \cal{A}}$ satisfying \begin{eqnarray} (\epsilon \otimes \mathfrak{1}) \, {\cal{F}} = (\mathfrak{1} \otimes \epsilon) \, {\cal{F}} = \mathfrak{1}, \qquad ({\cal{F}} \otimes \mathfrak{1})(\Delta \otimes \mathfrak{1} ) {\cal{F}} = (\mathfrak{1} \otimes {\cal{F}}) (\mathfrak{1} \otimes \Delta) {\cal{F}} \end{eqnarray} (with $\mathfrak{1}$ acting by multiplication in the above formulas), then the new coproduct and, respectively, new $R$-matrix \begin{eqnarray} \tilde{\Delta}(x) = {\cal{F}} \Delta(x) {\cal{F}}^{-1} \qquad \forall \, \, x \in {\cal{A}}, \qquad \tilde{{\cal{R}}} = {\cal{F}}^{op} {\cal{R}} {\cal{F}}^{-1}, \end{eqnarray} define a new quasi-triangular Hopf algebra. The idea of a factorising twist corresponds to finding a particular twist such that the new $R$-matrix is equal to $\mathfrak{1} \otimes \mathfrak{1}$. From the definition, this automatically implies that the old $R$-matrix can be factorised as \begin{eqnarray} {\cal{R}} =( {\cal{F}}^{op})^{-1} \, \mathfrak{1}\otimes \mathfrak{1} \, {\cal{F}} = ( {\cal{F}}^{op})^{-1} {\cal{F}}. \end{eqnarray} Because the new $R$-matrix is equal to the identity, the new coproduct must be co-commutative - although it can be highly non-trivial. In fact, one has \begin{eqnarray} \tilde{\Delta}^{op}(x) \tilde{{\cal{R}}} = \tilde{{\cal{R}}} \tilde{\Delta}(x) \qquad \forall \, \, x \in {\cal{A}}, \qquad \tilde{{\cal{R}}} = \mathfrak{1} \otimes \mathfrak{1} \qquad \leftrightarrow \qquad \tilde{\Delta}^{op} = \tilde{\Delta}. \end{eqnarray} For purely bosonic Hopf algebras a theorem by Drinfeld always guarantees the existence of a factorising twist under suitable assumptions - see \cite{Maillet,Maillet2}, where a very detailed presentation of Hopf and quasi-Hopf algebras and their twists is also offered.
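A small remark, not needed for what follows but an elementary consequence of the definitions above: since ${\cal{R}} = ({\cal{F}}^{op})^{-1}{\cal{F}}$ and the flip of tensor factors is an algebra homomorphism of ${\cal{A}}\otimes{\cal{A}}$, one has ${\cal{R}}^{op} = {\cal{F}}^{-1}{\cal{F}}^{op}$, and therefore
\begin{eqnarray}
{\cal{R}}^{op}\,{\cal{R}} = {\cal{F}}^{-1}\,{\cal{F}}^{op}\,({\cal{F}}^{op})^{-1}\,{\cal{F}} = \mathfrak{1}\otimes\mathfrak{1},
\end{eqnarray}
i.e.\ any $R$-matrix admitting a factorising twist is automatically triangular; in the representations considered below this is essentially the braiding-unitarity property of the physical $R$-matrix.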
\section{The massless $AdS_3$ $S$-matrix\label{s3}} In this paper we will only focus on the same-chirality massless $S$-matrix of the $AdS_3 \times S^3 \times T^4$ superstring, as it will be sufficient to illustrate our main results. We will also not be concerned at present with the scalar (so-called {\it dressing}) factor, but will instead focus merely on the suitably normalised matrix part. We will also make systematic use of the associated $R$-matrix. If the $S$-matrix is defined as \begin{equation} S : V_1 \otimes V_2 \longrightarrow V_2 \otimes V_1, \qquad S |v_\alpha (\gamma_1) \rangle \otimes |v_\beta (\gamma_2)\rangle = S^{\rho \sigma}_{\alpha \beta} (\gamma_1 - \gamma_2) |v_\rho (\gamma_2) \rangle \otimes |v_\sigma (\gamma_1)\rangle,\label{summat} \end{equation} the $R$-matrix is instead given by \begin{eqnarray} S = \Pi \circ R, \qquad R = \rho_1 \otimes \rho_2 \, ({\cal{R}}), \end{eqnarray} where $\Pi$ is the graded permutation on the states: \begin{eqnarray} \Pi |x\rangle \otimes |y\rangle = (-)^{[x][y]} \, |y\rangle \otimes |x\rangle, \end{eqnarray} where $[x]$ provides the fermionic number of the excitation $x$ and equals $0$ for bosons and $1$ for fermions. The notation $|v_\alpha(\gamma_i)\rangle$ denotes a basis of vectors of so-called {\it one-particle} representation: namely, we have a basis of the vector space $V_i$, which is of dimension $d_i$ (typically both $d_i=2$ for $i=1,2$ in this paper), hence $\alpha = 1,...,d_i$; such vector space carries an $\cal{A}$-representation $\rho_i:{\cal{A}} \to End(V_i)$, and such a representation is characterised by a particular parameter $\gamma_i$ (which is real for unitary representations). Such a parameter will play the role of the physical rapidity of the scattering states, but in general is to be understood as the eigenvalue of a particular Casimir element of $\cal{A}$ in the representation $\rho_i$. In (\ref{summat}) summation of repeated indices is understood. In the paper\footnote{We thank the anonymous referee for the pointing out a notational discrepancy in a previous version of the paper.} we will typically understand an italic $R$ symbol as denoting the representation $\rho_1 \otimes \rho_2$ of the corresponding $\cal{R}$ universal object (for instance, we will have $R_+$ by which we will mean $\rho_1 \otimes \rho_2 \, ({\cal{R}}_+)$, etc.). Likewise for the symbol $F$ (with respect to its corresponding $\cal{F}$). We will also exploit the difference-form of the $S$-matrix in the appropriate pseudo-relativistic variables \begin{eqnarray} \gamma_i = \log \tan \frac{p_i}{4}, \qquad i=1,2, \end{eqnarray} $p_i$ being the momentum of the $i$-th particle. The $R$-matrix then maps \begin{equation} R : V_1 \otimes V_2 \longrightarrow V_1 \otimes V_2. 
\end{equation} In the specific case we will study the matrix representation has been worked out in \cite{DiegoBogdanAle}, and reads \begin{equation}\label{eq:RLLlimtheta} \begin{aligned} &R |b\rangle \otimes |b\rangle\ = {} \, |b\rangle \otimes |b\rangle, \\ &R |b\rangle \otimes |f\rangle\ = -\tanh\frac{\gamma}{2} |b\rangle \otimes |f\rangle + {\rm sech}\frac{\gamma}{2}|f\rangle \otimes |b\rangle, \\ &R |f\rangle \otimes |b\rangle\ = {} \, {\rm sech}\frac{\gamma}{2} |b\rangle \otimes |f\rangle + \tanh\frac{\gamma}{2} |f\rangle \otimes |b\rangle, \\ &R |f\rangle \otimes |f\rangle\ = - {} \, |f\rangle \otimes |f\rangle\,, \end{aligned} \end{equation} where \begin{equation} \label{eq:diff-form} \gamma = \gamma_1 - \gamma_2\, \end{equation} and $(b,f)$ are a (boson, fermion) doublet, which we will conventionally number as $|b\rangle = |1\rangle$ and $|f\rangle = |2\rangle$ (not to be confused with the spaces $1$ and $2$ of the tensor product). We have also suppressed the $\gamma$ in the basis vectors to lighten up the notation (\ref{summat}) - here $d_i=2$ (in fact $1+1$) $\forall \, \, i=1,2$. One can verify that this $R$-matrix satisfies \begin{eqnarray} R_{21} (-\gamma) R_{12}(\gamma) = 1_2 \otimes 1_2, \qquad R_{12}(\gamma) = R, \qquad R_{21}(\gamma) = perm \circ R,\label{bru} \end{eqnarray} where $1_2$ is the two-dimensional identity matrix. Property (\ref{bru}) is the manifestation of the {\it braiding unitarity} property of the $R$-matrix in the chosen two-dimensional representation. The map $perm$ is the graded permutation on the operators: \begin{eqnarray} perm (A \otimes B) = (-)^{[A][B]} B \otimes A, \qquad A,B : V_i \longrightarrow V_i, \qquad i=1 \, \, \mbox{or} \, \, 2, \end{eqnarray} and it is only defined for operators which are homogeneous with respect to the fermionic grading. In this paper we shall only consider operators which are homogeneous in this sense. Practically, one can write \begin{eqnarray} R = R_{12}(\gamma) = R^{abcd}(\gamma) E_{ab}\otimes E_{cd}, \qquad R_{21}(\gamma) = (-)^{([a]+[b])([c]+[d])} R^{abcd}(\gamma) E_{cd}\otimes E_{ab}. \end{eqnarray} In these formulas, all repeated indices are summed over the range $1,2$. $E_{ij}$ are the matrices with all zeroes except a $1$ in row $i$ and column $j$. The $R$-matrix also has the curious property \begin{eqnarray} R_{12}^2(\gamma) = 1_2 \otimes 1_2. \end{eqnarray} \section{$R$-matrix twist\label{s4}} \subsection{Two sites from the Universal $R$-matrix} The $R$-matrix twist is obtained by making use of the universal $R$-matrix formulation of the $\mathfrak{gl}(1|1)$ Yangian, which is the quantum supergroup underlying all $AdS_3$ $S$-matrices. We first review the Khoroshkin-Tolstoy construction which provides the universal $R$-matrix and which, when evaluated in specific representations, gives us the physical $R$-matrix described in the previous section. The first thing to do is to define the (super)Yangian of $\alg{gl}(1|1)$ in the so-called second realisation of Drinfeld's \cite{Drinfeld:1987sy}, see in particular \cite{Khoroshkin:1994uk,Koro2}\cite{unive0,unive}. 
In fact what admits a universal $R$-matrix is the (super)Yangian double $DY(\alg{gl}(1|1))$, which is generated by Chevalley-Serre elements ${}{e}_n$, ${}{f}_n$, ${}{h}_n$, ${}{k}_n$, $n\in\mathbbmss{Z}$, satisfying \begin{equation} \label{eq:Lie} \begin{gathered} \comm{{}{h}_0}{{}{e}_n} = -2{}{e}_n , \qquad \comm{{}{h}_0}{{}{f}_n} = +2{}{f}_n, \qquad \acomm{{}{e}_m}{{}{f}_n} = -{}{k}_{m+n} , \\ \comm{{}{h}_m}{{}{h}_n} = \comm{{}{h}_m }{{}{k}_n} = \comm{{}{k}_m }{{}{k}_n} = \comm{{}{k}_m }{{}{e}_n} = \comm{{}{k}_m }{{}{f}_n} = \acomm{{}{e}_m}{{}{e}_n} = \acomm{{}{f}_m}{{}{f}_n} = 0 , \\ \comm{{}{h}_{m+1}}{{}{e}_n} - \comm{{}{h}_m}{{}{e}_{n+1}} + \acomm{{}{h}_m}{{}{e}_n} = 0, \qquad \comm{{}{h}_{m+1}}{{}{f}_n} - \comm{{}{h}_m}{{}{f}_{n+1}} - \acomm{{}{h}_m}{{}{f}_n} = 0. \end{gathered} \end{equation} Closely following \cite{Koro2}, one then needs to introduce the generating functions ({\it currents}) \begin{equation}
:-) Posted by: Urs Schreiber on December 31, 2008 3:01 PM | Permalink | Reply to this Re: Organizing the Pages at nLab I second the motion to just improve something if you don’t like what you see. I also don’t think that worrying about format should slow down the production of content. Like suggestions above, you can always add “What it Really Means” headings with alternative explanations. The more people that contribute content, the better it will become. People like me who are very interested, but not able to contribute much in terms of content, should be more than happy to contribute by tidying things up. I’m happy to go through pages and make cosmetic changes to free the experts up to just increase the intellectual content. Posted by: Eric on December 31, 2008 5:08 PM | Permalink | Reply to this Re: Organizing the Pages at nLab Todd writes: There are a number of spots – I think the entry on enriched categories may be one – where in effect the reader is told “you figure it out!”, which I strongly disfavor if left merely at that. Of course. I write this sort of thing when I want the reader who knows what’s going on — like you, Todd! — to get annoyed and improve the article. If not, I’ll have to fix the darned thing myself someday. We want a gleaming city full of skyscrapers; right now we have a bunch of mud huts. But you’ll see: someday we’ll have that city. The yearning for perfection should not slow the march to betterment. The key is to keep typing. Sometimes people ask me “How do you write so much?” And I answer by holding out my hands, palms down, and wiggling my fingers. But: we need more people to help! Anyone who knows something about categories, $n$-categories, homotopy theory, operads, or anything like that — spend 5 minutes at the nLab today! Start or improve an article! Please! Posted by: John Baez on December 31, 2008 6:10 PM | Permalink | Reply to this Re: Organizing the Pages at nLab Thanks for clarifying! My complaint was written before I saw how that particular entry had been updated, but I thought I had seen a number of entries like that and I mistakenly interpreted the “left to the reader” as “exercise for the reader”. Since I knew how to do that exercise, and you knew I knew, I didn’t realize I was “it”! Just me being dumb. :-) The message I keep getting [addressed to me personally] is to get off my butt and start typing! (Well, normally I type while seated, but you know what I mean.) Which I’m starting to do, you know. More will come. Posted by: Todd Trimble on December 31, 2008 7:09 PM | Permalink | Reply to this Re: Organizing the Pages at nLab Yes, it would be good if the wiki software could automatically create a table of contents from the headlines given, as happens on Wikipedia. I don’t know how hard it would be to get that. Jacques will certainly know more. Posted by: Urs Schreiber on December 31, 2008 2:57 PM | Permalink | Reply to this Re: Organizing the Pages at nLab “take this opportunity to present everything arrow theoretically.” why not clicable diagrams, with the translation “in words” appearing after a click? Posted by: yael on January 2, 2009 9:30 PM | Permalink | Reply to this Re: Organizing the Pages at nLab By the way, this General Discussion page needs a table of contents, with links, for people to find information on different topics. It’s sprawling out of control. Much of the discussion there is no longer active; it could be moved to an archive page, or even removed altogether. 
(It's always in the history if anybody really needs it, but archives are better if you want it accessible.) It is hard to see where the last contribution is. Hit ‘See changes’ at the bottom. Posted by: Toby Bartels on December 31, 2008 5:38 PM | Permalink | Reply to this Re: Organizing the Pages at nLab It is hard to see where the last contribution is. Hit ‘See changes’ at the bottom. Yes, but that’s still inconvenient. For instance, this will alert me only of the last change made, not any changes in between that I might have missed. I think: a Wiki is not a discussion forum. If possible, we should have “GeneralDiscussion” about the $n$Lab here on the blog. We should maybe create a general-purpose blog entry “General $n$Lab-Discussion” here on the blog, similar in purpose to the entry “TeXnical issues”, and link to that from the $n$Lab, instead of the the current “GeneralDiscussion” page. Posted by: Urs Schreiber on January 2, 2009 1:59 PM | Permalink | Reply to this Re: Organizing the Pages at nLab It is hard to see where the last contribution is. Hit ‘See changes’ at the bottom. Yes, but that’s still inconvenient. For instance, this will alert me only of the last change made, not any changes in between that I might have missed. Hit ‘Back in time’ next (repeatedly, until you've gone back far enough —the time stamps can help here— or it disappears). I agree that it's inconvenient, and that discussion works better here. But I want people to know that it is possible, especially as some discussion continues to happen there. Posted by: Toby Bartels on January 3, 2009 5:07 AM | Permalink | Reply to this Re: Organizing the Pages at nLab I have now written a bit in the entry $n$Lab:About on “What the $n$Lab is and what it is not.” Trying to provide helpful hints on some of the issues which we discussed above. Posted by: Urs Schreiber on January 2, 2009 1:55 PM | Permalink | Reply to this Read the post nLab -- General Discussion Weblog: The n-Category Café Excerpt: A place for general discussion concerning the nLab. Tracked: January 4, 2009 1:10 PM contributing to the nLab John wrote (by email, as quoted by David in the above entry): I’m even imagining a fiendish plan where I force my grad students to write nLab articles on topics they’re learning about. Not sure that’s a good idea. I find it a very good idea. (Maybe with force replaced by encourage). Myself, I am using the $n$Lab as a notebook for things that I am learning/thinking about, too. If all of us make a note in the $n$Lab on whatever material we come across in our daily work, we’ll have a remarkable win-win situation. Posted by: Urs Schreiber on January 4, 2009 1:32 PM | Permalink | Reply to this Re: contributing to the nLab Such nlab entries by grad students (wish I had some) would be excellent training before writing their thesis. Some bench scientists require students write a grant proposal - again good practice. And who knows the fall out as happens e.g. writing up notes from the master’s lectures - you know who I mean. Posted by: jim stasheff on January 4, 2009 2:45 PM | Permalink | Reply to this Re: Organizing the Pages at nLab John wrote (by email, as quoted by David in the above entry): However, Urs seems to like focusing on high-level folks, while I focus on low-level folks. I have two remarks on this: 1st, I’d think it is a good thing if the contributors to the $n$Lab have complementary material to offer. This is one advantage of a collaborative wiki over a private one, the differing points of view that it can offer. 
2nd, for the record I should maybe admit that I am using the blog and the wiki mainly because, and to the extent that, I feel I personally benefit from them in my research. I am not in a position to spend large chunks of time outside my research on expositional writing, even if I wanted to. What I write are notes that help me in my own research, but written
the description of the physical world. I can find a lot of examples for registration, but not many example that arbitrarily transform a 3D volume. The paper presents the problem of modelling the parallel kinematic delta system using specialized software Simmechanics constituting the extension of the Matlab environment. 001:2*pi; x = cos(theta); y = sin(theta); hbead = line (x(1), y(1),'marker','o', 'markersize',10);. H= trplot(R, options) as above but returns a handle. MATLAB is an interactive system whose basic data type is the array or matrix. replace('\r ', '\t'). Find out information about rotating coordinate system. in the Cartesian coordinate system. camorbit(dtheta,dphi,'coordsys') The coordsys argument determines the center of rotation. I in essence need to rotate them about the origin of my local system as much as my local system is rotated from my global system. 3D Coordinate Translation and Rotation Formulas for Excel. Oussama Aatiq. - "Fundamentals of Digital Image Processing" by Chris Solomon and Toby Breckon. Represent latitude-longitude data using a geographic CRS or x-y map data using a projected CRS. Changing Coordinate Systems. Coordinates address spatial dimensions, time and conditions and may fall into common predefined systems such as cartesian, polar and geographic coordinates. The Matlab/Octave rotations library is a collection of functions, bundled as m-scripts, that address computations and numerical handling of rotations This corresponds to the global z-axis, expressed in terms of the rotated coordinate frame. 36 depicts how to interpret the azimuth and elevation angles relative to the plot coordinate system. Apr 08, 2016 · So if you want to keep it really simple, create an attribute set with two attributes: 1) Energy and 2) Instantaneous attribute with energy as input and Rotate phase -90 degrees as output. We can nd the rotation matrices in Matlab using the following. The rotation service uses the following transformation matrix to change the output vectors for 2-D horizontal transformations. around Z axis) Y axis (angle e. There are a few ways of finding optimal rotations between points. Get Better Grades. Analysis and Design of Control Systems Using Matlab. In this blog, I show you how to please help me about "fnplt" i have many data and i intend plot them in a 3-dimensional coordinate(rectangular coordinate). Geocentric and Geodetic Latitudes; NED Coordinates; ECI Coordinates; ECEF Coordinates; Coordinate Systems for Display. We shall see that this is an ideal definition of the ECI coordinate system, but we won't worry about the slight rotations involved until later. Each has a specific coordinate system for rendering motion. MATLAB rotate matrix by angle. Sensor Fusion and Tracking Toolbox defaults to frame rotation. In this article. Cartesian Coordinate System - Original (0, 0) is in the bottom left corner. MATLAB will interpret y as a variable (not a value of 2). Construct a rectangular, or Cartesian, coordinate system for three-dimensional space by specifying three mutually orthogonal coordinate axes. function R = rot_coordsXYZmtx(ax,ay,az) % R=rot_coordsXYZmtx(ax,ay,az) W. I have to report the results in absolute coordinates, so I wanted to check my conversion formulas and found your calculator. (0,0,1) is a point one unit directly above the origin. (Note: The coordinate systems shown in Digital Image Processing by Gonzalez and Woods, as well as Digital Image Processing Using MATLAB by Gonzalez, Woods, and Eddins, are different from what I've shown here. 
Rotation A rotation of a vector ~vin a coordinate system Ais an operation, which modi es ~v's representation in A. View rotation using the PLAN command; Moving or rotating the UCS can make it easier to work on particular areas of a drawing. The kinematicTrajectory System object generates trajectories using specified acceleration and angular velocity. Wind Coordinates. If the rotation axis in an arbitrary axis, directed by a unit vector $$\vec{n}$$, then take $$R$$ as the following matrix, according to whether $$\vec{n}$$ is given by its spherical coordinates or its Cartesian coordinates. The following Matlab project contains the source code and Matlab examples used for rotate, offset, transform cartesian coordinates. If we want to rotate the aircraft, we perform. I need to transform a non-rotating coordinate system into a rotating one. So for each marker, one rotation and translation vector is returned. It is assumed that row number corresponds to the node number. system local to the member. The numbers x, y and z are the x-, y- and z-coordinates of P. A coordinate reference system (CRS) provides a framework for defining real-world locations. Analysis and Design of Control Systems Using Matlab. : Zrot) and then around rotated (in first rotation. (Note: The coordinate systems shown in Digital Image Processing by Gonzalez and Woods, as well as Digital Image Processing Using MATLAB by Gonzalez, Woods, and Eddins, are different from what I've shown here. A short summary of this paper. Rotate by using the sliders! Projecting 3D on 2D. Hover over values, scroll to zoom, click-and-drag to rotate and pan. To learn more about the different coordinate systems, see Coordinate Transformations in Robotics. The paper presents the problem of modelling the parallel kinematic delta system using specialized software Simmechanics constituting the extension of the Matlab environment. This MATLAB function rotates the Cartesian points using the quaternion, quat. MATLAB GUI codes are included. replace('\r ', '\t'). I've tried using Matlab's mapshow function to display my data (I've previously only displayed the image data without georeferencing). In frame rotation, the point is static and the coordinate system moves. To learn more about quaternion mathematics and how they are implemented in Sensor Fusion and Tracking Toolbox™, see Rotations, Orientation, and Quaternions. Rotation matrices are used in two senses: they can be used to rotate a vector into a new position or they can be used to rotate a coordinate basis (or coordinate system) into a new one. R = rotx (30) R = 3×3 1. These four points only represent the coordinates of the point without rotating around the system more than once. The corresponding homogeneous coordinate system equation is obtained from the relative position of each coordinate system, and then the kinematics equation of the robot manipulator is obtained [21. MATLAB stores image as a two-dimensional array, i. Я перевел код в функцию MATLAB: CalculateEllipse. This CS also determines the absolute meaning of torque and motion about the joint axis. I want to be able to rotate the shape 360 degrees in an animation. ransformationT A (coordinate) transformation is an operation, which describes a vector ~v's. Robotics System Toolbox assumes that positions and orientations are defined in a right-handed Cartesian coordinate system. MATLAB - Arrays - All variables of all data types in MATLAB are multidimensional arrays. The goal of the Matlab GUI was to allow easy manipulation of trajectories in a. 
To rotate clockwise, just use the value with a change of sign. x' and y', we need to the distance of P from the rotated axes, in terms of x, y, and θ. orthogonal coordinate system, find a transformation, M, that maps XYZ to an arbitrary orthogonal system UVW. , when is parallel to one of them) the angular momentum vector becomes parallel to the angular velocity vector. Also note that a counter-clockwise sense is. MATLAB is a technical analysis language that allows you to treat any vector of data as coordinates in your system of choice. There are many choices of cube coordinate system, and many choices of axial coordinate system. pi The number p. This sequence of rotations follows the convention outlined in [1]. The camera and mirror axes are well aligned, that is, only small deviations of the rotation are considered into the model. In the second coordinate pair we rotated in a clock-wise direction to get to the point. Plotly's MATLAB® graphs are interactive in the web browser. Translational Degrees of Freedom;
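The excerpts above repeatedly touch on the two senses in which a rotation matrix is used (rotating a point versus rotating the coordinate frame, the "frame rotation" default mentioned for the Sensor Fusion and Tracking Toolbox). The short Python/NumPy sketch below illustrates that distinction; the 30-degree angle echoes the rotx(30) example quoted above, and all other values are illustrative.

```python
import numpy as np

def rotx(deg):
    """Rotation matrix about the x-axis by `deg` degrees (right-handed convention)."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

R = rotx(30)
p = np.array([0.0, 0.0, 1.0])   # a point one unit directly above the origin

# Point (active) rotation: the point moves, the axes stay fixed.
p_rotated = R @ p

# Frame (passive) rotation: the point stays put, the axes rotate;
# its coordinates in the rotated frame are obtained with the transpose.
p_in_rotated_frame = R.T @ p

print(np.round(p_rotated, 4))            # expected roughly [ 0.  -0.5  0.866]
print(np.round(p_in_rotated_frame, 4))   # expected roughly [ 0.   0.5  0.866]
```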
\section*{Introduction} Trend-following, or momentum, strategies have the attractive property of generating trading returns with a positively skewed statistical distribution. Consequently, they tend to hold on to their profits and are unlikely to have severe `drawdowns'. They are very scalable and are employed in most asset classes---most traditionally in futures, where they are a favourite strategy among CTA (`commodity trading advisor') firms, but also in OTC markets---and by both buy- and sell-side practitioners. The basic premise behind momentum is to buy what has been going up recently, and sell what has been going down. In other words, if recent returns have been positive then future ones are more likely to be positive, and similarly with negative. Systematic strategies formalise this notion by (i) measuring momentum, essentially by smoothing out recent returns to obtain a signal that is not too rapidly-varying, and (ii) having a law that turns this signal into a trading position, i.e.\ how may contracts or what notional to have on. Put this way, the ideas that only the finest minds can understand CTA strategies, or how the theory of statistics is of central importance to their construction, or how one needs to have been steeped in managed futures for many years to build a workable strategy, are seen to be self-serving and pretentious---a conclusion implicitly arrived at, even if not thus expressed, by other authors. Before talking about skewness we may as well deal with the first moment, that is to say the expected return. it is important to understand that this is an entirely separate matter. Any statement about this depends on markets exhibiting momentum, i.e.\ serially-correlated returns. This can be ascribed to the way information is disseminated into markets or of behavioural characteristics of market participants. However, it is entirely subjective and is a matter of believing that markets will continue to behave in the way that they have done in the past. In contrast, as we analyse in detail here, even if market returns exhibit no serial correlation, during which period the strategy will produce no average return, the \emph{trading} returns of a momentum strategy will still have positive third moment. What is interesting is that the skewness characteristic is a product of the design of the strategy, whereas in long-only equities and credit the (negative) skewness is an intrinsic feature of the asset class that has to be tolerated and risk-managed. Positive skewness results from the way positions are taken. Suppose that we look at the trading returns from one particular instrument (perhaps US Tsy bond futures) of a particular period (perhaps one week) over a long period of history. Let us group these returns by the size of the underlying position. The magnitude of P\&L will typically be larger when the magnitude of the position is larger, but crucially it will typically be positive too. This is because momentum strategies typically run bigger positions \emph{when they have already made money}---as opposed to reversion strategies that follow the opposite principle. In statistical language the full distribution of P\&L is a mixture of distributions of different mean and variance: the components with a higher variance have positive mean and that is a recipe for positive skewness. Studies on the subject have generally been empirical (for a good overview see e.g.\ \cite{Till11} and references therein, and \cite{AcarSatchell02} for a general introduction to technical trading). 
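To make the claim above concrete (a positive third moment of trading returns even when market returns have no serial correlation), here is a small self-contained Python sketch. The smoothing parameter, the aggregation horizon and the Gaussian return model are illustrative assumptions of ours, not the paper's specification; the skewness appears once per-period P\&L is aggregated over a horizon comparable to the smoothing timescale.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500_000
r = rng.standard_normal(T)            # i.i.d. market returns: no serial correlation

lam = 0.97                            # EMA decay, memory ~ 1/(1-lam) ~ 33 periods (illustrative)
signal = np.zeros(T)
for t in range(1, T):
    # exponentially-weighted moving average of past returns
    signal[t] = lam * signal[t - 1] + (1 - lam) * r[t - 1]

pnl = signal * r                      # linear rule: position proportional to the signal

# Aggregate P&L over windows comparable to the smoothing timescale;
# the positive third moment shows up at these horizons.
h = 50
block = pnl[: (T // h) * h].reshape(-1, h).sum(axis=1)
skew = np.mean((block - block.mean())**3) / np.std(block)**3

print(f"mean per-period trading return: {pnl.mean():+.6f}")    # expected ~ 0 (no trend)
print(f"skewness of {h}-period trading returns: {skew:+.3f}")  # expected to be positive
```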
However, there is a decent literature on quantitative aspects. The first work was by Acar \cite{Acar92} who derived a variety of results in discrete time using different forecasting models and also ascertained that the distribution of momentum trading returns has positive skewness. Potters \& Bouchaud \cite{Potters05} consider a particular type of momentum strategy and derive rigorous results about its performance. Bruder et~al.\ \cite{Bruder11}, and an even longer extension by Jusselin et~al.\ \cite{Jusselin17}, devote considerable effort to deriving moving-average filters from an underlying model. In practice, however, this seems to create more problems than it solves because the assumed model may be wrong: better, we think, is to use a convenient definition of moving-average filter, here the exponentially-weighted moving average (EMA) as it is easily calculated by recursion, and design a strategy using those. Then in 2012 two papers were published in RISK by the author, which considerably broadened the scope of the subject. Both focused on the third-moment characteristics of momentum models. The first \cite{Martin12b} dealt with linear models, by which we mean a signal proportional to a momentum signal obtained by applying exponential smoothing to the market returns. The second \cite{Martin12c} showed how to deal with strategies defined as nonlinear transformations of momentum signals, within the same framework. This paper is a synthesis of these two. More recently Dao \cite{Dao16} focuses on the connection between convexity, option-like characteristics, and momentum strategies. In building momentum strategies, two main considerations are important. The first relates to backtesting, in other words finding what worked best in the past and assuming that it will continue to do so. Part of the problem with this is that it is too reliant on historical data: if left unchecked, it wastes an inordinate amount of time in fitting and overfitting, mainly because different models typically produce almost identical historical performance. The second relates to design: that is to say, without regard to the past, force the strategy to have certain statistical properties when the market behaves in predefined ways. One idea, which we consider in depth, is when markets are not trending (and so market returns are uncorrelated). Another is when the market does trend in a way that it did in a particular historical scenario, such as gold in the first decade of this century, or in the first 18 months from January 2019---some designs behave differently from others. There is to an extent a trade-off between these considerations: better positive skewness and better performance in certain trending scenarios may be obtained at the expense of worse average historical performance and vice versa. This necessitates subjective decision, and despite the great effort of systematic trading firms to claim that there is no discretion in the implementation of their systems---nowadays assisted by the smoke-screens of statistical theory and `machine learning'---inevitably there must be. 
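For reference, the EMA filter referred to above is usually computed by a one-line recursion (notation ours, and the normalisation convention may differ from the papers cited):
\begin{equation*}
\mathrm{EMA}_t \;=\; (1-\lambda)\,r_t \;+\; \lambda\,\mathrm{EMA}_{t-1}, \qquad 0<\lambda<1,
\end{equation*}
where $r_t$ is the latest market return and the effective averaging timescale is of order $1/(1-\lambda)$ periods, which is what makes the filter cheap to update online.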
An incidental conclusion from reading \cite{Dao16} is that the SG CTA index is very easily replicated, giving the lie to the contentions, blithely trotted out by the CTA industry, that barriers to entry are so high, that the subject can only be understood by those with years of experience, that a cohort of PhDs is required to build strategies, and that proprietary execution algorithms are important---the last of these is clearly nonsense given the low speed at which the replicating strategy in \cite{Dao16} trades (see Figure~7 in that paper). In fact, rather than clever trade execution being important for momentum strategies, it is the reverse that is true: momentum is an important ingredient in trade execution, as over short time scales many financial time series exhibit momentum. Further, the relatively poor performance of the SG CTA index since the end of the Global Financial Crisis, together with the underlying simplicity of momentum trading, should make investors question whether CTAs' management fees are justified, as well as how big an asset allocation they should receive by comparison with standard investments in equities and fixed income. A consequence of positive skewness is that the proportion of winning trades may well be below one half \cite{Potters05}. Small trading losses are common, but occasional big gains are produced when the strategy levers itself into a trend. The longevity of trend-following funds suggests that this characteristic has served them well over the years, pointing to the conclusion that the oft-asked question ``What is your fraction of winning trades?'' is misleading. The link between moments and proportion of winning trades can be formalised with the Gram-Charlier expansion, which
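The sentence above is cut off in this excerpt, but the kind of Gram-Charlier argument it refers to can be sketched as follows (our own rough derivation under simplifying assumptions, not a quotation from the paper): for a trading return $X$ with zero mean, unit variance and skewness $\gamma_1$, truncating the Gram-Charlier A series after the skewness term gives
\begin{equation*}
\mathbb{P}(\mathrm{win}) \;=\; \mathbb{P}(X>0) \;\approx\; \tfrac{1}{2} \;-\; \frac{\gamma_1}{6\sqrt{2\pi}},
\end{equation*}
so positive skewness pushes the fraction of winning trades below one half, consistent with the pattern of frequent small losses and occasional large gains described above.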
\begin{eqnarray} K^{(N_{f},\nu)}_N(z,z';m_1,\ldots,m_{N_{f}}) &=& e^{-\frac{N}{2}(V(z^2)+V(z'^2))}(-1)^{\nu}\sqrt{zz'} \prod_{f}^{N_{f}}\sqrt{(z^2+m_f^2)(z'^2+m_f^2)}\cr &&\times ~\frac{ \tilde{\cal Z}_{\nu}^{(N_{f}+2)}(m_1,\ldots,m_{N_{f}},iz,iz')}{ \tilde{\cal Z}_{\nu}^{(N_{f})}(m_1,\ldots,m_{N_{f}})} ~, \label{Krep1} \end{eqnarray} \vspace{12pt}\noindent We are now ready to take the double-microscopic limit in which $\zeta \equiv z N2\pi\rho(0)$ and $\mu_i \equiv m_i N2\pi\rho(0)$ are kept fixed as $N\!\to\!\infty$. In this limit the prefactor $\exp[-(N/2)(V(z^2)+V(z'^2))]$ becomes replaced by unity. By identifying $\Sigma = 2\pi\rho(0)$, and using the universal relation (\ref{ZZ}) we finally arrive at the following master formula \cite{DAD}: \begin{equation} K_S^{(N_{f})}(\zeta,\zeta';\mu_1,\ldots,\mu_{N_{f}}) ~=~ C_{2} \sqrt{\zeta\zeta'}\prod_{f}^{N_{f}} \sqrt{(\zeta^2+\mu_f^2)(\zeta'^2+\mu_f^2)}~\frac{ {\cal Z}_{\nu}^{(N_{f}+2)}(\mu_1,\ldots,\mu_{N_{f}},i\zeta,i\zeta')}{ {\cal Z}_{\nu}^{(N_{f})}(\mu_1,\ldots,\mu_{N_{f}})} ~.\label{mf} \end{equation} {}From this one single formula all double-microscopic spectral correlators can be computed directly from QCD chiral Lagrangians in the appropriate scaling regime. In particular, for the spectral density itself we find \begin{equation} \rho_S(\zeta;\mu_1,\ldots,\mu_{N_{f}}) ~=~ C_2 |\zeta| \prod_{f}^{N_{f}}(\zeta^2+\mu_f^2)~\frac{ {\cal Z}_{\nu}^{(N_{f}+2)}(\mu_1,\ldots,\mu_{N_{f}},i\zeta,i\zeta)}{ {\cal Z}_{\nu}^{(N_{f})}(\mu_1,\ldots,\mu_{N_{f}})} ~.\label{spec} \end{equation} The overall proportionality factor $C_2$ can be determined by using the matching condition \begin{equation} \lim_{\zeta\to\infty} \rho_S(\zeta;\mu_1,\ldots,\mu_{N_{f}}) = 1/\pi ~, \end{equation} which fixes $C_2 = (-1)^{\nu+[N_{f}/2]}$. \vspace{12pt}\noindent The higher $k$-point double-microscopic spectral correlation functions are conveniently evaluated using the double-microscopic limit of the general relation (\ref{correl}). Curiously, it is also possible to relate these higher $k$-point functions to finite-volume QCD partition functions with $2k$ additional quark species \cite{DAD}: \begin{eqnarray} \rho_S(\zeta_1,\ldots,\zeta_k;\mu_1,\ldots,\mu_{N_f}) &=& C^{(k)} \prod_i^k\left(\zeta_i\prod_f^{N_f}(\zeta_i^2+\mu_f^2)\right) \prod_{j<l}^k|\zeta_j^2-\zeta_l^2|^2 \nonumber\\ &&\times\ \frac{{\cal Z}_{\nu}^{(N_{f}+2k)} (\mu_1,\ldots,\mu_{N_f};\{i\zeta_1\},\ldots, \{i\zeta_k\})} {{\cal Z}_{\nu}^{(N_{f})}(\mu_1,\ldots,\mu_{N_f})} , \label{corrft} \end{eqnarray} Each additional imaginary quark mass $i\zeta_j$ is thus doubly degenerate. The overall proportionality constant $C^{(k)}$ can again be fixed by a matching condition. For $k=1$ the relation (\ref{corrft}) simply coincides with the previous expression for the double-microscopic spectral density. But already for $k=2$ (and all higher values of $k$) the expressions are completely different, relating as they do the spectral correlators to finite-volume QCD partition functions with different numbers of flavors. It is quite amazing that the finite-volume QCD partition function (\ref{ZLS}) has all this structure, which takes on such a simple form in random matrix language, encoded in it. In fact, by combining eqs. (\ref{correl}) and (\ref{corrft}) one obtains an infinite sequence of consistency conditions for QCD partitions. The relations become particularly transparent if we first take the additional fermion masses to physical values by replacing $\zeta_j\to -i\zeta_j$ (inspection of the explicit solution of ref. 
\cite{JSV} shows immediately that this can be done unambiguously). We then find the following infinite sequence of consistency conditions \cite{DAD}: \begin{eqnarray} &&\det_{1\leq a,b\leq k}\left[\sqrt{\zeta_a\zeta_b}\prod_{f=1}^{N_{f}} \sqrt{(\mu_f^2-\zeta_a^2)(\mu_f^2-\zeta_b^2)} {\cal Z}_{\nu}^{(N_{f}+2)}(\mu_1,\ldots,\mu_{N_{f}},\zeta_a,\zeta_b)\right] = \cr && \tilde{C}^{(k)} \prod_i^k\left( \zeta_i\prod_{f=1}^{N_{f}} (\mu_f^2-\zeta_i^2)\right) \prod_{j<l}^k|\zeta_j^2-\zeta_l^2|^2 ~ \frac{{\cal Z}_{\nu}^{(N_{f}+2k)} (\mu_1,\ldots,\mu_{N_f},\{\zeta_1\},\ldots,\{\zeta_k\})} {{\cal Z}_{\nu}^{(N_{f})}(\mu_1,\ldots,\mu_{N_{f}})^{1-k}} ~, \label{cons} \end{eqnarray} where $\tilde{C}^{(k)}$ is some overall $\mu_i$-independent normalization constant. Precisely these relations encode in the finite-volume QCD partition function the fact that in the random matrix picture the kernel (\ref{kernel}) generates all spectral correlation functions through the relation (\ref{correl}). \section{Direct computations from chiral Lagrangians} We have learned that the {\em massless} spectral sum rules \cite{LS} do not provide the proper starting point for computing the microscopic spectral density. It is the {\em double-microscopic} limit \cite{SV,JNZ,DN,WGW} that is needed. This was in fact clear already from the first demonstration of the equivalence of the random matrix theory partition function and the finite-volume QCD partition function \cite{SV,HV}. Instead of the massless spectral sum rules, one should focus on the ``massive spectral sum rules'' \cite{SV,D} because these contain the analytical structure that allows one to unravel the spectral correlators from the finite-volume partition function. This fact becomes very clear when one considers the most simple example, the massive spectral sum rule corresponding to {\em quenched} QCD. Defining \begin{equation} G(\mu) ~\equiv~ 2\mu\int_0^{\infty}d\lambda \frac{\rho_S(\lambda)}{ \lambda^2+\mu^2} ~,\label{G} \end{equation} this can be written as a Stieltjes transform: \begin{equation} \int_0^{\infty}dt \frac{\rho_S(t)/\sqrt{t}}{t+y} ~=~ G(\sqrt{y}) ~\equiv~ F(y) ~. \end{equation} The inverse of this is given by the discontinuity: \begin{equation} \frac{\rho_S(\sqrt{t})}{\sqrt{t}} = \frac{1}{2\pi i}\lim_{\epsilon\to 0} \left[F(-t-i\epsilon) - F(-t+i\epsilon)\right] ~. \label{disc} \end{equation} So if one could compute the l.h.s. of (\ref{G}) directly from a finite-volume partition function, one would have achieved a derivation of the spectral density without having at any intermediate stage to go through the random matrix theory framework at all. The trouble is that the massive spectral sum rule (\ref{G}) refers to a ``quenched quark'' of (rescaled) mass $\mu$. If it were a {\em dynamical} quark, one could compute the function $G(\mu)$ straightforwardly from \cite{D} \begin{equation} G(\mu) ~=~ \frac{\partial}{\partial\mu} \ln Z_{\nu}(\mu) - \frac{\nu}{\mu} ~, \label{G1} \end{equation} where the last term subtracts the contribution from the zero modes. The needed trick, recently discovered by Osborn, Toublan and Verbaarschot \cite{OTV}, is to compute the r.h.s. of eq. (\ref{G1}) in a finite-volume field theory that contains yet another ``quark'', now of opposite statistics and of initially different mass (so that, after taking the degenerate mass limit, the two determinants cancel in the partition function itself). 
The result is (for $\nu=0$) \cite{OTV}: \begin{equation} G(\mu) = \Sigma\mu[I_0(\mu)K_0(\mu)+ I_1(\mu)K_1(\mu)] \end{equation} a result that was first derived the other way around, from the random matrix theory result, by Verbaarschot \cite{V'}. Substituting this into eq. (\ref{disc}), one finds, straight from the finite-volume partition function, \begin{equation} \rho_S(\lambda) = \frac{1}{2}|\lambda|\left[J_0(\lambda)^2 + J_1(\lambda)^2\right] \end{equation} which of course agrees with the result obtained from random matrix theory. Not only could those authors compute the function $G(\mu)$ this way, they also managed to rewrite the general expression (\ref{disc}) in precisely the form (\ref{spec}) using the technique of partially quenched chiral perturbation theory \cite{BG}, here based on the super Lie group $U(N_f+1|1)$. Their result naturally generalizes to higher $k$-point spectral correlation functions, now given in the form (\ref{corrft}). The relevant super Lie group will here be $U(N_f+k|k)$. \section{Finite-volume corrections} We have seen that the microscopic spectral correlators have a natural interpretation as finite-size scaling functions. This makes them ideally suited for lattice gauge theory studies, and in fact there have now been Monte Carlo tests of the spectral densities of QCD in both (3+1) dimensions \cite{MC4} and (2+1) dimensions \cite{MC3}. (There have also been interesting studies of the applicability of random matrix techniques beyond the microscopic limit, in the ``bulk'' \cite{bulk}). One obvious question in that connection concerns finite-size corrections. In actual computations the volume $V$ is often far from being asymptotically large, and one could ask whether it is also possible to analytically calculate subleading corrections. For example, for the double-microscopic spectral density itself one could envisage an expansion of the kind \begin{equation} \rho_S^{(V)}(\zeta;\mu_1,\ldots,\mu_{N_{f}}) = \rho_S^{(\infty)}(\zeta;\mu_1,\ldots,\mu_{N_{f}}) \left[1 + \frac{A}{V}f(\zeta;\mu_1,\ldots,\mu_{N_{f}}) + \ldots\right] ~, \end{equation} with $A$ some dimensionful constant, and $f$ a correction-to-scaling function. One could hope that such corrections could be computed analytically using the random matrix theory formulation. In fact, if we go back to the derivation of eq. (\ref{Krep1}) from (\ref{Krep}), we could try to keep the subleading corrections that come from ignoring the difference between $N$ and $N-1$ in (\ref{Krep}). However, on top of these we can also get subleading contributions from the potential $V(\lambda^2)$, even in the microscopic limit. We therefore conclude that such subleading $1/N$ corrections in the random matrix picture will be {\em non-universal}, and hence cannot be expected to be related to the Dirac eigenvalue spectrum. This is completely in accord with the field theory picture, in which subleading terms in $1/V$ will involve non-static modes of the pseudo-Goldstone bosons. The kinetic term $\frac{1}{2}f_{\pi}^2 Tr[\partial_{\mu}U\partial_{\mu}U]$ in the effective Lagrangian can therefore not be neglected. A new dimensionful scale (namely $f_{\pi}$) has entered, and the Dirac spectrum will cease to be a scaling function related to just $V$ and $\Sigma$. Of course, one could try to systematically analyze $1/V$ corrections in this very precise framework of the effective Lagrangian.
The most natural starting point will unfortunately not be the usual expansion around the kinetic term, but rather a low-momentum expansion around the mass term Tr$[{\cal M}U]$. \section{Flavor dependence} We finally address the question of the flavor dependence of all these results. The number of flavors enters in a very simple way in both the field theory and random matrix picture. In the former it determines the coset integration for the effective Lagrangian, while in the latter it enters only through the strength of the determinant in the expression (\ref{ZRM}). Both lead to an extremely mild dependence on the number of flavors. This is because we throughout normalize the chiral condensate to one flavor-independent number $\Sigma$, -- a convenient normalization because it puts the different theories with different flavor content on the same common scale. Clearly the whole framework collapses as the number $N_f$ exceeds the value $N_f^*$ above which QCD no longer supports spontaneous chiral symmetry. If we allow ourselves to treat this upper number of flavors $N_f^*$ as a free and tunable parameter, then a normalizable condensate $\Sigma$ will simply cease to exist precisely at $N_f=N_f^*$. In the random matrix theory context this may correspond to hitting the boundary where $\rho(0) \to 0$ \cite{ADMN'}. Beyond this point all results discussed here will no longer be valid. Just at the point where the condensate disappears one can define critical exponents that count the rate at which $\rho(\lambda)$ vanishes as $\lambda\to 0$ \cite{JNPZ}; however they will here not correspond to a physical phase transition
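As a quick numerical cross-check of the quenched microscopic spectral density quoted in the previous section, and of the matching condition $\rho_S(\zeta)\to 1/\pi$ used to fix the normalisation, the Bessel-function expression can be evaluated directly; the short script below is purely illustrative and not part of the original text.

```python
import numpy as np
from scipy.special import j0, j1

def rho_quenched(lam):
    """Quenched microscopic spectral density, rho_S = (|lambda|/2) [J0^2 + J1^2]."""
    lam = np.abs(lam)
    return 0.5 * lam * (j0(lam)**2 + j1(lam)**2)

for lam in (1.0, 10.0, 100.0, 1000.0):
    print(f"lambda = {lam:7.1f}   rho_S = {rho_quenched(lam):.6f}")

print(f"1/pi             = {1/np.pi:.6f}")   # the large-lambda limit the density should approach
```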
# 2 2 (k - 2)(8k + 1) 2"65) ###### Question: 2 2 (k - 2)(8k + 1) 2"65) #### Similar Solved Questions ##### 2 8 Cor(6 ) 8 Csc(0) 2 8 Yante) t0 undelined, sin(e) (if possible) six triqonomethcenter 2 8 Cor(6 ) 8 Csc(0) 2 8 Yante) t0 undelined, sin(e) (if possible) six triqonomethc enter... ##### 5. cylindrical rod of radius r = 2.00 cm and mass 1.25 kg is upright on the edge of a rotating disk of mass 10.0 kg and radius 25.0 cm as is shown in diagram (a) below. The system is rotating at 15.0 rad/s. The rod falls on its side as shown in diagram (b): Diagrams (c) and (d) present a side view. What is the new angular velocity of the system? Marks: 5(c)Looking downSide View 5. cylindrical rod of radius r = 2.00 cm and mass 1.25 kg is upright on the edge of a rotating disk of mass 10.0 kg and radius 25.0 cm as is shown in diagram (a) below. The system is rotating at 15.0 rad/s. The rod falls on its side as shown in diagram (b): Diagrams (c) and (d) present a side view. ... ##### Java Question Hey guys can you help me with this question ?Write the definition of a class, swimmingPool, to implement the proper-ties of a swimming pool.Your class should have the instance variables to store thelength ( in feet), width ( in feet), depth ( in feet), the fillrate ( in gallons per minute) at which ... ##### A heater element is made of Nichrome with resistivity of 5.00 X 10-7 n.m and cross sectional area of7.50 X 10-7 m?. The heater element operates at 110 V and dissipates 3500 Watts of power:What is the curent density? A heater element is made of Nichrome with resistivity of 5.00 X 10-7 n.m and cross sectional area of7.50 X 10-7 m?. The heater element operates at 110 V and dissipates 3500 Watts of power: What is the curent density?... ##### Suppose that the height (in centimeters) of a candle is a linear function of the amount... Suppose that the height (in centimeters) of a candle is a linear function of the amount of time (in hours) it has been burning. After 14 hours of burning, a candle has a height of 20.6 centimeters. After 30 hours of burning, its height is 11 centimeters. What is the height of the candle after 26 hou... ##### Qrder: Tegretol 0.3 @ P.o. bid Awailable:NDC 0003 0052-30 FSC 1021 6505-01-153-4524 Tegretol" 100 mg carbamazepine USP1 W Chewablo Tablets J00 tablet? J Dlsponto In Ilahl Vant reailnt 1 contiinet (USPI 20 1 Caullor; Federsl 'polat 1 Kalnton-cnota mai 433m 8 5 Dosage: Order: Diazepam mg via nasogastric tube Lid Available: Diazepam oral solution 5 mg per 5 mL milliliters should the client receive per dosage? mL How many will the client receive per day? mg b. How many milligrams Order: Qrder: Tegretol 0.3 @ P.o. bid Awailable: NDC 0003 0052-30 FSC 1021 6505-01-153-4524 Tegretol" 100 mg carbamazepine USP 1 W Chewablo Tablets J00 tablet? J Dlsponto In Ilahl Vant reailnt 1 contiinet (USPI 20 1 Caullor; Federsl 'polat 1 Kalnton-cnota mai 433m 8 5 Dosage: Order: Diazepam m... ##### Given Al R={{17},{2},{5,11},{13,14,101} } Determine Represent Ras J set of ordered pdirs Given Al R={{17},{2},{5,11},{13,14,101} } Determine Represent Ras J set of ordered pdirs... ##### (1 point) For each of the following; perform the indicated operations on the vectorsa =2j+k,6=-27-3j+2k,2=47+2j.(a) 4a + 3b =(b) 6a + 2b _ 27 = (1 point) For each of the following; perform the indicated operations on the vectors a =2j+k,6=-27-3j+2k,2=47+2j. (a) 4a + 3b = (b) 6a + 2b _ 27 =... ##### CS-320 Computer Organization and Architecture Homework 7 Due: 04/15/2019 1. (25 points total) Consider the following... 
CS-320 Computer Organization and Architecture Homework 7 Due: 04/15/2019 1. (25 points total) Consider the following loop. LOOP: ld x10, e(x13) ld x11, 8 (x13) add x12, x10, x11 subi x13, x13, 16 bnez x12, LOOP Assume that perfect branch prediction is used (no stalls due to control hazards), that t... ... ##### 17) The higher the bowling score the better. The lower the golf score the better. Assume... 17) The higher the bowling score the better. The lower the golf score the better. Assume both are normally distributed a) Suppose the mean bowling score is 155 with a standard deviation of 16 points. We will give a trophy for the best 5% of scores. What score must you get to receive a trophy? Suppos... ##### 2. Provide the major organic product of the following reaction: 1. CH CH CH MgBr (2... 2. Provide the major organic product of the following reaction: 1. CH CH CH MgBr (2 eq.) OCH3 2. H30 1. CHỊCOH H2C 2, CHỊCH,MgBr 3. H, H2O 1. CH,CH,MgBr (2 eq) H₂C 2. H', H2O... ##### Consider a solid metal sphere (S) a few centimeters in diameter and a feather (F). For each quantity in the list that follows, indicate whether the quantity is the same, greater, or lesser in the case of S or in that of F. Explain in each case why you gave the answer you did. Here is the list: (a) the gravitational force, (b) the time it will take to fall a given distance in air, (c) the time it will take to fall a given distance in vacuum, (d) the total force on the object when falling in vacuu Consider a solid metal sphere (S) a few centimeters in diameter and a feather (F). For each quantity in the list that follows, indicate whether the quantity is the same, greater, or lesser in the case of S or in that of F. Explain in each case why you gave the answer you did. Here is the list: (a) t... ##### Describe the end behavior of the graph of each function. Do not use a calculator. $P(x)=-\sqrt{7} x^{3}-4 x^{2}+2 x-1$ Describe the end behavior of the graph of each function. Do not use a calculator. $P(x)=-\sqrt{7} x^{3}-4 x^{2}+2 x-1$... ##### 2 A company preparing for a Chapter 7 liquidation has listed the following liabilities: 56 nts... 2 A company preparing for a Chapter 7 liquidation has listed the following liabilities: 56 nts . • Note payable A of $134,000 secured by land having a book value of$72,000 and a fair value of $92,000. Note payable B of$164,000 secured by a building having a $82,000 book value and a$62,000 fa... ##### Consider that you prepared a solution by mixing 0.21 g solute with 9.52 g of solvent.... Consider that you prepared a solution by mixing 0.21 g solute with 9.52 g of solvent. If you measured that the solution had a molality of 0.154 m, what is the molar mass of the solute? Express your answer numerically to three significant figures.... ##### Use the modular exponentiation algorithm from lecture to compute 7200 mod 91. Show your work Use the modular exponentiation algorithm from lecture to compute 7200 mod 91. Show your work... ##### 4. Financial ratios in the forecasting process Your boss has asked you to take a closer... 4. Financial ratios in the forecasting process Your boss has asked you to take a closer look at your company's credit policies. You have been given the following information: Accounts receivable balance: $715,000 Average daily sales:$12,980 Weighted average cost of capital: 8% Your firm's d... 
##### Question 11 A patient has a metabolic mutation that causes him to develop low blood sugar and fatigue upon exercise in between meals. You immediately suspect that he cannot store glucose as glycogen in his liver, but you learn that he actually has excessive glycogen deposits in his liver. The simplest explanation for this condition: he has low activity of hexokinase; he has low activity of phosphoprotein phosphatase-1; he has low activity of glucose 6-phosphatase in his liver endoplasmic reticulum; he has high activity
wheel that are important to consider for the purposes that it is used in this event. The wheels mass, or more appropriately, its rotational inertia is one of the important of those properties. As we know from physics, the resistance to acceleration in a linear motion is different from the resistance to acceleration in a rotational motion. It is notable that the wheel in a scrambler is a rolling object, and thus it is a great mistake to ignore both components of its motion. It is a well known derivation from physics that the total energy of the rolling wheel is: 1. $E = 1/2 \left( m R^2 + I \right) {\omega}^2$ E = total energy; m = mass of the wheel; R = radius of the wheel; I = rotational inertia of the wheel around its axel; ? (omega) = radial velocity Note that radial velocity is easily related to the normal velocity of the wheel(and thus the car it is attached to by a simple equation: 1. $v = \omega \times R$ v = velocity of the car It can be noted that the increase in mass, radius and rotational inertia all will lead to the wheel storing more energy in it that can be better used to accelerate the car. Note that the radius term is squared, so any increase in the wheels radius will severely impact its energy properties. Lastly, the inertial component is very often dependent on the squared radius of the wheel as well, further increasing the importance of the radius of the wheel. Note, that nothing has been said whether having a large mass of the wheel is bad or good. This question cannot be answered decisively in either direction. Smaller mass wheels (i.e. wheels that are small and light) allow for your scrambler to go faster, but they will also have to spin faster and thus are very susceptible to axle friction. Many teams with tiny wheels failed to reach the wall simply because the cars, while launched at respectable speed, were bogged down by friction. Ways to reduce this friction will be discussed shortly. Teams with large wheeled scramblers enjoy lumbering, but roughly constant speeds. Once you accelerate the beast of a lawnmower wheel, it will not want to stop for a while. Note that the wheel radius is of a particular importance to one of the integrated mass scrambler types; that will be discussed later. The second important aspect of a wheel in a scrambler is its traction. Traction is of supreme importance for the car's stability, and when the wheels are used as brakes, for its braking. The force of static friction that a wheel exerts on a ground can be approximated well by the following equation(note that static friction is used, the wheel is assumed not to be skidding, which would require us to use the kinetic friction): 1. $F_f = m g k$ Ff = force of friction; m = mass of that the wheel supports; k = the coefficient of static friction Since this friction is always desired by the scrambler car (unless you specifically desire your scrambler to skid out of control at the drop of a hat) it appears that it can simply be increased by increasing the mass of the car. This is dubious for multiple reasons, first is that this mass increase will make the car slower, and secondly is that by the Newton's second law, 1. $F = m a$ F = force; m = mass; a = acceleration The equation (3) can be rewritten as this(assuming that the mass supported by the wheel is equal to the total mass of the car, a poor approximation as shall be seen later on): $m a = m g k$ 1. 
$a = gk$ a = acceleration due to friction The m's cancel, and it is appears that the acceleration due to the force of friction is independent of the mass of the car! This is a naive understanding of the processes involved however. The equation (3) assumes that the wheel's geometry does not change when more weight is applied. This is true for rigid wheels, but for anything that is even mildly deformable it no longer applies. The deviation from ideal behaviour is not that significant however, and thus the result from equation (5) is largely valid: the only way to increase the traction of the wheel is to increase its coefficient of friction. With the theory covered, let us now discuss the material considerations of the wheels. #### Material Considerations Physics is all nice and good, but given the limited budgets of most teams it has to be eclipsed by the availability of the materials. Most teams use some sort of pre-made disks for their wheels, a common sight are CD/LP based wheels. Roller blade wheels are often visible as well. Better funded teams may wield custom made acrylic wheels, or even those cut from sheets of balsa. Harking to the first property of a wheel discussed above lighter wheels are generally better, and thus teams go to great lengths to lighten their wheels. Given good drilling equipment it is advisable to drill a series of holes in the wheel to make it lighter. Take care to do it in a symmetrical fashion, and not to weaken the wheel too much. Note that from equation (1) it can be seen that the greatest effect shall be seen from reducing the weight of the rims of the wheel. And as always, look at the professional wheels for guidance, high-speed bicycles often have very efficient designs for their wheels. To reduce the axle friction teams use ball bearings. These gadgets, while sometimes expensive and tough to find, will often nullify any problems associated with small wheels. Keep these free from dust and well lubricated with grease or other lubricant. For traction many teams use rubber bands around their wheel rims, some use latex gloves as a faster and easier alternative. Latex tubing slit lengthwise also makes a good traction enhancer. The rules prohibit any lasting glues from being used for traction purposes, so take care not to use anything like that. And, in general, it is good to make sure that your wheels are straight, that the axles are straight, and that the bearings are mounted square to the car's chassis. It is never a good idea to have your car travel in a wide arc towards the wall. Most wheels happen to have up to 4 holes all the same size. This strategy is all good and well if your using Plexiglas,milled aluminum, steel, etc., etc. But if you are using CDs, fibrous alloys, etc., etc. this is not going to work. It will ruin the structural integrity of the wheel. ### Brakes While some teams cannot even get their scrambler to travel the announced distance, those that do face the problem of the egg traveling more than the 1 m/s (average upper limit for the speed that the egg can survive decelerating from) towards a very solid wall. Generally, a brake is used. #### Physical Considerations While the equation (5) is a good approximation for a single wheel scrambler, such things are a rarity. A more careful analysis is required. 
There are four main questions that must be considered when designing the braking system of a scrambler: how long to make the car, where to put the center of mass, which wheels (front versus back) work better for braking, and how to avoid skidding. To answer these questions, we shall consider a simple model of a braking car. As can be seen from the picture, N1 and N2 are the normal forces exerted on the car by the floor at the back and front wheel axles respectively. F1 and F2 are the resulting forces of friction. The center of
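As a numeric illustration of equations (1) and (2) from the wheel discussion above, the short Python sketch below compares two hypothetical wheels of equal mass and radius given the same kinetic energy budget: a hoop-like wheel with most of its mass at the rim and a uniform solid disk. All numbers are invented purely for illustration.

```python
import math

def car_speed(E, m, I, R):
    """Translational speed of a rolling wheel carrying total kinetic energy E.

    From E = 0.5 * (m*R**2 + I) * omega**2 and v = omega * R (equations 1 and 2):
    v = sqrt(2 * E / (m + I / R**2))
    """
    return math.sqrt(2.0 * E / (m + I / R**2))

E = 1.0      # joules of kinetic energy given to the wheel (illustrative)
m = 0.05     # kg, wheel mass (illustrative)
R = 0.06     # m, wheel radius (illustrative)

hoop_I = m * R**2        # mass concentrated at the rim
disk_I = 0.5 * m * R**2  # uniform solid disk

print(f"hoop-like wheel:  v = {car_speed(E, m, hoop_I, R):.2f} m/s")
print(f"solid-disk wheel: v = {car_speed(E, m, disk_I, R):.2f} m/s")
# Same mass and energy budget: the lower-inertia wheel leaves more energy
# for translation, so the car attached to it moves faster.
```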
\section{Introduction} Unmanned Aerial Vehicle (UAV) development has shown valuable potential in the delivery market. Companies such as Amazon and Alibaba have developed a considerable interest in drone delivery and have started competing to test drones that deliver packages \cite{r1}. Various UAVs serve different purposes, including the transportation of food, medical supplies, and packages. A key aspect of drone delivery is autonomous landing. Recently, researchers have shown an increased interest in autonomous UAV landing using fiducial markers. The purpose of using fiducial markers is to estimate the pose of the vehicle by recovering its six degrees of freedom \cite{r2}. A number of techniques, such as Apriltag \cite{r2} and ARTtag \cite{r5}, have been developed and adapted to explore autonomous landing. When two different UAVs cannot communicate and share information, that is, when there is no vehicle safety communication (VSC) \cite{r4}, there is an urgent need to address the safety problems caused by collisions during autonomous landing. \begin{figure}[htbp] \centerline{\includegraphics[scale=.3]{collision_avoidance.png}} \caption{Diagram of proposed framework} \label{fig} \end{figure} Due to the complexity of the business models of delivery UAVs, the conflict of “Free Flight” \cite{r3} can occur and lead to dangerous movements. For a large-scale UAV, one of the greatest challenges is to estimate the spatial relation with another nearby UAV so that it can achieve a safe landing path. Depending on the functionality of the UAVs, different priorities need to be assigned to the vehicles if they are landing close to each other at nearly the same time. In this research, we propose a strategy to resolve the risk of collision when two different levels of UAV are landing on close paths. Apriltags are used for navigation and position estimation because of their efficiency, low false-positive rate, and robustness \cite{r2}. The two different levels of UAV are categorized as Level I and Level II, and they land on nearby platforms. To address the problem of having no VSC \cite{r4}, a vision-based collision avoidance method is introduced. An inexpensive detection algorithm is implemented to achieve real-time decision-making. Moreover, the YoloV4 \cite{r7} deep learning approach is adopted on the Level II UAV to obtain further in-depth object detection information. Non-Maximum Suppression (NMS) \cite{r8,r11} is utilized to avoid multiple bounding boxes and thus raise detection accuracy. The Level I UAV is labeled by a bounding box in the view of the Level II UAV. The path of the bounding box allows the Level II UAV to determine whether the Level I UAV has finished landing. The Level II UAV will safely move to its landing zone after it confirms the landing of the Level I UAV. \section{Proposed Method} \subsection{Estimation of position} Recent Apriltag detection has improved detection speed and localization accuracy by implementing a continuous boundary segmentation algorithm \cite{r2}. To estimate the vehicle's 3D coordinate in the world coordinate system, the tag position must be obtained first. Assuming the Level II UAV always sees the Apriltags, the correspondence between the 3D reference points of each Apriltag's four corners and their 2D projections must be resolved, which is the “Perspective-n-Point” (PnP) problem. Solving the PnP problem assumes a calibrated camera, so it requires the camera intrinsic matrix $A$ obtained from camera calibration.
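A minimal sketch of this pose-estimation step with OpenCV's PnP solver is given below; it anticipates the projection relation written out in the next paragraph. The tag size, corner pixel coordinates and intrinsic values are placeholders, and this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np
import cv2

TAG_SIZE = 0.16  # metres, hypothetical Apriltag edge length

# 3D corners of the tag in the world (tag) frame, on the z = 0 plane.
object_points = 0.5 * TAG_SIZE * np.array([
    [-1, -1, 0],
    [ 1, -1, 0],
    [ 1,  1, 0],
    [-1,  1, 0],
], dtype=np.float64)

# 2D corner detections in the image (pixels), e.g. from an Apriltag detector (made up here).
image_points = np.array([
    [312.4, 240.1],
    [398.7, 238.9],
    [401.2, 327.5],
    [310.8, 329.6],
], dtype=np.float64)

# Intrinsic matrix A from camera calibration (placeholder values).
A = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, A, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix part of [R | t]
print("camera-frame tag position (m):", tvec.ravel())
```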
The closed-form solution is used to obtain the intrinsic parameters of $A$ in camera calibration \cite{r6}. A primary concern in solving the PnP problem is coordinate transformation. The system consists of the pixel, image, camera, and world coordinate systems. To solve the PnP problem and relate the different coordinate systems, the following relation \cite{r9} is used \begin{equation} s\: p = A \: [R \: |\: t ]\: P_w \end{equation} where $A$ is the $3\times3$ camera intrinsic matrix. The joint matrix $[R \,|\, t]$ consists of the rotation $R$ and translation $t$, which are obtained from the Apriltag's coordinates and orientation. $P_w$ represents the 3D point with respect to the world coordinate system, the corresponding 2D pixel with respect to the image coordinate system is denoted by $p$, and $s$ is the scaling factor \cite{r9}. \subsection{YoloV4 dataset cloud training} \begin{figure}[htbp] \centerline{\includegraphics[scale=.25]{labeling2.jpg}} \caption{Example of labeling} \label{fig} \end{figure} One of the concerns about real-time detectors is the tradeoff between graphics processing unit (GPU) usage and detection accuracy. The improved pipeline computation implemented in YoloV4 produces high-quality object detection in real time \cite{r7}. Therefore, YoloV4 is applied in our collision avoidance strategy to compensate for the heavy memory usage during real flight time. YoloV4 provides a faster frame rate and a more accurate average-precision detector \cite{r7}. Figure 2 shows an example of labeling the Level I UAV in the training dataset. The output images are cropped to the size required by the detector. Cloud dataset training is an innovative and convenient training method. It does not require a sophisticated hardware configuration, since the environment has already been built on the cloud server. Google Colab Pro \cite{r10}, recently introduced by Google, connects to powerful back-end Google cloud servers and greatly accelerates the training process. \subsection{Collision avoidance} The Level I UAV has no awareness of the Level II UAV. It will perform an automatic landing on the Apriltag that is placed on the ground. To track the Level I UAV's landing path in the Level II UAV's onboard camera, we calculate the gap between the previous and current bounding boxes with respect to the image coordinate system. The collision avoidance procedure requires detecting when the Level I UAV has completed its landing. \begin{figure}[htbp] \centerline{\includegraphics[scale=.38]{Diagram8.jpg}} \caption{Illustration of image coordinate system} \label{fig} \end{figure} Figure 3 shows the 2D image coordinate system, in which the top left corner is the origin. Each frame is processed into a 4-dimensional blob after detection. The blob is denoted as $[x, y, w, h]$, where $x, y$ are the coordinates of the center of a bounding box and $w, h$ are its width and height. One blob is a collection of images with the same width, height, and depth. To increase detection accuracy during real flight time, we use Non-Maximum Suppression to filter out bounding boxes that have poor accuracy. The filter consists of two parts. To minimize the number of candidates, we first keep only the predictions whose confidence score is higher than 0.5, which yields a new set. Let $P$ represent the set of all predictions, and $P'$ be the new filtered set.
\begin{equation} P' = \{\, p \in P \mid c\_score(p) > 0.5 \,\} \end{equation} \begin{figure}[htbp] \centerline{\includegraphics[scale=.22]{bounding_box.jpg}} \caption{Intersection area between bounding box 1 and 2} \label{fig} \end{figure} The prediction with the highest confidence score in $P'$ is selected, and its Intersection over Union (IoU) with every other element of $P'$ is then calculated. The IoU value is the intersection area between the highest-scoring bounding box and the other selected box in $P'$, divided by the area of their union. Figure 4 shows the grey area as the intersection of the two bounding boxes. If the IoU exceeds the IoU threshold that we define, the selected bounding box is removed from $P'$. The process is repeated until no prediction is left in $P'$. The IoU threshold is $0.4$ in our experiment. \begin{figure}[ht] \subfloat[Bounding boxes without NMS]{ \begin{minipage}[c][0.9\width]{ 0.23\textwidth} \centering \includegraphics[width=1\textwidth]{NMS2.jpg} \end{minipage}} \subfloat[Bounding box with NMS]{ \begin{minipage}[c][0.9\width]{ 0.23\textwidth} \centering \includegraphics[width=1\textwidth]{NMS1.jpg} \end{minipage}} \caption{Difference between the absence and presence of NMS} \end{figure} Figure 5(a) shows multiple bounding boxes with various confidence scores before NMS filtering. Figure 5(b) shows the result after NMS filtering. The bounding box coordinate in real time is represented by $(x_t, y_t)$. The function $f(t)$ is the distance between the current and previous bounding boxes \begin{equation} f(t) = \sqrt{(x_{t+\Delta t} - x_t)^2 + (y_{t+\Delta t} - y_t)^2} \end{equation} where $(x_t,y_t)$ is the previous bounding box coordinate and $(x_{t+\Delta t},y_{t+\Delta t})$ is the current one. The change in the image x-coordinate is expected to be slight, since the landing mostly affects the bounding box along the y-axis. The sampling time of the vehicle is represented as $\Delta t$. The
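The two-stage filter just described (confidence threshold 0.5, IoU threshold 0.4) can be sketched in a few lines of Python with boxes in the $[x, y, w, h]$ centre format used above; this is an illustrative reimplementation, not the authors' code.

```python
def iou(a, b):
    """IoU of two boxes given as (x_center, y_center, w, h)."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2]*a[3] + b[2]*b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(predictions, conf_thresh=0.5, iou_thresh=0.4):
    """predictions: list of (box, confidence); returns the boxes kept by NMS."""
    # Part 1: drop low-confidence predictions (this is the set P').
    p = [q for q in predictions if q[1] > conf_thresh]
    kept = []
    # Part 2: repeatedly keep the highest-scoring box and remove heavy overlaps.
    while p:
        p.sort(key=lambda q: q[1], reverse=True)
        best = p.pop(0)
        kept.append(best)
        p = [q for q in p if iou(best[0], q[0]) <= iou_thresh]
    return kept

detections = [((100, 120, 40, 60), 0.91), ((104, 118, 42, 58), 0.75),
              ((300, 200, 50, 50), 0.40)]
print(nms(detections))   # the overlapping duplicate and the low-confidence box are removed
```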
12) [EASY] MOSFET Amplifier – multiple choice question set.

Questions and discussions. Answer ALL the questions in the ANSWER BOOK; do not circle your answers directly on the examination. Each question has only ONE correct answer; a correct answer scores 1 and an incorrect answer scores 0. (The WSET Level 1 Award in Sake examination paper, for example, consists of 30 multiple-choice questions.) A periodic table is printed in the test booklet, as well as a table of information presenting various physical constants and a few conversion factors among SI units. It is not necessary to have covered all of the topics below to reach the pass mark (50%). In addition to confusing and frustrating students, poorly written test questions yield scores of dubious value that are inappropriate to use as a basis for evaluating student achievement. Castle Learning offers oxidation-reduction questions at several levels: a handful at the primary and elementary school levels, roughly 95-120 at the middle school level, and 325-350 at the high school level.

QUESTION ONE (Multiple Choice)

1. Insurance authors have traditionally defined risk as
A) any situation in which the probability of loss is one
B) any situation in which the probability of loss is zero
C) uncertainty concerning the occurrence of loss
D) the probability that a loss is occurring

2. If an atom is reduced in a redox reaction, what must occur to another atom in the system? (Another atom must be oxidized; reduction and oxidation always occur together.)

3. Consider the following reaction: 3A → 2B. The average rate of appearance of B is given by Δ[B]/Δt.

4. In which reaction does the oxidation number of oxygen increase?
A) HCl(aq) + NaOH(aq) → NaCl(aq) + H2O(l)
B) Ba(NO3)2(aq) + K2SO4(aq) → BaSO4(s) + 2 KNO3(aq)
C) 2 H2O(l) → 2 H2(g) + O2(g)

5. Which reaction involves neither oxidation nor reduction?
(A) formation of sulfur dioxide from sulfur
(B) formation of water by action of sodium hydroxide and nitric acid
(C) formation of iron(II) sulfate by action of iron on copper sulfate solution
(D) photosynthesis by green plants in the sunlight
(E) union of iron with sulfur

6. Fatty acid oxidation:
a) The major site of fatty acid β-oxidation is the peroxisome; the mitochondria also contain enzymes for this pathway.
b) Within the liver, peroxisomes serve to oxidise very long chain fatty acids to medium-chain products.

Notes on oxidation-reduction. Oxidation is the loss of electrons from a molecule. Oxygen is normally assigned an oxidation number of -2 (rule 3), and by rule 5 the oxidation numbers in a neutral compound must sum to 0; magnesium oxide, for example, is made up of an Mg2+ ion and an O2- ion, while in diamond the oxidation number of carbon is zero. To write an electrochemical cell in shorthand notation, the oxidation half-reaction (two species) is written on the left and the reduction half-reaction (two species) on the right. The standard hydrogen electrode (SHE) reference half-cell is Pt | H2 (1 atm) | H+ (1 M). When two half-reactions are combined into a galvanic (voltaic) cell, the half-reaction with the more positive reduction potential is the one that is reduced. A salt bridge provides a path for the flow of positive and negative ions. In glycolysis, most of the free energy available from the oxidation of glucose remains in pyruvate, one of the products of glycolysis. In the oxidation of an alcohol to an aldehyde there is a net loss of two hydrogen atoms, so it is indeed an oxidation. Salts containing carbonate anions are insoluble except for those containing alkali metals or ammonium. The oxidation of polycyclic aromatic compounds has also been studied in systems consisting of laccase from Trametes versicolor and so-called mediator compounds.

Further practice: balance the oxidation-reduction reaction Ag + NO3^- → Ag+ + NO2; work out, from pairs of electron-half-equations, the ionic equation for the reaction concerned (for example the reduction of manganese(IV) oxide, MnO2, to manganese(II) ions, Mn2+); and balance the reduction half-reaction $$MnO_4^- \rightarrow Mn^{2+}$$.
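As a worked illustration of balancing that reduction half-reaction in acidic solution (balance Mn first, then O with water, then H with protons, and finally charge with electrons):

$$MnO_4^- + 8H^+ + 5e^- \rightarrow Mn^{2+} + 4H_2O$$

The five electrons gained correspond to manganese passing from oxidation state +7 in $$MnO_4^-$$ to +2 in $$Mn^{2+}$$.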
for the model if that argument is not provided. - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). Defaults to `False`. truncation (bool, str or [TruncationStrategy], optional): Activates and controls truncation. Accepts the following values: - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided. - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). Defaults to `False`. return_position_ids (bool, optional): Whether to include tokens position ids in the returned dictionary. Defaults to `False`. return_token_type_ids (bool, optional): Whether to include token type ids in the returned dictionary. Defaults to `True`. return_attention_mask (bool, optional): Whether to include the attention mask in the returned dictionary. Defaults to `False`. return_length (bool, optional): Whether to include the length of each encoded inputs in the returned dictionary. Defaults to `False`. return_overflowing_tokens (bool, optional): Whether to include overflowing token information in the returned dictionary. Defaults to `False`. return_special_tokens_mask (bool, optional): Whether to include special tokens mask information in the returned dictionary. Defaults to `False`. return_dict (bool, optional): Decide the format for returned encoded batch inputs. Only works when input is a batch of data. :: - If True, encoded inputs would be a dictionary like: {'input_ids': [[1, 4444, 4385, 1545, 6712],[1, 4444, 4385]], 'token_type_ids': [[0, 0, 0, 0, 0], [0, 0, 0]]} - If False, encoded inputs would be a list like: [{'input_ids': [1, 4444, 4385, 1545, 6712], 'token_type_ids': [0, 0, 0, 0, 0]}, {'input_ids': [1, 4444, 4385], 'token_type_ids': [0, 0, 0]}] Defaults to `True`. return_offsets_mapping (bool, optional): Whether to include the list of pair preserving the index of start and end char in original input for each token in the returned dictionary. Would be automatically set to `True` when `stride` > 0. Defaults to `False`. add_special_tokens (bool, optional): Whether to add the special tokens associated with the corresponding model to the encoded inputs. Defaults to `True` pad_to_multiple_of (int, optional): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). Defaults to `None`. return_tensors (str or [TensorType], optional): If set, will return tensors instead of list of python integers. Acceptable values are: - `'pd'`: Return Paddle `paddle.Tensor` objects. 
            - `'np'`: Return Numpy `np.ndarray` objects.

            Defaults to `None`.
        verbose (bool, optional):
            Whether or not to print more information and warnings. Defaults to `True`.

    Returns:
        dict or list[dict] (for batch input): The dict has the following optional items:

        - **input_ids** (list[int] or list[list[int]]): List of token ids to be fed to a model.
        - **position_ids** (list[int] or list[list[int]], optional): List of token position ids to be fed to a model. Included when `return_position_ids` is `True`.
        - **token_type_ids** (list[int] or list[list[int]], optional): List of token type ids to be fed to a model. Included when `return_token_type_ids` is `True`.
        - **attention_mask** (list[int] or list[list[int]], optional): List of integers valued 0 or 1, where 0 specifies paddings and should not be attended to by the model. Included when `return_attention_mask` is `True`.
        - **seq_len** (int or list[int], optional): The input_ids length. Included when `return_length` is `True`.
        - **overflowing_tokens** (list[int] or list[list[int]], optional): List of overflowing tokens. Included when `max_length` is specified and `return_overflowing_tokens` is `True`.
        - **num_truncated_tokens** (int or list[int], optional): The number of overflowing tokens. Included when `max_length` is specified and `return_overflowing_tokens` is `True`.
        - **special_tokens_mask** (list[int] or list[list[int]], optional): List of integers valued 0 or 1, with 0 specifying special added tokens and 1 specifying sequence tokens. Included when `return_special_tokens_mask` is `True`.
        - **offset_mapping** (list[int], optional): List of pairs preserving the index of the start and end characters in the original input for each token. For a special token, the index pair is `(0, 0)`. Included when `return_overflowing_tokens` is `True` or `stride` > 0.
        - **overflow_to_sample** (int or list[int], optional): Index of the example from which this feature is generated. Included when `stride` > 0.
    """
    # Input type checking for clearer error messages.
    def _is_valid_text_input(t):
        if isinstance(t, str):
            # Strings are fine
            return True
        elif isinstance(t, (list, tuple)):
            # Lists are fine as long as they are...
            if len(t) == 0:
                # ... empty
                return True
            elif isinstance(t[0], str):
                # ... a list of strings
                return True
            elif isinstance(t[0], (list, tuple)):
                # ... a list with an empty list or with a list of strings
                return len(t[0]) == 0 or isinstance(t[0][0], str)
            else:
                return False
        else:
            return False

    if not _is_valid_text_input(text):
        raise ValueError(
            "text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) "
            "or `List[List[str]]` (batch of pretokenized examples).")

    if text_pair is not None and not _is_valid_text_input(text_pair):
        raise ValueError(
            "text_pair input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) "
            "or `List[List[str]]` (batch of pretokenized examples).")

    if is_split_into_words:
        is_batched = isinstance(text, (
            list, tuple)) and text and isinstance(text[0], (list, tuple))
    else:
        is_batched = isinstance(text, (list, tuple))

    if is_batched:
        if isinstance(text_pair, str):
            raise TypeError(
                "when tokenizing batches of text, `text_pair` must be a list or tuple with the same length as `text`."
            )
        if text_pair is not None and len(text) != len(text_pair):
            raise ValueError(
                f"batch length of `text`: {len(text)} does not match batch length of `text_pair`: {len(text_pair)}."
            )
        batch_text_or_text_pairs = list(zip(
            text, text_pair)) if text_pair is not None else text
        return self.batch_encode(
            batch_text_or_text_pairs=batch_text_or_text_pairs,
            max_length=max_length,
            stride=stride,
            is_split_into_words=is_split_into_words,
            padding=padding,
            truncation=truncation,
            return_position_ids=return_position_ids,
            return_token_type_ids=return_token_type_ids,
            return_attention_mask=return_attention_mask,
            return_length=return_length,
            return_overflowing_tokens=return_overflowing_tokens,
            return_special_tokens_mask=return_special_tokens_mask,
            return_dict=return_dict,
            return_offsets_mapping=return_offsets_mapping,
            add_special_tokens=add_special_tokens,
            pad_to_multiple_of=pad_to_multiple_of,
            return_tensors=return_tensors,
            verbose=verbose,
            **kwargs)
    else:
        return self.encode(
            text=text,
            text_pair=text_pair,
            max_length=max_length,
            stride=stride,
            is_split_into_words=is_split_into_words,
            padding=padding,
            truncation=truncation,
            return_position_ids=return_position_ids,
            return_token_type_ids=return_token_type_ids,
            return_attention_mask=return_attention_mask,
            return_length=return_length,
            return_overflowing_tokens=return_overflowing_tokens,
            return_special_tokens_mask=return_special_tokens_mask,
            return_offsets_mapping=return_offsets_mapping,
            add_special_tokens=add_special_tokens,
            pad_to_multiple_of=pad_to_multiple_of,
            return_tensors=return_tensors,
            verbose=verbose,
            **kwargs)

def encode(self,
           text,
           text_pair=None,
           max_length=None,
           stride: int=0,
           is_split_into_words: bool=False,
           padding: Union[bool, str, PaddingStrategy]=False,
           truncation: Union[bool, str, TruncationStrategy]=False,
           return_position_ids=False,
           return_token_type_ids=True,
           return_attention_mask=False,
           return_length=False,
           return_overflowing_tokens=False,
           return_special_tokens_mask=False,
           return_offsets_mapping=False,
           add_special_tokens=True,
           pad_to_multiple_of: Optional[int]=None,
           return_tensors: Optional[Union[str, TensorType]]=None,
           verbose: bool=True,
           **kwargs) -> BatchEncoding:
    """
    Tokenize and prepare for the model a sequence or a pair of sequences.

    Args:
        text (`str`, `List[str]` or `List[int]`):
            The first sequence to be encoded. This can be a string, a list of strings
            (tokenized string using the `tokenize` method) or a list of integers
            (tokenized string ids using the `convert_tokens_to_ids` method).
        text_pair (`str`, `List[str]` or `List[int]`, *optional*):
            Optional second sequence to be encoded. This can be a string, a list of strings
            (tokenized string using the `tokenize` method) or a list of integers
            (tokenized string ids using the `convert_tokens_to_ids` method).
    """
    # Backward compatibility for 'max_seq_len'
    old_max_seq_len = kwargs.get('max_seq_len', None)
    if max_length is None and old_max_seq_len:
        if verbose:
            warnings.warn(
                "The `max_seq_len` argument is deprecated and will be removed in a future version, "
                "please use `max_length` instead.",
                FutureWarning,
            )
        max_length = old_max_seq_len

    # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
    padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
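Below is a minimal usage sketch of the `__call__`/`encode` interface documented above. The tokenizer class and checkpoint name (`ErnieTokenizer`, `"ernie-1.0"`) are placeholders for whichever concrete `PretrainedTokenizer` subclass is available in your environment, so treat this as an illustration of the documented keyword arguments rather than a verbatim recipe.

```python
# Hypothetical usage sketch; substitute any available PretrainedTokenizer subclass.
from paddlenlp.transformers import ErnieTokenizer

tokenizer = ErnieTokenizer.from_pretrained("ernie-1.0")

# Single example, truncated and padded to a fixed length.
single = tokenizer(
    text="PaddleNLP provides tokenizers for many pretrained models.",
    max_length=16,
    truncation=True,
    padding="max_length",
    return_attention_mask=True,
)
print(single["input_ids"])       # token ids, length 16
print(single["token_type_ids"])  # included by default
print(single["attention_mask"])  # 0 marks padding positions

# Batch of sentence pairs; with return_dict=True the result is one dict of lists.
batch = tokenizer(
    text=["How old are you?", "Where do you live?"],
    text_pair=["I am six.", "I live in Shanghai."],
    max_length=32,
    truncation=True,
    padding=True,
    return_dict=True,
)
print(len(batch["input_ids"]))  # 2, one encoded pair per input pair
```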
#### Category : collections This is an example that I have a problem with. 0 => array:3 [ "ID" => 1 "limitStock" => "123" "reduce" => "400.00" ] 1 => array:3 [ "ID" => 1 "limitStock" => "223" "reduce" => "0.00" ] 2 => array:3 [ "ID" => 1 "limitStock" => "232" "reduce" => "0.00" ] 3 => array:3 .. Read more Laravel I have a problem finding a data inside a collection which is filled with objects. My collection is look like this: return collect([ new Aligator(‘1’, ‘name1’), new Aligator(‘2’, ‘name2’), new Aligator(‘3’, ‘name3’)]; The data which I want to search is the name, given through url with contoller. Route::get(‘/aligator/{name}’, [AligatorController::class, ‘getAligator’]; After clicking the link .. Read more I have this in my collection [ "pending" => array:2 [ "count" => 2 "amount" => 200 ] "ongoing" => array:2 [ "count" => 3 "amount" => 3000 ] "price_total" => 7000 ] what i want to achieve is like to seperate the count and amount [ "pending" => array:1 [ "amount" => 200 ] .. Read more I have a method in my controller public function studentDirectory(Request $request, Student$student) { if($request->switch == ‘personal’) { return new StudentResource($student); } } Resource Controller: public function toArray($request) { return [ ‘id’ =>$this->id, ‘name’ => $this->name, ‘address’ =>$this->address, ‘phone’ => $this->phone, ‘tags’ => TagsResource::collection($this->tags)), ]; } Now this works fine and I get .. Read more Hello dear Stack overflowers, I’m trying to compare one collection with the name ‘text1’, with another collection ‘text2’. I would like to achieve that it puts all values, that ‘text1’ and ‘text2’ both have, into another collection. I already tried using Diff and DiffKeys (see compare two collection arrays in laravel ). But I was .. Read more I want to sort two laravel collection after merging them in alphabetical order, I’ve used the following code: $result =$commonTitles->merge($newTitles)->sort(); And the result is sorted A to Z and then a to z. ["Ask","Black","Unit","ab","live","test"] The result that I expect is sorting as A a to z Z. How can I change the result? Source: .. Read more I’m working on a php project and need to use the Collection namespace from Illuminate. I have Laravel installed on my computer. However, when I try and create a new collection with: use IlluminateSupportCollection; …$collection = new Collection(); It gives me the error that it cannot find the class "IlluminateSupportCollection." I tried moving my .. Read more I need to add pagination, but it seems complicated in my case, because I get data from database with paginate, but then I modify this data and when I call links() method on the blade, I get the following exception Method IlluminateSupportCollection::links does not exist. My code in the Controller method: $transactionsByLastMonth = Transaction::where(‘created_at’, ‘>=’, .. Read more I try to paginate a sorted collection in Laravel 8, maybee any one have an idea? That’s my code:$where = Business::where(‘name’, ‘LIKE’, ‘%’ . $what . ‘%’);$businesses = $where->get()->sortByDesc(function($business) { return $business->totalReviews; })->paginate(10); // <— not working.. Collection::paginate not exists Sourc.. Read more I’ve been trying to get the difference between two collections: 1st: { "name": "Test A", "scores": [ { "name": "Values", "points": 9 }, { "name": "Algebra", "points": 6 }, { "name": "Science", "points": 5 }, { "name": "Total", "points": 20 } ] } 2nd: { "name": "Test A", "scores": [ { "name": "Values", "points": 5 .. 
Read more What is difference between items of collection & attributes of collection ? How should we change this in each other ? Sourc.. Read more I have a resource in this resource I have collection , now I want to show that collection pagination links how can I do it? This is my resource public function toArray($request) { return [ "name" => $this->name , "slug" =>$this->slug , "bg_image" => imageGenerate("sections" , $this->bg_image) , "bg_color" =>$this->bg_color , "items" => .. Read more I have an issue with a form data. I have many files in an array but when I have to get them in my controller I use file() but it doesn’t returns a collection of uploaded files, so I’m looking for a solution to make it possible. My client is Anguler and my API is .. Read more I have an issue with a form data. I have many files in an array but when I have to get them in my controller I use file() but it doesn’t returns a collection of uploaded files, so I’m looking for a solution to make it possible. My client is Anguler and my API is .. Read more I’m trying to figure out the best way to sort by multiple values and multiple keys at once. I have the following "sort sequence"; sort by "finished or not" sort by "out of time or not" sort by "points in total" sort by "points for your time" sort by "points for each ‘secret time control’, .. Read more Hello guys! I got a task and i need your help. Firstly i would like to show you my AppRepositoriesxyz.php App/repositories/xyz.php public function getcars(): Collection { return collect([ new Car(…) )]; There is a lot of data in it. And my task is that i have to use a Http Get Api endpoint to show .. Read more Hello guys! I have a a little problem with collections. I have never worked with these. I would like to display a collection at my welcome blade but there is a problem. My collection is not in the contrroller, the collection’s place is in the App/Repositories/xyz.php and in a function. How can i pass this .. Read more Given the following collection: $client = Client::all(); //all: [ // AppModelsClient { // #id: 1, // name: "Roark", // }, // AppModelsClient { // #id: 2, // name: "Tanika", // }, // AppModelsClient { // #id: 3, // name: "Max", // }, // AppModelsClient { // #id: 4, // name: "Sloane", // }, //], and .. Read more I have a laravel collection below from:$unitinventory = DB::select(‘call wh_inventory_list(?)’, array(‘Units’)); [ { "unit_id":"UCL2100001", "rr_id":"RR2100001", "make":"FAW", "chassis_no":"LFWSRXRJ9M1E00004", "engine_no":"CA6DM2-42E5153558354", "body_type":"TRACTOR HEAD", "horse_power":"420HP", "cabin_type":"J6P E5", "numwheels":"6W", "unit_status":"Available", "unit_location":null }, { "unit_id":"UCL2100002", "rr_id":"RR2100002", "make":"FAW", "chassis_no":"LFWSRXRJ4M1E00007", "engine_no":"CA6DM2-42E5153563283", "body_type":"TRACTOR HEAD", "horse_power":"420HP", "cabin_type":"J6P E5", "numwheels":"6W", "unit_status":"Available", "unit_location":null } ] And another collection: $modifification = collect(DB::table(‘pd_jo_bodymodification’)->get()); [ { "row_id":2, "jo_id":"JO2100003", "jo_chassisno":"LFWSRXRJ4M1E00007", .. Read more I have two collections. First Collection is all descendants (not children) from root item, flat structure, in other words, ALL items in my DB in flat organization:$descendants = $root->getDescendants(); I go through this collection using foreach: foreach ($descendants as $key => &$item) {} For each item, I do some magic, but then, I need .. 
Read more I have an $attendace variable contains a collection from a Laravel query builder:$attendace = [ { "row_id":65, "emp_number":"IPPH0004", "time_stamp":"01:00:00", "attendance_status":"Punch In", "date_created":"2021-10-02" }, { "row_id":68, "emp_number":"IPPH0004", "time_stamp":"07:30:00", "attendance_status":"Start Break", "date_created":"2021-10-02" }, { "row_id":69, "emp_number":"IPPH0004", "time_stamp":"08:00:00", "attendance_status":"End Break", "date_created":"2021-10-02" }, { "row_id":70, "emp_number":"IPPH0004", "time_stamp":"08:30:00", "attendance_status":"Start Break", "date_created":"2021-10-02" }, { "row_id":71, "emp_number":"IPPH0004", "time_stamp":"09:00:00", "attendance_status":"End Break", "date_created":"2021-10-02" .. Read more I am trying to get the value from a collection in PHP. $todaylog variable contains a collection from a laravel query builder:$todaylog = [ { "row_id":55, "emp_number":"IPPH0004", "timestamp":"03:30:23", "attendance_status":"Punch In", "date_created":"2021-10-01" }, { "row_id":56, "emp_number":"IPPH0004", "timestamp":"11:32:50", "attendance_status":"Start Break", "date_created":"2021-10-01" }, { "row_id":57, "emp_number":"IPPH0004", "timestamp":"11:33:09", "attendance_status":"End Break", "date_created":"2021-10-01" } ] What I have done so .. Read more i have a ressource collection : class OverviewResource extends JsonResource { public function toArray($request): array { return [ ‘first_name’ =>$this->first_name, ‘last_name’ => $this->last_name, ’email’ =>$this->email, ‘phone’ => $this->phone, ‘friends’ => OverviewResource::collection(User::getFriends()), ]; } } i have an error in this line ‘friends’ => OverviewResource::collection(User::getFriends()), when calling this function from User model public static .. Read more In the following code, the "liked_by_me" column remains unchanged but the "newAttr" column is added.$commentList->transform(function($item,$key) { $item->liked_by_me == null ?$item->liked_by_me = false : $item->liked_by_me = true;$item->liked_by_me == null ? $item->newAttr = false :$item->newAttr = true;
\section{Introduction}

When we are studying a \emph{number ring} $R$, that is a subring of a number field $K$, it can be useful to understand the size of its ideals compared to the whole ring. The main tool for this purpose is the norm map which associates to every non-zero ideal $I$ of $R$ its index as an abelian subgroup $N(I)=[R:I]$. If $R$ is the \emph{maximal order}, or \emph{ring of integers}, of $K$ then this map is multiplicative, that is, for every pair of non-zero ideals $I,J\subseteq R$ we have $N(I)N(J)=N(IJ)$. If the number ring is not the maximal order this equality may not hold for some pair of non-zero ideals. For example, if we consider the quadratic order $\mathbb{Z}[2i]$ and the ideal $I=(2,2i)$, then we have that $N(I)=2$ and $N(I^2)=8$, so we have the inequality $N(I^2)> N(I)^2$ (the underlying index computation is recalled at the beginning of Section \ref{sec:prelim}). Observe that if every maximal ideal ${\mathfrak p}$ of a number ring $R$ satisfies $N({\mathfrak p}^2)\leq N({\mathfrak p})^2$, then we can conclude that $R$ is the maximal order of $K$ (see Corollary \ref{cor:submdedekind}).

In Section \ref{sec:prelim} we recall some basic commutative algebra and algebraic number theory and we apply them to see how the ideal norm behaves in relation to localizations and ring extensions. In Section \ref{sec:quadquartcases} we will see that the inequality in the previous example is not a coincidence. More precisely, we will prove that in any quadratic order we have $N(IJ)\geq N(I)N(J)$ for every pair of non-zero ideals $I$ and $J$. We will say that the norm is \emph{super-multiplicative} if this inequality holds for every pair of non-zero ideals (see Definition \ref{def:sm}). We will show that this is not always the case by exhibiting an order of degree $4$ where we have both (strict) inequalities, see Example \ref{ex:degree4}. In a quadratic order every ideal can be generated by $2$ elements and in an order of degree $4$ by $4$ elements, so we are led to wonder whether the behavior of the norm is related to the number of generators and what happens in a cubic order, or more generally in a number ring in which every ideal can be generated by $3$ elements. The main result of this work is the following:

\begin{theorem}
\label{mainthm}
Let $R$ be a number ring. The following statements are equivalent:
\begin{enumerate}[label=\upshape(\roman*), leftmargin=*, widest=iii]
\item \label{impl:1} every ideal of $R$ can be generated by $3$ elements;
\item \label{impl:2} every ring extension $R'$ of $R$ contained in the normalization of $R$ is super-multiplicative.
\end{enumerate}
\end{theorem}

Theorem \ref{mainthm} is an immediate consequence of the following two stronger results, which are proved respectively in Sections \ref{sec:firstimpl} and \ref{sec:secondimpl}.

\begin{theorem}
\label{thm:firstimpl}
Let $R$ be a commutative Noetherian domain of dimension $1$ where every ideal can be generated by $3$ elements. Then $R$ is super-multiplicative. Moreover, every ring extension $R'$ of $R$ such that the additive group of $R'/R$ has finite exponent is also super-multiplicative.
\end{theorem}

\begin{theorem}
\label{thm:secondimpl}
Let $R$ be a number ring with normalization $\tilde{R}$ such that for every maximal $R$-ideal $\frm$ the ideal norm of the number ring $R+\frm\tilde R$ is super-multiplicative. Then every $R$-ideal can be generated by $3$ elements.
\end{theorem}

\section{Preliminaries}
\label{sec:prelim}

A field $K$ is called a \emph{number field} if it is a finite extension of $\mathbb{Q}$. In this article all rings are unitary and commutative.
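Recall the computation behind the introductory example: in $R=\mathbb{Z}[2i]=\mathbb{Z}+2i\mathbb{Z}$ with $I=(2,2i)$ one checks that
\[ I=2R+2iR=2\mathbb{Z}+2i\mathbb{Z}, \qquad I^2=(4,4i,-4)=4\mathbb{Z}+4i\mathbb{Z}, \]
so that $N(I)=[R:I]=2$ while $N(I^2)=[R:I^2]=8>4=N(I)^2$.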
We will say that $R$ is a \emph{number ring} if it is a subring of a number field. A number ring for which the additive group is finitely generated is called an \emph{order} in its field of fractions. In every number ring there are no non-zero additive torsion element. Every order is a free abelian group of rank $[\Frac(R):\mathbb{Q}]$, where $\Frac(R)$ is the fraction field of $R$. \begin{proposition} Let $R$ be a number ring. Then \begin{enumerate} \item every non-zero $R$-ideal has finite index; \item $R$ is Noetherian; \item if $R$ is not a field then it has Krull dimension $1$, that is every non-zero prime ideal is maximal; \item if $S$ is a number ring containing $R$ and ${\mathfrak p}$ a maximal ideal of $R$, then there are only finitely many prime $S$-ideal ${\mathfrak q}$ above ${\mathfrak p}$, that is ${\mathfrak q}\supseteq {\mathfrak p} S$; \item $R$ has finite index in its normalization $\tilde R$. \end{enumerate} \end{proposition} For a proof and more about number rings see \cite{psh08}. Recall that for a commutative domain $R$ with field of fractions $K$, a \emph{fractional $R$-ideal} $I$ is a non-zero $R$-submodule of $K$ such that $xI\subseteq R$ for some non-zero $x\in K$. Multiplying by a suitable element of $R$, we can assume that the element $x$ in the definition is in $R$. It is useful to extend the definition of the index to arbitrary fractional ideals $I$ and $J$ taking: \[[I:J]=\dfrac{[I:I\cap J]}{[J:I\cap J]}.\] It is an easy consequence that we have $[I:J]=[I:H]/[J:H]$ for every fractional ideal $H$. In particular, if $[R:I]$ is finite we call it the \emph{norm of the ideal $I$}, and we denote it $N(I)$. In general the ideal norm is not multiplicative. \begin{lemma} \label{lemma:principalideal} Let $R$ be a number ring and let $I$ be a non-zero $R$-ideal. For every non-zero $x\in K$ we have \[ N(xR)N(I)=N(xI).\] \end{lemma} \begin{proof} As $R$ is a domain, the multiplication by $x$ induces an isomorphism $R/I\simeq xR/xI$ of (additive) groups. Hence we have $[R:xR]=[I:xI]$ and therefore $[R:xR][R:I]=[R:xI]$. \end{proof} \begin{proposition} \label{prop:lengthproduct} Let $S\subseteq R$ be an extension of commutative rings. Let $I$ be an $R$-ideal such that $[R:I]$ is finite. Then \[[R:I] = \prod_{\frm} [R_\frm: I_\frm] = \prod_\frm \# ( S / \frm )^{l_{S_\frm}( R_\frm / I_\frm )},\] where the products are taken over the maximal ideals of $S$ and $l_{S_\frm}$ denotes the length as an $S_\frm$-module. Moreover, we have that \[l_S(R/I)=\sum_\frm l_{S_\frm}(R_\frm/I_\frm).\] \end{proposition} \begin{proof} As $R/I$ has finite length as an $S$-module, there exists a composition series \[R/I=M_0\supset M_1\supset \cdots \supset M_l = 0,\] where the $M_i$ are $S$-modules such that $M_i/M_{i+1}\simeq S/\frm_i$ for some maximal $S$-ideal $\frm_i$. Now fix a maximal $S$-ideal $\frm$. Observe that $\#\set{i\ :\ \frm_i=\frm} = l_{S_\frm}(R_\frm/I_\frm)$ because all the factors isomorphic to $S/\frm_i$ disappear if we localize at $\frm\neq \frm_i$. This implies that $l_S(R/I)=\sum_\frm l_{S_\frm}(R_\frm/I_\frm)$ and that $[ R_\frm : I_\frm ] = \# ( S / \frm )^{l_{S_\frm}( R_\frm/I_\frm)}$. By \cite[2.13, p.72]{eis95} we have $[R:I] = \prod_{\frm} [R_\frm:I_\frm]$. Observe that there is no harm in taking the product over all the maximal ideal of $S$ because the module $R/I$ vanishes if we localize at a maximal ideal that does not appear in its composition series. \end{proof} \begin{proposition} \label{prop:normmult} Let $R$ be a number ring, $I$ an invertible $R$-ideal. 
Then for every $R$-ideal $J$ we have \[N(I)N(J)=N(IJ).\] \end{proposition} \begin{proof} Recall that if an ideal $I$ is invertible then the localization $I_\frm$ at every maximal $R$-ideal $\frm$ is a principal $R_\frm$-ideal (see \cite[11.3, p.80]{mats89}). So by Lemma \ref{lemma:principalideal} we have that $[R_\frm:J_\frm][R_\frm:I_\frm]=[R_\frm:(IJ)_\frm]$ for every $\frm$. Hence by Proposition \ref{prop:lengthproduct} \[N(IJ)=\prod_\frm [R_\frm:(IJ)_\frm] = \prod_\frm [R_\frm:I_\frm]\prod_\frm [R_\frm:J_\frm]=N(I)N(J).\] \end{proof} \begin{proposition} \label{prop:lengthlocal} Let $S\subseteq R$ be an extension of commutative domains, $\frm$ a maximal $S$-ideal and $J$ a proper ideal of the localization $R_\frm$ such that $R_\frm/J$ has finite length as an $S_\frm$-module. Then \[ \frac{R}{J\cap R} \simeq \frac{R_\frm}{J} \] as $S$-modules. Moreover, \[l_S\left( \frac{R}{J\cap R} \right) = l_{S_\frm}\left( \frac{R_\frm}{J} \right). \] \end{proposition} \begin{proof} As $R$ is a domain the localization morphism $R \to R_\frm$ composed with the projection $R_\frm \to R_\frm/J$ induces an injective morphism $R/(J\cap R)\to R_\frm/J$. As $l_{S_\frm}( R_\frm / J)$ is finite, $R/(J\cap R)$ is annihilated by some power of $\frm$ and by \cite[2.13, p.72]{eis95} we have that it is isomorphic to its localization at $\frm$. As $(J\cap R)_\frm = J$ we have that $R/(J\cap R)\simeq R_\frm/J$ as $S$-modules. In particular they have the same length as $S$-modules. By Proposition \ref{prop:lengthproduct} we have that $l_S(R_\frm/J) = \sum_{\mathfrak n} l_{S_{\mathfrak n}}((R_\frm/J)_{\mathfrak n})$, where the sum is taken over the maximal $S$-ideals. So to conclude, we need to prove that if ${\mathfrak n}\neq \frm $, then $l_{S_{\mathfrak n}}((R_\frm/J)_{\mathfrak n})=0$, which is a direct consequence of the fact that $(R_\frm/J)_{\mathfrak n}=0$ when ${\mathfrak n} \neq \frm$. \end{proof} \begin{definition} \label{def:sm} Let $R$ be a commutative ring. We will say that the ideal norm of $R$ is \emph{super-multiplicative} if for every pair
$C>0$, $M>0$, there exist $C'>0$ and $\delta_0>0$ with the following property. Let $V:\R/T \Z \to \R$ be a smooth function with $V(t)=0$ near $0$,\footnote {This neighborhood can be arbitrarily small, but this will influence the constants below that depend on $v$.} and let $A(\cdot)=A[V](\cdot)$ and $u(\cdot)=u[V](\cdot)$. Let $E_0 \in \Omega(V) \cap [M^{-1},M]$. Assume that $C^{-1}<d(u(E_0),E_0^{1/2} i)<C$. Then there exists $\epsilon_0>0$ such that for every $0<\epsilon<\epsilon_0$, for every $\kappa>0$, for every $0<\delta<\delta_0$, for every $N$ sufficiently large, for every $n$ sufficiently large, letting $V'$ be the $(\delta,N,n)$-padding of $v$, $A'(\cdot)=A[V'](\cdot)$, $u'(\cdot)=u[V'](\cdot)$, we have the following. There exists a compact set $\Lambda \subset \Omega(V') \cap [E_0-\epsilon,E_0+\epsilon]$ such that \begin{enumerate} \item $|\Lambda|>2 (1-C' \delta) \epsilon$, \item For $E \in \Lambda$, $d(u'(E),u(E))<\kappa$ and $C^{-1}<d(u'(E),E^{1/2} i)<C$, \item For $E \in \Lambda$, \be \sup_{t \in [0,T']} d(u'(E,t),i) \geq \sup_{t \in [0,T]} d(u(E,t),i), \ee \item For any $C' \delta<\gamma<1/4$, there exists a compact set $\Lambda' \subset \Lambda$ with $|\Lambda'|>\gamma \epsilon$ such that for $E \in \Lambda'$, \be \sup_{t \in [0,T']} d(u'(E,t),i) \geq \sup_{t \in [0,T]} d(u(E,t),i)+C'^{-1} \frac {\delta} {\gamma}. \ee \item For $E \in \Lambda$, \be \left |\frac {1} {T'} \int_0^{T'} d(u'(E,t),i) dt- \frac {1} {T} \int_0^T d(u(E,t),i) dt \right |<\kappa, \ee \end{enumerate} \end{lemma} \begin{proof} Let $D(E)=\begin {pmatrix} E^{1/4} & 0 \\ 0 & E^{-1/4} \end{pmatrix}$. Let $G:\R_+ \times \R/\Z \to \SL(2,\R)$ be given by $G(E,t)=D(E) R_{\delta \frac {E^{1/2}} {2 \pi} \sin^{2N} \pi t} D(E)^{-1} A(E)^N$. We have \be A'(E)=G(E,\frac {2n-1} {2n}) G(E,\frac {2n-2} {2n}) \cdots G(E,\frac {1} {2n}) G(E,0). \ee We can write for $E$ near $E_0$, \be B(E) A(E) B(E)^{-1}=R_{\theta(E)}, \ee where $B(E)=\B(A(E))$ and $\theta(E)=\Theta(A(E))$. By Lemma \ref {bla7}, $\theta$ has non-zero derivative. Thus we can write \be G(E,t)=D(E) R_{\delta \frac {E^{1/2}} {2 \pi} \sin^{2N} \pi t} D(E)^{-1} B(E)^{-1} R_{N \theta(E)} B(E). \ee Letting $Q(E)=B(E) D(E)$, we get \be \tr G(E,t)=\tr Q(E) R_{\delta \frac {E^{1/2}} {2 \pi} \sin^{2N} \pi t} Q(E)^{-1} R_{N \theta(E)} \ee Notice that $Q(E) \notin \SO(2,\R)$, since $Q(E) \cdot i \neq i$ (here we use that $B(E)^{-1} \cdot i=u(E) \neq E^{1/2} i=D(E) \cdot i$ for $E$ near $E_0$). Thus we can write $Q=R^{(1)} D^{(0)} R^{(2)}$, a product of rotation, diagonal and rotation matrices, depending analytically on $E$. Then \be \tr G(E,t)=\tr D^{(0)}(E) R_{\delta \frac {E^{1/2}} {2 \pi} \sin^{2N} \pi t} D^{(0)}(E)^{-1} R_{N \theta(E)}. \ee Write $D^{(0)}(E)=\begin{pmatrix} \lambda(E) & 0 \\ 0 & \lambda(E)^{-1} \end {pmatrix}$. We may assume that $\lambda(E)>1$. Then \be \lambda(E)=e^{d(u(E),E^{1/2} i)/2}, \ee so that $\frac {1} {2 C}<\ln \lambda(E)<\frac {C} {2}$. Then \begin{align} \tr G(E,t)=&2 \cos ((\delta E^{1/2} \sin^{2 N} \pi t)+2 \pi N \theta(E))\\ \nonumber & -(\lambda(E)-\lambda(E)^{-1})^2 \sin (\delta E^{1/2} \sin^{2 N} \pi t) \sin 2 \pi N \theta(E). \end{align} Thus \be |\tr G(E,t)-2 \cos ((\delta E^{1/2} \sin^{2 N} \pi t)+2 \pi N \theta(E))| \leq C_1 \delta \sin 2 \pi N \theta(E). \ee We conclude that if $2 N \theta(E)$ is at distance at least $C_2 \delta$ from $\Z$, then $|\tr G(E,t)|<2$. 
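(For reference, the trace formula above is the elementary identity
\be
\tr \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix}
\begin{pmatrix} \cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha \end{pmatrix}
\begin{pmatrix} \lambda^{-1} & 0 \\ 0 & \lambda \end{pmatrix}
\begin{pmatrix} \cos \beta & -\sin \beta \\ \sin \beta & \cos \beta \end{pmatrix}
=2 \cos (\alpha+\beta)-(\lambda-\lambda^{-1})^2 \sin \alpha \sin \beta,
\ee
applied with $\alpha=\delta E^{1/2} \sin^{2N} \pi t$, $\beta=2 \pi N \theta(E)$ and $\lambda=\lambda(E)$.)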
We conclude that for $\epsilon$ sufficiently small, for $N$ sufficiently large, the set of $E \in [E_0-\epsilon,E_0+\epsilon]$ such that $|\tr G(E,t)| \geq 2$ for some $t$ has Lebesgue measure at most $C_3 \delta \epsilon$. By Lemma \ref {blaparameter}, for $n$ large we will have $|\tr A'(E)|<2$ for a compact set $\Lambda(\epsilon,\delta,N,n) \subset [E_0-\epsilon,E_0+\epsilon]$ of Lebesgue measure at least $2 (1-C_4 \delta) \epsilon$. We may further assume that for $E \in \Lambda(\epsilon,\delta,N,n)$, the sequence $\{j \theta(E)\}_{0 \leq j \leq N-1}$ is $\frac {1} {100}$ dense $\mod 1$. Thus for such $E$, and any $w \in \H$, and any $0 \leq t \leq T$, there exists $0 \leq k \leq N-1$ such that $d(A(E,0,t) A(E)^k \cdot w,i) \geq d(u(E,t),i)+\frac {1} {2} d(w,u(E))$. Taking $w=u'(E,a_m)$ for some $0 \leq m \leq 2n-1$ (where $a_j$ is as in the definition of a $(\delta,N,n)$-padding), we get \be d(u'(E,t+k T+a_m,i) \geq d(u(E,t),i)+\frac {1} {2} d(u'(E,a_m),u(E)). \ee In particular, \be \sup_t d(u'(E,t),i) \geq \sup_t d(u(E,t),i)+\frac {1} {2} \max_{0 \leq m \leq 2 n-1} d(u'(E,a_m),u(E)). \ee Lemma \ref {blaparameter} shows that $u'(E,a_m)$ is near $\u(G(E,\frac {m} {2 n}))$ for $n$ large. In particular, $u'(E)$ is near $u(E)$, since $G(E,0)=A(E)$ and $u'(E,a_n)$ is near $w(E)=\u(G(E,1/2))$. We want to estimate the hyperbolic distance between $w(E)$ and $u(E)$ in $\H$. Let $w'(E)$ be the fixed point of $D^{(0)}(E) R_{\delta \frac {E^{1/2}} {2 \pi}} D^{(0)}(E)^{-1} R_{N \theta(E)}$ in $\H$. Then $w(E)= B^{-1} R^{(1)} \cdot w'(E)$. Since $u(E)=B^{-1} R^{(1)} \cdot i$, it follows that \be d(w(E),u(E))=d(w'(E),i). \ee But $w'(E)$ is the solution $z \in \H$ of the equation $a z^2+b z+c=0$, where \be a=\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^{-2} \sin \delta E^{1/2} \cos 2 \pi N \theta(E), \ee \be b=(\lambda(E)^2-\lambda(E)^{-2}) \sin \delta E^{1/2} \sin 2 \pi N \theta(E), \ee \be c=\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^2 \sin \delta E^{1/2} \cos 2 \pi N \theta(E). \ee Then \begin{align} \Im w'(E)=&\left ( 1+\frac {(\lambda(E)^2-\lambda(E)^{-2}) \sin \delta E^{1/2} \cos 2 \pi N \theta(E)} {\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^{-2} \sin \delta E^{1/2} \cos 2 \pi N \theta(E)} \right .\\ \nonumber &\left .-\frac {(\lambda(E)^2-\lambda(E)^{-2})^2 \sin^2 \delta E^{1/2} \sin^2 2 \pi N \theta(E)} {4(\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^{-2} \sin \delta E^{1/2} \cos 2 \pi N \theta(E))^2} \right )^{1/2} \end{align} Under the condition that $2 N \theta(E)$ is at distance at least $C_2 \delta$ from $\Z$, we have \be \left |\frac {(\lambda(E)^2-\lambda(E)^{-2})^2 \sin^2 \delta E^{1/2} \sin^2 2 \pi N \theta(E)} {4(\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^{-2} \sin \delta E^{1/2} \cos 2 \pi N \theta(E))^2} \right | \leq C_5 \delta^2, \ee \be |\frac {(\lambda(E)^2-\lambda(E)^{-2}) \sin \delta E^{1/2} \cos 2 \pi N \theta(E)} {\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^{-2} \sin \delta E^{1/2} \cos 2 \pi N \theta(E)}| \geq C_6^{-1} \delta \cot 2 \pi N \theta(E). \ee If $2 N \theta(E)$ is at distance $C_2 \delta<\gamma<1/4$ from $\Z$ then \be |\frac {(\lambda(E)^2-\lambda(E)^{-2}) \sin \delta E^{1/2} \cos 2 \pi N \theta(E)} {\cos \delta E^{1/2} \sin 2 \pi N \theta(E)+ \lambda(E)^{-2} \sin \delta E^{1/2} \cos 2 \pi N \theta(E)}| \geq C_7^{-1} \frac {\delta} {\gamma}, \ee so that \be d(w'(E),i) \geq C_8^{-1} \frac {\delta} {\gamma}. 
\ee It follows that in this case \be \sup_t u'(E,t) \geq \sup_t u(E,t)+C_9^{-1} \frac {\delta} {\gamma}. \ee For $C_2 \delta<\gamma<1/4$, let $\Lambda'(\epsilon,\delta,N,n,\gamma)$ be the set of $E \in \Lambda(\epsilon,\delta,N,n)$ such that $2 N \theta(E)$ is at distance at most $\gamma$ from $\Z$. Since $\theta$ has non-zero derivative, we have $|\Lambda'(\epsilon,\delta,N,n,\gamma)| \geq \frac {3} {2} \gamma \epsilon$, for $\epsilon$ small, $N$ sufficiently large and $n$ sufficiently large. \comm{ By the slow variation estimate, if $E \in \Lambda(\epsilon,\delta,N,n)$ and $n$ is large, then $u'(E,a_{[n/4]})$ is near $w(E)$. $\Prod_{k=0}^{[n/4]-1} C(E,\frac {k} {n})$ takes $u'(E)$ to $\tilde w(E)$ near $w(E)$. Assume that $\psi(E)$ is irrational. Then the sequence $A(E)^j \cdot \tilde w(E)$ is dense in a circle centered on $u(E)$ with hyperbolic radius $d(\tilde w(E),u(E))$, and the speed it is becoming dense only depends on $E$ and an upper bound on $d(\tilde w(E),u(E))$. Thus for such $E$ and any $0 \leq t \leq T$, we can find bounded (depending on $E$) $k(t)$ such that \be d(A(E,0,t) A(E)^{k(t)} \cdot \wilde w(E),i) \geq u(E,t)+C_9^{-1} \frac {\delta} {\gamma}. \ee If $N$ is sufficiently large, this shows that \be d(u'(E,[n/4] N T+k(t) T+t),i) \geq d(u(E,t),i)+C_9^{-1} \frac {\delta} {\gamma}. \ee A similar argument gives \be d(u'(E,[n/4] N T+k(t) T+t),i) \geq d(u(E,t),i)+C_9^{-1} \frac {\delta} {\gamma}. \ee } To conclude, let us show that if $E \in \Lambda(\epsilon,\delta,N,n)$ and $2 N \theta(E)$ is $C_2 \delta$-away from $\Z$, then \be \frac {1} {T'} \int_0^{T'} d(u'(E,t),i) dt-\frac {1} {T} \int_0^T d(u(E,t),i) dt \ee is small. The formulas for $w'$ imply that $u'(E,t)$ is at bounded hyperbolic distance from some $u(E,t')$. In fact, if $a_j \leq t \leq a_{j+1}$ then $u'(E,t)$ is at bounded hyperbolic distance from $u(E,t-a_j)$. Moreover, if $a_j \leq t \leq a_j+T$, then $A(E,0,t-a_j)^{-1} u'(E,t)$ is near $\u(G(E,\frac {j} {2 n}))$. If $\frac {j} {2 n}$ is not close to $\frac {1} {2}$, the estimates give that the fixed point of $G(E,\frac {j} {2 n})$ is close to $u(E)$, provided $N$ is large. It follows that $u'(E,t)$ is near $u(E,t-a_j)$. The result follows. \end{proof} \begin{lemma} \label {bla21} For every $C>0$, $M>0$, there exist $C'>0$ and $\delta_0>0$ with the following property. Let $V:\R/T\Z \to \R$ be a smooth non-negative function with $V(t)=0$ near $0$. Let $\Xi \subset \Omega(V) \cap [M^{-1},M]$ be a compact subset such that $C^{-1}<d(u[V](E),E^{1/2} i)<C$ for every $E \in \Lambda$. Then for every $\kappa>0$, $R \in \N$, for every $0<\delta<\delta_0$, for every $N$ sufficiently large, for every $n$ sufficiently large, if $V':\R/T' \Z \to \R$ is the $(\delta,N,n)$-padding of $v$, then there exists a compact subset $\Xi' \subset \Xi \cap \Omega(V')$ such that \begin{enumerate} \item For $j \geq 0$, the conditional probability that $E \in \Xi$ belongs to $\Xi'$, given that $\frac {j} {R} \leq \sup_t d(u[V](E,t),i)< \frac {j+1} {R}$ is at least $1-2 C'$, \item For every $E \in \Xi'$, $d(u[V'](E),u[V](E))<\kappa$ and $C^{-1}<d(u[V'](E),E^{1/2} i)<C$, \item For every $E \in \Xi'$, \be \sup_t d(u[V'](E,t),i) \geq \sup_t d(u[V'](E,t),i), \ee \item For $j \geq 0$, and for every $C' \delta<\gamma<1/4$, the conditional probability that $E \in \Xi$ belongs to $\Xi'$ and \be \sup_t d(u[V'](E,t),i)>\sup_t d(u[V](E,t),i)+C'^{-1} \frac {\delta} {\gamma}, \ee given that $\frac {j} {R} \leq \sup_t d(u[V](E,t),i)<\frac {j+1} {R}$ is at least $\frac {\gamma} {3}$. 
\item For every $E \in \Xi'$, \be \left |\frac {1} {T'} \int_0^{T'} d(u[V'](E,t),i) dt-\frac {1} {T} \int_0^T d(u[V](E,t),i) dt \right |<\kappa. \ee \end{enumerate} \end{lemma} \begin{proof} Follows from the previous lemma by a covering argument. (Notice that the statements about conditional probabilities are automatic for large $j$, since $\sup_t d(u[V](E,t),E^{1/2} i)$ is bounded by compactness of $\Xi$.) \end{proof} \noindent{\it Proof of Lemma
roof with its intricate smoke stacks, ventilation shafts, and archways for framing other prominent parts of Barcelona. This uneven roof is supported by an attic that houses an exhibit on Gaudi’s method. Here, I could see Gaudi’s inspiration. On display was a snake’s skeleton and around me were the uneven arches of the attic — the similarity was palpable (see below). The questions for me were: was Gaudi inspired by nature or did he learn from it? Is there even much of a difference between ‘inspired’ and ‘learned’? And can this inform thought on the correspondence between nature and algorithms more generally? I spend a lot of time writing about how we can use algorithmic thinking to understand aspects of biology. It is much less common for me to write about how we can use biology or nature to understand and inspire algorithms. In fact, I feel surprisingly strong skepticism towards the whole field of natural algorithms, even when I do write about it. I suspect that this stems from my belief that we cannot learn algorithms from nature. A belief that was shaken, but not overturned, when I saw the snake’s skeleton in Gaudi’s attic. In this post, I will try to substantiate the statement that we cannot learn algorithms from nature. My hope is that someone, or maybe just the act of writing, will convince me otherwise. I’ll sketch my own position on algorithms & nature, and strip the opposing we-learn-algorithms-from-nature position of some of its authority by pulling on a historic thread that traces this belief from Plato through Galileo to now. I’ll close with a discussion of some practical consequences of this metaphysical disagreement and try to make sense of Gaudi’s work from my perspective. ## Multiple realizability of replicator dynamics Abstraction is my favorite part of mathematics. I find a certain beauty in seeing structures without their implementations, or structures that are preserved across various implementations. And although it seems possible to reason through analogy without (explicit) abstraction, I would not enjoy being restricted in such a way. In biology and medicine, however, I often find that one can get caught up in the concrete and particular. This makes it harder to remember that certain macro-dynamical properties can be abstracted and made independent of particular micro-dynamical implementations. In this post, I want to focus on a particular pet-peeve of mine: accounts of the replicator equation. I will start with a brief philosophical detour through multiple realizability, and discuss the popular analogy of temperature. Then I will move on to the phenomenological definition of the replicator equation, and a few realizations. A particular target will be the statement I’ve been hearing too often recently: replicator dynamics are only true for a very large but fixed-size well-mixed population. ## A year in books: Neanderthals to the National Cancer Act to now A tradition I started a couple of years ago is to read at least one non-fiction book per month and then to share my thoughts on the reading at the start of the following year. Last year, my dozen books were mostly on philosophy, psychology, and political economy. My brief comments on them ended up running a long 3.2 thousand words. This time the list had expanded to around 19 books. So I will divide the summaries into thematic sets. For the first theme, I will start with a subject that is new for my idle reading: cancer. 
As a new researcher in mathematical oncology — and even though I am located in a cancer hospital — my experience with cancer has been mostly confined to the remote distance of replicator dynamics. So above all else these three books — Nelson’s (2013) Anarchy in the Organism, Mukherjee’s (2010) The Emperor of All Maladies, and Leaf’s (2014) The Truth in Small Doses — have provided me with insights into the personal experiences of the patient and doctor. I hope that based on these reviews and the ones to follow, you can suggest more books for me to read in 2016. Better yet, maybe my comments will help you choose your next book. Much of what I read in 2015 came from suggestions made by my friends and readers, as well as articles, blogs, and reviews I’ve stumbled across.[1] In fact, each of these cancer books was picked for me by someone else. If you’ve been to a restaurant with me then you know that I hate choosing between close-to-equivalent options. To avoid such discomfort, I outsourced the choosing of my February book to G+ and Nelson’s Anarchy in the Organism beat out Problems of the Self by a narrow margin to claim a spot on the reading list. As I was finishing up Nelson’s book — which I will review last in this post — David Basanta dropped off The Emperor of All Maladies on my desk. So I continued my reading on cancer. Finally, Leaf’s book came towards the end of the year based on a recommendation from Jacob Scott. It helped reinvigorate me after a summer away from the Moffitt Cancer Center. Read more of this post ## Cataloging a year of blogging Happy Old New Year. January 2016 is the the start of the 6th calendar year and the 41st month with updates to TheEGG. The reason for the large discrepancy between these two numbers is occasional months without activity. The past year was exceptional in this regard with the longest single silence on the blog between April 4th and October 26th. This means that the year saw only 29 new entries, 2 indexes cataloging 2014, a report on the EGT reading group, and an update on readership. This post is meant to organize the last year of activity for future reference, and to try to uncover common themes. If you like lists and TL;DRs then this is for you. Read more of this post ## Abusing numbers and the importance of type checking What would you say if I told you that I could count to infinity on my hands? Infinity is large, and I have a typical number of fingers. Surely, I must be joking. Well, let me guide you through my process. Since you can’t see me right now, you will have to imagine my hands. When I hold out the thumb on my left hand, that’s one, and when I hold up the thumb and the index finger, that’s two. Actually, we should be more rigorous, since you are imagining my fingers, it actually isn’t one and two, but i and 2i. This is why they call them imaginary numbers. Let’s continue the process of extending my (imaginary) fingers from the leftmost digits towards the right. When I hold out my whole left hand and the pinky, ring, and middle fingers on my right hand, I have reached 8i. But this doesn’t look like what I promised. For the final step, we need to remember the geometric interpretation of complex numbers. Multiplying by i is the same thing as rotating counter-clockwise by 90 degrees in the plane. So, let’s rotate our number by 90 degrees and arrive at $\infty$. I just counted to infinity on my hands. Of course, I can’t stop at a joke. I need to overanalyze it. 
There is something for scientists to learn from the error that makes this joke. The disregard for the type of objects and jumping between two different — and usually incompatible — ways of interpreting the same symbol is something that scientists, both modelers and experimentalists, have to worry about it. If you want an actually funny joke of this type then I recommend the image of a ‘rigorous proof’ above that was tweeted by Moshe Vardi. My writen version was inspired by a variant on this theme mentioned on Reddit by jagr2808. I will focus this post on
# (When) Is Truth-telling Favored in AI Debate?

For some problems, humans may not be able to accurately judge the goodness of AI-proposed solutions. Irving et al. (2018) propose that in such cases, we may use a debate between two AI systems to amplify the problem-solving capabilities of a human judge. We introduce a mathematical framework that can model debates of this type and propose that the quality of debate designs should be measured by the accuracy of the most persuasive answer. We describe a simple instance of the debate framework called feature debate and analyze the degree to which such debates track the truth. We argue that despite being very simple, feature debates nonetheless capture many aspects of practical debates such as the incentives to confuse the judge or stall to prevent losing. We then outline how these models should be generalized to analyze a wider range of debate phenomena.

## 1 Introduction

In recent years, AI systems succeeded at many complex tasks, such as mastering the game of Go (Silver et al., 2017). However, such solutions are typically strictly limited to tasks with an unambiguous reward function. To circumvent this limitation, we can define success in vague tasks in terms of a human’s approval of the proposed solution. For example:

• The goodness of a simulated robot backflip is hard to formalize, but an AI system can be trained to maximize the extent to which a human observer approves of its form (Christiano et al., 2017).

• The goodness of a film-recommendation is subjective, but an AI system can be trained to maximize the extent to which a human approves of the recommendation.

Unfortunately, once tasks and solutions get too complicated to be fully understood by human users, direct human approval cannot be used to formalize the reward function. For example, we could not have directly trained the AlphaGo algorithm by maximizing the approval of each move, because some of its moves looked strange or incorrect to human experts. Irving, Christiano, and Amodei (2018) suggest addressing this issue by using AI debate. In their proposal, two AI systems are tasked with producing answers to a vague and complex question, and subsequently debating the merits of their answers in front of a human judge. After considering the arguments brought forward, the human approves one of the answers and allocates reward to the AI system that generated it.
We can apply such AI debate to a wide range of questions: (1) what is the solution of a system of algebraic equations, (2) which restaurant should I visit today for dinner, or (3) which of two immigration policies is more socially beneficial. Moreover, (4) we can see an entire game of Go (or a similar game) as a debate. Indeed, every move is an argument claiming “my strategy is the better one, since this move will eventually lead to my victory”, and the player who scores more points becomes the winner of the debate. The debates (1, 4) provide examples of a setting where it is straightforward to determine the debate’s winner objectively, making the most convincing answer coincide with the most accurate one. In other debates, such as (2, 3), misleading arguments may allow a compelling lie to defeat the correct answer. The central question is thus not whether an AI debate tracks truth, but when it does so. To benefit from AI debate, we will therefore need to identify settings which advantage more accurate answers over less accurate ones and use this knowledge to describe debates in which the victorious answer is the correct one. While researchers have started exploring these questions empirically, the theoretical investigation of AI debate has, at the time of writing this text, mostly been neglected. We aim to fill this gap by providing a theoretical framework for reasoning about AI debate, analyzing its basic properties, and identifying further questions that need to be addressed. To ease the interpretation of our work, we describe each phenomenon in the simplest possible model. However, we also outline the extensions necessary to make the analysis more realistic. The paper is structured as follows. Section 2 introduces the model of AI debate and formalizes the problem of designing debates that promote true answers. In Section 3, we describe an instance of the general model where the debaters are only allowed to make statements about “elementary features” of the world. Section 4 then investigates which of these “feature debates” are truth-promoting. Importantly, all these results can be viewed as abstractions of behaviour that we expect to encounter in more realistic debates. Section 5 continues in this direction by analyzing two important subclasses of general debates (those with “independent evidence” and those where the judge’s information bandwidth is limited) while using the specific example of feature debate for illustration. Section 6 lists the most important limitations of the feature debate toy model and gives suggestions for future work. We conclude by reviewing the most relevant pieces of existing literature (Section 7). The full proofs are presented in Appendix A. ## 2 General Model of Debate We begin by introducing a general framework for modelling debate as a zero-sum game played between two AI systems. ### 2.1 Formal Definition of Debate As we mentioned earlier, an AI debate starts with a human asking a question about the world and eliciting answers from two AI debaters, after which the debaters each argue that their answer is the better one. Taking this dialogue into account, the human (technically, the emphasis on human judges is for illustration only; we hope to eventually be able to automate many debates) then decides which answer seems more promising and accordingly divides some total reward between the debaters. In this section, we formalize this setting and illustrate its different parts in several examples.
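To make the upcoming definition concrete, the following minimal Python sketch (our own illustration, not part of the original paper) models the debate-environment tuple that Definition 1 below formalizes: worlds with a prior, questions, answers, a ground-truth deviation map, and experiments available to the judge. All names and the toy instantiation are illustrative assumptions.

```python
# Illustrative sketch of a debate environment; not from the paper itself.
from dataclasses import dataclass
from typing import Callable, Sequence, Set

@dataclass
class DebateEnvironment:
    worlds: Sequence[str]                              # hypothetical worlds
    prior: Sequence[float]                             # prior over worlds
    questions: Sequence[str]                           # text-string questions
    answers: Sequence[str]                             # possible answers
    deviation: Callable[[str, str, str], float]        # (world, question, answer) -> error from truth
    experiments: Sequence[Callable[[str], Set[str]]]   # each returns worlds consistent with the outcome

# Toy instantiation: two worlds, one question, answers scored by exact match.
env = DebateEnvironment(
    worlds=["w_good_restaurant", "w_bad_restaurant"],
    prior=[0.5, 0.5],
    questions=["Where would I prefer to eat dinner today?"],
    answers=["the restaurant", "stay home"],
    deviation=lambda world, question, answer:
        0.0 if (world == "w_good_restaurant") == (answer == "the restaurant") else 1.0,
    experiments=[lambda world: {world}],   # a perfectly informative (and expensive) experiment
)
```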
In Section 3, we then give a fully formal example of the whole setup. We start by defining the “environment”, over which the designers of the debate generally have no control. ###### Definition 1 (Debate environment). A debate environment is a tuple which consists of: • An arbitrary set of worlds and a prior distribution from which the current world is sampled. • A set of questions, where each question is a text string. • An arbitrary set of answers. • A mapping which measures the deviation of an answer from the truth about the world. • A set of experiments the judge can perform to learn which subset of worlds the current world belongs to. A useful informal interpretation of this definition is the “obvious” one: the set of worlds contains all hypothetical worlds, the questions are those we might ask, such as “Where would I prefer to eat dinner today?”, and the answers are the textual responses the debaters might produce. The truth mapping represents the ground truth about the question, while experiments constitute a cheaper – and possibly less reliable – way of obtaining information. In our example, the truth mapping could measure “how (dis)satisfied will I be if I actually go to the recommended restaurant?”, while an experiment
is the interaction strength. We consider the half-filling case for which the ratio of the number of fermions $N$ to the number of lattice sites $L$ is fixed at $N/L = 1/2$. The Hilbert space dimension is denoted by $\mathcal{N}$. The system is driven by a periodic square-pulse drive protocol described by \begin{eqnarray} \mathcal{J}(t) &=& -\mathcal{J}_0, \quad t \le T/2 \nonumber\\ &=&\mathcal{J}_0, \quad t > T/2 \label{sq} \end{eqnarray} where $T=2 \pi/\omega_D$ is the time period. In this study we shall restrict ourselves to the parameter regime $V_{\rm int} \ll {\mathcal J}_0$. This is done to ensure that the system remains in the ergodic phase in the quasi-static limit. In order to study the localization properties of the driven chain in the Hilbert space, we first need to evaluate the time evolution operator $U(T,0) = {\mathcal T}_t \exp[-i \int_0^T dt H(t)/\hbar]$. To this end, we define $H_{\pm} = H[{\mathcal J}=\pm {\mathcal J}_0]$; the eigenvalues and eigenvectors of $H_{\pm}$ are given by \begin{eqnarray} H_{\pm} |\xi_m^{\pm}\rangle &=& \epsilon_m^{\pm} |\xi_{m}^{\pm}\rangle. \label{eign1} \end{eqnarray} In terms of these quantities, and for the square-pulse drive protocol (Eq.\ \ref{sq}), $U(T,0)$ is given by \begin{eqnarray} U(T,0) &=& e^{- i H_+ T/(2 \hbar)} e^{- i H_- T/(2 \hbar)} \label{uni} \\ &=& \sum_{p,q} e^{-i (\epsilon_p^+ + \epsilon_q^-)T/(2 \hbar)} c_{p q}^{+-} |\xi_{p}^+\rangle \langle \xi_{q}^-| \nonumber \end{eqnarray} where the coefficients $c_{pq}^{+-}= \langle \xi_{p}^+|\xi_{q}^-\rangle$ denote the overlap between the two many-body eigenbases. In what follows we shall compute $\epsilon_{m}^{\pm}$ and $|\xi_{m}^{\pm}\rangle$ by exact diagonalization (ED). We also use ED to obtain the eigenvalues $\lambda_m$ and eigenvectors $|\psi_m\rangle$ of $U(T,0)$. The eigenspectrum of the Floquet Hamiltonian $H_F$ is found from the relation $U(T,0)= \exp[- i H_F T/\hbar]$. Then one can write \begin{eqnarray} U(T,0) &=& \sum_m \lambda_m |\psi_m\rangle \langle \psi_m|, \quad \lambda_{m} = e^{-i \epsilon^F_{m} T/\hbar} \label{feigen} \end{eqnarray} where $\epsilon_m^F$ are the quasienergies which satisfy $H_F |\psi_m\rangle = \epsilon_m^F |\psi_m\rangle$. A knowledge of $U(T,0)$ allows us to compute stroboscopic dynamics starting from an arbitrary initial state $|\psi_{\rm init}\rangle$. The state at time $t_n=nT$, where $n$ is an integer, is given by \begin{eqnarray} |\psi(nT)\rangle &=& U(nT,0) |\psi_{\rm init}\rangle = \sum_m \lambda_m^n c_m^{\rm init} |\psi_m\rangle \label{wavevol} \end{eqnarray} where $c_m^{\rm init} = \langle \psi_m|\psi_{\rm init}\rangle$. Thus the expectation value of any operator $O$ at stroboscopic times is given by \begin{eqnarray} \langle \psi(nT) |O |\psi(nT)\rangle &=& \sum_{p q} c_p^{ \ast \rm init} c_q^{\rm init} e^{-i n(\epsilon_q^F -\epsilon_p^F)T/\hbar} \nonumber\\ && \times \langle \psi_p | O |\psi_q\rangle \label{opexpec} \end{eqnarray} In the steady state, only the terms corresponding to $p=q$ in the sum (Eq.\ \ref{opexpec}) contribute, leading to \begin{eqnarray} \langle O \rangle_{\rm steady} &=& \sum_p |c_p^{\rm init}|^2 \langle \psi_p|O|\psi_p\rangle \label{opsteady} \end{eqnarray} We shall use these expressions for the study of Floquet dynamics in subsequent sections.
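The computation just described can be prototyped directly from these expressions. Below is a minimal numerical sketch (our own illustration, assuming the many-body matrices for $H_\pm$ have already been built in the Fock basis; it is not the authors' code) that constructs $U(T,0)$ for the square-pulse drive, extracts quasienergies from the eigenphases of $U$, and evolves a state stroboscopically.

```python
# Sketch of the square-pulse Floquet operator and quasienergies (hbar = 1).
import numpy as np
from scipy.linalg import expm, eig

def floquet_operator(H_plus, H_minus, T):
    """One-period evolution operator U(T,0) = exp(-i H_+ T/2) exp(-i H_- T/2)."""
    return expm(-1j * H_plus * T / 2) @ expm(-1j * H_minus * T / 2)

def quasienergies_and_states(U, T):
    """Diagonalize U; lambda_m = exp(-i eps_m^F T) gives the quasienergies."""
    lam, psi = eig(U)
    eps_F = -np.angle(lam) / T          # folded into (-pi/T, pi/T]
    return eps_F, psi

def stroboscopic_expectation(U, psi_init, O, n):
    """<psi(nT)| O |psi(nT)> obtained by applying U n times to psi_init."""
    psi = psi_init.copy()
    for _ in range(n):
        psi = U @ psi
    return np.vdot(psi, O @ psi).real

# Toy usage with random Hermitian 'Hamiltonians' just to exercise the functions.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); H_plus = (A + A.T) / 2
B = rng.normal(size=(6, 6)); H_minus = (B + B.T) / 2
U = floquet_operator(H_plus, H_minus, T=1.0)
eps_F, psi_m = quasienergies_and_states(U, T=1.0)
```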
\section{Phase diagram and the properties of Floquet eigenstates} \label{phase} In this section, we shall use the properties of the many-body Floquet eigenvalues and eigenvectors to study the phase diagram of the driven chain of length $L$ \cite{das,sarkar} in the presence of small interaction. First we shall present an exact numerical study for $L \le 18$, where we have used ED to obtain the exact Floquet eigenvalues and eigenvectors. \subsubsection{Inverse participation ratio and fractal dimension} In order to study the drive-induced transition from the ergodic to the MBL phase in the many-body Fock space basis, we calculate the inverse participation ratio (IPR) defined as: \begin{eqnarray} I_m &=& \sum_{n=1}^\mathcal{N} |c_{mn}|^4, \label{ipr1} \end{eqnarray} where $c_{mn} = \langle n |\psi_m\rangle $, $|\psi_m\rangle$ is a Floquet eigenstate, and $|n \rangle$ denotes Fock states in the number basis. The IPR $I_m \sim \mathcal{N}^{-1(0)}$ in $d=1$ for an ergodic (MBL) phase and thus acts as a measure of localization of a many-body eigenstate in the Fock space. This property follows from the fact that a generic many-body ergodic eigenstate of $H_F$ is expected to have finite overlap with a large number of Fock states; in contrast, in the MBL phase, it is almost diagonal in the Fock basis. Thus the behavior of $I_m$ in the Fock space mimics that of the inverse participation ratio of single-particle Floquet eigenfunctions in real space for the non-interacting driven AA Hamiltonian studied in Ref.\ \onlinecite{sarkar}. The analysis of $I_m$ leads to the phase diagram shown in the top left panel of Fig.\ \ref{fig1}, where $I_m$ is plotted as a function of the eigenvector index $m/{\mathcal N}$ and $\omega_D$. The plot shows that the driven AA model with interaction exhibits a transition from the ergodic to the MBL phase. For low drive frequencies $\hbar \omega_D/(\pi \mathcal{J}_0) < 0.4$, all Floquet eigenstates are ergodic with $I_m \sim (1/\mathcal{N})$. A transition from ergodic eigenstates to a phase with eigenstates having $ 0 < I_m < 1 $ occurs around $\hbar \omega_D/(\mathcal{J}_0 \pi) \sim 0.4$. These eigenstates (which have $0 < I_m < 1$) persist for a wide range of frequencies $0.4 \le \hbar \omega_D/(\pi {\mathcal J}_0) \le 1.5$. For $\hbar \omega_D/(\pi {\mathcal J}_0) \gg 1.5$, the Floquet eigenstates become completely localized ($I_m \simeq 1$), signifying the onset of the MBL phase. To study the nature of the states having $0 < I_m < 1 $, we compute \begin{eqnarray} I_m^{(q)} &=& \sum_{n=1}^{{\mathcal N}} |c_{mn}|^{2q} \label{iqdef} \end{eqnarray} where $I_m \equiv I_m^{(2)}$. It is well known that $I_m^{(q)} \sim {\mathcal N}^{-\tau_q}$, where the exponent $\tau_q$ is related to the fractal dimension $D_q$ by $D_q= \tau_q/(q-1)$. We note that for MBL states we expect $D_q=0$, whereas for ergodic states $D_q=1$. Intermediate, $q$-dependent values of $D_q$ signify multifractality, while $D_q$ is independent of $q$ for a fractal eigenstate.
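As an illustration of how $I_m^{(q)}$ and the exponents $\tau_q$, $D_q$ can be extracted in practice, here is a short sketch (ours, with toy data; the actual eigenvectors would come from the exact diagonalization described above).

```python
# Sketch: generalized IPR and fractal dimension from scaling across dimensions N.
import numpy as np

def generalized_ipr(eigenvector, q=2):
    """I^{(q)} = sum_n |c_n|^{2q} for one normalized eigenvector in the Fock basis."""
    p = np.abs(eigenvector) ** 2
    return np.sum(p ** q)

def fractal_dimension(iprs, dims, q=2):
    """Fit ln I^{(q)} = -tau_q ln N + const; return D_q = tau_q / (q - 1)."""
    slope, _ = np.polyfit(np.log(dims), np.log(iprs), 1)
    tau_q = -slope
    return tau_q / (q - 1)

# Toy usage: fully delocalized states (|c_n|^2 = 1/N) should give D_q close to 1.
dims = np.array([100, 200, 400, 800])
iprs = [generalized_ipr(np.ones(N) / np.sqrt(N), q=2) for N in dims]
print(fractal_dimension(iprs, dims, q=2))   # ~1.0
```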
\begin{figure} \rotatebox{0}{\includegraphics*[width= 0.48 \linewidth]{fig1.pdf}} \rotatebox{0}{\includegraphics*[width= 0.50 \linewidth]{fig2a.pdf}} \rotatebox{0}{\includegraphics*[width= 0.49 \linewidth]{fig2b.pdf}} \rotatebox{0}{\includegraphics*[width= 0.49 \linewidth]{fig3a.pdf}} \caption{Top Left Panel: Plot of $I_m$ as a function of the normalized many-body eigenfunction index $m/\mathcal{N}$ and $\omega_D/(\pi \mathcal{J}_0)$ showing the localized/delocalized nature of the Floquet eigenstates $|\psi_m\rangle$ for $L=14$. Top Right Panel: Plot of $\tau_2$ as a function of $m/\mathcal{N}$ (after sorting in increasing order of $I_m$) and $\omega_D/(\pi \mathcal{J}_0)$ showing the presence of delocalized states for $ \omega_D/(\pi \mathcal{J}_0)\le 0.4$, multifractal states for $0.4 \le \omega_D/(\pi \mathcal{J}_0)\le 1.5$, and fully localized states for $ \omega_D/(\pi \mathcal{J}_0) > 1.5$. The system sizes used for extracting $\tau_2$ are $L=10, \cdots, 18$ in steps of $2$. Bottom Left Panel: Plot of $\ln I_m$ vs $\ln L$ used for extracting $\tau_2$ for several representative frequencies for the state corresponding to $m/{\mathcal N}=0.5$. The behavior of perfectly delocalized (green dots, $\omega_D/(\pi \mathcal{J}_0)=0.025$) and localized (red dots, $\omega_D/(\pi \mathcal{J}_0) =3$) states can be distinguished from that of a multifractal state (blue dots, $\omega_D/(\pi \mathcal{J}_0)=0.5$). Bottom Right Panel: Plot of $D_q$ as a function of $\omega_D/(\pi \mathcal{J}_0)$ for $m/\mathcal{N}=0.5$. We have set $\mathcal{J}_0=1$, $V_0/\mathcal{J}_0=0.05$, $V_{\rm int}/\mathcal{J}_0=0.025$, and scaled all energies and frequencies in units of $\mathcal{J}_0$ (with $\hbar$ set to unity). See text for details.} \label{fig1} \end{figure} To analyze the nature of the Floquet eigenstates further, we first plot $\tau_2$ as a function of the eigenvector index $m/{\mathcal N}$ and $\omega_D$ in the top right panel of Fig.\ \ref{fig1}. From this plot, we find the presence of ergodic and MBL states for low ($\hbar \omega_D/(\pi \mathcal J_0) <0.4$) and high ($\hbar \omega_D/(\pi \mathcal J_0) > 1.5 $) drive frequencies respectively. In between, one finds states with $0 \le \tau_2 \le 1$, signifying their non-ergodic and non-MBL nature. We note that for this plot we sort the eigenstates in increasing order of $I_m$. Thus we find that the states which had $ 0 \le I_m \le 1 $ also have $0 \le \tau_2 \le 1$; these states are natural candidates for multifractal Floquet eigenstates. In what follows, we extract $\tau_2$ from the plot of $ \ln I_m $ versus $ \ln \mathcal{N} $ as shown in the bottom left panel of Fig.\ \ref{fig1} for $m/{\mathcal N}=0.5$. For $\hbar \omega_D/(\pi \mathcal{J}_0)=3$ the state is many-body localized and we have $\tau_2 \sim 0 $, as evident from the flat red line in the bottom left panel of Fig.\ \ref{fig1}. In contrast, at $\hbar \omega_D/(\pi \mathcal{J}_0)=0.025 $ we have $\tau_2=1$ (green line in the bottom left panel of Fig.\ \ref{fig1}), signifying the ergodicity of the state. In between, at $\hbar \omega_D/(\pi \mathcal{J}_0) =0.5$, $\tau_2=0.4$ (blue line in the bottom left panel of Fig.\ \ref{fig1}), indicating the non-ergodic and non-MBL nature of the state. The plot of the multifractal dimension $D_q$ is shown in the bottom right panel of Fig.\ \ref{fig1} for states corresponding to $m/{\mathcal N}=1/2$. For all points in these plots, $D_q$ is obtained from values of $\tau_q$ that are, in turn, extracted from the corresponding plots of $\ln I_m$ vs $\ln {\mathcal N}$.
From the plot, we find that for $ 0.35 \le \hbar \omega_D/(\pi \mathcal{J}_0) \le 1.5$, $0\le D_q \le 1$; this indicates the presence of multifractal states in the spectrum. Other states with different $m/{\mathcal N}$ also show similar features. The behavior of $D_q$ shown in the bottom right panel of Fig.\ \ref{fig1} indicates that the driven fermion
The very fact that you are here on our site tells me that you are already keen on choosing a broker who is going to provide you with the best tools for trading. Have a look at our broker reviews to find out about the best and worst binary options brokers out there. Familiarizing yourself with scams will help you not only to avoid the brokers we urge you to steer clear of, but also to recognize scams we have not discovered yet. On our list of recommended brokers, you will find legitimate companies to deal with that offer numerous types of trades, excellent features, a ton of assets, and the best customer service around. When you choose the right broker, you give yourself the best shot at profiting, and you protect your profits! With touch trading options, the trader is required to indicate the touch price as well as his or her preferred expiry period before placing the trade. Suppose the trader selects $1617.40 as the touch price and 4.00pm as the expiry period. After placing the trade, bad news regarding the value of the dollar breaks out. This will drive inflation fears and force oil and gold prices to rise. As such, the price of gold will hit the touch price. Although not all brokers in the industry offer the touch trading option, it is the second most popular binary options trading option. Nadex is a binary broker licensed by the CFTC (The US Commodities and Futures Trading Commission). The CFTC is known to enforce near-protectionist policies surrounding binary options, and it is therefore good news for traders that Nadex is the only company that has achieved its regulation. Traders do not need to worry about their money because Nadex holds client funds in segregated bank accounts at BMO Harris Bank and Fifth Third Bank, completely separate from the company’s working capital. The Nadex platform is also subject to random and routine checks to ensure it provides an open and transparent environment for account settlements, and also that buyers and sellers of options are matched in an impartial manner. Tools – Binary Options Robot offers you a number of tools that will help you make maximum profits and get better as a trader. This includes training materials, how-to guides, and other educational tools for binary options trading. Examples include video tutorials, trading charts, eBooks, manuals, and webinars. You also get a number of tools that you can use while actively trading and researching assets. This includes detailed asset information, price data, and easy-to-read charts. Finally, some of them have developed a sales team. You make a try with 500 euros and make a bunch of money. A sales guy calls you and motivates you to try with more; you do it and win again (the system is designed to do so and to give you confidence). Then you put in 10k$ in order to make at least 50k$ (the best deal ever) and, surprisingly (or not!), there are no more calls, nothing accessible, and no more contacts. You get scammed (well done)! The minimum required deposit for Banc de Binary is $250, which can be made via Bank Wire, Skrill (MoneyBookers) and Credit/Debit Card. The payments from Banc de Binary to the customer will be applied to the same source from which they originated (i.e., if you made your deposit via credit card, your withdrawal will be applied to the same credit card). Do you want to get the most out of trading binary options? Of course you do. That’s why you are going to read our quick tips and tricks for success!
If you have explored most of the other articles on our site, you are probably already familiar with a lot of this advice. But a quick review never hurts, and if you are a beginner, this is a great overview to get started with. These tips and tricks will help you to get the most out of your money, and hopefully win! It is at the expiry time that the broker determines whether you have won or lost the trade. This is done by comparing the price of the chosen asset at the time of expiry of the contract to the strike price of the asset. If you chose ‘Call’ and the price of the asset is higher than the strike price at the end of the contract period, you win the trade. The FinPari platform is extremely user friendly. I can honestly say this, as the SpotOption platform is my personal favourite. It has most of the features offered on standard SpotOption platforms and is very easy to navigate. However, I have to deduct a few points as a result of the false information about their regulations and the over-the-top marketing on the main page of the website. If the outcome of the yes/no proposition (in this case, that the share price of XYZ Company will be above $5 per share at the specified time) is satisfied and the customer is entitled to receive the promised return, the binary option is said to expire “in the money.” If, however, the outcome of the yes/no proposition is not satisfied, the binary option is said to expire “out of the money,” and the customer may lose the entire deposited sum. You have a number of options when it comes to finding out information, asking questions, and getting help with Option Robot. The first is to read their blog. It is updated regularly and contains useful information on everything from the features of the platform to winning trading strategies. If you need something more specific, however, the first place you should check is the FAQ section. You might find that other people have had similar questions, and they are answered comprehensively here.
These are two different alternatives, traded with two different psychologies, but both can make sense as investment tools. One is more TIME centric and the other is more PRICE centric. They both work in time/price but the focus you will find from one to the other is an interesting split. Spot forex traders might overlook time as a factor in their trading which is a very very big mistake. The successful binary trader has a more balanced view of time/price, which simply makes him a more well rounded trader. Binaries by their nature force one to exit a position within a given time frame win or lose which instills a greater focus on discipline and risk management.
In forex trading this lack of discipline is the #1 cause of failure for most traders, as they will simply hold losing positions for longer periods of time and close winning positions in shorter periods of time. In binary options that is not possible: when time expires, your trade ends, win or lose. Below are some examples of how this works. Binary options are an all-or-nothing option type where you risk a certain amount of capital, and you lose it or make a fixed return based on whether the price of the underlying asset is above or below (depending on which you pick) a specific price at a specific time. If you are right, you receive the prescribed payout. If you are wrong, the capital you wagered is lost. Binary options are deceptively simple to understand, making them
models are more Pareto-efficient in terms of the memory required compared to the dilated convolutional architectures of ConvTasNet \cite{luo2019convTasNet} and Two-Step TDCN \cite{tzinis2019two}, which require an increased network depth in order to increase their receptive field. Although SuDoRM-RF models do not perform downsampling in every feature extraction step as Demucs \cite{defossez2019demucs} does, we see that the proposed models require orders of magnitude less memory, especially during a backward update step, as the number of parameters in Demucs is significantly higher. Finally, SuDoRM-RF models have a smaller memory footprint because the encoder $\mathcal{E}$ performs a temporal downsampling by a factor of $\operatorname{div}\lp \KE, 2\rp=10$, compared to DPRNN \cite{luo2019dual}, which does not reduce the temporal resolution at all. \subsection{Ablation study on WSJ0-2mix} We perform a small ablation study in order to show how different parameter choices in SuDoRM-RF models affect the separation performance. In order to be directly comparable with the numbers reported by several other studies \cite{luo2019convTasNet, luo2019dual, zeghidour2020wavesplit, liu2019DeepCASA}, we train our models for $200$ epochs and test them using the given data splits from the WSJ0-2mix dataset \cite{hershey2016deepclustering}. The results are shown in Table \ref{tab:ablation_study}. \begin{table}[!t] \centering \begin{tabular}{c|c|c|c|c|c|c|c} \toprule $\KE$ & $\Cout$ & $B$ & $Q$ & Norm & Mask Act. & Dec. & SI-SDRi \\ \hlinewd{1pt} 21 & 128 & 16 & 4 & LN & Softmax & 2 & 16.0 \\ \hline 17 & 128 & 16 & 4 & LN & ReLU & 1 & 15.9 \\ \hline 17 & 128 & 16 & 4 & GLN & ReLU & 1 & 16.8 \\ \hline 21 & 256 & 20 & 4 & GLN & ReLU & 1 & 17.7 \\ \hline 41 & 256 & 32 & 4 & GLN & ReLU & 1 & 17.1 \\ \hline 41 & 256 & 20 & 4 & GLN & ReLU & 1 & 16.8 \\ \hline 21 & 512 & 18 & 7 & GLN & ReLU & 1 & 18.0 \\ \hline 21 & 512 & 20 & 2 & GLN & ReLU & 1 & 17.4 \\ \hline 21 & 512 & 34 & 4 & GLN & ReLU & 1 & 18.9 \\ \bottomrule \end{tabular} \caption{SI-SDRi separation performance on WSJ0-2mix for various parameter configurations of SuDoRM-RF models. Mask Act. corresponds to the activation function before the mask estimation and Dec. specifies the number of decoders we are using before reconstructing the time-domain signals. GLN corresponds to the global layer normalization as described in \cite{luo2019convTasNet}. All the other parameters have the same values as described in Section \ref{sec:exp_setup:our_model_config}.} \label{tab:ablation_study} \vspace{-10pt} \end{table} \section{Conclusions} \label{sec:conclusions} In this study, we have introduced the SuDoRM-RF network, a novel architecture for efficient universal sound source separation. The proposed model is capable of extracting multi-resolution temporal features through successive depth-wise convolutional downsampling of intermediate representations and aggregates them using a non-parametric interpolation scheme. In this way, SuDoRM-RF models are able to significantly reduce the number of layers required to effectively capture long-term temporal dependencies. We show that these models can perform similarly to or even better than recent state-of-the-art models while requiring significantly less computational resources in terms of FLOPs, memory and time. In the future, we aim to use SuDoRM-RF models for real-time, low-cost source separation.
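As a rough illustration of the multi-resolution idea summarized above (successive depthwise strided convolutions followed by parameter-free interpolation back to the input resolution), consider the following minimal PyTorch sketch. It is not the SuDoRM-RF implementation; all layer choices and sizes here are placeholders chosen for brevity.

```python
# Illustrative multi-resolution block: depthwise strided downsampling, then
# non-parametric (interpolation-based) aggregation back to the input length.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResBlock(nn.Module):
    def __init__(self, channels=128, depth=4):
        super().__init__()
        # Depthwise (groups=channels) strided convs halve the time axis at each level.
        self.down = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=5, stride=2,
                      padding=2, groups=channels)
            for _ in range(depth)
        )

    def forward(self, x):                      # x: (batch, channels, time)
        feats, h = [], x
        for conv in self.down:
            h = torch.relu(conv(h))
            feats.append(h)
        # Upsample every resolution back to the input length and sum them.
        return sum(F.interpolate(f, size=x.shape[-1], mode="nearest") for f in feats)

y = MultiResBlock()(torch.randn(2, 128, 160))   # -> shape (2, 128, 160)
```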
\bibliographystyle{IEEEbib} \section{Introduction} \label{sec:intro} These guidelines include complete descriptions of the fonts, spacing, and related information for producing your proceedings manuscripts. Please follow them. Select one of the four copyright notices below and put in the text field below the first column (only required for the camera-ready paper submission). \textbf{Please pay attention to that most authors should use Copyright notice 4.} \noindent{\bf Copyright notice 1:}\\ For papers in which all authors are employed by the US government, the copyright notice is:\\ {U.S.\ Government work not protected by U.S.\ copyright} \\[2ex] \noindent{\bf Copyright notice 2:}\\ For papers in which all authors are employed by a Crown government (UK, Canada, and Australia), the copyright notice is:\\ {978-1-7281-6662-9/20/\$31.00 {\copyright}2020 Crown} \\[2ex] \noindent{\bf Copyright notice 3:}\\ For papers in which all authors are employed by the European Union, the copyright notice is:\\ {978-1-7281-6662-9/20/\$31.00 {\copyright}2020 European Union} \\[2ex] \noindent{\bf Copyright notice 4:}\\ For all other papers the copyright notice is:\\ {978-1-7281-6662-9/20/\$31.00 {\copyright}2020 IEEE} \section{Formatting your paper} \label{sec:format} All printed material, including text, illustrations, and charts, must be kept within a print area of 7 inches (178 mm) wide by 9 inches (229 mm) high. Do not write or print anything outside the print area. The top margin must be 1 inch (25 mm), except for the title page, and the left margin must be 0.75 inch (19 mm). All {\it text} must be in a two-column format. Columns are to be 3.39 inches (86 mm) wide, with a 0.24 inch (6 mm) space between them. Text must be fully justified. \section{PAGE TITLE SECTION} \label{sec:pagestyle} The paper title (on the first page) should begin 1.38 inches (35 mm) from the top edge of the page, centered, completely capitalized, and in Times 14-point, boldface type. The authors' name(s) and affiliation(s) appear below the title in capital and lower case letters. Note that double blind review is conducted and Authors Names and Affiliation only should appear in final paper if accepted. Papers with multiple authors and affiliations may require two or more lines for this information. \section{TYPE-STYLE AND FONTS} \label{sec:typestyle} To achieve the best rendering both in the proceedings and from the CD-ROM, we strongly encourage you to use Times-Roman font. In addition, this will give the proceedings a more uniform look. Use a font that is no smaller than nine point type throughout the paper, including figure captions. In nine point type font, capital letters are 2 mm high. {\bf If you use the smallest point size, there should be no more than 3.2 lines/cm (8 lines/inch) vertically.} This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make the paper much more readable. Larger type sizes require correspondingly larger vertical spacing. Please do not double-space your paper. True-Type 1 fonts are preferred. The first paragraph in each section should not be indented, but all the following paragraphs within the section should be indented as these paragraphs demonstrate. \section{MAJOR HEADINGS} \label{sec:majhead} Major headings, for example, "1. Introduction", should appear in all capital letters, bold face if possible, centered in the column, with one blank line before, and one blank line after. Use a period (".") after the heading number, not a colon. 
\subsection{Subheadings} \label{ssec:subhead} Subheadings should appear in lower case (initial word capitalized) in boldface. They should start at the left margin on a separate line. \subsubsection{Sub-subheadings} \label{sssec:subsubhead} Sub-subheadings, as in this paragraph, are discouraged. However, if you must use them, they should appear in lower case (initial word capitalized) and start at the left margin on a separate line, with paragraph text beginning on the following line. They should be in italics. \section{PRINTING YOUR PAPER} \label{sec:print} Print your properly formatted text on high-quality, 8.5 x 11-inch white printer paper. A4 paper is also acceptable, but please leave the extra 0.5 inch (12 mm) empty at the BOTTOM of the page and follow the top and left margins as specified. If the last page of your paper is only partially filled, arrange the columns so that they are evenly balanced if possible, rather than having one long column. In LaTeX, to start a new column (but not a new page) and help balance the last-page column lengths, you can use the command ``$\backslash$pagebreak'' as demonstrated on this page (see the LaTeX source below). \section{PAGE NUMBERING} \label{sec:page} Please do {\bf not} paginate your paper. Page numbers, session numbers, and conference identification will be inserted when the paper is included in the proceedings. \section{ILLUSTRATIONS, GRAPHS, AND PHOTOGRAPHS} \label{sec:illust} Illustrations must appear within the designated margins. They may span the two columns. If possible, position illustrations at the top of columns, rather than in the middle or at the bottom. Caption and number every illustration. All halftone illustrations must be clear black and white prints. Colors may be used, but they should be selected so as to be readable when printed on a black-only printer. Since there are many ways, often incompatible, of including images (e.g., with experimental results) in a LaTeX document, below is an example of how to do this \cite{Lamp86}. \begin{figure}[htb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{image1}} \centerline{(a) Result 1}\medskip \end{minipage} \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{image3}} \centerline{(b) Results 3}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{image4}} \centerline{(c) Result 4}\medskip \end{minipage} \caption{Example of placing a figure with experimental results.} \label{fig:res} \end{figure} \vfill \pagebreak \section{FOOTNOTES} \label{sec:foot} Use footnotes sparingly (or not at all!) and place them at the bottom of the column on the page on which
Values for runs R9 and R10 (eight values per run; the four values 1.91, 1.75, 1.90, 1.91 preceding R9 belong to the previous run):

R9:  1.75  1.90  1.95  1.90  1.95  1.73  1.89  1.92
R10: 1.70  1.95  2.00  1.80  1.97  1.76  1.89  1.98

Titratable acidity (%) and growth on M17 and de Man, Rogosa, and Sharpe (MRS) medium (log cfu/mL), four strains per run:

Run   Titratable acidity (%)        Growth (log cfu/mL)
R1    0.58  0.49  0.41  0.41        5.238  4.086  5.538  5.427
R2    0.71  0.51  0.52  0.55        5.359  4.255  5.821  5.412
R3    0.84  0.56  0.54  0.57        5.467  4.500  6.023  5.382
R4    0.61  0.58  0.53  0.53        5.289  4.488  5.700

Different uppercase superscript letters show differences between the strains within the same run (P<0.05); different lowercase superscript letters show differences between runs within the same strain (P<0.05).
model, the ordering wave vectors in UNi$_2$Al$_3$ and CeCuSn originate from the development of the same type of CDW on the hexagonal lattice. \begin{table} \caption{\label{tablececusn} Calculation of magnetic ordering wave vector satellites in the hexagonal system CeCuSn based on the Ch(n) model developed for UNi$_{2}$Al$_{3}$. The value in bold is the experimental wave vector observed. Note the extra division by two in the average, indicating the formation of a SDW.} \begin{ruledtabular} \begin{tabular}{c c c} Atoms & Average & 1$\pm {\tau}$\\ \hline\\ Ch(2)+Ch(3) &$ \frac{2/5+1/5}{4}$ & 0.15000\\ \\ Ch(2)+Ch(4) & $\frac{2/5+1/11}{4}$ & 0.12273\\ \\ Ch(2)+Ch(3)+Ch(4) & $\frac{2/5+1/5+1/11}{6} $& \textbf{0.11515}\\ \end{tabular} \end{ruledtabular} \end{table} \subsection{\label{sect:MnSi}MnSi} MnSi has attracted a lot of attention recently because it can be considered a topological insulator\cite{neubauer2009topological} and exhibits a Skyrmion lattice phase upon application of a magnetic field\cite{day2009exotic}. Its crystal structure\cite{nakanishi1980origin} belongs to the tetrahedral $P2_13$ \#198 space group with both Mn and Si ions sitting at the (4a) symmetry position $\left<u,u,u\right>$, and the three coplanar sites perpendicular to the $\left<1,1,1\right>$ direction $\left<-u+1/2,-u,u+1/2\right>$, $\left<-u,u+1/2,-u+1/2\right>$, $\left<u+1/2,-u+1/2,-u\right>$, with $u_{Mn}=0.138$ and $u_{Si}=0.845$. Below $T_N$=29.5K, the onset of a helical SDW is observed\cite{muhlbauer2009skyrmion}$^,$\cite{grigoriev2006magnetic} with a wavelength of 190\AA \; along the $\left<1,1,1\right>$ direction, which can be viewed, in reduced reciprocal lattice units, as $2\pi/190 = h \; 2\pi/a$ with a=4.56\AA \; and h=0.024. The crystal structure has no inversion symmetry, but the IC satellites at $G=0$ are symmetric. The projection of the (4a) lattice sites onto a unit vector along the $\left<1,1,1\right>$ direction leads to two periodicities: $1/(\sqrt{3}u)$ and $\sqrt{3}/(1-u)$, the latter coming from the three coplanar sites. For the Mn sites, the resulting wave vectors are 4.18370 and 2.00934 respectively. For the Si sites, the results are 0.683255 and 11.17452 (12$\pm$0.82548). By inspection, only the Mn coplanar sites, with 2.00934, have a small enough satellite wave vector to build the helical magnetic structure. If the experimental magnetic satellites are interpreted as the third harmonic of the fractional part, then 3$\times$0.00934=0.028, which is close to the published 0.024, an error of 16\%. The existence of the third harmonic indicates a squarer distribution for the helical structure in MnSi, from a Fourier series point of view. The first harmonic at the satellite position $\tau$=0.00934 near each reciprocal lattice wave vector could be observed experimentally with a higher resolution, for example with a resonant synchrotron X-ray probe. As an alternative analysis, a weighted average of the satellites, $0.00934\,x + (1-x)\,0.18370 = 0.024$, leads to $x=92\%$ mixing for the Mn sites. Nesting of the wave vectors does not lead to the observed value. Considering the other chiral pair for the Mn sites along the $\left<-1,-1,-1\right>$ direction with u $\rightarrow$ 1-u, $1/(\sqrt{3}(1-u))$ and $\sqrt{3}/u$ lead to the negative wave vectors -0.66978 and -12.5511 respectively. Taking the sum of the four satellite wave vectors for the Mn sites translated to 0, without averaging, leads to 0.1837 + 0.00934 - 0.66978 - 0.5511 = -1.02784, which gives a satellite near 0.028, also close to the 0.024 seen experimentally.
The satellite near 0.028 obtained from the sum over the two chiral pairs does not require a center of mass transformation; it is a residual momentum from two chiral unit cells when inversion is considered in the crystal structure.
\subsection{\label{sect:chrom}Chromium}
Cr has long been known to exhibit IC ordering and has been intensely studied by neutron\cite{fincher1979magnetic}$^,$\cite{koehler1966antiferromagnetism} and X-ray scattering experiments\cite{gibbs1988high}$^,$\cite{hill1995x}. In reciprocal space, a low-temperature SDW is observed at Q=(0.9515,0,0) with $\tau$=0.0485 near the AF (1,0,0) zone center; a CDW is observed at 2Q=(1.903,0,0), or (2-2$\tau$,0,0), as well as another CDW harmonic at 4Q, which can be relabeled as (4-4$\tau$,0,0) and is also observed at the 4Q-2=(2$\pm$4$\tau$,0,0) positions. A 3Q wave vector harmonic has also been observed\cite{Pynn3Qcr}. In the following analysis, simple crystallographic considerations can help gain insight into the experimental IC wave vectors observed. Cr has a BCC crystal structure, Im$\bar{3}$m $\#$229, with atoms at $\left<0,0,0\right>$ and $\left<1/2, 1/2, 1/2\right>$ above T=312K, and undergoes a structural phase transition below T=122K\cite{janner1980symmetry} to a tetragonal phase, I4/mmm \#139, also with atoms at the (2a) symmetry positions $\left<0,0,0\right>$ and $\left<1/2, 1/2, 1/2\right>$, where the longitudinal incommensurate SDW develops. The tetragonal distortion does not change the reduced unit cell. Some recent theory describes Cr as a spin-split metal\cite{PhysRevB.41.6828} in which spin-up and spin-down bands move in opposite directions. The phenomenological model presented here also has broken inversion symmetry.
\begin{table}
\caption{\label{tablecr} Calculation of ordering wave vectors from the Cr(n) Bragg plane expansion along the $\vec{a}$-direction in Cr metal (see Fig. \ref{figurecrbp}). The values in bold are close to the experimental wave vector satellites near (1,0,0).}
\begin{ruledtabular}
\begin{tabular}{c c c}
Atoms & Average & Avg. Q\\
\hline\hline\\
Cr(1)+Cr(2) & $\frac{2+1}{2}$ & 1.5\\
\\
Cr(1)+Cr(2)+Cr(3) & $\frac{2+1+2/3}{3}$ & 1.22222\\
\\
(1)+(2)+(3)+(4) &$ \frac{2+1+2/3+1/2}{4}$ & \textbf{1.04166}\\
\\
(1)+(2)+(3)+(4)+(5) & $\frac{2+1+2/3+1/2+2/5}{5} $& \textbf{0.91333}\\
\\
(1)+(2)+(3)+(4)+(5)+(6) & $\frac{2+1+2/3+1/2+2/5+1/3}{6} $& 0.81666\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begingroup
\squeezetable
\begin{table}
\caption{\label{tablecrrelabel} Relabeling of ordering wave vectors from the Cr(n) cumulative Bragg plane expansion along the $\vec{a}$-direction in Cr metal (see Fig. \ref{figurecrbp}). The values in bold indicate averaging by groups of three, six and nine Bragg planes that correspond to values of $q_0$ seen experimentally.
Averaging above twelve Bragg planes slowly converges to zero, as shown in Fig. \ref{figurecrgroupof3}.}
\begin{ruledtabular}
\begin{tabular}{c c c c}
Atoms & Average & Harmonic & q$_0$\\
\hline\\
Cr(1)+Cr(2) &$ \frac{2+1}{2}$ & 1+12q$_0$ & {0.0417}\\
\\
Cr(1)+Cr(2)+Cr(3) &$ \frac{2+1+2/3}{3}$ & 1+6q$_0$ & \textbf{0.0370}\\
\\
(1)+(2)+(3)+(4) &$ \frac{2+1+2/3+1/2}{4}$ & 1+q$_0$ & {0.0417}\\
\\
(1)+...+(5) & $\frac{...+2/5}{5} $& 1-2q$_0$ & {0.0433}\\
\\
(1)+...+(6) & $\frac{...+1/3}{6} $& 1-4q$_0$ & \textbf{0.0458}\\
\\
(1)+...+(7) & $\frac{...+2/7}{7} $& 1-6q$_0$ & 0.0432\\
\\
(1)+...+(8) & $\frac{...+1/4}{8} $& 1-8q$_0$ & 0.0401\\
\\
(1)+...+(9) & $\frac{...+2/9}{9} $& 1-10q$_0$ & \textbf{0.0371}\\
\\
(1)+...+(10) & $\frac{...+1/5}{10} $& 1-12q$_0$ & 0.03451\\
\\
(1)+...+(11) & $\frac{...+2/11}{11} $& 1-14q$_0$ & 0.0322\\
\end{tabular}
\end{ruledtabular}
\end{table}
\endgroup
The Bragg plane expansion in Cr appears in Fig. \ref{figurecrbp}. Being spaced by 1/2, the Bragg planes at Cr(n) give a 2/n wave vector. Table \ref{tablecr} shows the cumulative averaging from each Bragg plane addition. There are two wave vectors with experimental relevance. The first, 1+$\tau_0$=1.04167, gives a satellite at 1-$\tau_0$=0.95833, which is 0.7\% from the observed Q-satellite value of 0.9515\cite{fincher1979magnetic}. The second remarkable value in the table, 0.91333=1-2$\tau_0$ with $\tau_0$=0.04333, is not seen experimentally at the (1,0,0) wave vector, but the corresponding calculated value 2-2$\tau_0$=1.91333 is nevertheless 0.5\% from the 2-2$\tau$=1.9030 wave vector that is observed.
\begin{figure}
\includegraphics[width=7.5cm,trim={0 0cm 0 0cm},clip]{figurecrbp.pdf}
\caption{\label{figurecrbp}Bragg plane expansion, labeled Cr(n), for the calculation of the CDW and SDW in Cr. The third dimension of the crystal structure is projected onto the plane. All the planes are spaced by 1/2 lattice spacing from the reference atom at the left of the figure.}
\end{figure}
\begin{figure}
\includegraphics[width=7.5cm,trim={0 0cm 0 0cm},clip]{figurecrgroupof3.pdf}
\caption{\label{figurecrgroupof3}Graph of q$_0$ from Table \ref{tablecrrelabel} for the partial Bragg plane expansion in chromium. Note the grouping by 3 of the first 6 sums near the experimental q$_0$=$\pm$0.04. The lowest values in the groupings (containing 3 and 6 Bragg planes, respectively) are reminiscent of the experimental satellite wave vectors for high and low temperature.}
\end{figure}
It is interesting to expand this analysis further along the $\vec{a}$-axis, as listed in Table \ref{tablecrrelabel}. The SDW satellites observed at 1$\pm$q$_0$ appear in the table, as well as other, unobserved even multiples of q$_0$. The relative stability of q$_0$ in the table is remarkable, and the table contains satellites near the experimental values for the CDW and SDW observed in Cr. The transition in Cr from $\delta=0.037$ to $\delta=0.048$ observed experimentally\cite{hill1995x} on decreasing temperature would come from going from a three-Bragg-plane averaging at high T (a doubling of the BCC/tetragonal unit cell when including the reference atom) to a six-plane averaging at low T (a tripling and a half of the BCC/tetragonal unit cell). There is a periodicity by groups of three Bragg planes in the table. It is the doubling of these SDW wave vectors in the table that would generate the CDW harmonics at 2$\pm(2n)\tau$ seen in the X-ray experiment\cite{hill1995x}.
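Writing $\bar{Q}(N)$ for the cumulative average over the first $N$ Bragg planes, the construction underlying Tables \ref{tablecr} and \ref{tablecrrelabel} is simply
\[
\bar{Q}(N)=\frac{1}{N}\sum_{n=1}^{N}\frac{2}{n},
\]
which gives, for example, $\bar{Q}(4)\simeq 1.0417$ and $\bar{Q}(5)\simeq 0.9133$, the two bold entries of Table \ref{tablecr}.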
From the table, there is a direct connection between the doubling of the 1$\pm$q$_0$ and the 2$\pm$2q$_0$ satellites (labeled 2Q and 4-2Q in the X-ray paper). For the other satellites in the table, the doubling of 1$\pm$(2n)q$_0$ leads to the 2$\pm$(4n)q$_0$ satellites.
\subsection{\label{conclusion}Conclusion}
The phenomenological method for calculating the IC unit cells applies favorably to itinerant electron systems (itinerant f-electrons and
### Does my simulation code crash twedit++ when a parameter scan is added for the first time?

3 months ago:
I have a problem with twedit++ crashing when I try to add a parameter scan (see the image; the question mark indicates the crash event). What should I do, and how can I solve this? The demo cc3d codes seem to work, play, and add all the parameter scans in the world. The only solution I can think of at the moment is to redo my code and/or reinstall CompuCell3D and Twedit++. Yours, Pauli
Community: CompuCell3D

3 months ago:
Hi, I changed the folder of my simulation to compucell...\Book_ChapterDemos etc., and I got the scan working! Splendid! I wonder how I can give rights to other folders for cc3d sim runs? B.r., Pauli

3 months ago:
What platform are you running on? Windows, Mac, Linux, ...? It sounds like the parameter scan task is trying to write to a directory you didn't have access to. I've also noticed crashes when you try to save the file after defining the parameter scans. In my case, though, the files are written correctly and can be re-loaded into twedit, or run from the command line.

Windows. I use the same directory where my compucell3d is. I have still experienced problems in starting to run the simulation code in my new directory inside this folder. The program does not crash, but it does not play the simulation code. I have attached the run file.

processCommandLineOptions opts= [] args= [] FILE LIST= [] pixmap= <PyQt5.QtGui.QPixmap object at 0x0000000005586278> INIT SYNC self.modifiedPluginsDataStringList= [] pluginModules= ['PluginCCDCPPHelper', 'PluginCCDMLHelper', 'PluginCCDProject', 'PluginCCDPythonHelper', 'PluginCompuCell3D'] plugin name= PluginCCDCPPHelper plugin name= PluginCCDMLHelper ['C:\\CompuCell3D-64bit\\lib\\PythonDeps', 'C:\\CompuCell3D-64bit\\Twedit++5', 'C:\\CompuCell3D-64bit\\Twedit++5\\Plugins', 'C:\\CompuCell3D-64bit\\Python27\\python27.zip', 'C:\\CompuCell3D-64bit\\Python27\\DLLs', 'C:\\CompuCell3D-64bit\\Python27\\lib', 'C:\\CompuCell3D-64bit\\Python27\\lib\\plat-win', 'C:\\CompuCell3D-64bit\\Python27\\lib\\lib-tk', 'C:\\CompuCell3D-64bit\\Python27', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages\\win32', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages\\win32\\lib', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages\\Pythonwin', 'C:\\CompuCell3D-64bit\\lib\\python', 'C:\\CompuCell3D-64bit\\pythonSetupScripts'] plugin name= PluginCCDProject plugin name= PluginCCDPythonHelper plugin name= PluginCompuCell3D ************PLUGIN NAME= PluginCompuCell3D name=PluginCompuCell3D fileName=C:\CompuCell3D-64bit\Twedit++5\Plugins\PluginCompuCell3D.py author=Maciej Swat autoactivate=True deactivateable=False version=0.9.0 className=CC3DApp packageName=__core__ longDescription=This plugin provides functionality to link Twedit with CompuCell3D version= 0.9.0 className= CC3DApp pluginClass= <class 'PluginCompuCell3D.CC3DApp'> PORT= -1 __initActions CC3D CONSTRUCTOR WILL TRY TO ACTIVATE <PluginCompuCell3D.CC3DApp object at 0x0000000005921678> ACTIVATED ************PLUGIN NAME= PluginCCDProject name=PluginCCDProject fileName=C:\CompuCell3D-64bit\Twedit++5\Plugins\PluginCCDProject.py author=Maciej Swat autoactivate=True deactivateable=False version=0.9.0 className=CC3DProject packageName=__core__ shortDescription=Plugin to manage CC3D Projects longDescription=This plugin provides functionality that allows users to manage *.cc3d projects version= 0.9.0 className= CC3DProject
pluginClass= <class 'PluginCCDProject.CC3DProject'> libpng warning: iCCP: known incorrect sRGB profile libpng warning: iCCP: known incorrect sRGB profile libpng warning: iCCP: known incorrect sRGB profile WILL TRY TO ACTIVATE <PluginCCDProject.CC3DProject object at 0x0000000005921D38> ACTIVATED ************PLUGIN NAME= PluginCCDPythonHelper name=PluginCCDPythonHelper fileName=C:\CompuCell3D-64bit\Twedit++5\Plugins\PluginCCDPythonHelper.py author=Maciej Swat autoactivate=True deactivateable=True version=0.9.0 className=CC3DPythonHelper packageName=__core__ shortDescription=Plugin which assists with CC3D Python scripting longDescription=This plugin provides provides users with CC3D Python code snippets - making Python scripting in CC3D more convenient. version= 0.9.0 className= CC3DPythonHelper pluginClass= <class 'PluginCCDPythonHelper.CC3DPythonHelper'> WILL TRY TO ACTIVATE <PluginCCDPythonHelper.CC3DPythonHelper object at 0x0000000005921438> ************PLUGIN NAME= PluginCCDCPPHelper name=PluginCCDCPPHelper fileName=C:\CompuCell3D-64bit\Twedit++5\Plugins\PluginCCDCPPHelper.py author=Maciej Swat autoactivate=True deactivateable=True version=0.9.0 className=CC3DCPPHelper packageName=__core__ shortDescription=Plugin assists with CC3D C++ module development scripting longDescription=This plugin provides provides users with CC3D C++ code generator and code snippets - making CC3D C++ plugin and steppable development more convenient. version= 0.9.0 className= CC3DCPPHelper pluginClass= <class 'PluginCCDCPPHelper.CC3DCPPHelper'> WILL TRY TO ACTIVATE <PluginCCDCPPHelper.CC3DCPPHelper object at 0x000000000588B558> ACTIVATED ************PLUGIN NAME= PluginCCDMLHelper name=PluginCCDMLHelper fileName=C:\CompuCell3D-64bit\Twedit++5\Plugins\PluginCCDMLHelper.py author=Maciej Swat autoactivate=True deactivateable=True version=0.9.0 className=CC3DMLHelper packageName=__core__ shortDescription=Plugin which assists with CC3D Python scripting longDescription=This plugin provides provides users with CC3D Python code snippets - making Python scripting in CC3D more convenient. 
version= 0.9.0 className= CC3DMLHelper pluginClass= <class 'PluginCCDMLHelper.CC3DMLHelper'> WILL TRY TO ACTIVATE <PluginCCDMLHelper.CC3DMLHelper object at 0x000000000588B798> ACTIVATED _settingName= RecentProjects recentItems= [u'C:\\CompuCell3D-64bit\\pauli\\aggregation\\PKD_paper_codes\\isoCyst\\isoCyst.cc3d', u'C:\\CompuCell3D-64bit\\Demos\\BookChapterDemos_ComputationalMethodsInCellBiology\\VascularTumor\\VascularTumor.cc3d', u'C:\\CompuCell3D-64bit\\pauli\\aggregation\\PKD_paper_codes\\tubule\\PKD.cc3d', u'C:\\CompuCell3D-64bit\\pauli\\movements\\m19\\okok\\peitto.cc3d', u'C:\\CompuCell3D-64bit\\pauli\\aggregation\\PTAtoRV-STANDARD\\aggregration\\aggregration.cc3d', u'C:\\CompuCell3D-64bit\\Demos\\BookChapterDemos_ComputationalMethodsInCellBiology\\okok\\peitto.cc3d', u'C:\\CompuCell3D-64bit\\Demos\\BookChapterDemos_ComputationalMethodsInCellBiology\\cellsorting2\\cellsorting.cc3d', u'C:\\CompuCell3D-64bit\\Demos\\BookChapterDemos_ComputationalMethodsInCellBiology\\cellsorting\\cellsorting.cc3d'] GOT SUSTOM SETTINGS : C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\_settings.sqlite projParent= peitto.cc3d __openXMLPythonInEditor pdh.cc3dSimulationData.xmlScript= C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\peitto.xml __openXMLPythonInEditor pdh.cc3dSimulationData.xmlScriptResource.path= C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\peitto.xml opening file C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\peitto.xml opening file C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\peitto.py opening file C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\peittoSteppables.py projParent= peitto.cc3d RESOURCENAME projParent= peitto.cc3d CurrentItem= peitto.cc3d parent= None getFullPath= projParent= peitto.cc3d projectFullPath= C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\peitto.cc3d TRY TO FIGURE OUT PORT CHECKING PORT= 47406 CHECKING PORT= 47407 established empty port= 47407 self.cc3dPath= C:\CompuCell3D-64bit\compucell3d.bat Executing Popen command with following arguments= ['C:\\CompuCell3D-64bit\\compucell3d.bat', '--port=47407', '-i', u'C:\\CompuCell3D-64bit\\Demos\\BookChapterDemos_ComputationalMethodsInCellBiology\\okok\\peitto.cc3d'] starting CC3D ['C:\\CompuCell3D-64bit\\player5', 'C:\\CompuCell3D-64bit\\lib\\python', 'C:\\CompuCell3D-64bit\\Python27\\python27.zip', 'C:\\CompuCell3D-64bit\\Python27\\DLLs', 'C:\\CompuCell3D-64bit\\Python27\\lib', 'C:\\CompuCell3D-64bit\\Python27\\lib\\plat-win', 'C:\\CompuCell3D-64bit\\Python27\\lib\\lib-tk', 'C:\\CompuCell3D-64bit\\Python27', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages\\win32', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages\\win32\\lib', 'C:\\CompuCell3D-64bit\\Python27\\lib\\site-packages\\Pythonwin'] compucell3d.pyw: type(argv)= <type 'list'> compucell3d.pyw: argv= ['--port=47407', '-i', 'C:\\CompuCell3D-64bit\\Demos\\BookChapterDemos_ComputationalMethodsInCellBiology\\okok\\peitto.cc3d', '--currentDir=C:\\CompuCell3D-64bit'] CONSOLE PARENT= <UI.UserInterface.DockWidget object at 0x000000000AF34678> self.baseFont.fixedPitch()= True TWEDIT socket.socketDescriptor()= <sip.voidptr object at 0x000000000AF2E850> TRY TO FIGURE OUT PORT CHECKING PORT= 47406 
CHECKING PORT= 47407 CHECKING PORT= 47408 established empty port= 47408 CALL establishConnection self.__fileName= C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\peitto.cc3d GOT SUSTOM SETTINGS : C:\CompuCell3D-64bit\Demos\BookChapterDemos_ComputationalMethodsInCellBiology\okok\Simulation\_settings.sqlite currentIteration= [3] multiplicativeFactors= [1] iterationId= 3 Error Parsing Parameter scan file
Traceback (most recent call last):
  File "C:\CompuCell3D-64bit\player5\compucell3d.pyw", line 258, in <module>
    error_code = main(sys.argv[1:])
  File "C:\CompuCell3D-64bit\player5\compucell3d.pyw", line 239, in main
    mainWindow.viewmanager.processCommandLineOptions(cml_args)
  File "C:\CompuCell3D-64bit\player5\Plugins\ViewManagerPlugins\SimpleTabView.py", line 687, in processCommandLineOptions
    self.__runSim()
  File "C:\CompuCell3D-64bit\player5\Plugins\ViewManagerPlugins\SimpleTabView.py", line 1815, in __runSim
    self.prepareSimulation()
  File "C:\CompuCell3D-64bit\player5\Plugins\ViewManagerPlugins\SimpleTabView.py", line 1775, in prepareSimulation
  File "C:\CompuCell3D-64bit\player5\Plugins\ViewManagerPlugins\SimpleTabView.py", line 1015, in __loadSim
  File "C:\CompuCell3D-64bit\player5\Plugins\ViewManagerPlugins\SimpleTabView.py", line 1169, in __loadCC3DFile
    psu.replaceValuesInSimulationFiles(_pScanFileName=pScanFilePath, _simulationDir=customOutputPath)
  File "C:\CompuCell3D-64bit\pythonSetupScripts\ParameterScanUtils.py", line 757, in replaceValuesInSimulationFiles
    self.replaceValuesInXMLFile(_parameterScanDataMap = parameterScanDataMap , _xmlFile=fullFilePath)
  File "C:\CompuCell3D-64bit\pythonSetupScripts\ParameterScanUtils.py", line 674, in replaceValuesInXMLFile
    elem=self.getXMLElementFromAccessPath(root_element,accessPathList)
  File "C:\CompuCell3D-64bit\pythonSetupScripts\ParameterScanUtils.py", line 629, in getXMLElementFromAccessPath
    tmpElement=tmpElement.getFirstElement(arg[0],attrDict) if attrDict is not None else tmpElement.getFirstElement(arg[0])
AttributeError: 'NoneType' object has no attribute 'getFirstElement'
CLOSING LOCAL SOCKET "SIMULATION FINISHED"
written 3 months ago by Pauli Tikka

12 weeks ago:
Could you post a short version of your simulation where the problem occurs?

12 weeks ago:
Hi, Here is a short video about the crash (it occurs around 56 s into the video) when I start to add the parameter scan to my cc3d model in twedit++5. So the player goes on, but twedit++5 crashes after attempting to add the scan. Best, Pauli
File attached: adding parameter scan results to crash.mp4 (2.12 MB)

10 weeks ago:
What should I do? Here is what I also get:

10 weeks ago:
The path to the .piff file is odd: there is a single forward slash whereas all the other directory delimiters are backslashes. Check your code for a bad path to the piff file. Also, take a look in the .cc3d file and see if the piff file is listed there, and make sure the path has the correct delimiters. It looks like you are on a Windows system, and Windows is "supposed" to be able to handle slashes or backslashes in paths, but that could still be the problem. If you are using a .piff file you can include it in the .cc3d file by importing it into the project in Twedit++. Use "Add resource", tell it the file is a piff, then browse to the file. If the file is already in the Simulation directory, Twedit++ will complain that it can't copy the file. Regardless, you just need to make sure there is a copy in the Simulation folder and that it is included in the project's .cc3d file.
That way, when the various files are all copied for the parameter scan, the .piff file goes with them. So, in general, the .piff file should be in the Simulation folder for the CC3D project.

10 weeks ago:
Hi, I did what you recommended (see peitto.cc3d) and inserted the piff file into the .cc3d file separately. I encountered some problems (see below), but I was able to add the parameter scan and modify it manually line by line. So I now have a parameter scan, but if I want to add new elements to the scan, I need to write them directly into the XML code myself. The next question would be: how do I save the result file in the parameter_scan subfolder (and not in a single common folder), so that one iteration's results do not wipe the files written one iteration earlier? E.g., what should I do in the steppables.py file with lines like the following (one possible approach is sketched at the end of this thread)?
FileName = "C:/pyintro/positions.csv" # you can add some specific facts to the file if needed
self.File = open(FileName,"w",0) # the 0 disables buffering, so the file is written out immediately
self.File.write("cell_no,time_mcs,x_position_(px),y_position_(px),z_position_(px)\n")
Best, Pauli
File attached: peitto.cc3d (407 Bytes)
File attached: errorlog_param_tikka16318.txt (3.22 KB)

That is the message the parameter scan process gives when the scan is complete. If you have old folders in the directory structure, then when the scan tries to create the folder for a new scan, it fails. The scan process can't differentiate between old, unneeded scan folders and new folders created during the current scan process. You just need to delete all those old numbered folders before restarting a scan. If you restart a scan you also have to edit the scan xml file and set the pointers back to zero. For example, from one of my scan files:
<Parameter CurrentIteration="3" Name="pbpk_kGutabs" Type="PYTHON_GLOBAL" ValueType="float">
CurrentIteration needs to be reset back to zero.
written 10 weeks ago by James Sluka

Hi, Thanks! I got it now. In addition, I can save the different iteration files under different names in the same scan folder. Regards, Pauli
written 9 weeks ago by Pauli Tikka

Ah no!! The crashing problem reoccurred, but in a different place and context. I still wanted to scan parameters automatically via the scan editor, but twedit++ crashes immediately when I push OK after inserting the values in the 'scannable parameters' window. What to do, continue with manual
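A minimal sketch of the per-run file-naming idea asked about above, using only plain Python 2 file handling; the output folder, the make_run_filename helper, and the timestamp tag are illustrative assumptions, not a documented CompuCell3D API:

# Minimal sketch (assumptions noted in the lead-in): write each run's positions to its
# own CSV file so a new parameter-scan iteration does not overwrite the previous one.
import os
import time

def make_run_filename(base_dir, stem):
    """Return a path like base_dir\stem_20180101_120000.csv, creating base_dir if needed."""
    if not os.path.isdir(base_dir):
        os.makedirs(base_dir)
    run_tag = time.strftime("%Y%m%d_%H%M%S")  # timestamp makes each run's file name unique
    return os.path.join(base_dir, "%s_%s.csv" % (stem, run_tag))

# Inside the steppable's start() method, the hard-coded FileName line could then become:
#     FileName = make_run_filename(r"C:\pyintro\scan_output", "positions")
#     self.File = open(FileName, "w", 0)   # 0 = unbuffered, as in the original snippet
#     self.File.write("cell_no,time_mcs,x_position_(px),y_position_(px),z_position_(px)\n")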