Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Creating geometry for a computer game or a movie is a very long and arduous task. For instance, if we would like to populate a virtual city with buildings, it would cost a ton of time and money, and of course, we would need quite a few artists. This piece of work solves this problem in a very elegant and convenient way. It learns the preferences of the user, then creates and recommends a set of solutions that are expected to be desirable. In this example, we are looking for tables with either one leg or crossed legs. The table should also be properly balanced, therefore if a model meets these criteria, we assign it a high score. These are the preferences that the algorithm should try to learn. The orange bars show the predicted score for new models created by the algorithm. A larger value means that the system expects the user to score this model highly, and the blue bars show uncertainty. Generally, we are looking for solutions with large orange and small blue bars. This means that the algorithm is confident that a given model is in line with our preferences, and we get exactly what we were looking for: more balanced table designs with one leg or crossed legs. Interestingly, since we have these uncertainty values, one can also visualize counterexamples where the algorithm is not so sure, but would guess that we wouldn't like the model. It's super cool that it is aware of how horrendous these designs look. It may have a better eye than many of the contemporary art curators out there. There are also examples where the algorithm is very confident that we are going to hate a given design because of its legs or lack of balance, and would never recommend such a model. So, indirectly, it also learns what a balanced piece of furniture should look like without ever learning the concept of gravity or doing any kind of architectural computation. The algorithm also works on buildings, and after learning our preferences, it can populate entire cities with geometry that is in line with our artistic vision. Excellent piece of work. Thanks for watching and for your generous support and I'll see you next time.
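To make the recommend-with-uncertainty idea above a bit more concrete, here is a minimal, hypothetical sketch: a Gaussian process is fit to a handful of user-rated designs and then reports both a predicted score and an uncertainty for unseen designs, mirroring the orange and blue bars. The feature vectors, scores and kernel choice below are made-up placeholders, not the paper's actual method.

```python
# Minimal sketch (not the paper's method): learn user scores for designs with a
# Gaussian process, which gives both a predicted score (the "orange bar") and
# an uncertainty estimate (the "blue bar") for unseen designs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Each row is a hypothetical feature vector describing a table design
# (number of legs, leg crossing angle, estimated balance).
rated_designs = np.array([
    [1, 0.0, 0.9],   # one leg, well balanced -> user liked it
    [2, 0.8, 0.8],   # crossed legs, balanced -> user liked it
    [4, 0.0, 0.3],   # four legs, poorly balanced -> user disliked it
])
user_scores = np.array([0.9, 0.85, 0.1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(rated_designs, user_scores)

# Score new, unseen designs: recommend those with high mean and low uncertainty.
candidates = np.array([[1, 0.1, 0.95], [3, 0.5, 0.2]])
mean, std = gp.predict(candidates, return_std=True)
for design, m, s in zip(candidates, mean, std):
    print(design, "predicted score:", round(m, 2), "uncertainty:", round(s, 2))
```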
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There are so many applications of deep learning, I was really excited to put together a short, but really cool list of some of the more recent results for you Fellow Scholars to enjoy. Machine learning provides us an incredible set of tools. If you have a difficult problem at hand, you don't need to handcraft an algorithm for it. It finds out by itself what is important about the problem and tries to solve it on its own. In some problem domains, they perform better than human experts. What's more, some of these algorithms find out things that you could have earned a PhD with 10 years ago. Here goes the first stunning application: toxicity detection for different chemical structures by means of deep learning. It is so efficient that it could find toxic properties that previously required decades of work by humans who are experts in their field. Next one: mitosis detection from large images. Mitosis means that cell nuclei are undergoing different transformations that are quite harmful and quite difficult to detect. The best techniques out there are using convolutional neural networks and are outperforming professional radiologists at their own task. Unbelievable. Kaggle is a company that is dedicated to connecting companies with large datasets and data scientists who write algorithms to extract insight from all this data. If you take only a brief look, you see an incredibly large swath of applications for learning algorithms. Almost all of these were believed to be only for humans, very smart humans. And learning algorithms, again, emerge triumphant on many of these. For instance, they had a great competition where learning algorithms would read a website and find out whether paid content is disguised there as real content. Next up on the list: hallucination, or sequence generation. It looks at different video games, tries to learn how they work, and generates new footage out of thin air by using a recurrent neural network. Because of the imperfection of 3D scanning procedures, many 3D-scanned pieces of furniture are too noisy to be used as they are. However, there are techniques that look at these really noisy models and try to figure out how they should look by learning the symmetries and other properties of real furniture. These algorithms can also do an excellent job at predicting how different fluids behave in time, and are therefore expected to be super useful in physical simulation in the following years. And on the list of highly sophisticated scientific topics, there is this application that can find out what makes a good selfie, and how good your photos are, if you really want to know the truth. Here is another application where a computer algorithm that we call deep Q-learning plays Pong against itself and eventually achieves expertise. Machines are also grading student essays. At first, one would think that this cannot possibly be a good idea. And as it turns out, their judgment is more consistent with the reference grades than that of any of the teachers who were tested. This could be an awesome tool for saving a lot of time and assisting teachers in helping their students learn. This kind of blows my mind. It would be great to take a look at an actual dataset and the issued grades, if they are public. So if any of you Fellow Scholars have seen it somewhere, please let me know in the comments section. And these results are only from the last few years, and they are really just scratching the surface.
There are literally hundreds more applications we haven't even talked about. We are living in extremely exciting times indeed. I am eager to see, and perhaps be a small part of, this progress. There are tons of reading and viewing materials in the description box. Check them out. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What could be a more delightful way to celebrate New Year's Eve than reading about new breakthroughs in machine learning research? Let's talk about an excellent new paper from the Google DeepMind guys. In machine learning, we usually have a set of problems for which we are looking for solutions. For instance: here's an image, please tell me what is seen on it. Here's a computer game, please beat level 3. One problem, one solution. In this case, we are not looking for one solution, we are looking for a computer program, an algorithm, that can solve any number of problems of the same kind. This work is based on a recurrent neural network, which we discussed in a previous episode. In short, it means that it tries to learn not one thing, but a sequence of things. In this example, it learns to add two large numbers together. As a big number can be imagined as a sequence of digits, this can be done through a sequence of operations: it first reads the two input numbers, then carries out the addition, keeps track of the carried digits, and goes on to the next digit. On the right, you can see the individual commands executed in the computer program it came up with. It can also learn how to rotate images of different cars around to obtain a frontal pose. This is also a sequence of rotation actions until the desired output is reached. Learning more rudimentary sorting algorithms to put numbers in ascending order is also possible. One key difference between recurrent neural networks and this is that these neural programmer-interpreters are able to generalize better. What does this mean? It means that if the technique can learn from someone how to sort a set of 20 numbers, it can generalize its knowledge to much longer sequences. So it essentially tries to learn the algorithm behind sorting from a few examples. Previous techniques were unable to achieve this, and as we can see, it can deal with a variety of problems. I am absolutely spellbound by this kind of learning because it really behaves like a novice human user would: mimicking what experts do and trying to learn and understand the logic behind their actions. Happy new year to all of you Fellow Scholars. May it be ample in joy and beautiful papers. May our knowledge grow according to Moore's law, and of course, may the force be with you. Thanks for watching and for your generous support and I'll see you next year.
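For a feel of what such a learned program trace looks like, here is a minimal sketch in plain Python: digit-by-digit addition expressed as a sequence of operations with a carry, the kind of command sequence the neural programmer-interpreter is trained to produce. This is an illustration of the operation trace only, not the model itself.

```python
# Minimal sketch of an addition program trace: process digits right to left,
# write a digit, keep track of the carry. Plain Python, not the learned model.
def add_as_operation_trace(a: str, b: str):
    a, b = a.zfill(max(len(a), len(b))), b.zfill(max(len(a), len(b)))
    carry, digits, trace = 0, [], []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))
        carry = s // 10
        trace.append(("ADD", da, db, "WRITE", s % 10, "CARRY", carry))
    if carry:
        digits.append(str(carry))
        trace.append(("WRITE", carry))
    return "".join(reversed(digits)), trace

result, trace = add_as_operation_trace("9574", "2688")
print(result)          # 12262
for step in trace:     # the sequence of commands, analogous to the executed trace
    print(step)
```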
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a quick report on what is going on with Two Minute Papers. We are living in extremely busy times, as I am still working full time as a doctoral researcher and we also have a baby on the way. We are currently a bit over 30 episodes in, and I am having an amazing time explaining these concepts and enjoying the ride tremendously. One of the most beautiful aspects of Two Minute Papers is the community forming around it, with extremely high quality comments and lots of civil, respectful discussions. I learned a lot from you Fellow Scholars. Thanks for that. Really awesome. The growth numbers are looking amazing for a YouTube channel of this size, and of course, any help with publicity is greatly appreciated. If you are a journalist and you feel that this is a worthy cause, please write about Two Minute Papers. If you are not a journalist, please try showing the series to them. Or just show it to your friends. I am sure that many, many more people would be interested in this, and sharing is a great way to reach out to new people. The Patreon page is also getting lots of generous support that I would only expect from much bigger channels. I don't even know if I deserve it. But thanks for hanging in there, I feel really privileged to have supporters like you Fellow Scholars. You're the best. And we have some amazing times ahead of us. So thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Some people say that the most boring thing is watching paint dry. They have clearly not seen this amazing research work that makes it possible to simulate the entire process of painting on a canvas. We have covered plenty of papers on fluid simulations, and this is no exception. I admit that I am completely addicted and just can't help it. Maybe I should seek professional assistance. Then again, since there is a lot of progress in simulating the motion of fluids, and paint is a fluid, why not simulate the process of painting on a canvas? The simulations with this technique are so detailed that even the bristle interactions are taken into consideration, therefore one can capture artistic brush stroke effects like stabbing. Stabbing, despite the horrifying name, basically means shoving the brush into the canvas and rotating it around to get a cool effect. The fluid simulation part includes paint adhesion and is so detailed that it can capture the well-known impasto style, where paint is applied to the canvas in large chunks that are so thick that one can see all the strokes that have been made. And all this is done in real time. Amazing results. Traditional techniques cannot even come close to simulating such sophisticated effects. And as has happened many times before in computer graphics, just put a powerful algorithm into the hands of great artists and enjoy the majestic creations they give birth to. Wow, a Two Minute Papers episode that's actually on time. Great. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Genetic algorithms help us solve problems that are very difficult, if not impossible, to otherwise write programs for. For instance, in this application we have to build a simple car model to traverse this terrain. Put some number of wheels on it somewhere, add a set of triangles as a chassis, and off you go. This is essentially the DNA of a solution. The farther it goes, the better the car is, and the goal is to design the best car you possibly can. First, the algorithm will try random solutions, and as it has no idea about the concept of a car or gravity, it will create a lot of bad solutions that don't work at all. However, after a point it will create something that is at least remotely similar to a car, which will immediately perform so much better than the other solutions in the population. A genetic algorithm then creates a new set of solutions, however, now, not randomly. It respects a rule that we call survival of the fittest, which means that the best existing solutions are taken and mixed together to breed new solutions that are also expected to do well. Like in evolution in nature, mutations can also happen, which means random changes are also applied to the DNA code of a solution. We know from nature that evolution works extraordinarily well, and the more we run this genetic optimization program, the better the solutions get. It's quite delightful for a programmer to see their own children trying vigorously and succeeding at solving a difficult task, even more so if the programmer wouldn't be able to solve the problem by himself. Let's run a quick example. We start with a set of solutions. The DNA of a solution is a set of zeros and ones, which can encode some decision about the solution, such as whether we turn left or right in a maze, or it can also be an integer or a real number. We then compute how good these solutions are according to our taste; in the example with the cars, how far these designs can get. Then we take, for instance, the best three solutions and combine them together to create a new DNA. Some of the better solutions may remain in the population unchanged. Then, probabilistically, random mutations happen to some of the solutions, which help us explore the vast search space better. Rinse and repeat, and there you have it: genetic algorithms. I have also coded up a version of Roger Alsing's EvoLisa problem, where the famous Mona Lisa painting is to be reproduced by a computer program with a few tens of triangles. The goal is to paint a version that is as faithful to the original as possible. This would be quite a difficult problem for humans, but apparently a genetic algorithm can deal with this really well. The code is available for everyone to learn from, experiment and play with, and it's super fun. And if you're interested in the concept of evolution, maybe read the excellent book, The Blind Watchmaker by Richard Dawkins. Thanks for watching and for your generous support and I'll see you next time.
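Here is a minimal sketch of the recipe described above, on a toy problem rather than the car demo: evolve a bitstring DNA toward all ones using fitness, selection, crossover and mutation. The population size, mutation rate and fitness function are arbitrary choices for illustration.

```python
# Minimal genetic algorithm sketch: evolve a bitstring DNA toward all ones.
import random

DNA_LENGTH, POPULATION, GENERATIONS, MUTATION_RATE = 32, 40, 60, 0.02

def fitness(dna):                     # how "good" a solution is: count of ones
    return sum(dna)

def crossover(parent_a, parent_b):    # mix two parent DNAs at a random cut point
    cut = random.randrange(1, DNA_LENGTH)
    return parent_a[:cut] + parent_b[cut:]

def mutate(dna):                      # random changes to the DNA, with small probability
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in dna]

population = [[random.randint(0, 1) for _ in range(DNA_LENGTH)] for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # survival of the fittest
    elite = population[:POPULATION // 4]         # the best solutions stay and breed
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(POPULATION - len(elite))]
    population = elite + children                # rinse and repeat

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", DNA_LENGTH)
```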
Dear Fellow Scholars, we have some delightful news. Elon Musk and Sam Altman founded a nonprofit artificial intelligence research company that they call OpenAI. The funders have committed over $1 billion for this cause. Their goal is to make progress towards superintelligence, leveraging their nonprofit nature to make sure that such a breakthrough will be made in a controlled and beneficial way. As of the current state of things, most of the bigger companies with strong AI groups publish their work regularly, but as we get closer to artificial general intelligence and superintelligence, it is a question how much they will share. We have already talked about how enormously powerful a superintelligence could become, and how important it is to make sure that it is developed in a safe way. Make sure to check that video out, a link is in the description box. It is really mind-blowing. So everything they create will be open, therefore the first question that came to my mind is: is it really good that anyone will be able to create an AI? What about users who are interested in doing it in a way that is harmful and dangerous to others? This was subject to a lot of debate, and one of the conclusions is that most people are sensible, and the expectation is that the number of friendly AIs will overpower the bad guys. We don't know if this is the best-case scenario, but it is definitely better than the case of one company owning the only superintelligence. At OpenAI they already have researchers of their own, and their research projects will be completely open. This is amazing, because it rarely happens with companies, as they usually want to retain the intellectual property of their projects. Amazon Web Services is also donating a huge amount of resources to the company. The fact that, just like at research institutions, the researchers can publicly share their work may be a big deciding factor when recruiting, which is, according to the founders, already going really well. So, delightful news indeed. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When using cryptography, we'd like to safely communicate over the internet in the presence of third parties. To be able to do this, and many other important applications, we need random numbers. But what does it exactly mean for something to be random? Randomness is the lack of any pattern and predictability. People usually use coin flips as random events. But is a coin flip really random? If we had a really smart physicist who could model all the forces that act upon the coin, he would easily find out whether it's going to be heads or tails. Strictly speaking, a coin flip is therefore not random. What about random numbers generated with computers? Computers are a collection of processing units that run programs. If one knows the program code that generates the random numbers, they are not random anymore, because they don't happen by chance and it is possible to predict them. John von Neumann famously said: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number. There are only methods to produce random numbers, and a strict arithmetic procedure, of course, is not such a method." Some websites offer high quality random numbers that are generated from atmospheric noise. Practically speaking, this, of course, sounds adequate enough. If someone wanted to break the encryption of our communications, they would have to be able to model the physics and initial conditions of every single thunderbolt, which means processing millions of discharges per day. This is practically impossible. So it seems reasonable to say that random events are considered random because of our ignorance, not because they are, strictly speaking, unpredictable. You just need to be smart enough, and the notion of randomness fades away in the light of your intelligence. Or so it seemed to physicists for a long time. Imagine if someone who has never heard about magnetism would see many magnets attracting each other and some added magnetic powder. This person would most definitely say it's magic happening. However, if you know about magnetism, you know that these things don't happen randomly; there are very simple laws that can predict all this movement. In this case, the magnetic force is what we can loosely call a hidden variable. So we have a phenomenon that we cannot predict, and we are keen to say it's random. In reality, it is not. There is just a hidden variable that we don't know of that is responsible for this behavior. We have the very same phenomenon if we look inside of an atom. Quantum-level effects happen according to the physics of extremely small things, and we again find behaviors that seem completely random. We know some of the trends, just like we know which roads in our city are expected to have a huge traffic jam every morning, but we cannot predict where every single individual car is heading. We have it the same way with extremely small particles. We are keen to say that a behavior seems completely random because nothing that we know or measure would explain it. Other people would immediately say: wait, you don't know everything. Maybe these quantum effects are not random, as there may be hidden things, hidden variables that you don't know of, which account for the behavior. We can't just say this or that is random.
It is much, much more likely that our knowledge is insufficient to predict what is happening, just as electromagnetic forces seemed magical to scientists a few hundred years ago. So is quantum mechanics completely random, or does it only seem random? It is probably one of the most difficult questions ever asked. How can you find out that something you measure that seems random is really completely random, and not just the act of forces that you don't know of? And hold on to your chair, because this is going to blow your mind. A simple and intuitive statement of Bell's theorem is that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. This means that Bell proved that the behaviors scientists experience in quantum mechanics are really random. They cannot be explained by any theory you could possibly make up. Simple or complicated, it doesn't matter. This discovery is absolutely insane. You can definitely prove that a crappy theory someone quickly made up doesn't explain a behavior. But how can you prove that it is completely impossible to build such a theory that does? No matter how hard you try, how smart you are, you can't do it. This is such a mind-bogglingly awesome theorem. And please note that we definitely lose out on some details and generality because of the fact that we use intuitive words to discuss these results, as opposed to the original derivation with covariances between measurements. On our imaginary list of the wonders of the world, monuments created not by the hands but by the minds of humans, this should definitely be among the best of them. Thanks for watching and for your generous support and I'll see you next time.
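To illustrate the earlier point that computer-generated "random" numbers are only pseudo-random, here is a minimal sketch of a linear congruential generator (the constants are the well-known Numerical Recipes values). Anyone who knows the code and the seed can reproduce every number exactly, which is precisely why such sequences are, strictly speaking, not random.

```python
# Minimal sketch of why computer-generated "random" numbers are predictable:
# a linear congruential generator. Knowing the algorithm and the seed means
# knowing the entire "random" sequence in advance.
class LCG:
    def __init__(self, seed: int):
        self.state = seed

    def next(self) -> int:
        # x_{n+1} = (a * x_n + c) mod m, with Numerical Recipes constants
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

alice = LCG(seed=42)   # generates the "random" sequence
eve = LCG(seed=42)     # an attacker who knows the algorithm and the seed
print([alice.next() % 100 for _ in range(5)])
print([eve.next() % 100 for _ in range(5)])   # identical: nothing random here
```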
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We'll start with a quick recap on Metropolis light transport and then discuss a cool technique that builds on top of it. If we would like to see what digitally modeled objects would look like in real life, we would create a 3D model of the desired scene, assign material models to the objects within, and use a photorealistic rendering algorithm to finish the job. It simulates rays of light that connect the camera to the light sources in the scene and computes the flow of energy between them. Initially, after a few rays, we will only have a rough idea of how the image should look, therefore our initial results will contain a substantial amount of noise. We can get rid of this by simulating the paths of millions and millions of rays that will eventually clean up our image. This process, where a noisy image gets clearer and clearer, we call convergence, and the problem is that this can take excruciatingly long, even up to hours, to get a perfectly clear image. With the simple algorithms out there, we generate these light paths randomly. This technique we call path tracing. However, in the scene that you see here, most random paths can't connect the camera and the light source, because this wall is in the way, obstructing many of them. Light paths like these don't contribute anything to our calculations and are ultimately a waste of time and resources. After generating hundreds of random light paths, we finally found a path that connects the camera with the light source without any obstructions. When generating the next path, it would be a crime not to use this knowledge to our advantage. A technique called Metropolis light transport will make sure to use this valuable knowledge, and upon finding a bright light path, it will explore other paths that are nearby to have the best shot at creating valid, unobstructed connections. If we have a difficult scene at hand, Metropolis light transport gives us way better results than traditional, completely random path sampling techniques such as path tracing. This scene is extremely difficult in the sense that the only source of light is coming from the upper left, and after the light goes through multiple glass spheres, most of the light paths that we generate will be invalid. As you can see, this is a valiant effort with random path tracing that yields really dreadful results. Metropolis light transport is extremely useful in these cases and therefore should always be the weapon of choice. However, it is more expensive to compute than traditional random sampling. This means that if we have an easy scene on our hands, this smart Metropolis sampling doesn't pay off and performs worse than a naive technique in the same amount of time. So, on easy scenes, traditional random sampling; on difficult scenes, Metropolis sampling. Super simple, super intuitive, but the million dollar question is how to mathematically formulate and measure what an easy and what a difficult scene is. This problem is considered extremely difficult and was left open in the Metropolis light transport paper in 2002. Even if we knew what to look for, we would likely get an answer by creating a converged image of the scene, which, without the knowledge of what algorithm to use, may take up to days to complete. But if we have already created the image, it's too late; we would need this information before we start the rendering process. This way we can choose the right algorithm on the first try.
With this technique, which came more than 10 years after the Metropolis paper, it is possible to mathematically formalize and quickly decide whether a scene is easy or difficult. The key insight is that in a difficult scene, we often experience that a completely random ray is very likely to be invalid. This insight, with two other simple metrics, gives us all the knowledge we need to decide whether a scene is easy or difficult. And the algorithm tells us exactly what mixture of the two sampling techniques we need to use to get beautiful images quickly. The more complex light transport algorithms get, the more efficient they become, but at the same time, we are wallowing in parameters that we need to set up correctly to get adequate results quickly. Here, we have an algorithm that doesn't take any parameters: you just fire it up and forget about it. Like a good employee, it knows when to work smart and when a dumb solution with a lot of firepower is better. It was tested on a variety of scenes and found close-to-optimal settings. Implementing this technique is remarkably easy. Someone who is familiar with the basics of light transport can do it in less than half an hour. Thanks for watching and for your generous support and I'll see you next time.
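As a rough, hypothetical sketch of the key insight only (the actual paper combines this with other metrics), one could probe the scene with a batch of completely random light paths, measure how often they fail to connect the camera and the light, and use that as a difficulty estimate to blend the two samplers. The `trace_random_path` function below is a made-up stand-in for a real renderer call.

```python
# Sketch: estimate scene difficulty from the fraction of invalid random paths,
# then split the sample budget between path tracing and Metropolis sampling.
import random

def trace_random_path() -> bool:
    """Placeholder: returns True if a random path connects camera and light."""
    return random.random() < 0.15   # pretend 15% of random paths are valid

def estimate_difficulty(num_probes: int = 10_000) -> float:
    invalid = sum(not trace_random_path() for _ in range(num_probes))
    return invalid / num_probes     # close to 1.0 means a difficult scene

difficulty = estimate_difficulty()
metropolis_share = difficulty             # spend more samples on Metropolis
path_tracing_share = 1.0 - difficulty     # and fewer on naive path tracing
print(f"difficulty ~ {difficulty:.2f}, Metropolis share {metropolis_share:.2f}, "
      f"path tracing share {path_tracing_share:.2f}")
```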
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Now, I'll eat my hat if this is going to be two minutes, but I really hope you Fellow Scholars are going to like this little discussion. Neil deGrasse Tyson described a cool thought experiment in one of his talks. He mentioned that the difference between the human and the monkey DNA is really small, a one-digit percentage. For simplicity, let's say it is 1%. For this 1% difference, there's a huge difference in the intellect of humans and apes. The smartest chimpanzee you can imagine can do tasks like clapping his hands to a given simple rhythm or striking a match. Compared to the average chimpanzee, such an animal would be the equivalent of Einstein or John von Neumann. He can clap his hands. What is that for humans? Children can do that. Even before they start studying, they can effortlessly do something that rivals the brightest minds monkeys could ever produce. Imagine if there were a species that is the same 1% difference away from us humans, in the same direction. What could they be capable of? Their small children would be composing beautiful symphonies, perfect harmonizations for hundreds of instruments. Or they would be deriving everything in the history of physics, from Newton's laws to quantum electrodynamics. And their parents would be like: oh, look at what little Jimmy did. That's adorable. And they would put it on the fridge with a magnet, just like we do with the adorable little scribbles of our children. Just thinking about the possibilities gives me chills. Now, let's transition into neural networks. An artificial neural network is a crude approximation of the human brain that we can simulate on a computer to recognize images, paint in the style of famous artists, learn to play video games, and a number of other very useful things. The number of connections that we can simulate on the graphics card of our computer grows close to what is predicted by Moore's law, which means that the computing capacity that we have in our home computers doubles every few years. It's pretty crazy if you think about it, but most of you Fellow Scholars have phones in your pockets that have more computing capacity than NASA had to land on the moon. As years go by, there will be more and more connections in these artificial neural networks, and they don't have to adhere to stringent constraints like our brains do, such as fitting into the human cranium. A computer can be the size of a building, or even bigger. Computers also transmit data at the speed of light, which is way faster than the transfer capabilities of the human brain. Nick Bostrom asked a lot of leading AI researchers about the speed of progress in this field, and the conclusion of the study was basically that the question is not whether we can achieve human-level intelligence, but when we will achieve it. However, the number of connections is not everything, as an artificial neural network is by far not a one-to-one copy of the human brain. We need something more than this. A very promising possible next frontier to conquer is called recursive self-improvement. Recursive self-improvement means that instead of telling the program to work on an ordinary task, like doing better image recognition, we order it to work on improving its own intelligence: we ask the program itself to rewrite its code to be more efficient and more general.
So we have a program with a ton of computational resources working on getting smarter, and as it suddenly gets just a bit smarter, we then have a smarter machine that can again be asked to improve its own intelligence. But it is now more capable of doing that, therefore if we do this many times, the leaps are going to get bigger and bigger, as an intelligent mind can do more to improve itself than an insect can. This way, we may end up with an intelligence explosion, which means a possible exponential increase in capabilities. And if this is the case, talking about human-level intelligence is completely irrelevant. During this process, given enough resources, the system may go from the intelligence of an insect to something way beyond the capabilities of the most intelligent person who ever lived in about a second or less. It would come up with way better solutions in milliseconds than anything you've seen on Two Minute Papers, and there are plenty of brilliant works out there. And of course, it could also develop never-before-seen superweapons to unleash unprecedented destruction on Earth. We wouldn't know if it would do it, but it is capable of doing that, which is quite alarming. I am not surprised that Elon Musk compares creating an artificial superintelligence to summoning the demon. And he offered $10 million to research a safe way to develop this technology, which is obviously not nearly enough, but it is an excellent way to raise awareness. Now, the classical argument on how to curb such a superintelligence if one recognizes that it is up to no good: people say that, well, I'll unplug it, or maybe lock it away from the internet. The problem is that people assume that they can do it. We can lock it up in any way we can think of, but there's only so much we can do, because as Neil deGrasse Tyson argued, even the smartest human who ever lived would be a blabbering, drooling idiot compared to such an intelligence. How easy is it for a grown adult to fool a child? A piece of cake. The intelligence gap between us and the superintelligence is more than a thousand times that. It's even more pathetic than a child, or even a dog, trying to fool us. We humans can anticipate threats, like wielding weapons or locking dangerous animals into cages. And superintelligent beings can also anticipate our threats, only way better. It can trick you by pretending to be broken, and when the engineer goes there to fix the code, the manipulation can begin. It could also communicate with gravitational waves or any kind of thing that we cannot even fathom, just as an ant has no idea about our radio waves. And we don't even need to characterize superintelligent beings as an adversary. The road to hell is paved with good intentions. It may very well be possible that we assign it a completely benign task that anyone could agree with, and it would end up in a disaster in a way we cannot anticipate. Imagine assigning it the task of maximizing the number of paperclips. Nick Bostrom argues that it would at first maybe create better blueprints and factory lines. And after some point, it may run out of resources on Earth. Then, in order to maximize the number of paperclips, it would recognize that humans contain lots of useful atoms. So eradicating humanity would only be logical to maximize the number of paperclips. Think about another task: creating the best approximation of the number pi. One can approximate it to more decimals by using more resources; to have more resources, one builds more and bigger computers.
At some point it runs out of space and eradicates humans, because they are in the way of creating more computers. Or it may eradicate humans way before that, because it knows that they are capable of shutting it down. And if it gets shut down, there are going to be fewer digits or paperclips. So again, it's only logical to kill them. The task will be done, but no one will be there anymore to say thank you. It is a bit like a movie where there's an intelligent car, and the driver is in a car-chase situation, shouting: we're too slow and fuel is running out, please throw out all excess useless weight. And along with some empty bottles, the person would be subsequently ejected from the vehicle. We don't know what is going to be the next invention of mankind, but we know what's going to be the last one: artificial superintelligence. It has the potential to either eradicate humanity or solve all of its problems. It is both the deadliest weapon that will ever exist and the key to eternal life. We need to be vigilant about the fact that we have tons of money invested in artificial intelligence research, but barely any to make sure we are doing it in a controlled and ethical way. This task needs some of the brightest minds of our generation, and perhaps even the next one. And this needs to happen before we get there. When we are there, it's already too late. I highly recommend an absolutely fantastic article on Wait But Why about this, or Nick Bostrom's amazing book, Superintelligence. There are tons of other reading materials in the description box for the more curious Fellow Scholars out there. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér, and it is time for some minds to be blown. We're going to talk about a philosophy paper. Before we start, a quick definition: an ancestor simulation is a hypothetical computer simulation that is detailed enough that the entities living within it are conscious. Imagine a computer game that you play that doesn't contain mere digital characters, but fully conscious beings with feelings, aspirations, and memories. There are many interesting debates among philosophers on crazy, elusive topics like: prove to me that I'm not in a dream, or that I'm not just a brain in a bottle somewhere that is being fed sensory inputs. Well, good luck. In his paper, the philosopher Nick Bostrom offers us a refreshing take on this topic and argues that at least one of these three propositions is true. One: almost all advanced civilizations go extinct before achieving technological maturity. Two: there is a strong convergence among technologically mature civilizations in that none of them are interested in creating ancestor simulations. And here's the bomb. Three: we are living in a simulation. At least one of these propositions is true, so if you say no to the first two, then the third is automatically true. You cannot categorically reject all three of these, because if two are false, then the third follows. Also, the theory doesn't tell us which of the three is true. Let's talk briefly about the first one. The argument is not that we go extinct before being technologically advanced enough to create such simulations; it means that all civilizations do. This is a very sad case. Even though there is research showing that war is receding, with a clear trend that we have less warfare than we had hundreds of years ago (I've linked a video on this here from Kurzgesagt), it is still possible that humanity eradicates itself before reaching technological maturity. And the proposition makes an even stronger claim: that maybe all civilizations do. Such a crazy proposition. Second point: all technologically mature civilizations categorically reject ancestor simulations. Maybe they have laws against it, because it's too cruel and unethical to play with sentient beings. But the claim is that there is not one person in any civilization in any age who creates such a simulation, not one criminal mastermind anywhere, ever. This also sounds pretty crazy. And if neither of these is true, then there is at least one civilization that can run a stupendously large number of ancestor simulations. The future nerd guy just goes home, grabs a beer, starts his computer in the basement and fires up not a simple computer game, but a complete universe. If so, then there are many more simulated universes than real ones, and then, with a really large probability, we're in one of the simulated ones. Richard Dawkins says that if this is the case, we have a really disciplined nerd guy, because the laws of physics are not changing at a whim; we have no experience of everyone suddenly being able to fly. And as the closing words of the paper state with graceful eloquence: in the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between one, two and three. Please note that this discussion is a slightly simplified version of the manuscript, so it's definitely worth reading the paper if you're interested. Give it a go. As always, I've put a link in the description box. There is no conclusion here; no one really knows what the answer is.
This is open to debate, and this is what makes it super interesting. And now, my personal opinion. It's just an opinion: it may not be true, it may not make sense, and it may not even matter. Just my opinion. I'd go with the second. The reason for that is that we already have artificial neural networks that outperform humans on some tasks. They are still not general enough, which means that they are good at doing one thing, like Deep Blue is good at chess, but they are not really useful for anything else. However, the algorithms are getting more and more general, and the number of neurons that can be simulated on the graphics card in your computer is doubling every few years. They will soon be able to simulate so many more connections than we have, and I feel that creating an artificial superintelligent being should be possible in the future, one that is so potent that it makes a universe simulation pale in comparison. What could such a thing be capable of? This is already getting too long, I just can't help myself. You know what? Let's discuss it in a future Two Minute Papers episode. I'd love to hear what you Fellow Scholars think about these things. If you feel like it, please leave your thoughts in the comments section below. I'd love to read them. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This one is going to be huge, certainly one of my favorites. This work is a combination of several techniques that we have talked about earlier. If you don't know some of these terms, it's perfectly okay. You can remedy this by clicking on the pop-ups or checking the description box, but you'll get the idea even watching only this episode. So first, we have a convolutional neural network. This helps with processing images and understanding what is depicted on an image. And a reinforcement learning algorithm. This helps with creating strategies, or, to be more exact, it decides what our next action should be, what buttons we push on a joystick. This technique mixes together these two concepts, we call it deep Q-learning, and it is able to learn to play games the same way as a human would. It is not exposed to any additional information in the code. All it sees is the screen and the current score. When it starts learning to play an old game, Atari Breakout, at first, the algorithm loses all of its lives without any signs of intelligent action. If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an adept player. But here's the catch. If we wait for longer, we get something absolutely spectacular. It finds out that the best way to win the game is digging a tunnel through the bricks and hitting them from behind. I really didn't know this, and this is an incredible moment. I can use my computer, this box next to me, to create new knowledge, to find out new things I haven't known before. This is completely absurd. Science fiction is not the future, it is already here. It also plays many other games. The percentages show the game scores in relation to those of a human player. Around 70% means it's great, and above 100%, it's superhuman. As a follow-up work, scientists at DeepMind started experimenting with 3D games, and after a few days of training, it could learn to drive on ideal racing lines and pass others with ease. I've had my driving license for a while now, but I still don't always get the ideal racing lines right. Bravo. I have heard the complaint that this is not really intelligence because it doesn't know the concept of a ball or what it is exactly doing. Edsger Dijkstra once said: the question of whether machines can think is about as relevant as the question of whether submarines can swim. Beyond the fact that rigorously defining intelligence leans more into the domain of philosophy than science, I'd like to add that I am perfectly happy with effective algorithms. We use these techniques to accomplish different tasks, and they are really good problem solvers. In the Breakout game, you, as a person, learn the concept of a ball in order to be able to use this knowledge as machinery to perform better. If this is not the case, then whoever knows a lot but can't use it to achieve anything useful is not an intelligent being but an encyclopedia. What about the future? There are two major unexplored directions. The algorithm doesn't have long-term memory, and even if it had, it wouldn't be able to generalize its knowledge to other similar tasks. Super exciting directions for future work. Thanks for watching and for your generous support, and I'll see you next time.
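For a flavor of the learning rule at the heart of deep Q-learning, here is a minimal sketch in tabular form on a toy problem, rather than with a convolutional network reading the screen. The states, actions and reward below are made up for illustration; only the update rule mirrors the technique.

```python
# Minimal tabular Q-learning sketch: nudge Q(s, a) toward reward + discounted
# value of the best next action, and act mostly greedily with some exploration.
import random
from collections import defaultdict

actions = ["left", "right", "stay"]            # the "joystick buttons"
Q = defaultdict(float)                         # Q[(state, action)] -> expected score
alpha, gamma, epsilon = 0.1, 0.99, 0.1         # learning rate, discount, exploration

def choose_action(state):
    if random.random() < epsilon:              # sometimes try something new
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Move Q(s, a) toward reward + gamma * max_a' Q(s', a')
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One hypothetical transition: the paddle moved right and the score went up.
q_update(state="ball_incoming_right", action="right", reward=1.0,
         next_state="ball_returned")
print(Q[("ball_incoming_right", "right")])
print(choose_action("ball_incoming_right"))    # usually the learned action, "right"
```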
In this paper, we address an important open problem in material modeling: what happens when light scatters multiple times on rough material surfaces? In image synthesis, accurately describing light-matter interactions is important for materials to obtain a realistic look. However, the multiple light-matter interactions that we can see in this figure are absent from many surface appearance models. Here's an example of this problem. Rendering white glass should be simple, but we can see that the rougher the glass is, the darker its appearance becomes. Even though it should be simple, modeling the appearance of glass that is at the same time rough and white is almost impossible with current material models. Many material models, such as this rough dielectric plate, are called microfacet materials, because the underlying mathematical model assumes that their interfaces are made of microscopic imperfections that we call facets. Those facets are too small to be visible, but the way they are statistically oriented changes the way light interacts with the material, causing its rough appearance. Many rendering systems model only the contribution of the first bounce of light. The contribution of multiple bounces is unknown, and it is simply set to zero, as if it were negligible. However, on very rough microsurfaces, the amount of light that scatters multiple times is significant and should not be neglected, to avoid energy loss and the noticeable darkening of the material appearance. In summary, modeling rough materials correctly with multiple scattering is a challenging problem. Our multiple scattering model presented in this paper opens up the possibility of modeling rough materials correctly in a practical manner. Beyond fixing the darkening problem, our goal is to derive a physically based model that is able to make accurate predictions compared to reference data. More specifically, we derive the multiple scattering component of a specific kind of microsurface, the Smith microsurface model. Because it is based on simple assumptions and makes accurate predictions for single scattering, it has received widespread industrial adoption and is considered the academic state of the art in computer graphics for modeling many materials. But can we extend this model to multiple scattering, and could it be practically incorporated into a classic BSDF plugin? These are the questions we are interested in. Our main insight is to transform this surface scattering problem into a volume scattering problem, which is easier to solve. To achieve that, we show that the Smith microsurface model can be derived as a special case of the microflake theory for volumes. We thus reformulate the Smith microsurface as a volume, with additional constraints to enforce the presence of a sharp interface. This volumetric analogy is very convenient, because we know how to compute light scattering in volumes. It depends on two functions that we derive for this new kind of volume. The first one is the free-path distribution, which tells us how long a ray can travel in the medium before finding an intersection. On the microsurface, the equivalent question is: what is the height of the next intersection? Once an intersection is found, we need to know in which direction the light scatters again. This is given by a volumetric phase function, which depends on both the base material of the surface and the distribution of the microfacets.
We derive the phase function for three different surface materials, diffuse, conductive, and dielectric, and for common microfacet distributions such as Beckmann and GGX. Now that we know the free path and the phase function of this volumetric model, we know exactly how the light scatters in the medium. From the light propagated in this medium emerges a distribution that has all the expected properties of a classic surface BSDF: it is energy-conserving and reciprocal. Furthermore, it is exactly the classic single-scattering BSDF based on the Smith microsurface model, but with the addition of higher-order scattering. Now that we know that the model is mathematically correct, we are interested in its predictive power. How accurate is this new model? To answer this question, we need some reference data to compare the predictions of the model to. A common way to validate models is to compare their predictions to simulated data obtained by ray tracing triangulated surfaces. Contrary to real-world acquisition, the surface used in the simulation has known material and statistics, and the collected data are free of noise. There are thus no degrees of freedom left to match the parameters of the model to the simulation. This is why this validation procedure is widely used in the field of optical physics, and therefore we chose it to validate our model. We generated random surfaces with known Beckmann statistics and ran the ray tracing simulation on them. By comparing the predictions of our multiple scattering model to the results of the ray tracing simulation, we found our BSDF model to accurately predict both the albedo and the angular distribution of the exitant energy among the scattering orders, and this for a large variety of materials, roughnesses, anisotropies and inclinations. In our supplemental material, we provide an exhaustive set of such validation results. To make the model practical, we implement two procedures: evaluation and importance sampling. Since the BSDF is the expectation of all the paths that can be traced on the microsurface, importance sampling can be done straightforwardly by generating one path. We construct an unbiased, stochastic estimate by tracing one path and evaluating the phase functions at each intersection with next event estimation, as in classical path tracing. With importance sampling and this stochastic evaluation, we have everything required to implement a classic BSDF plugin. Furthermore, our implementation is analytic and does not use per-BSDF precomputed data, which makes our BSDFs usable with textured albedos, roughness and anisotropy. In the supplemental materials, we provide a document describing a tutorial implementation for various materials and ready-to-use plugins for the Mitsuba physically based renderer. Now let's have a look at some results. This image shows a collection of bottles with microfacet materials. The energy loss is significant if multiple scattering is neglected, especially on dielectrics. Without multiple scattering, rough transmittance appears unnatural, which is hard to compensate for by tuning parameters. With our multiple scattering model, we simulate the expected appearance of rough glass and metals without tuning any parameters. Our model is robust and behaves as expected even with high roughness values. We can see that the model avoids the darkening effects and even produces interesting emergent effects like color saturation. This can be observed on this rough diffuse material.
Since the absorption spectrum of the material is repeatedly multiplied after each bounce on the microsurface, the reflected color appears more saturated after multiple bounces. This emergent effect can also be seen on this gold conductor material. The unsaturated single-scattering gold conductor appears strangely dull. Thanks to our model, the introduction of multiple scattering restores the shiny appearance expected from gold. Note that since our model is parametric and does not depend on any precomputed data, we fully support textured input, which is important for creating visually rich images. As an example, this is a dielectric with textured roughness and anisotropy. Thanks for watching.
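To illustrate the structure of the stochastic evaluation described above, here is a hypothetical sketch of the random walk on the microsurface: alternate free-path (height) sampling and phase-function sampling until the light escapes, and record how many bounces it took. The two sampling routines below are toy placeholders, not the paper's Smith/GGX derivations; only the control flow mirrors the described method.

```python
# Structural sketch of a microsurface random walk; the sampling routines are
# stand-ins and do NOT implement the real free-path or phase-function models.
import random

def sample_next_height(height: float, direction_z: float) -> float:
    # Placeholder free-path step: upward-going rays may escape the microsurface.
    if direction_z > 0 and random.random() < direction_z:
        return float("inf")                    # escaped above the surface
    return height + 0.1 * direction_z          # otherwise, a new intersection

def sample_phase_function(direction_z: float) -> float:
    # Placeholder phase function: pick a new (upward-biased) vertical direction.
    return random.uniform(-0.2, 1.0)

def bounces_until_escape(incoming_z: float = -0.7, max_bounces: int = 64) -> int:
    height, direction_z, bounces = 0.0, incoming_z, 0
    while bounces < max_bounces:
        height = sample_next_height(height, direction_z)
        if height == float("inf"):
            return bounces                     # the light left the microsurface
        direction_z = sample_phase_function(direction_z)
        bounces += 1
    return bounces

orders = [bounces_until_escape() for _ in range(10_000)]
multi = sum(b > 1 for b in orders) / len(orders)
print(f"fraction of walks with more than one bounce: {multi:.2f}")
```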
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a technique that can learn how to play computer games, or any kind of activity that requires a sequence of actions. We are not interested in figuring out what we see on an image, because the answer to that is one thing. We are always interested in a sequence of actions. The input for reinforcement learning is a state that describes where we are and how the world looks around us, and the algorithm outputs the optimal next action to take. In this case, we would like a digital dog to run and leap over and onto obstacles by choosing the optimal next action. It is quite difficult, as there are a lot of body parts to control in harmony. The algorithm has to be able to decide how to control leg forces, spine curvature, and angles for the shoulders, elbows, hips and knees. And what is really amazing is that if it has learned everything properly, it will come up with exactly the same movements as we would expect animals to do in real life. So this is how reinforcement learning works: if you do well, you get a reward, and if you don't, you get some kind of punishment. These rewards and punishments are usually encoded in the score. If your score is increasing, you know you've done something right, and you try to self-reflect and analyze the last few actions to find out which of them were responsible for this positive change. The score would be, for instance, how far the dog could run on the map without falling, and at the same time it also makes sense to minimize the amount of effort needed to make it happen. So, reinforcement learning in a nutshell: it is very similar to how a real-world animal or even a human would learn. If you're not doing well, try something new, and if you're succeeding, remember what you did that led to your success and keep doing that. In this technique, dogs were used to demonstrate the concept, but it's worth noting that it also works with bipeds. Reinforcement learning is typically used in many control situations that are extremely difficult to solve otherwise, like controlling a quadrocopter properly. It's quite delightful to see such a cool piece of work, especially given that there are not so many uses of reinforcement learning in computer graphics yet. I wonder why that is. Is it that not so many graphics tasks require a sequence of actions? Or maybe we just need to shift our mindset and get used to the idea of formalizing problems in a different way, so we can use such powerful techniques to solve them. It is definitely worth the effort. Thanks for watching and for your generous support and I'll see you next time.
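The reward idea above can be sketched very compactly: score a controller by how far it gets minus an effort penalty, try small changes, and keep the ones that improve the score. This is a simple hill-climbing stand-in, not the paper's method, and the `simulate` function below is a made-up placeholder for a real physics simulation.

```python
# Minimal sketch of "distance minus effort" as a reward, with trial-and-error
# parameter search standing in for a real reinforcement learning algorithm.
import random

def simulate(params):
    """Placeholder physics: pretend distance and effort depend on the parameters."""
    distance = 10.0 - sum((p - 0.5) ** 2 for p in params)   # best near p = 0.5
    effort = 0.1 * sum(abs(p) for p in params)
    return distance, effort

def reward(params):
    distance, effort = simulate(params)
    return distance - effort            # run far, but do not waste energy

params = [random.random() for _ in range(8)]     # joint-control parameters
best = reward(params)
for _ in range(2000):
    candidate = [p + random.gauss(0, 0.05) for p in params]   # try something new
    r = reward(candidate)
    if r > best:                        # succeeding: remember what worked
        params, best = candidate, r
print(f"best reward found: {best:.3f}")
```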
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Cryptography helps us communicate securely with someone in the presence of third parties. We use it when we do, for instance, online banking, or even tasks as mundane as reading our Gmail. One of the simplest ways of doing cryptography is using the Caesar cipher. We have a message, and we shift each letter by the same amount. Okay, wait. What does shifting mean? Shifting the letter A by 1, it becomes B, and shifting E by 1, it becomes F, and so on. The amount of shifting doesn't have to be exactly 1. It can be anything, as long as we shift all letters in the message by the same amount. If we run out of the alphabet, for instance by shifting the last letter Z by 1, we get back A, the first letter. There's a special case of the Caesar cipher that we call ROT13 that has an interesting property. It means that we shift the entirety of the message by 13 letters. Let's encrypt a message with ROT13. We obtain some gibberish. Okay. Now let's pretend that this gibberish is again a message that we would like to encrypt. We get the original message back. Why is that? Since there are 26 letters in the basic Latin alphabet, we first shift by 13, then, doing it again, we shift by another 13 letters, which is a total of 26, therefore we went around the clock and ended up where we started. Mathematicians like to describe this concisely by saying that the inverse of the ROT13 function is itself. If you call it again, you end up with the same message. We know the statistical probabilities of different letters in the English language. For instance, we know that the letter E is relatively common and Z is pretty rare. If we shift our alphabet by a fixed amount, the probabilities remain the same, only for different letters. Therefore this cipher is quite easy to break, even automatically, with a computer. This is anything but secure communication. The one-time pad encryption is one step beyond this, where we don't shift each letter by the same amount, but by different amounts. This list of numbers to use for shifting is called a pad, because it can be written on a pad of paper, and it has to be as long as the message itself. Why one time? Why paper? No worries, we're going to find out soon enough. If we use this technique, we'll enjoy a number of beneficial properties. For instance, take a look at this example with a one-time pad. We have two Vs in the encrypted output, but the first V corresponds to an H and the second V corresponds to a P. Therefore, if I see a V in the encrypted output, I have no idea which letter it was in the input. Computing statistical probabilities doesn't make any sense here, and we're powerless to break this. So even if you can intercept this message as a third party, you have no idea what it is about. It's very easy to prove mathematically that the probability of the message being HAPPY is the very same as the probability of it being HELLO, or ABCDE, or actually any gibberish. The one-time pad is the only known technique that has perfect secrecy, meaning that it is impossible to crack as long as it is used correctly. This is mathematically proven. It is no surprise that it saw plenty of use during the Second World War. So what does it mean to use it correctly? Several things. Pads need to be delivered separately from the message itself. For instance, you walk up to the recipient and give them the pad in person. The exchange of the pads is a huge problem if you are on the internet or at war.
You must also be worried about the pad getting damaged: if you lose just one number, the remainder of your message is going to be completely garbled up. You're done. The key in the pad needs perfectly random numbers, no shortcuts. Getting perfectly random numbers is anything but a trivial task and is subject to lots of discussion. One-time pads have actually been broken because of this. There's an excellent episode on a well-known channel called Vsauce on what random really means. Make sure to check it out. The pad has to be destroyed upon use and should never be reused. So if you do all this, you're using it correctly. In the age of the internet, it is not really practical, because you cannot send a delivery guy with a secret pad alongside every message you send on the internet. So in a nutshell, the one-time pad is great, but it is not practical for large-scale, real-time communication from afar. And as crazy as it sounds, if a civilization can find a method to do practical communication with perfect cryptography, their communication will look indistinguishable from noise. This is amazing. There are tons of ongoing debates on the fact that we're being exposed to tons of radio signals around the Earth. Why can we still not find any signs of extraterrestrial communication? Well, there you have the answer. And this is going to blow your mind. If practical, perfect cryptography is mathematically possible, the communication of any sufficiently advanced civilization is indistinguishable from noise. They may be transmitting their diabolical plans right past us this very moment, and all we would hear is white noise. Crazy, isn't it? Thanks for watching and for your generous support and I'll see you next time.
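As a small, self-contained illustration of the ciphers discussed above, here is a Python sketch of the Caesar shift, ROT13 and a one-time pad over the 26-letter alphabet. The pad below is drawn from Python's secrets module purely for demonstration; a real one-time pad needs truly random numbers, secure delivery, one-time use and destruction afterwards, exactly as described above.

```python
import secrets
import string

ALPHABET = string.ascii_uppercase          # the 26-letter basic Latin alphabet

def caesar(message, shift):
    # Shift every letter by the same amount, wrapping around the alphabet.
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in message)

def one_time_pad(message, pad, decrypt=False):
    # Shift every letter by a *different* amount taken from the pad.
    sign = -1 if decrypt else 1
    return "".join(ALPHABET[(ALPHABET.index(c) + sign * k) % 26]
                   for c, k in zip(message, pad))

msg = "HELLO"
print(caesar(caesar(msg, 13), 13))                  # ROT13 applied twice: "HELLO"

pad = [secrets.randbelow(26) for _ in msg]          # must be as long as the message
cipher = one_time_pad(msg, pad)
print(cipher, one_time_pad(cipher, pad, decrypt=True))   # gibberish, then "HELLO"
```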
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. A neural network is a very loose model of the human brain that we can program in a computer. Or, it's perhaps more appropriate to say that it is inspired by our knowledge of the inner workings of the human brain. Now, let's note that artificial neural networks have been studied for decades by experts, and the goal here is not to show all aspects, but one intuitive, graphical aspect that is really cool and easy to understand. Take a look at these curves on a plane. These curves are a collection of points, and these points you can imagine as images, sounds, or any kind of input data that we try to learn. The red and blue curves represent two different classes. The red can mean images of trains, and the blue, for instance, images of bunnies. Now, after we have trained the network from this limited data, which is basically a bunch of images of trains and bunnies, we will get new points on this plane, new images, and we would like to know whether this new image looks like a train or a bunny. This is what the algorithm has to find out. And this we call a classification problem, to which a simple and bad solution would be simply cutting the plane in half with a line. Images belonging to the red region will be classified as the red class, and those in the blue region as the blue class. Now, as you can see, the red region cuts into the blue curve, which means that some trains will be misclassified as bunnies. It seems that if we look at the problem from this angle, we cannot really separate the two classes perfectly with a straight line. However, if we use a simple neural network, it will give us this result. Hey, but that's cheating. We were talking about straight lines, right? This is anything but a straight line. A key concept of neural networks is that they create an inner representation of the data and try to solve the problem in that space. What this intuitively means is that the algorithm will start transforming and warping these curves, so their shapes start changing, and it finds that if we do this warping step well, we can actually draw a straight line to separate the two classes. After we undo this warping and transform the line back to the original problem, it will look like a curve. Really cool, isn't it? So these are actually straight lines, only in a different representation of the problem, and who said that the original representation is the best one in which to solve a problem? Take a look at this example with the entangled spirals. Can we separate these with a straight line? Not a chance, at least not in this representation. But if one starts warping them correctly, there will be states where they can easily be separated. However, there are rules in this game. For instance, one cannot just rip out one of the spirals here and put it somewhere else. These transformations have to be homeomorphisms, which is a term that mathematicians like to use. It intuitively means that the warpings are not too crazy, meaning that we don't tear apart important structures. And as they remain intact, the warped solution is still meaningful with respect to the original problem. Now comes the deep learning part. Deep learning means that the neural network has multiple of these hidden layers and can therefore create much more effective inner representations of the data.
From an earlier episode, we've seen in an image recognition task that as we go further and further into the layers, first we'll see an edge detector, then from a combination of edges, object parts emerge. And in the later layers, a combination of object parts creates object models. Let's take a look at this example. We have a bullseye here, if you will, and you can see that the network is trying to warp this to separate it with a line, but in vain. However, if we have a deep neural network, we have more degrees of freedom, more directions and possibilities to warp this data. And if you think about it intuitively, if this were a piece of paper, you could put your finger behind the red zone and push it in, making it possible to separate the two regions with a line. Let's take a look at a one-dimensional example to better see what's going on. This line is the one-dimensional equivalent of the original problem, and you can see that the problem becomes quite trivial if we have the freedom to do this kind of transformation. We can easily encounter cases where the data is very severely tangled, and we don't know how good the best solution can be. There is a very heavily academic subfield of mathematics called knot theory, which is the study of tangling and untangling objects. It is subject to a lot of snarky comments for not being, well, too exciting or useful. What is really mind-blowing is that knot theory can actually help us study these kinds of problems, and it may ultimately end up being useful for recognizing traffic signs and designing self-driving cars. Now, it's time to get our hands dirty. Let's run a neural network on this dataset and see what happens. If we use a low number of neurons and one layer, you can see that it is trying ferociously, but we know that it is going to be a fruitless endeavor. Upon increasing the number of neurons, magic happens. And we know exactly why. Yeah! Thanks so much for watching and for your generous support. I feel really privileged to have supporters like you Fellow Scholars. Thank you and I'll see you next time.
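If you would like to try the spiral experiment yourself, here is a minimal sketch with scikit-learn: we generate two entangled spirals and fit small multi-layer networks to separate them. The spiral formula, layer sizes and parameters are illustrative choices and not the ones used in the demo shown in the video.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Two entangled spirals as our red and blue classes.
n = 500
t = np.linspace(0.5, 3.5 * np.pi, n)
spiral_a = np.c_[t * np.cos(t), t * np.sin(t)]
spiral_b = -spiral_a                          # the same spiral rotated by 180 degrees
X = np.vstack([spiral_a, spiral_b]) + 0.1 * np.random.randn(2 * n, 2)
y = np.array([0] * n + [1] * n)

# A single tiny hidden layer struggles to untangle them; a deeper, wider
# network has enough freedom to warp the plane and separate the spirals.
for hidden in [(3,), (32, 32)]:
    clf = MLPClassifier(hidden_layer_sizes=hidden, activation="tanh",
                        max_iter=5000, random_state=0).fit(X, y)
    print(hidden, "training accuracy:", clf.score(X, y))
```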
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Neural networks can be used to learn a variety of things, for instance, to classify images, which means that we'd like to find out what breed the dog is that we see on the image. This work uses a combination of two techniques. The first is a neural network variant that is better adapted to the visual mechanisms of humans and is therefore very suitable for processing and classifying images. This variant we call a convolutional neural network. Here's a great web application where you can interactively train your own network and see how it improves at recognizing different things. This is a dataset where the algorithm tries to guess which class these smudgy images are from. If trained for long enough, it can achieve a classification accuracy of around 80%. The current state of the art in research is about 90%, which is just 4% off of humans who have performed the same classification. This is already insanity. We could be done right here, but let's put this on steroids. As you remember from an earlier episode, sentences are not one thing, they are a sequence, a sequence of words. Therefore they can be created by recurrent neural networks. Now I hope you see where this is going. We have images as an input and sentences as an output. This means that we have an algorithm that is able to look at any image and summarize what is being seen on it. Buckle up, because you're going to see some wicked results. It can not only recognize the construction worker, it knows that he's in a safety vest and is currently working on the road. It can also recognize that a man is in the act of throwing a ball. A black and white dog jumps over a bar. It is not at all trivial for an algorithm to know what over and under mean, because it is only looking at a 2D image that is a representation of the 3D world around us. And there are, of course, hilarious failure cases. Well, a baseball bat. Well, close enough. There is a very entertaining web demo with the algorithm and all kinds of goodies linked in the description box. Check them out. The bottom line is that what we thought was science fiction five years ago is now reality in machine learning research. And based on how fast this field is advancing, we know that we're still only scratching the surface. Thanks for watching and I'll see you next time. Oh, and before you go, you can now be a part of two-minute papers and support the series on Patreon. A video with more details is coming soon. Until then, just click on the link on the screen if you're interested. Thank you.
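The way the two networks plug into each other can be summarized in a few lines. The following is only a conceptual sketch: cnn_features, rnn_step and vocab are hypothetical placeholders standing in for a trained convolutional encoder, a trained recurrent decoder and its vocabulary, not the actual model from the paper.

```python
def caption(image, cnn_features, rnn_step, vocab, max_words=16):
    """Greedy image captioning: the CNN encodes the image once, then the RNN
    emits one word at a time until it produces the special <end> token."""
    features = cnn_features(image)            # one fixed-length vector per image
    state, word = features, "<start>"
    sentence = []
    for _ in range(max_words):
        # The recurrent step consumes the previous word and its hidden state,
        # and returns a probability for every word in the vocabulary.
        state, word_probs = rnn_step(state, word)
        word = max(vocab, key=lambda w: word_probs[w])    # greedy word choice
        if word == "<end>":
            break
        sentence.append(word)
    return " ".join(sentence)
```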
With the help of science, humans are capable of creating extraordinary things. Two-minute papers is a series where I explain the latest and greatest research in a way that is understandable and enjoyable to everyone. We talk about really exciting topics like machine learning techniques that paint in the style of famous artists, light simulation programs that create photorealistic images on a computer, fluid and smoke simulations that are so high quality that they are used in the movie industry, animating the movement of digital creatures on a computer, building bridges with flying machines, and many more extremely exciting topics. Research papers are for experts, but two-minute papers is for everyone. Creating each of these videos is a lot of work. I do almost everything on my own: choosing the topics, recording the audio, audio engineering, and putting the videos together. And my wife, Felicia, designs these beautiful thumbnails for each of them. And now you can become an active supporter of two-minute papers. If you help with only one dollar per month, you help more than a few thousand advertisement views on a video would. It's insanity, and it's tremendously helpful. And you also get really cool perks, like accessing upcoming episodes earlier or deciding the topic of the next two-minute papers video. Two-minute papers is never going to be behind a paywall. It will always be free for everyone. I feel that it's just so honest. I create videos, and if you like them, you can say, hey, I like what you're doing, here's some help. That's really awesome. If you'd like to help, just click on the Patreon link at the end of this video or in the description box below. Or if you're watching this on the Patreon website, click become a patron and select an amount. And I am tremendously grateful for your support. Also, if you're already a supporter of the show and feel that you need this amount to make ends meet, no worries. You can just cancel the subscription at any time. And if you don't want to spend a dime or you can't afford it, it's completely okay. I'm very happy to have you around. And please, stay with us and let's continue our journey of science together. Let's show the world how cool science and research really are. Thanks for watching and I'm looking forward to greeting you in our growing club of fellow scholars. Hey.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér, where we expand our knowledge in science and research. Blackboard-style lecture videos are really popular on YouTube nowadays. Khan Academy is an excellent example of that, where you get the feeling that someone is sitting next to you and teaching you, not like someone who is addressing you formally from the podium. Without question, these kinds of videos can augment textbooks quite well. However, they are often not easily searchable. This piece of work tries to take this one step beyond. The input is a video and a transcript, and the output of the algorithm is an interactive lecture note, where you can not only see the most important points during the lecture, but you can also click on some of them to see full derivations of the expressions. Let's outline the features that one would like to see in a usable outlining product. It has to be able to find the milestones that are at the end of each derivation and present them to the user. If you have studied mathematics, you know how mathematical derivations go. Following the train of thought of the teacher is not always trivial. It's also important to find meaningful groupings for a derivation. This involves finding similarities between drawings, trying to figure out the individual steps, and doing a segmentation to get a series of images out of it. And finally, the technique has to be good at interleaving drawings and formulae with written text in an appealing and digestible way. It is very easy to mess up this step, as the text has to describe the visuals. Even though I wish tools like this existed when I was an undergrad student, it is still important to just study, study and study and expand one's knowledge. If textbooks like this start to appear, I'll be the first in line, and I'll not be reading, I'll be devouring them. Also, think about how smart the next generation will be with awesome studying materials like these. Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Today we are going to talk about a great algorithm that takes the facial expression of one human and transfers it onto someone else. First, there is a calibration step where the algorithm tries to capture the geometry and the reflectance properties of both faces. The expression transfer comes after this, which is fraught with difficulties. It has to be able to deal with changes in the geometry, the reflectance properties of the face, the illumination in the room, and finally changes in pose and expressions. All of this at the same time and with a negligible time delay. The difficulty of the problem is further magnified by the fact that we humans know really well how a human face is meant to move, therefore even the slightest inaccuracies are very easily caught by our eyes. Add this to the fact that one has to transfer details like additional wrinkles to a foreign face correctly, and it's easy to see that this is an incredibly challenging problem. The resulting technique not only does the expression transfer quite well, but is also robust to lighting changes. However, it is not robust to occlusions, meaning that errors should be expected when something gets in the way. Problems also arise if the face is turned away from the camera, but the algorithm recovers from these erroneous states rapidly. What's even better, if you use this technique you can also cut back on your plastic surgery and hair transplantation costs. How cool is that? This new technique promises tons of new possibilities. Beyond the obvious impersonation and reenactment fun for the motion picture industry, the authors propose the following in the paper: imagine another setting in which you could reenact a professionally captured video of somebody in business attire with a new real-time face capture of yourself sitting in casual clothing on your sofa. Hell yeah! Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. This is going to be a more in-depth episode of the series. Photorealistic rendering means that we create a 3D model of a scene on a computer and we run a light simulation program that shows how it would look in reality. These programs simulate rays of light that connect the camera to the light sources in the scene and compute the flow of energy between them. If you have missed our earlier episode on Metropolis Light Transport and if you're interested, make sure to watch it first; I've put a link in the description box. This time, let's go one step beyond classical light transport algorithms and talk about a gradient-domain rendering technique and how we can use it to create photorealistic images quicker. First of all, what is a gradient? The gradient is a mathematical concept. Let's imagine an elevation map of a country where there are many hills and many flat regions. And imagine that you are an ambitious hill climber who is looking for a challenge, therefore you would always like to go in the direction of the highest elevation increase, the biggest rock that you can climb nearby. The gradient is a bunch of arrows that always point in the direction of the largest increase on the map. Here with blue, you can see the elevation map with the mountains, and below it with red, the gradient of this elevation map. This is where you should be going if you are looking for a challenge. It is essentially a guidebook for aspiring hill climbers. One more example with a heat map. The bluish colors denote the colder regions, the reddish colors show the warmer regions. If you are freezing, the gradients will show you where you should go to warm up. So if we have the elevation map, it is really easy to create the gradients out of it. But what if we have it the other way around? This would mean that we only have the guidebook, the red arrows, and from that we would like to guess what the blue elevation map looks like. It's like a crossword puzzle, only way cooler. In mathematics, we call this procedure solving the Poisson equation. So let's try to solve it by hand. I look at the middle, where there are no arrows pointing in this direction, only ones that point out of here. Meaning that there is an increase outwards, therefore this has to be a huge hole. If I look at the corners, I don't see very long arrows, meaning that there is no real change in these parts, therefore it must be a flat region. So we can solve this Poisson equation and recreate the map from the guidebook. To see what this is good for, let's jump right into the gradient-domain renderer. Imagine that we have this simple scene with a light source, an object that occludes the light source, and the camera looking down on this shadow edge. Let's rip out this region and create a close-up of it. Imagine that the light regions are large hills on the elevation map, and the shadow edge is the ground level below them. Previous algorithms were looking to shoot as many rays as possible towards the brighter regions, but not this one. The gradient-domain algorithm is looking for gradients, abrupt changes in the illumination, if you will. You can see these white and red pairs of points next to each other. These are the places where the algorithm concentrates. If we compute the difference between them, we get the gradients of our elevation map.
In these regions, the difference is zero, therefore we would have infinitely small arrows, and as in the previous examples, we solve the Poisson equation to get the blue map back from the red arrows. The small arrows mean that we have a completely flat region, so we can recognize that we have a wide wall in the background by just looking at a few places; we don't need to explore every inch of it, like previous algorithms do. And as you can see at the shadow edge, the algorithm is quite interested in this change. In our gradients, there will be a large red arrow pointing from the white to the red dot, because we are going from the darkness to a light region. After solving the Poisson equation, we recognize that there should be a huge jump here. So in the end, with this technique, we can often get a much better idea of the illumination in the scene than we did with previous methods that just try to explore every single inch of it. The result is improved output images with much less noise, even though the gradient-domain renderer computed far fewer rays than the previous random algorithm. Excellent piece of work, bravo! Now that we understand what gradients and Poisson's equation are, let's play a quick game together and try to learn these mathematical concepts from the internet like an undergrad student would do. And before you run away in terror, this is not supposed to be pleasant. I'll try to make a point after reading this. In mathematics, the gradient is a generalization of the usual concept of derivative of a function in one dimension to a function in several dimensions. If f of x1 to xn is a differentiable scalar-valued function of standard Cartesian coordinates in Euclidean space, its gradient is the vector whose components are the n partial derivatives of f. It is thus a vector-valued function. Now let's proceed to Poisson's equation. In mathematics, Poisson's equation is a partial differential equation of elliptic type with broad utility in electrostatics, mechanical engineering and theoretical physics. It is used, for instance, to describe the potential energy field caused by a given charge or mass density distribution. This piece of text is one of the reasons why I started two-minute papers. I try to pull back the curtains and show that difficult mathematical and scientific concepts often conceal very simple and intuitive ideas that anyone can understand. And I am delighted to have you by my side on this journey. This was anything but two minutes. I incorporated a bit more detail for you to have a deeper understanding of this incredible work. I hope you don't mind. Let me know if you liked it in the comments section below. Thanks for watching and I'll see you next time.
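To connect the guidebook analogy to actual numbers, here is a tiny one-dimensional NumPy example: we take an elevation profile, compute its gradient with finite differences, and then rebuild the profile from the gradient alone. In one dimension the Poisson solve collapses to a cumulative sum plus one boundary value; a gradient-domain renderer has to solve a proper two-dimensional linear system, but the idea is the same.

```python
import numpy as np

# An "elevation map" along a single hiking trail.
elevation = np.array([0.0, 1.0, 3.0, 6.0, 6.0, 4.0, 1.0, 0.0])

# The guidebook: how much the elevation changes between neighboring points.
gradient = np.diff(elevation)                 # [ 1.  2.  3.  0. -2. -3. -1.]

# Reconstruction: knowing only the gradient and one boundary value,
# we recover the whole profile by accumulating the changes.
reconstructed = np.concatenate(([elevation[0]], elevation[0] + np.cumsum(gradient)))

print(np.allclose(reconstructed, elevation))  # True
```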
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Artificial neural networks are very useful tools that are able to learn and recognize objects on images, or learn the style of Van Gogh and paint new pictures in his style. Today we're going to talk about recurrent neural networks. So what does the recurrent part mean? With an artificial neural network, we usually have a one-to-one relation between the input and the output. This means that one image comes in and one classification result comes out, whether the image depicts a human face or a train. With recurrent neural networks, we can have a one-to-many relation between the input and the output. The input would still be an image, but the output would not be a word, but a sequence of words, a sentence that describes what we see on the image. For a many-to-one relation, a good example is sentiment analysis. This means that a sequence of inputs, for instance, a sentence, is classified as either negative or positive. This is very useful for processing movie reviews, where we'd like to know whether the user liked or hated the movie without reading pages and pages of discussion. And finally, recurrent neural networks can also deal with many-to-many relations, translating an input sequence into an output sequence. Examples of this are machine translations that take an input sentence and translate it to an output sentence in a different language. For another example of a many-to-many relation, let's see what the algorithm learned after reading Tolstoy's War and Peace by asking it to write exactly in that style. It should be noted that generating a new novel happens letter by letter, so the algorithm is not allowed to memorize words. Let's look at the results at different stages of the training process. The initial results are, well, gibberish. But the algorithm seems to recognize immediately that words are basically a big bunch of letters that are separated by spaces. If we wait a bit more, we see that it starts to get a very rudimentary understanding of structures. For instance, a quotation mark that you have opened must be closed, and a sentence can be closed by a period, and it is followed by an uppercase letter. Later, it starts to learn shorter and more common words, such as fall, debt, the, for, me. If we wait for longer, we see that it already gets a grasp of longer words, and smaller parts of sentences actually start to make sense. Here's a piece of Shakespeare that was written by the algorithm after reading all of his works. You see names that make sense, and you really have to check the text thoroughly to conclude that it's indeed not the real deal. It can also try to write math papers. I had to look for quite a bit until I realized that something is fishy here. It is not unreasonable to think that it can very easily deceive a non-expert reader. Can you believe this? This is insanity. It is also capable of learning the source code of the Linux operating system and generating new code that looks quite sensible. It can also try to continue the song Let It Go from the famous Disney movie Frozen. So recurrent neural networks are amazing tools that open up completely new horizons for solving problems where either the inputs or the outputs are not one thing, but a sequence of things. And now, signing off with a piece of recurrent neural network wisdom. Well, your wit is in the care of side and death. Bear this in mind wherever you go. Thanks for watching and I'll see you next time.
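The letter-by-letter generation described above boils down to a simple sampling loop. In the sketch below, rnn_step is a hypothetical placeholder for a trained character-level recurrent network that returns a probability for every possible next character; the rest is just bookkeeping.

```python
import random

def generate(rnn_step, initial_state, seed_char="T", length=200):
    """Sample text one character at a time from a trained character-level RNN."""
    state, char = initial_state, seed_char
    text = [char]
    for _ in range(length):
        # The network looks at the previous character and its hidden state,
        # and predicts how likely every possible next character is.
        state, char_probs = rnn_step(state, char)
        chars, probs = zip(*char_probs.items())
        char = random.choices(chars, weights=probs, k=1)[0]
        text.append(char)
    return "".join(text)
```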
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Simulating the behavior of water and other fluids is something we have been talking about in the series. However, we are now interested in modeling the interactions between two fluid interfaces that are potentially made of different materials. During these collisions, deformations and topology changes happen that are very far from trivial to simulate properly. The interesting part about this technique is that it uses graph theory to model these interface changes. Graph theory is a mathematical field that studies relations between, well, different things. Graphs are defined by vertices and edges, where the vertices can represent people on your favorite social network, and any pair of these people who know each other can be connected by an edge. Graphs are mostly used to study and represent discrete structures. This means that you either know someone or you don't; there is nothing in between. For instance, the number of people that inhabit the Earth is an integer. It is also a discrete quantity. However, the surface of different fluid interfaces is a continuum. It is not really meant to be described by discrete mathematical tools such as graphs. And, well, that's exactly what happened here. Even though the surface of a fluid is a continuum, when dealing with topological changes, an important thing we'd like to know is the number of regions inside and around the fluid. The number of these regions can increase or decrease over time, depending on whether multiple materials split or merge. And surprisingly, graph theory has proved to be very useful in describing this kind of behavior. The resulting algorithm is extremely robust, meaning that it can successfully deal with a large number of different materials. These include merging and wobbling droplets, piling plastic bunnies, and swirling spheres of glue. Beautiful results! If you liked this episode, please don't forget to subscribe and become a member of our growing club of Fellow Scholars. Please come along and join us on our journey and let's show the world how cool research really is. Thanks so much for watching and I'll see you next time!
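One of the simplest graph-based tools for keeping track of which regions have merged is a union-find (disjoint set) structure. The sketch below is a generic illustration of that bookkeeping idea, not the actual region-tracking machinery of the paper.

```python
class UnionFind:
    """Track which droplets currently belong to the same connected region."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, r):
        while self.parent[r] != r:            # walk up to the representative
            self.parent[r] = self.parent[self.parent[r]]
            r = self.parent[r]
        return r

    def merge(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Five droplets; droplets 0-1 and 2-3 collide and merge during the simulation.
regions = UnionFind(5)
regions.merge(0, 1)
regions.merge(2, 3)
print(len({regions.find(r) for r in range(5)}))   # 3 separate regions remain
```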
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. A glockenspiel is a percussion instrument that consists of small pieces of metal that are tuned to emit a given musical note when they are struck. In order to achieve these sounds, this instrument is usually manufactured as a set of metal bars. Researchers at Harvard, Columbia University and MIT became interested in designing a computer algorithm to obtain different shapes that lead to the same sounds. And if that is possible, then one should be able to mill or 3D print these shapes and see whether the computational results are in line with reality. The algorithm takes an input material, a target shape, a location where we'd like to strike it, and a frequency spectrum that describes the characteristics of the sound we are looking for. Furthermore, the algorithm also has to optimize how exactly the stand of the piece looks to make sure that no valuable frequencies are dampened. Here's an example to show how impactful the design of this stand is and how beautifully sustained the sound is if it is well optimized. You'll see a set of input shapes specified by the user that are tuned to standard musical notes, and below them, the optimized shapes that are as similar as possible, but with the constraint of emitting the correct sound. The question is how the technique should change your target shape to match the sound that you specified. One can also specify what overtones the sound should have. An overtone means that besides the fundamental tone that we play, for instance on a guitar, higher frequency sounds are also emitted, producing a richer and more harmonious sound. In this example, the metal piece will emit higher octaves of the same note. If you have a keen ear for music, you will hear and appreciate the difference in the sounds. In summary, with this technique, one can inexpensively create awesome, custom-made glockenspiels that have a sound quality comparable to professionally manufactured instruments. Staggering results. It seems that we are starting to appear in the news. It is really cool to see that there is a hunger for knowing more about science and research. If you liked this episode, please help us reach more and more people and share the series with your friends, especially with people who have nothing to do with science. Thanks so much for watching and I'll see you next time.
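The frequency spectrum that serves as the optimization target can be pictured with a tiny NumPy experiment: we synthesize a tone with one overtone an octave above it and check which frequencies dominate. The 440 Hz fundamental and the 880 Hz overtone are arbitrary example values, not taken from the paper.

```python
import numpy as np

sample_rate, duration = 44100, 1.0
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

# A fundamental at 440 Hz plus a quieter overtone one octave above it.
tone = np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1.0 / sample_rate)

# The two strongest frequency bins sit at the fundamental and its overtone.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.round().tolist()))   # [440.0, 880.0]
```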
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. If we would like to see how digitally modeled objects would look in real life, we create a 3D model of the desired scene, assign material models to the objects within, and use a photorealistic rendering algorithm to finish the job. It simulates rays of light that connect the camera to the light sources in the scene and computes the flow of energy between them. Initially, after a few rays, we'll only have a rough idea of how the image should look, therefore our initial results will contain a substantial amount of noise. We can get rid of this by simulating the path of millions and millions of rays that will eventually clean up our image. This process, where a noisy image gets clearer and clearer, we call convergence, and the problem is that this can take excruciatingly long, even up to hours, to get a perfectly clear image. With the simpler algorithms out there, we generate these light paths randomly. This technique we call path tracing. However, in the scene that you see here, most random paths cannot connect the camera and the light source, because this wall is in the way, obstructing many of them. Light paths like these don't contribute anything to our calculations and are ultimately a waste of time and precious resources. After generating hundreds of random light paths, we have found a path that finally connects the camera with the light source without any obstructions. In generating the next path, it would be a crime not to use this knowledge to our advantage. A technique called Metropolis light transport will make sure to use this valuable knowledge, and upon finding a bright light path, it will explore other paths that are nearby to have the best shot at creating valid, unobstructed connections. If we have a difficult scene at hand, Metropolis light transport gives us way better results than traditional, completely random path sampling techniques such as path tracing. There are some equal-time comparisons against path tracing to show how big of a difference this technique makes. An equal-time comparison means that we save the output of two algorithms that we ran for the same amount of time on the same scene and see which one performs better. This scene is extremely difficult in the sense that the only source of light is coming from the upper left, and after the light goes through multiple glass spheres, most of the light paths that we would generate will be invalid. As you can see, random path tracing yields really dreadful results. Well, if you can call a black image a result, that is. And as you can see, Metropolis light transport is extremely useful in these cases. And here's the beautiful, completely cleaned up, converged result. The lead author of this technique, Eric Veach, won a technical Oscar award for his work, part of which was Metropolis light transport. If you like this series, please click on that subscribe button to become a Fellow Scholar. Thanks for watching; there are millions of videos out there and you decided to take your time with this one. That is amazing. Thank you and I'll see you next time.
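The core idea, stay close to a path once you have found a bright one, can be shown on a one-dimensional stand-in for the "how much light does this path carry" function. The sketch below is plain Metropolis-Hastings sampling over a toy contribution function with a narrow bright peak; it illustrates the mutation-and-acceptance idea, not the actual renderer.

```python
import random

def contribution(x):
    # A toy stand-in for a scene where only a narrow range of paths is bright.
    return 1.0 if 0.48 < x < 0.52 else 0.001

def metropolis(n_samples, mutation_size=0.05):
    x = random.random()                     # start from a random "path"
    samples = []
    for _ in range(n_samples):
        # Mutate the current path slightly (wrapping around the unit interval).
        y = (x + random.uniform(-mutation_size, mutation_size)) % 1.0
        # Accept the mutation with probability given by how much brighter it is;
        # otherwise stay and keep exploring around the current bright path.
        if random.random() < min(1.0, contribution(y) / contribution(x)):
            x = y
        samples.append(x)
    return samples

samples = metropolis(100_000)
bright = sum(1 for x in samples if 0.48 < x < 0.52)
print("fraction of samples spent in the bright region:", bright / len(samples))
```

Independent random sampling would only land in the bright 4% of the domain about 4% of the time, while the Metropolis chain spends the overwhelming majority of its samples there, which is exactly the advantage described above.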
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. So far we have seen excellent works on how to simulate the motion and the collision of bodies, but we have completely neglected an aspect of videos that is just as important as the visuals, and that aspect is none other than sound. What if you have footage of objects colliding but no access to the sound of the encounter? You obviously have to recreate the situation that you see on the screen, and even for the easiest cases you have to sit in a studio with a small hammer and a mug, which is a difficult and often very laborious process. If we can simulate the forces that arise when bodies collide, what if we could also simulate the sound of such encounters? If you would like a great solution for this, this is the work you should be looking at. Most techniques in this field treat objects as rigid bodies. In this work, the authors extend the simulation to deformable bodies, therefore making it possible to create rich clanging sound effects. Now, the mandatory question arises: how do we evaluate such a technique? Evaluating means that we would like to find out how accurate it really is. And obviously, the ideal case is if we compare the sounds created by the algorithm to what we would experience in the real world and see how close they are. Well, pretty damn close. I love these simulation works the most when they are not only beautiful, but somehow relate to reality, and this technique is a great example of that. It feels quite empowering that we have these really smart people who can solve problems that sound inconceivably difficult. Thank you so much for checking the series out. If you would like to be notified quickly when a new episode of 2 Minute Papers pops up, consider following me on Twitter. I announce every upload right away. I've put a link for this in the description box. Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's talk about the behavior of cloth in animations. In Disney movies, you often see characters wandering around in extremely realistically behaving apparel. It sounds like something that would be extremely laborious to create by hand. Do animators have to create all of this movement by hand? Not a chance. We use computer programs to simulate the forces that act on the fabric, which starts bending and stretching in a number of different directions. The more detailed simulations we are looking for, the more computational time we have to invest and the more we have to wait. The computations can take up to a minute for every image, but if we have lots of movement and different fabrics in the scene, it can take even more. Is there a solution for this? Can we get really high quality simulations in a reasonable amount of time? Of course we can. The name of the game is adaptive simulation, again. We have talked about adaptive fluid simulations before. Adaptive means that the technique tries to adapt to the problem that we have at hand. Here in the world of cloth simulations, it means that the algorithm tries to invest more resources in computing regions that are likely to have high-fidelity details such as wrinkles. These regions are marked with red to show that wrinkles are likely to form here. The blue and yellow denote regions where there is not so much going on, therefore we don't have to do too many calculations there. These are the places where we can save a lot of computational resources. This example illustrates the concept at an extreme level. Take a look. While the fabric is at rest, it's mostly blue and yellow, but as forces are exerted on it, wrinkles appear, and the algorithm recognizes that these are the regions that we really need to focus on. With this adaptive technique, the simulation time for every picture that we create is reduced substantially. Luckily, some cloth simulation routines are implemented in Blender, which is an amazing free software package that is definitely worth checking out. I've put some links in the description box to get you started. Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this work, we place a small light source at a chosen point in a scene and record a photograph of how things look with the given placement. Then we place the light source at a new position and record an image again. We repeat this process several times. Then, after we have done that, we have the question: what would the photograph look like if I put the light source in places I haven't seen yet? This process we call image relighting. This work uses neural networks to do relighting by learning how different light source placements behave. If you haven't heard about neural networks before, make sure to check out our previous episodes on the topic. I have put links for you in the description box. After the training, this technique guesses how completely unknown light source setups would look in reality. We give the algorithm a light source position we haven't seen yet, and it will generate us a photograph of how it would look in reality. The first question is, okay, but how well does it do the job? I am not sure if you are going to believe this one, as you will be witnessing some magnificent results. On the left you will see real photographs, and on the right, reconstructions that are basically the guesses of the algorithm. Note that it doesn't know how the photograph would look. It has to generate new photographs based on the knowledge that it has from seeing other photos. It is completely indistinguishable from reality. This is especially difficult in the presence of the so-called high-frequency lighting effects. The high-frequency part means that if we change the light source just a bit, there may be very large changes in the output image. Such a thing can happen when a light source is moved very slightly but is suddenly hidden behind an object, therefore our photograph changes drastically. The proposed technique uses ensembles, which means that multiple neural networks are trained and their guesses are averaged to get better results. What do you do if you go to the doctor and he says you have a very severe and very unlikely condition? Well, you go and ask multiple doctors and see if they say the same thing. It is reasonable to expect that the more doctors you ask, the clearer you will see, and this is exactly what the algorithm does. Now look at this. On the left side there is a real photo and on the right the guess of the algorithm after training. Can you believe it? You can barely see the difference, and this is a failure case. The success story scenarios for many techniques are not as good as the failure cases here. These results are absolutely stunning. The algorithm can also deal with multiple light sources of different colors. As you can see, machine learning techniques such as deep neural networks have opened so many doors in research lately. We are starting to solve problems that everyone agreed were absolutely impossible before. We are currently over 2000 subscribers; our club of scholars is growing at a really rapid pace. Please share the series so we can reach people that don't know about us yet. Let's draw them in and show them how cool research really is. Thanks for watching and I'll see you next time.
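The ensemble trick itself is easy to sketch: train several networks independently and average their guesses, using their spread as a rough uncertainty estimate. Below is a generic illustration with scikit-learn on made-up data, not the relighting networks themselves.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))           # stand-in for light source positions
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])   # stand-in for one pixel's brightness

# Train several networks that differ only in their random initialization.
ensemble = [MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000,
                         random_state=seed).fit(X, y) for seed in range(5)]

x_new = np.array([[0.2, -0.4]])                 # a light position we have never seen
guesses = np.array([net.predict(x_new)[0] for net in ensemble])
print("mean prediction:", guesses.mean(), "spread (uncertainty):", guesses.std())
```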
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The work we are going to discuss today is about visual microphones. What does this mean exactly? The influence of sound creates surface vibrations in many objects such as plants or a bag of chips, foil containers, water, and even bricks. They thereby work as visual microphones. Now hold onto your chairs, because this algorithm can reconstruct audio data from video footage of these vibrations. What this means is that if someone outside of your house pointed a high-speed camera at a bag of chips when you start talking in your room, he would be able to guess what you said by only seeing the vibrations of the bag of chips. In the following example, you will see recorded footage of the bag, but the movement is so subtle that your naked eye won't see any of it. First you'll hear the speech of a person recorded in the house, then the reconstruction from only the visual footage of the bag. Mary had a little lamb whose fleece was white as snow, and everywhere that Mary went, that lamb was sure to go. And this is what we were able to recover from high-speed video filmed from outside, behind soundproof glass. This is just unbelievable. Here is another example with a plastic bag where you can see the movement caused by the sound waves. The paper is very detailed and rigorous; this is definitely one of the best research works I've seen in a while. The most awesome part of this is that this is not only an excellent piece of research, it is also a great product. And note that this problem is even harder than one would think, since the frequency response of various objects can be quite different, which means that every single object vibrates a bit differently when hit by the same sound waves. You can see a great example of these responses from bricks, water, and many others here. What it will be used for is shrouded in mystery for the moment. Even though I think this work provides fertile ground for new conspiracy theories, the authors don't believe it is suitable for surveillance. Someone argued that it may be useful for early earthquake detection, which is an awesome idea. Also, maybe it could be used to detect video redubbing and to recover beeped-out speech from videos, and I'm sure there will be many other applications. The authors also have a follow-up paper on estimating material properties by looking at how objects vibrate. Awesome. Do you have any potential applications in mind? Let me know in the comments section. And there's also a fantastic TED Talk and paper video on the topic that you can find in the description box alongside a great Reddit discussion link. I urge you to check all of these out. The videos are narrated by Abe Davis, who did a great job explaining their concepts. Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Anyone who has tried building a bridge over a huge chasm has realized that it is possibly one of the most difficult and dangerous things you could do on a family vacation. The basic construction elements for such a bridge can be ropes, cables and wires. And this kind of task is fundamentally different from classical architectural building problems. Here you don't need any kind of scaffolding or to carry building blocks that weigh a lot. However, you have to know how to tie knots. Therefore this is the kind of problem you need flying machines for. They can fly anywhere, they are nimble, and their disadvantage, the very limited payload they can carry, does not play a big role here. In this piece of work at ETH Zürich, these machines can create some crazy knots: from single to multi-round turn hitches, knots, elbows, round turns and multiple-rope knots. And these they have to be able to create in a collaborative manner, because each individual flying machine will hold one rope, therefore they have to pass through given control points at a strictly given time and with a target velocity. These little guys also have to know the exact amount of force they need to exert on the structure to move into a desirable target position. Even deploying the rope is not that trivial. The machine is equipped with a roller to do so, but the friction of this roller can be changed at any time according to the rope-releasing direction to unroll it properly. It also has to face the correct direction. And these structures are not just toys; the resulting bridges are resilient enough for humans to use. This work is a great example to show that the technology of today is improving at an incredible pace. If we can solve difficult, collaborative control problems such as this one, just think about the possibilities. What an exciting time it is to be alive. We have gotten lots of shares for the series on social media. I'm trying to send a short thank you message to every single one of you. I'm trying my best, and don't forget, every single share helps spread the word about the series immensely. Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. As we discussed before, simulating the motion of fluids and smoke with a computer program is a very expensive process. We have to compute quantities like the velocity and the pressure of a piece of fluid at every given point in space. Since we cannot compute them everywhere, we place a 3D grid, compute these quantities at the grid points, and use mathematical techniques to find out what exactly is happening between these grid points. But still, even if we do this, we have to wait up to days, even for a few seconds of video footage. One possible way to alleviate this is to write an adaptive simulation program. Adaptive means that the simulator tries to adapt to the problem at hand. Here it means that it recognizes the regions where it needs to focus a lot of computational resources, and at the same time it also tries to find regions where it can get away with using less computation. Here you can see spheres of different sizes; in regions where there is a lot going on, you will see smaller spheres. This means that we have a finer grid in this region, therefore we know more about what exactly is happening here. In other places you also see larger spheres, meaning that the resolution of our grid is coarser in these regions. This we can get away with only because there is not much happening there. Essentially, we focus our resources on regions that really require it, for instance, where there are lots of small-scale details. The spheres are only used for the sake of visualization; the actual output of the simulator looks like this. It also takes into consideration which regions we are currently looking at. Here we are watching one side of the corridor, so the simulator will take this into consideration and create a highly detailed simulation at the cost of sacrificing details on the other side of the corridor, but that's fine because we don't see any of that. However, there may be some objects the fluid needs to interact with. Here, the algorithm makes sure to increase the resolution so that the particles can correctly flow through the holes of this object. The authors have also published the source code of their technique, so anyone with a bit of programming knowledge can start playing with this amazing piece of work. The world of research is incredibly fast-moving. When you are done with something, you immediately need to jump onto the next project. Two minute papers is a series where we slow down a bit and celebrate these wonderful works. We're also trying to show that research is not only for experts, it is for everyone. If you like this series, please make sure to help me spread the word and share the series with your friends so we can all marvel at these beautiful works. Thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Photorealistic rendering is a really exciting field in computer graphics. It works the following way. We use a piece of modeling software to create the geometry of objects, then we assign material models to them. After that, we would like to know how these objects would look in real life. To achieve this, we use computer programs that simulate the behavior of light. So this is how the scene would look with photorealistic rendering. If it is possible to create digital objects that look as if they were real, then artists have an extremely powerful tool they can create wonderful images and animations with. It is not a surprise that we see photorealistically rendered cities next to real actors in many feature-length movies nowadays. Game of Thrones is also a great example of this. I've linked two jaw-dropping examples in the description box below. Take a look. The automotive industry also has lots of ads where people don't even know that they are not looking at reality, but at a computer simulation. But in the movie industry, the Pixar people were reluctant to use photorealistic rendering for the longest time, and it is because it constrained their artistic freedom. One classical example is when the artist says, I want those shadows to be brighter. Then the engineer says, okay, let's put brighter light sources in the scene. But then the artist goes, no, don't ruin the rest of the scene, just change those shadows. That is not possible: if you change something, everything else in the surroundings changes. This is how physics works, but artists did not want any of that. But now things are changing. With this piece of work, you can both use photorealistic rendering and manipulate the results according to your artistic vision. For instance, the reflection of the car in the mirror here doesn't look really great. In order to overcome this, we could rotate the mirror to have a better looking reflection, but we want it to stay where it is now. So we'll just pretend as if we rotated it, so the reflection looks different, but everything else remains the same. Or we can change the angle of the incoming sunlight, but we don't want to put the sun itself in a different place, because it would change the entire scene. The artist wants only this one effect to change, and she is now able to do that, which is spectacular. Removing the green splotch from the wall is now also not much of a problem. And also, if I don't like that only half of the reflection of the sphere is visible on the face of the bunny, I could move the entire sphere. But I don't want to. I just want to grab the reflection and move it without changing anything else in the scene. Great! It has a much better cinematic look now. This is an amazing piece of work, and what's even better, these guys didn't only publish the paper, they went all the way and founded a startup on top of it. Way to go! The next episode of Two Minute Papers will be very slightly delayed, because I will be holding a one-hour seminar at an event soon, and I'm trying to make it the best I can. My apologies for the delay. Hmm, this one got a bit longer; it's a bit more like three minute papers. But I really hope that you liked it. Thanks for watching, and if you liked this series, become a Fellow Scholar by hitting that subscribe button. I am looking forward to having you in our growing group of scholars. Thanks, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. First of all, thanks so much for watching Two Minute Papers. You Fellow Scholars have been an amazing and supportive audience. We just started, but the series already has a steady following and I'm super excited to see that. It is also great that a helpful and respectful community has formed in the comments section. It's really cool to discuss these results and possibly come up with cool new ideas together. In this episode we're going to set foot in computer animation. Imagine that we have built bipedal creatures in a modeling program. We have the geometry down, but it is not nearly enough to animate them in a way that looks physically plausible. We have to go one step beyond and define the bones and the routing of muscles inside their bodies. If we want them to walk, we also need to specify how these muscles should be controlled during this process. This work presents a novel algorithm that takes many tries at building a new muscle routing, progressively improving the results. It also deals with the control of all of these muscles. For instance, one quickly discovers that the neck muscles cannot move arbitrarily, or they will fail to support the head and the whole character will collapse in a very amusing manner. When talking about things like this, scientists often use the term degrees of freedom to define the number of independent ways a dynamic system can move. Building a system that is stable and uses a minimal amount of energy for locomotion is incredibly challenging. You can see that even the most minuscule change will collapse a system that previously worked perfectly. The fact that we can walk and move around unharmed can be attributed to the unbelievable efficiency of evolution. The difficulty of this problem is further magnified by the fact that many possible body compositions and setups exist, many of which are quite challenging to hold together while moving. And even if we solve this problem, walking at a given target speed is one thing. What about higher target speeds? In this work, the resulting muscle setups can deal with different target speeds, uneven terrain, and, hmm, other unpleasant difficulties. Thanks for watching and I'll see you next time.
Ladies and gentlemen, this is Károly Zsolnai-Fehér, and I am very excited to show you the new features of LuxRender version 1.5. LuxRender is a physically based renderer that generates photorealistic images from scenes that are created in 3D modeler programs. Here you can see the geometry of a scene in the modeler program. After we have the geometry of the objects, we assign material models to them. And here you can see what the photorealistic renderer does with it. At LuxRender we are privileged to have so many incredibly skilled artists using our system and creating stunning scenes such as these ones. So, about the new release: there is just so much meat in this one. Very excited to tell you all about it. Ready? Let's get started. First off, LuxRender now uses a microkernel-based architecture that can compile and render super high resolution images like this in about 5 minutes. The resolution of this image is higher than 4K. It is so detailed that even if we zoom in this much, it still looks remarkably crisp. The new Biased Path tracing engine has a variety of new features. Examples include tile rendering, radiance clamping to reduce firefly noise, visibility settings for indirect rays, and many others. In short, the new Biased Path engine allows fine control over the sampling process, giving you a more powerful and flexible algorithm. LuxRender now supports adaptive rendering, which means that it will automatically find and concentrate on noisy regions of the image. It won't waste your resources on regions of the image that are already done. The new Intel Embree-based accelerator is between 20 and 50% faster than the previous technique for building acceleration structures. This helps the renderer minimize the amount of time spent intersecting rays of light against geometry. LuxRender now natively supports a new light source type called laser light. No more hacking with tubes and IES profiles. You can now create unique artistic effects by slicing scenes in half with the new arbitrary clipping plane feature. The new pointiness feature allows using surface curvature information in materials and textures. This powerful mechanic can be used to create worn wooden edges, moss in rock crevices and many other sophisticated effects. With the new volume priority system, it is finally really easy to correctly and intuitively render overlapping volumes. Hair strand primitives are now supported. Look at these incredible examples. There's going to be lots of fun to be had with this one. Exporting meshes is now up to 16 times faster. We have a completely new LuxRender plugin for 3D Studio Max. It's still early in development, but it's definitely worth checking out. Let us know what you think about it. And the icing on the cake: we have a new volumetric emission system that supports fire and many other kinds of glowing volumes. Here in the video you see nothing less than a textured heterogeneous volume with animated colors. I love the look of this one. And please note that this is not everything, in fact, not even close to everything that LuxRender 1.5 has to offer. I have put the forum post with all the changes and the new features in the description box. Check it out. We invite you to come and give it a try. We also have a scene repository where you can download a collection of really tasty scenes to get you started. And if you're stuck or if there's anything we can help you with, just jump on the forums and let us know. We'll be more than happy to have you as a member of our friendly community.
Or if you have created something great with LuxRender, let us know so we can marvel at your work together. Thanks for watching and I'll see you in the forums. Cheers!
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. So many of you were sharing the previous episode that, for the first time, I just couldn't keep up and write a kind message to every single one of you. But I'm trying my best. It really means a lot, and again, just thanks so much for sharing. So delighted to see that people are coming in, checking out the series and expressing that they liked it. The feedback has been absolutely insane. You Fellow Scholars seem to love the show quite a bit and it really makes my day. It's also fantastic to see that there is a hunger out there for science. People want to know more about what is happening inside the labs. That's really amazing. Thank you, and let us continue together on our scholarly journey. 3D printing is a technique to bring digital objects into real life. It has come a long way in the last few years. There has been excellent work done on designing deformable characters, mechanical characters, and characters of varying elasticity. You can even scan your teeth and print copies of them. And these are just a few examples of a multitude of things that you can do with 3D printing. However, this technology is mostly focused on the geometry itself. Colored patterns that people call textures still remain a challenge, and we only have very rudimentary technology to do that. So check this out. This is going to be an immersive experience. Hydrographic printing on 3D surfaces is a really simple technique. You place a film in water, use a chemical activator spray on it, and shove the object in the water. So far so good. However, since these objects start stretching the film, the technique is not very accurate. It only helps you put repetitive patterns on these objects. Computational hydrographic printing is a technique that simulates all of the physical forces that are exerted on the film when your desired object is immersed into the water. Then it creates a new image map taking all of these distortions into account, and this image you can print with your home inkjet printer. The results will be really accurate, close to indistinguishable from the digitally designed object. The technique also supports multiple immersions, which helps putting textures on a non-planar object with multiple sides to be colored. So as you can see, 3D printing is improving at a rapid pace, and there's tons of great research going on in this field. It is a technology that is going to change the way we live our daily lives in ways that we cannot even imagine yet. And what would you print with this? Do you have any crazy ideas? Let me know in the comment section. Thank you for now, thanks for watching and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér, and this paper is as fresh as it gets. As of the making of this video, it has been out for only one day, and I got so excited about it that I wanted to show it to you Fellow Scholars as soon as humanly possible, because you've got to see this. Not so long ago we have been talking about deep neural networks, a technique that was inspired by the human visual system. It enables computers to learn things in a very similar way that a human would. There is a previous Two Minute Papers episode on this, just click on the link in the description box if you've missed it. Neural networks are by no means perfect, so do not worry, don't quit your job, you're good. But some applications are getting out of control. In Google DeepMind's case, it started to learn playing simple computer games and eventually showed us superhuman level plays in some cases. If you run this piece of code, you can get some pretty sweet results that you can check out; there's a link to it in the description box as well. So, about this paper we have here today, what does this one do? You take photographs with your camera, you can assign any painting to it, and it will apply this painting's artistic style to it. You can add the artistic style of Vincent van Gogh's beautiful Starry Night and get some gorgeous results. Or, if you are looking for a bit more emotional, or may I say disturbed look, you can go for Edvard Munch's The Scream for some stunning results. And of course, the mandatory Picasso. So, as you can see, deep neural networks are capable of amazing things, and we expect even more revolutionary works in the very near future. Thanks for watching and I'll see you next time.
Hey there Fellow Scholars, I am Károly Zsolnai-Fehér and this is Two Minute Papers, where we learn that research is not only for experts, it is for everyone. Is everything going fine? I hope you are all doing well and you are having a wonderful time. In this episode we are going to look at time lapse videos. Let's say we would like to build a beautiful time lapse of a Norwegian glacier. The solution sounds quite simple: let's find hundreds of photos from the internet and build a time lapse video from them. If we just cut a video where we put them one after the other, we will see a disturbing flickering effect. Why? Because the images were taken at different times of the day, so the illumination of the landscape looks very different on all of them. They are also taken at different times of the year and from different viewpoints. Moreover, since these images are taken by cameras, different regions of the image may be in focus and out of focus. The algorithm therefore has to somehow equalize all of the differences between these images and bring them to a common denominator. This process we call regularization, and it is a really difficult problem. On the left you can see the flickering effect from the output of a previous algorithm that was already pretty good at regularization, but it still has quite a bit of flickering. Here on the right you see the most recent results from the University of Washington and Google compared to this previous one. The new algorithm is also able to show us these beautiful, rhythmical seasonal changes on Lombard Street, San Francisco. It can also show us how sculptures change over the years, and I feel that this example really shows the possibilities of the algorithm. We can observe effects around us that we would normally not notice in our everyday life, simply because they happen too slowly. And now here's the final time lapse for the glacier that we were looking for. So, building high quality time lapse videos from an arbitrary set of photographs is unbelievably difficult, and these guys have just nailed it. I'm loving this piece of work. And what do you think? Did you also like the results? Let me know in the comment section. Thanks for now, thanks for watching and I'll see you next time.
Greetings to all of you Fellow Scholars out there. This is Two Minute Papers, where I explain awesome research works a couple minutes at a time. You know, I wish someone explained to me in simple terms what's going on in genetics, biology and just about every field of scientific research. There are tons of wonderful works coming out every day that we don't know about. And I'm trying my best here to bring them to you the simplest way I possibly can. So you know, researchers are people, and physics research at the Large Hadron Collider basically means that people smash atoms together. Well, computer graphics people also like to have some fun and write simulation programs to smash together a variety of objects in slow motion. However, even though most of these simulations look pretty good, they are physically not correct, as many effects are neglected, such as simulating plasticity, bending stiffness, stretching energies and many others. And unfortunately, these are too expensive to compute in high resolution. Unless you have some tricks up your sleeve. Researchers at UC Berkeley have managed to crack this nut by creating an algorithm that uses more computational resources only around regions where cracks are likely to happen. This new technique enables the simulation of tearing for a variety of materials like cork, foils, metals, vinyl, and it also yields physically correct results for glass. Here's an example of a beaten up rubber sheet from their simulation program compared to a real world photograph. It's really awesome that you can do something on your computer in a virtual world that has something to do with reality. It is impossible to get used to this feeling. It's so amazing. And what's even better, since it is really difficult to know in advance how the cracks would exactly look, they have also enhanced the directability of the simulation, so artists can change things up a bit to achieve a desired artistic effect. In this example, they have managed to avoid tearing a duck in two by weakening the material along paths around it. Bravo! Thanks for watching, and if you liked this series, just hit the like and subscribe buttons below the video to become a member of our growing club of scholars. Thanks and I'll see you next time.
Dear Fellow Scholars, we have just reached 250 subscribers on the channel. So far, the reception of these videos has been overwhelmingly positive. I'll show you some of these comments on the screen in the meantime. 250 subscribers is probably not much compared to even mid-sized YouTubers, but it means so much to me. It means that there are 250 people somewhere around the world waiting for new videos to come up. This is insane. If you think about it, even one subscriber is insane. Even one click from somewhere is mind blowing. Imagine that someone who you have never met somewhere on the face of Earth, perhaps in Peru, somewhere in the United States or maybe in the middle of Africa, is excited for your work and just waiting for you to say something. There are millions of other videos they could watch, but they devote their time to listening to you. And now, multiply this by 250. I am just sitting here in disbelief. As a computer engineer, I've been working with computers and network algorithms for a long time, but I still find this mind blowing. I can just record the lectures that I hold at the university and thousands of people can watch them at any time, even while I'm asleep at night. I can teach people while I am asleep at night. We have over a thousand views on my first lecture, which is possibly more people than I will ever reach through the university seminar rooms. So for all 250 of you, and everyone who has ever watched any of these videos, thank you very much for watching and subscribing. I have created Two Minute Papers to show you the best of what research can offer and what your hard-earned tax money is spent on. Because that's the thing. In every single country I've been to, researchers are complaining about the lack of funding. And rightfully so, because most of them can't secure the funds to continue their work. But let's try to turn the argument around. Funding comes from your tax money, and in 99.9% of the cases you have no idea what your money is spent on. There are lots of incredible works published every single day of the year, but people don't know anything about them. No one is stepping up to explain what your money is spent on. And I am sure that people would be happy to spend more on research if they knew what they were investing in. Two Minute Papers is here to celebrate the genius of the best and most beautiful research results. I will be trying my best to explain all of these works so that everyone is able to understand them. It's not only for experts, it's definitely for everyone. So thank you, all of you, thanks for hanging in there, and please, spread the word. Let your friends know about the show so even more of us can marvel at these beautiful works. And until then, I'll see you next time.
Dear Fellow Scholars, there is a really fantastic photorealistic renderer program out there that not many of you know about. So let me give you a quick rundown of my top 7 LuxRender features, and note that this is not an official list or anything, just my personal favorite features. LuxRender is a completely free, open source, physically based renderer with many contributors, and it is led by Jean-Philippe Grimaldi. What does a renderer do exactly? Well, there are many modeling programs where artists can sculpt objects and assign materials to them, and the renderer will run a light simulation process and show an image of how this object would look in real life. You'll see in a second how cool these tools really are. So now that you know what LuxRender is, let's jump into the best features. Hold on to your pants, because this is going to be good. LuxRender supports a multitude of material models: matte and glossy materials, glass objects of different roughness, translucent materials, subsurface scattering, metals, car paint, velvet, and you can mix all of these together to obtain an even more complex appearance. That's so great. Love it. With light groups, you can adjust the influence of light sources on your scene without needing to re-render your image. That's the most interesting point. So you can, for instance, fiddle with the intensities of the sunlight, the light fixtures, and the TV in the scene. If you feel that any one of those is not useful for the artistic effect that you're trying to achieve, you can just turn them off instantly. And apart from intensities, you can also adjust the color temperature of these individual light sources. Such a gorgeous feature. I have played way too much with this. A great thing about LuxRender is that it supports network rendering. It means that you can use multiple machines that will work together if they are connected. However, what is even better is that this renderer offers you many unbiased algorithms, which means that you can do network rendering without using a network. Now this sounds flat out impossible. But take a look at this noisy image. Not really convincing, right? Now imagine that you have 10 computers running in parallel on the same scene. There's a tool called LuxMerger which can combine many noisy images of the same scene into a better, smoother output. So after merging together 10 images that have roughly the same amount of noise, we get this. Note that this is without using a network. So these computers have never heard of each other. We have run the renderer on them completely independently. (A tiny sketch of why this merging works follows after this rundown.) LuxRender has sophisticated rendering algorithms like Metropolis Light Transport to render notoriously difficult scenes like this. Most renderers use path tracing or bidirectional path tracing, both of which struggle here. Here you can see the result of Metropolis Light Transport running for the same amount of time. It indeed makes a world of a difference. And this is the true final image. Different film brands and models have different color profiles, which means that they react to the same, for instance, red light differently. LuxRender is able to get you this look, which may bump up the realism of your rendered images, as they will have the color profiles that people are used to seeing in real world photographs. It also supports GPU rendering. How much of a difference does it make? Here's a test run after 60 seconds, one on the CPU and one on the GPU. I don't think I'm ever going back to CPU rendering.
And finally, LuxRender is cross-platform. It works on Windows, Linux and OS X as well. And it also works with a huge number of modeling programs out there: Blender, 3D Studio Max, Maya, you name it. If you like these features, please come and be a member of the LuxRender community. There is a professional and quite welcoming bunch of people over at the LuxRender forums. If you have any questions or just want to show off your work, we'll be happy to have you there. We also have a nice scene repository with some truly spectacular scenes to get you started. There are also lots of goodies in the description box. Make sure to take a look. Hope you liked my quick rundown and I'll see you on the other side.
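Picking up the film-merging feature from the rundown above: here is a minimal sketch (plain Python with NumPy, not the actual LuxRender tool) of why averaging independent unbiased renders works. Each image is an unbiased estimate of the same scene, so their mean keeps the expected value while the variance drops roughly by the number of runs, assuming the runs are independent and equally noisy.

```python
import numpy as np

def merge_unbiased_renders(images):
    """Average N independent, equally weighted unbiased renders.

    Each image is an unbiased estimate of the true radiance, so the mean
    keeps the expectation and divides the variance by roughly N
    (noise standard deviation drops by about sqrt(N)).
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    return stack.mean(axis=0)

# Toy demonstration with synthetic "noisy renders" of a flat gray image.
rng = np.random.default_rng(0)
truth = np.full((64, 64, 3), 0.5)
noisy_runs = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(10)]
merged = merge_unbiased_renders(noisy_runs)
print("single-run RMSE:", np.sqrt(np.mean((noisy_runs[0] - truth) ** 2)))
print("merged RMSE:   ", np.sqrt(np.mean((merged - truth) ** 2)))
```

This is also exactly why the merging only makes sense for unbiased engines: averaging biased runs would just average the same systematic error.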
I am Károly Zsolnai-Fehér and this is Two Minute Papers, where I explain awesome research in simple words. First of all, I am very happy to see that you liked the series. Also, thanks for sharing it on the social media sites and please, keep them coming. This episode is going to be about artificial neural networks. I will quickly explain what the huge deep learning craze is all about. This graph depicts a neural network that we build and simulate on a computer. It is a very crude approximation of the human brain. The leftmost layer denotes inputs, which can be, for instance, the pixels of an input image. The rightmost layer is the output, which can be, for instance, a decision on whether the image depicts a horse or not. After we have given many inputs to the neural network, in its hidden layers, it will learn to figure out a way to recognize different classes of inputs, such as horses, people, or school buses. What is really surprising is that it's quite faithful to the way the brain represents objects on a lower level. It has a very similar edge detector. And it also works for audio. Here you can see the difference between the neurons in the hearing system of a cat versus a simulated neural network on the same audio signals. I mean, come on. This is amazing. What is the deep learning part all about? Well, it means that our neural network has multiple hidden layers on top of each other. The first layer for an image consists of edges, and as we go up, a combination of edges gives us object parts, a combination of object parts yields object models, and so on. This kind of hierarchy provides us very powerful capabilities. For instance, in this traffic sign recognition contest, the second place was taken by humans. But what's more interesting is that the first place was not taken by humans. It was taken by a neural network algorithm. Think about that. And if you find these topics interesting, and you feel you would like to hear about the newest research discoveries in an understandable way, please become a Fellow Scholar and hit that subscribe button. And for now, thanks for watching, and I'll see you next time.
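As a minimal illustration of the layered structure described in this episode (a generic sketch, not the specific networks behind the results shown), here is a tiny feedforward network: pixels go in on the left, each hidden layer combines the previous one through weights and a nonlinearity, and the output is a single "horse / not horse" score. The layer sizes are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, weights, biases):
    """One fully connected layer with a ReLU nonlinearity."""
    return np.maximum(0.0, inputs @ weights + biases)

# Hypothetical sizes: a 32x32 grayscale image flattened to 1024 inputs,
# two hidden layers, and one output score.
w1, b1 = rng.normal(0, 0.05, (1024, 64)), np.zeros(64)
w2, b2 = rng.normal(0, 0.05, (64, 32)), np.zeros(32)
w3, b3 = rng.normal(0, 0.05, (32, 1)), np.zeros(1)

def predict(image):
    """Forward pass: edges -> object parts -> objects -> decision (conceptually)."""
    h1 = layer(image.reshape(-1), w1, b1)
    h2 = layer(h1, w2, b2)
    score = h2 @ w3 + b3                  # raw output score
    return 1.0 / (1.0 + np.exp(-score))   # squash to a probability

print(predict(rng.random((32, 32))))  # untrained weights, so the output is arbitrary
```

Training would adjust the weights from many labeled examples; the sketch only shows how information flows through the layers.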
A movie that we watch on the TV shows us from 25 to about 60 images per second. In computer graphics, these images are referred to as frames. A slow motion camera can capture up to the order of thousands of frames per second, providing breathtaking footage like this. One can quickly discover the beauty of even the most ordinary, mundane moments of nature. But if you think this is slow motion, then take a look at this. Computer graphics researchers have been working on a system that is able to capture 1 trillion frames per second. How much is that exactly? Well, it means that if every single person who lives on Earth would be able to help us, then every single one of us would have to take about 140 photographs in one second. And we would then need to add all of these photographs up to obtain only one second of footage. What is all this good for? Well, for example, capturing light as an electromagnetic wave as it hits and travels along objects in space, like the wall that you see here. Physicists used to say that there is a really, really short instant of time when you stand in front of the mirror, you look at it, and there is no mirror image in it. It is completely black. What is this wizardry and how is this possible? Since Einstein, we know that the speed of light is finite, it is not instantaneous. It takes time to travel from the light source, hit the mirror and end up hitting your eye for you to see your mirror reflection. Researchers at MIT and the University of Zaragoza have captured this very moment. Take a look, it is an enlightening experience. The paper is available in the description box and it's a really enjoyable read. A sizable portion of it is understandable for everyone, even without mathematical knowledge. All you need is just a little imagination. Thanks for watching and I'll see you next week.
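A quick sanity check of the arithmetic in this episode, assuming a rough world population of about 7 billion (the exact figure is an assumption on my part):

```python
frames_per_second = 1_000_000_000_000   # one trillion frames per second
world_population = 7_000_000_000        # rough figure, assumed here

photos_per_person = frames_per_second / world_population
print(photos_per_person)  # roughly 140 photographs per person, every second
```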
How can we simulate the motion of fluids and smoke? If we had a block of plastic in our computer program and we would add the laws of physics that control the motion of fluids, it would immediately start behaving like water. In these simulations we're mostly interested in the velocity and the pressure of the fluid, and how these quantities change in time. This we would need to compute in every point in space, which would take an infinite amount of resources. What we usually do is we try to compute them not everywhere, but in many different places, and we try to guess these quantities between these points. By discretizing like this, a lot of information is lost. And it still takes a lot of resources. For a really detailed simulation it is not uncommon that one has to wait for days to get only a few seconds of video footage. And this is where wavelet turbulence comes into play. We know exactly what frequencies are lost and where they are lost. And this technique enables us to synthesize this information and add it back very cheaply. This way one can get really detailed simulations at a very reasonable cost. Here are some examples of smoke simulations with and without wavelet turbulence. It really makes a great difference. It is no accident that the technique won a technical Oscar award. Among many other systems it is implemented in Blender, so anyone can give it a try. Make sure to do so because it's lots of fun. The paper and the supplementary video are also available in the description box. This is an amazing paper. Easily one of my favorites. So if you know some math, make sure to take a look, and if you don't, just enjoy the footage. Thank you for watching and see you next time.
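Here is a tiny sketch of the "compute on a grid, guess in between" idea from this episode: velocities are stored only at grid points and bilinearly interpolated everywhere else. This is a generic illustration of grid-based simulation, not the wavelet turbulence technique itself, which works on top of such a coarse grid and synthesizes the missing high frequencies.

```python
import numpy as np

def sample_velocity(grid, x, y):
    """Bilinearly interpolate a 2D velocity field stored at integer grid points.

    grid has shape (H, W, 2); x, y are continuous coordinates inside it.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, grid.shape[1] - 1), min(y0 + 1, grid.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
    bottom = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
    return (1 - fy) * top + fy * bottom

# Toy field: velocities known only at 8x8 grid points, queried in between.
rng = np.random.default_rng(2)
field = rng.normal(size=(8, 8, 2))
print(sample_velocity(field, 3.25, 4.75))
```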
Okay, so the very last assignment: please go to the unofficial LuxRender scene repository. There is also a link below it that shows you how the individual scenes look. Please choose a scene and render it with an unbiased method several times and merge the results together. I hope that you remember from the previous lecture how you can merge together individual runs of unbiased algorithms and hopefully get something better than the individual images. Do it with other algorithms, both biased and unbiased. I've also uploaded a settings file to help you with these different algorithms, so see what happens. I don't want to spoil the fun, but obviously we expect a given class of algorithms to perform well in this regard, and some of them, hmm, not so much. Also try to experiment with photon mapping type algorithms. Place your observations in the observations.txt file. Tell me what kind of algorithm worked where, what the failure cases are and why, and whether this is what you expected or you got something different. Remember, when we were doing mathematics in the very first lectures, we always first listed our expectations, and then after we got the results we discussed whether reality was in line with our expectations or not. This is a really good methodology, so please do it all the time. There will be a rendering competition afterwards where a really prestigious international committee will judge your work, and there are lots of valuable prizes. There will be three tickets to next year's CEGC conference and three to Pixel Vienna. So that's a total of six free conference tickets for you. I'm also holding a talk at this CEGC, so I will be more than excited to meet you there. The CEGC is the Central European Games Conference. This is Pixel Vienna, their flyer from last year. So after you hand in your work, you may be getting one of the three prizes. The third prize is plus half of a grade on the exam, provided that you would already pass the course. The second prize is plus one grade on the exam, and the first prize is, perhaps this is the official description, perhaps an even greater influence on the exam grade. If you don't really have an artistic vein, or you would like to do some programming assignment instead of the LuxRender rendering contest, you're free to do that. Please contact me. Let's cook up a realistic and exciting problem for you that you can solve. So don't just start pounding away at your keyboard and doing something. Please write to me so we can discuss what exactly you're going to do. And if you do that, you are going to be subject to the very same prizes. Okay, so what about the rendering contest? The contest theme this year is going to be fluids. It's great because we have a great fluid simulator in Blender. You have to create a scene and hand in converged images. Not noisy, converged images of this scene. Okay, so what is the list of things that you need to hand in? We would like to get the LuxRender scene. Please copy every asset, every texture, every mesh, everything that you have in the scene into this LuxRender scene directory that you give to us, so we can run it ourselves. We also need, like I said, a completely noise free rendered image. We would also need the blend file, or if you're using a different modeler program, you're absolutely free to use that, please be my guest, just please send us the project file. And also send us one text file with a few lines on what you tried to accomplish and why you think that your work is the greatest work ever created by humanity.
Third party sources for meshes are fine, but you have to give credit to the people who created them. Important: we also ask for a halfway progress report. What does it mean? There will be a final deadline for the assignment, and halfway through that there will be another deadline where I expect you to send me an email with the very same subject as the assignment itself. You send me one rendered image which is just a rough, really rough draft of what you're going to do. So I'd like to see your current progress and at least one line of text with your plans. What exactly are you trying to accomplish? This will do, because we would like to discourage people from trying to put together some scene in the last two days of the assignment and not have enough time to render it correctly or to develop it correctly. We would just like to make sure that you are on time. And please check the course website to see what exactly the deadlines for this are. Okay, so who will be on the committee? First, Jean-Philippe Grimaldi: he is the head developer of LuxRender, the kindest, kindest person, who has been on this committee for the third year now, and he's always very excited to see your work. Wojciech Jarosz, you hopefully remember the name from before, he's the head of the rendering group of Disney Research Zurich and an excellent, a truly excellent researcher. Michael Wimmer, our beloved professor who is the head of the rendering group at our university. What about the programming guys? If you don't want to participate in the rendering contest, that's fine, that's perfectly fine. There are two different things that you can do. One, do something with LuxRender. We have, for instance, a bug tracker where people are asking for features and people are also asking for bugs to be fixed. So if you're interested in this, then please take a look, and if you commit something that is useful, then you will be subject to a first prize. Now note that the first prize can be won by multiple people. If you cross a given threshold with the quality of your work, then you will be subject to the first prize, and there may be many of you who do. And there's also the smallpaint line, where you can improve smallpaint by practically anything. You can add bidirectional path tracing, multiple importance sampling, photon mapping, whatever you have in mind, but before you decide on anything, contact me. Last year's theme was volumetric caustics. I think that's an amazing theme, but this year this is not what we're going to be interested in. What we're going to be interested in is fluid simulations. This scene was created in Blender, so you can do sophisticated simulations like this, and even much more sophisticated than this. I have prepared some Blender fluid simulation tutorials for you, so please take a look, and please make sure that your simulation is at the very least 300 cubed. And also an example video to set the tone. This is taken from the RealFlow reel from last year. It is absolutely amazing. Make sure to take a look. And the subject of the email that we're looking for is the very same. You only need to increment the number of the assignment. And that's it. It's been a wonderful journey for me, so thanks for tuning in. I was trying my very best to teach you the intricacies of light transport, and I hope that now you indeed see the world differently. I got some student feedback from many of you, and I got the kindest of words, so thank you very much. I'm really grateful. And if you're watching this through the internet, then we have a comment section.
Let us know if you liked the course. So thank you very much, and despite the fact that it seems that the course ends here, we still have a lecture from before that we haven't published yet, and there will be some more videos with Thomas, who teaches you how to compute subsurface scattering. So one more time, thank you very much, and it's been a wonderful journey. Thanks. I'll see you later.
Path space manipulation, this is again from the KIT guys, this is a wonderful tool. Now, what happens if the artist creates a scene that he really likes, but there are some artifacts or some small effects that he would like to get rid of? What do you have to do? Well, obviously you have to change the physical parameters, because this is what physical reality would look like. But still, you could say that if you take a look at the left image, you don't really like the reflections on the wall behind, or you don't like the incoming direction of the sunlight, or maybe you don't like the reflection of the car in the mirror. So, for instance, for the mirror: what if we could pretend that the normal of the mirror wasn't what it is, but something different? And this is exactly what this work gives you. On the right you can see a side by side comparison of the original image and the manipulated scene. It looks much better and you don't have to change a single thing in your scene. You just manipulate and bend some of the light paths there are in the scene. Imagine that you don't like the caustics on the bunny, and you can just basically grab them and pull them onto the face of the bunny. It is really, really amazing. This work may be one of the reasons, one of many if I may add, for Pixar to change from their REYES renderer, which has a long history of more than 25 years of movies and wonderful works, to path tracing. They use global illumination in their newest movies, and imagine how powerful this tool can be in the hands of a professional artist, let alone a team of professional artists. We have some amazing times ahead in global illumination research. Residual ratio tracking, this is a Disney paper. This is basically about how to render heterogeneous participating media. What does it mean? Heterogeneous means that either the density or the scattering or absorption properties of the medium are changing in space. They are not uniform. This technique helps you to render this kind of light transport much more quickly than previous methods. It builds on Woodcock tracking. It improves Woodcock tracking, which is basically the industry standard way of rendering heterogeneous materials, and what that does essentially is a mathematically really appealing way of probabilistically treating the scene as if it were homogeneous instead. So, trying to reduce the problem to a simpler problem that we can solve easily, and doing some probabilistic tricks on top of that, and this gives you an unbiased estimator for the heterogeneous participating medium. And this piece of work is an improvement even over that. This was done by Jan Novák and colleagues at Disney. In this work by Alexander Wilkie, who by the way used to be a PhD student here at our university and graduated here, and has since moved to the Czech Republic and is doing wonderful work: we discussed earlier that if you would like to do fully spectral rendering, then you take a random sample in the spectrum of visible wavelengths. And he came up with a trick that if you do this in a way that is just a bit smarter than what we do naively, then you can get results like this using the same number of samples. You can see that the noise is much more representative of the actual image that we're rendering. Let's take a look at another example. How about this: this is the naive spectral rendering, and this is his technique, called hero wavelength spectral sampling. Amazing piece of work.
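To make the Woodcock (delta) tracking idea mentioned above a bit more tangible, here is a minimal free-flight sampling sketch for a heterogeneous medium: we pretend the medium is homogeneous with a majorant extinction coefficient and probabilistically reject the resulting virtual collisions. This is the classic baseline the lecture refers to, not Disney's residual ratio tracking estimator itself.

```python
import math
import random

def woodcock_distance(sigma_t, sigma_majorant, max_steps=10_000):
    """Sample a free-flight distance along a ray through a heterogeneous medium.

    sigma_t(x) is the extinction coefficient at distance x along the ray;
    sigma_majorant must be an upper bound of sigma_t everywhere on the ray.
    Returns the distance of the first real collision.
    """
    x = 0.0
    for _ in range(max_steps):
        # Tentative step as if the medium were homogeneous with sigma_majorant.
        x -= math.log(1.0 - random.random()) / sigma_majorant
        # Accept as a real collision with probability sigma_t(x) / sigma_majorant,
        # otherwise it was a virtual (null) collision and we keep marching.
        if random.random() < sigma_t(x) / sigma_majorant:
            return x
    return x  # practically never reached for sensible inputs

# Toy density that varies along the ray, bounded above by 2.0.
varying_sigma = lambda x: 1.0 + math.sin(x) ** 2
print(woodcock_distance(varying_sigma, sigma_majorant=2.0))
```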
You should definitely, definitely check it out. I promised to you that we would start out with algorithms from 1986 and go up until last week. So this literally appeared last week. This is the gradient-domain path tracing algorithm. But I will also use a figure from the gradient-domain Metropolis paper for better understandability. So the key idea is that we are not seeking the light anymore. We're seeking changes. Now what does it mean? Take a look at the image on the upper left. It says that we're basically interested in this small region that is a hard shadow boundary. And below it, the image says that let's say that this, whatever function we're computing, is zero in the shadow region and one outside. You can intuitively imagine that this means that yes, we have no radiance in the shadow region and we have a really bright region outside. What would the regular Metropolis sampler do? Well, it is a Markov chain that in its stationary distribution would like to do optimal importance sampling. What does it mean? It means that the brighter regions would be sampled more. So you can see the red dots in there. We would sample this region that is one all the time, and we would never put any samples in the region that is zero. But here we are not seeking the light, we are seeking changes. So imagine that we are interested in putting samples at the shadow boundary, because we know that there is some change happening in there, but to the right and to the left of it, there is absolutely no change. So if I get enough information only about the shadow boundary, then I can reconstruct the whole image with a technique that is called Poisson image reconstruction. This means intuitively something like reconstructing a function from its gradients. You can imagine it in 1D as something like: you have a 1D function, you are interested in the function, but the only thing you have is how the function changes. You have derivatives. And from these derivatives, you would like to reconstruct the function. This is exactly what the algorithm does, and it's an amazing idea. Love it. You can see that it significantly outperforms path tracing with a much smaller number of samples. Now let's note that because of the Poisson reconstruction step, the 5K SPP is compared to the 2K SPP. This is probably because it is more expensive to draw samples with this gradient-domain path tracing. You can see that this smart algorithm is really worth the additional time. Another great paper from last week from our friends at Disney. What if we had a scene where we build a castle out of sand? And what if we are crazy enough that we would like to render every small grain of sand that is in the castle? That would mean billions upon billions of objects. That's a lot of intersections. That's a lot of problems. Even if you have some kind of spatial acceleration structure. So this would take forever and a day. And they came up with a really cool solution that can give you these beautiful, beautiful results, at least an order of magnitude faster. I also promised to you that I would refer you to implementations of many of the discussed algorithms. So this is a huge list. Some of them are implemented only on the CPU. Some of them also have GPU implementations. So take a look and play with them. It's lots of fun. And if you're watching this lecture on the internet, don't worry about the links: in the video description box, I provide a link to these slides and you can just click away at them.
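Returning to the gradient-domain idea above, here is a minimal 1D sketch of "reconstructing a function from its changes": given noisy finite differences and a coarse base estimate, a small least-squares (Poisson-style) solve recovers the function. The real algorithm does this in 2D with screened Poisson reconstruction; this toy version only conveys the intuition, and the weights and sizes here are arbitrary assumptions.

```python
import numpy as np

def reconstruct_1d(gradients, base, base_weight=0.1):
    """Least-squares 1D 'Poisson' reconstruction.

    gradients[i] is a (noisy) estimate of f[i+1] - f[i];
    base is a coarse, noisy estimate of f itself (the 'primal' image),
    pulled in with a small weight so the overall brightness is anchored.
    """
    n = len(base)
    rows, rhs = [], []
    for i in range(n - 1):                 # gradient constraints f[i+1] - f[i] = g[i]
        row = np.zeros(n); row[i], row[i + 1] = -1.0, 1.0
        rows.append(row); rhs.append(gradients[i])
    for i in range(n):                     # weak anchoring to the coarse estimate
        row = np.zeros(n); row[i] = base_weight
        rows.append(row); rhs.append(base_weight * base[i])
    f, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return f

# Toy example: a step function, i.e. a hard shadow boundary in 1D.
truth = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
rng = np.random.default_rng(3)
noisy_grad = np.diff(truth) + rng.normal(0, 0.02, truth.size - 1)
noisy_base = truth + rng.normal(0, 0.2, truth.size)
print(np.round(reconstruct_1d(noisy_grad, noisy_base), 2))
```

Note how the accurate gradients carry most of the information; the noisy base estimate only fixes the overall offset, which mirrors why sampling the changes pays off.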
There are also some must-see videos: some awesome slow motion fracture tests with SLG. Well, smashing virtual objects is a lot of fun. Slow motion videos are a lot of fun. So this absolutely has to be a crazy good video. And it really is. And the remarkable thing about it is that the whole thing took 25 seconds to render per HD frame. I've also uploaded a bonus unit on how to use Blender and LuxRender together. Basically this means that you model something in Blender and you would like to export this scene and render it in LuxRender. This is going to be useful in the next assignment.
This is the work of Wenzel Jakob, he's a super smart, really brilliant guy. He extended Veach-style Metropolis light transport to handle SDS transport better. Now how is this possible? What I have written here is the very scientific way of stating what is really happening. The most difficult parts form a manifold in path space, and you can grab this manifold and explore it with an equation solving system. Let's take a look at the intuition. This is super useful, but very challenging to understand and implement for ordinary people. What is exactly happening here? So we have a diffuse bounce. This is xb, and we hit the light source after that, which is xc, on the upper right. And between the b and the c we have two specular bounces. And imagine that I am fixing xb and xc. These are two fixed vertices. And if I have this glass egg in between that is perfectly specular, then I can write an algorithm that computes what the exact outgoing direction from this diffuse vertex should be in order to exactly hit that xc point. There is only one possible path, because we have perfectly specular interreflections in between. So what should be this outgoing direction from xb? This is the equation solving system that we are interested in. How do the results look? Well, you can compare it to Metropolis light transport, which is either very noisy or misses some of the light paths completely in these very difficult test cases. The manifold exploration path tracer outperforms all of the existing algorithms. PSSMLT is the Kelemen-style MLT, MLT is the original Veach Metropolis. One more example. A, Veach Metropolis; B, ERPT, I will tell you in a second what that is; C, the Kelemen-style Metropolis light transport algorithm; and D, manifold exploration path tracing. Wenzel was kind enough to put an, I think, 20 minute talk about this work on his website, so make sure to check it out. It is really well illustrated. It is really well explained. Make sure to check it out. Let's take a look at how the algorithm converges in time. Take a look at this beauty. Lots of SDS light paths, and in the first 10 minutes you already have some degree of convergence that would take days or possibly forever with other algorithms. Pretty amazing. Pretty amazing. One of my favorites out there. Here you can see it side by side: Kelemen-style Metropolis light transport versus manifold exploration path tracing. It's difficult not to get excited about this, right?
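As a vastly simplified illustration of the "there is only one valid specular path between two fixed vertices" statement above: for a single planar mirror, the unique reflection point between xb and xc can be found in closed form by mirroring one endpoint. Manifold exploration solves the general multi-bounce, curved, refractive version with Newton-style equation solving; this sketch only conveys the flavor of the constraint, and all names in it are made up for the example.

```python
import numpy as np

def specular_connection_via_plane(xb, xc, plane_point, plane_normal):
    """Find the single mirror point connecting xb and xc over a planar mirror.

    Mirror xc through the plane, then intersect the segment xb -> xc_mirrored
    with the plane; the intersection is the unique valid specular vertex.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    xc_mirrored = xc - 2.0 * np.dot(xc - plane_point, n) * n
    d = xc_mirrored - xb
    t = np.dot(plane_point - xb, n) / np.dot(d, n)
    return xb + t * d

# Toy setup: the mirror is the plane y = 0, with xb and xc above it.
xb = np.array([0.0, 1.0, 0.0])
xc = np.array([2.0, 1.0, 0.0])
print(specular_connection_via_plane(xb, xc, np.array([0.0, 0.0, 0.0]),
                                    np.array([0.0, 1.0, 0.0])))
# -> [1, 0, 0]: the one point where the angle of incidence equals the angle of reflection.
```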
Now, let's proceed to vertex connection and merging by Iliyan Georgiev and colleagues. So what he proposes to do is that we conditionally accept this path at the vertex next to xs, but we pretend that we indeed have the hit. What this basically means is that we have a biased connection, something that didn't really happen, but we pretend that it did, and we have this r, that's the merging radius. So what this means is that on the left side, this xs star, I would pretend it landed on xs instead, if it is close by. And by close by, we mean that it is inside a circle of radius r. Okay, but what does this give me? Because this is a biased technique. Well, you add one more trick, and this trick is making r decay over time. So it would shrink and shrink and shrink, and eventually it would get to an epsilon value, something that's very close to zero, in an infinite amount of time. So the bias would disappear from the renderer in time. That's quite remarkable. I'll tell you in a second why. Some results with the vertex connection and merging technique. You can see that it can render this difficult, difficult SDS light transport situation. So this is indeed a historical moment in global illumination. Why? Because this kind of unifies biased and unbiased photorealistic rendering. And that's huge. That's huge, because biased and unbiased rendering were the two biggest schools in photorealistic image synthesis. There were the unbiased guys, who were the rigorous, scientific, let's sample all the light paths and let's not cut corners type of people. And there were the biased guys who said that, okay, let's cut corners because this thing takes forever, so let's use some optimization techniques. And what vertex merging gives you is essentially an algorithm that starts out biased, but has less and less bias as time goes by, eventually ending up as an unbiased technique in the limit. So this is a historical moment that unifies unbiased and biased photorealistic rendering. Wonderful piece of work. Now a comparison: first bidirectional path tracing, then progressive photon mapping, and vertex connection and merging. Make sure to check out the paper here. Onwards to path space regularization. This is a work of Anton Kaplanyan and colleagues. He's a super smart guy at KIT. And this is essentially a generalization of vertex connection and merging. What is happening is essentially not spatial but angular regularization. What does this mean? What we're looking for is connecting the diffuse vertex to the specular one. With VCM, what you would do is continue the light path from the light source, and you would hit a point that is nearby this next specular vertex, and you would set this tolerance, this radius, this merging radius, and if it's inside, then you accept the light path. Now this you can call spatial regularization. What Anton is proposing is angular regularization. So you would say that you take a tolerance value in terms of outgoing directions. And this intuition is slightly different, because what this essentially means is that we have delta distributions for specular reflections, but we start out with a large angular tolerance. And this means that the specular interreflections will be treated as if they were diffuse. So the mirror will show up as if it were a completely white or some colored wall. And then it will slowly, slowly converge to being a mirror. We can imagine this distribution as what you see on the right side: you have the blue, diffuse-ish BRDF.
And you put your two fingers on the sides of this, and you start pushing them together. And this push happens in time. So as time goes by, we go from the blue to the orange to the green. And we would squeeze this green more and more and more, until it becomes a delta distribution. So over time, mirrors are going to be mirrors. But in the meantime, we will be able to render SDS light paths. Brilliant piece of work. And the comparison to other algorithms: what you should be looking out for is path tracing with regularization on the right. This is the only technique that can render the EG, the Eurographics logo, reflected in the mirror.
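A minimal sketch of the "squeeze the lobe over time" mechanism described above. The concrete power-law schedule below is my own assumption for illustration; the paper's mollifier and decay rate differ, but the shape of the idea is the same: specular vertices temporarily get a finite angular tolerance that shrinks with the iteration count, so early iterations accept approximate connections and later ones approach the true delta distribution.

```python
import math

def angular_tolerance(iteration, initial_radians=0.5, decay=0.25):
    """Angular tolerance for 'specular' vertices at a given iteration.

    Starts wide (the mirror behaves almost diffusely) and shrinks toward zero,
    so in the limit the specular interaction becomes a delta distribution again.
    The power-law schedule here is an assumption, purely for illustration.
    """
    return initial_radians / (1.0 + iteration) ** decay

def accept_specular_connection(angle_error_radians, iteration):
    """Accept a regularized connection if it is within the current tolerance."""
    return angle_error_radians <= angular_tolerance(iteration)

# A connection that is 0.2 radians off is accepted early on, rejected later.
for it in (0, 10, 100, 1000):
    print(it, round(angular_tolerance(it), 4), accept_specular_connection(0.2, it))
```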
So, stochastic progressive photon mapping. What is this thing about? Well, you would need an infinite amount of photons to ensure consistency. You cannot do that. But what you could do is, from time to time, generate a new photon map and use that. And this means discarding the previous photon map and creating a new one. So we start out with a regular ray tracing pass that we call the eye pass, and we use the photon map that we have. Then we generate a new photon map, and we are going to use that for the next pass. There's also an addition: you start out with bigger photons, so to say, and the size, or the radius, of these photons shrinks in time. Why is this useful? Well, because this way you have practically an infinite number of photons. And you can see how the rendered image evolves over time with stochastic progressive photon mapping. So this method is consistent. This is a big deal, because you can make photon mapping consistent in practical cases. So this is our previous scene with heavy SDS transport. And you can see how it converges in the first 10 minutes of the rendering process with SPPM. Another set of results with the classical algorithms that we all know and love. And you can see that photon mapping kind of works, you don't have high frequency noise, but it overblurs many of the important features of the image. And this is the result with SPPM: much sharper images, slightly more noise, but it is practically consistent. What about this difficult previous scene with lots of SDS transport? Well, photon mapping kind of worked, but it again overblurred many of the important features. Stochastic progressive photon mapping takes care of this. You can read the papers here. So SPPM doesn't just render SDS light paths, it does it efficiently. It is a wonderful previewing algorithm. So you can just fire it up, and in a matter of seconds you can get a good idea of how your scene is actually going to look. However, if you set the starting radius to a value that's too high, then you're going to have large photons for the longest time. And this means that the image will again be overblurred for a very long time in the rendering process. However, if you set it too low, you will have a very sharp image, but it will take a very long time to fill in. So as you can see, this is a more complex technique that can possibly outperform the algorithms that you have previously seen, but this comes at a cost. This is a more complex algorithm. It is slightly more difficult to implement. And it has more parameters than previous methods. You can see that this is not like the large mutation probability with Metropolis light transport: if you set up one of the parameters incorrectly, you may have to wait for way too long. And if you set up a simple photon map, not SPPM, a simple photon map, incorrectly, you may even get an incorrect image, because you don't have enough photons at the most important regions of the image. This work was created by Toshiya Hachisuka and his colleagues, and it's a brilliant piece of work.
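For the shrinking-radius bookkeeping in (stochastic) progressive photon mapping, the commonly cited update from Hachisuka and colleagues reduces the squared gather radius as photons accumulate; alpha is the usual trade-off parameter, often quoted around 0.7. Here is a minimal per-measurement-point sketch of that rule (treat the exact values as illustrative):

```python
def sppm_radius_update(radius2, accumulated, new_photons, alpha=0.7):
    """Progressive radius reduction for one measurement point.

    radius2: current squared gather radius
    accumulated: photons counted so far (N)
    new_photons: photons gathered in this pass (M)
    The rule r'^2 = r^2 * (N + alpha*M) / (N + M) shrinks the radius each pass,
    which is what drives the blur (the bias) toward zero in the limit.
    Returns the new squared radius and the new accumulated photon count.
    """
    n, m = accumulated, new_photons
    if m == 0:
        return radius2, n
    ratio = (n + alpha * m) / (n + m)
    return radius2 * ratio, n + alpha * m

# Toy run: watch the squared radius shrink over a few photon passes.
r2, n = 1.0, 0.0
for gathered in (50, 40, 60, 55):
    r2, n = sppm_radius_update(r2, n, gathered)
    print(round(r2, 4), round(n, 1))
```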
Welcome to the last lecture, where we are going to be talking about the state of the art in global illumination. Now, if you remember this scene, you hopefully also remember that we were able to render caustics insanely fast with Metropolis light transport, and the rest of the scene is also cleaning up pretty quickly. But what about this? This was a not-so-difficult-looking scene, and you can see that the Kelemen-style Metropolis is doing pretty awfully here. I'm not even sure it would converge if we waited for a very long time. The problem here is called SDS, or specular-diffuse-specular transport. Let's talk about this for a bit. So imagine that I have a light path that starts out from a light source, hits this glass object, that is a specular bounce, and then has another specular bounce after the refraction. Then we hit the diffuse object, then the mirror, and then we hit the eye. Let's put up there Heckbert's notation, and let's rip out the middle part of the light path. Now this says SDS. What is the intuition of this? It is reflected caustics, because one S and one D gives you caustics, like we discussed before, and then if you have another specular bounce, this says that I am seeing the caustics through the mirror. The intuition for SDS light paths is reflected caustics. So what is exactly the problem here? Imagine that we start out from a diffuse surface, we sample the BRDF, and therefore we arrive at this other specular surface, and off of this specular surface we are supposed to hit the pinhole camera. Also note that this diffuse point on the surface was chosen by the specular interaction before, and it was chosen explicitly. Now you can hopefully see that this means that, depending on the material models, if we have perfect specular interreflections, then sampling such a light path is impossible. And this is a problem that you can encounter very often, because imagine that you have a light source that is covered by a specular surface, so for instance a glass light bulb. Then even if you have a regular DS path, so one diffuse and one specular bounce, you add one more S, because all the light that is exiting the light source is going to hit the cover, the glass part of the light bulb. And therefore every DS is going to be SDS. Another image for the intuition and better understanding of what is exactly going on. You can also imagine that you are starting one light path from the light source and one from the eye, and you have the SD from the light source and you have the SS from the eye. Now what you would like to do is connect this diffuse vertex to the specular vertex. Now this is impossible. The specular vertex would have to be chosen by the diffuse one, and the diffuse BRDF would be choosing one outgoing direction on the hemisphere, uniformly sampled, and there is only one possible direction that we would be happy with. The probability of sampling this one possible direction is exactly the same as the probability of sampling one point, and that is zero. So this is the SDS problem, and we are going to look at biased algorithms that try to address this problem. So this looks like SDS to me, because we hit the glass cube, that's a specular bounce, then we hit the donut inside, and then we hit the glass cube again. So this is SDS. This is why it is so difficult to sample with Metropolis light transport. Photon mapping: the key idea is that we don't want to evaluate all possible light paths.
What we would like to do is send photons out of light sources, and we are going to store all of these photons in a photon map. And when we are computing actual light paths, we are going to rely on interpolation; we are going to use this knowledge that we have in the photon map. Some visualization images to get an idea of how it exactly looks. Let's take a look at the first bounce. This is an image with only the very first bounces in the path tracer, this is the direct light. Now let's take a look at the indirect light. This is the second and higher order bounces. This is basically indirect illumination, color bleeding. And you can see that this is actually low frequency information. You can see that the colors don't really seem to change so quickly. If we have indirect illumination, it is a mostly low frequency signal, which lends itself to the idea of interpolation. This is an example of how to use all this information in the photon map. So I would be interested in the incident radiance at the red dot. And what I can do is use the information from the nearby photons, and I would average all this information to get an estimation for the red dot (a tiny sketch of this gather step follows after this part). And you can see that the brighter regions of the image seem to have more photons in the photon map. Why is that? Well, it's simple. It's because we are shooting photons out of the light sources, and these are the regions that are very visible from the light source. Let's take a look at some results. You can see a difficult scene rendered with path tracing. Bidirectional path tracing is much, much better, but you can still see the firefly noise. We also have some results with Metropolis light transport, which also doesn't help a lot with SDS transport. And photon mapping: you can see that all this high frequency noise is gone, but the result is slightly more blurry than it should be because of the interpolation. We are averaging samples, and therefore this smoothing behavior is inherent to the interpolation. What are the upsides of photon mapping? Well, caustics and indirect illumination converge really quickly. Caustics, why? Because you have a lot of samples, because you see them from the light source. Indirect illumination, why? Because it's mostly a low frequency signal that you can interpolate very easily. Note that it also helps with the SDS problem, and because of the interpolation, you don't really get high frequency noise in most cases. However, don't forget that you may need to shoot and store a lot of photons depending on how complex your scene is. And this can be very computationally intensive and also memory intensive. And interpolation can cause artifacts to appear. This actually happens quite often for more complex scenes, because you are looking up nearby photons in the photon map. If these nearby photons are on the same object that you would like to query, then this is usually, depending on textures and many other things, usable information. But you are looking up nearby photons, and you may see many examples in the room you're sitting in where you have discontinuities nearby. So there may be a wall that is one color, and there may be a wall nearby at an intersection that is a different color. It may be that during the interpolation you use samples from the other wall because it is nearby, and the interpolation doesn't really take this property into consideration. So therefore artifacts may appear. What about this algorithm? Well, we are cutting corners. We are using interpolation. We are not computing all the possible light paths there are.
Therefore, this algorithm has to be biased. What about consistency? Well, it is consistent, provided that you have an infinite number of photons in the photon map and therefore you always get perfect information. However, this is only of theoretical value, because obviously having an infinite number of photons may make sense in a mathematical way, but in a practical implementation, you cannot even shoot, let alone store, an infinite number of photons. Some literature. This is where you can look up the original photon mapping paper from Henrik Wann Jensen. Some delightful news. This is an image I shot at the Eurographics Symposium on Rendering, EGSR, last year. If you take a look at these people, you can see for instance Wojciech Jarosz, lead of the rendering group at Disney Research. And he and the EGSR organizer crew gave out the Test of Time award to Henrik Wann Jensen for the photon mapping algorithm. It's been around for a while, it has seen a lot of use, it's a fantastic piece of work, and he got recognized for that.
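To make the photon gathering step from the previous part concrete, here is the textbook density estimate in its most naive form: sum the power of the photons within a gather radius around the shading point and divide by the disc area. Real implementations use a kd-tree and weight each photon by the BRDF; this linear scan is only for illustration, and all names in it are made up.

```python
import math

def estimate_radiance(photons, point, radius):
    """Very naive photon map gather: average photon power inside a disc.

    photons: list of (position, power) tuples, both given as 3-tuples
    point: shading point position
    radius: gather radius
    """
    r2 = radius * radius
    total = [0.0, 0.0, 0.0]
    for pos, power in photons:
        d2 = sum((a - b) ** 2 for a, b in zip(pos, point))
        if d2 <= r2:
            total = [t + p for t, p in zip(total, power)]
    area = math.pi * r2  # divide the gathered power by the disc area
    return [t / area for t in total]

photons = [((0.1, 0.0, 0.0), (0.2, 0.2, 0.2)),
           ((0.0, 0.2, 0.0), (0.1, 0.1, 0.3)),
           ((2.0, 0.0, 0.0), (0.5, 0.5, 0.5))]   # this one is too far away
print(estimate_radiance(photons, (0.0, 0.0, 0.0), radius=0.5))
```

The averaging over a finite disc is exactly where the blur, and therefore the bias, comes from; shrinking the disc as in the progressive variants is what removes it in the limit.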
Okay, let's continue with even more good stuff. Metropolis light transport, straight from '97. The key idea is to seek the light. This is the thing that I always hear from the artists who use the algorithm. What we are trying to do is sample brighter light paths more often than darker light paths. That's it. That's the basic principle. That's what we're trying to do. And educated people would immediately say: hey, but isn't this what we have been talking about with importance sampling? Isn't this conflicting with importance sampling? What is importance sampling? Well, it means that if I have, for instance, a glossy reflection, a glossy BRDF, which has a really high probability of sending rays out in the perfect reflection direction, so almost like a mirror, with a higher probability it would behave like a mirror, then I would like to have a high probability of actually sampling that light path, proportional to the shape of the BRDF. And this we can do through importance sampling, okay? But imagine a case where you would have a glossy reflection covered from almost every direction by black bodies. It doesn't matter if I importance sample the BRDF correctly, because after I importance sample the BRDF and the light travels to the next bounce, it's always going to hit the black body and it's going to be absorbed. I'm never going to continue my light path afterwards. So even though I would importance sample this one bounce, I am not importance sampling along the whole path, because I have importance sampled this one bounce correctly, but I didn't know that globally I'm just heading to a region that's really dark. And what Metropolis light transport gives you is something that is not really referred to this way, but I like to call it multi-bounce importance sampling. So it may take some locally suboptimal decisions, and it may send out rays in a direction that is not so likely for your BRDF, if it knows that it's going to end up being a bright light path. So for instance, if you have a glossy interreflection that would mostly be sending rays out in this direction, but there is complete darkness in there, then what it would do is actually send more rays towards the light source, which looks like a suboptimal decision there at that BRDF, but over the whole light path it is actually going to be something bright. So this is the key idea behind Metropolis light transport, and I'd like to give you an intuitive example of that. So imagine that you have the camera in this room in the scene, and you have a light source only in the adjacent room, in the next room. And this next room is separated by a wall and a door that is slightly ajar, so it is opened just a bit. And all the light that you see is coming through that door. And if you imagine, for now, naive path tracing, what am I doing? I am sending a ray through the first pixel, and I'm going to bounce it around the scene, and it is very likely that I will never find the light source. And I cannot even connect to the light source, it's in the other room, I'm going to hit the wall or the door. And imagine that I'm computing thousands and thousands of samples, and I finally get to hit a light path that is actually connectable to a light source. If we are doing path tracing, you can imagine that I'm starting from here. If you take a look at the arrow in there, it gives you the intuition that maybe we are doing light tracing: we are shooting rays of light from the light source. And we finally get into this room and hit the camera.
This is finally a good connection. After thousands and thousands of samples, I finally have one contribution. Before that, 0, 0, 0, 0, and my CPU is basically dying at 100% load; nothing comes out of there. Now imagine that I finally found a light path that makes sense, that has a contribution, and then I would suddenly forget about the whole thing and again start sampling completely random light paths and get the 0, 0, 0, 0 again. It would be a crime to do that, wouldn't it? What Metropolis is doing is essentially trying to remember which paths make sense, and if it finds one, it is going to explore nearby. So it is not going to shoot out a completely random sample next: it takes this one sample that made sense, finally a connection, and adds very small perturbations to this light path. What if I shoot this at an angle that is changed just a bit? What you can expect is that most of the time it will again give you some amount of contribution, and you don't have to start from scratch. So basically you can use all of this knowledge to your advantage. How does the difference look? Well, this is the scene with bidirectional path tracing after 40 samples per pixel. And now, if you look closely, you will see Metropolis after the very same number of samples per pixel. So this is bidirectional path tracing, and now Metropolis with the same number of samples. If you take this knowledge into account, most of your samples are going to be useful. Just another look: bidirectional path tracing, Metropolis. And bidirectional path tracing was already a good algorithm; it's not a naive path tracer, it's a good algorithm, and naive path tracing would be even worse. Now another example: some nice volumetric caustics with naive path tracing and an equal-time comparison with Metropolis light transport. How does it work exactly? Mathematical details, but just enough to understand the intuition. What we're trying to do is importance sampling. What does that mean? It means that I am computing discrete samples of f over p, where f is the function that I would like to integrate and p is a sampling distribution. What I'm looking for is to match the blue lines with the green bars, if you remember. It means that if the function is large somewhere, so the image is bright somewhere, or the path space is bright somewhere, then I would like to put more samples in there. So if f is large, then p should also be large. This is what I'm trying to achieve. Now, how do I actually do this? I have some high-dimensional function, or if I'm doing local importance sampling, then I have a BRDF function. How do I importance sample this? The trivial solution is called rejection sampling. Basically, it means that I would like to compute samples from a sampling distribution. Here you see something that is almost a Gaussian, but imagine that I cannot generate samples out of this function, because what do I have in my C++ code? Well, I can generate uniform random numbers, but this function is not uniform. What I can do is sample an arbitrary distribution function if I enclose it in a box and throw completely uniform random samples at this box. It is almost like drawing your function on a sheet of paper and throwing random samples at it. Now, I cannot generate random samples out of this function, but I can generate uniformly distributed random samples. And the scheme is very simple.
If a sample is under the function, I'm going to take it and pretend that I generated that sample in the first place, and if it is above, I'm just going to kick it out. If I do this, I will have samples distributed according to this almost-Gaussian. This works, but this is not what we do in practice. It is very inefficient, and hopefully you can see from the image why. Someone please raise your hand and help me out: why is this not efficient? Because we reject a lot? Okay, that's true, thanks. There are tons of rejected samples. Most of my uniformly generated random numbers are completely wasted again. So there must be some technique that is better than this. Well, there is, but I guarantee that it's not going to make you any happier when you see how it is done. This problem can almost always be solved analytically by a technique called inverse transform sampling, or the Smirnov transform. It takes a bit of work, but I'll just briefly show you how it works, and if you are really interested in the details, then please take a look at this document. So I'll show you what you have to do: you have to do all of these calculations and then you will have your sampling distribution. Okay, what do we have at the end? Let's start with the intuition. We have uniformly generated random numbers, these are the ξ1 and ξ2 at the end, and I want to apply some transform to these numbers in order to get an arbitrary sampling distribution. What we are essentially doing is this: you have a probability density function you want to sample from. It can be a Poisson distribution, an exponential distribution, or some custom BRDF. If you integrate the PDF, you get a CDF; you integrate the probability density function, you get a cumulative distribution function, and this is what enables the transformation from uniformly generated random numbers to samples of the actual function. Now, this is very intimidating, isn't it? Imagine that whenever you come up with a new BRDF, or any kind of function that you would like to sample, you have to compute all this. And not only that: we were doing this for BRDFs, so I can importance sample one bounce. Again, I emphasize that this means that if I hit the table, I locally know what the good outgoing directions are because of the material model. But it doesn't mean that it's globally a good idea, because there may be a completely black curtain next to it, which I'm going to hit, and all of the energy is going to be absorbed. What does Metropolis give us? A solution to this. It is importance sampling not only for one BRDF, not only for all possible BRDFs, but an optimal importance sampling along the whole path. This means it will know that if there is a path that is 15 bounces long, but it hits something that is really bright and transfers a lot of energy, then it needs to sample this light path and the ones nearby, and it is not going to trace many rays towards the shadowed regions. How does it work? Again, intuitively: it runs a Markov chain process, and Markov chains have a steady state distribution. This means that we have been running the Markov chain for a while, and if you do that, then it promises you optimal importance sampling for any kind of function, without doing anything.
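To make the two sampling schemes above a bit more tangible, here is a minimal, self-contained C++ sketch. It is an illustrative toy, not anything taken from SmallPaint; the truncated Gaussian target and the exponential distribution are just convenient examples:

#include <cmath>
#include <cstdlib>

// Uniform random number in [0, 1).
double rnd() { return rand() / (RAND_MAX + 1.0); }

// Target density we cannot sample directly: an (unnormalized) Gaussian on [-3, 3].
double target(double x) { return std::exp(-x * x / 2.0); }

// Rejection sampling: throw uniform points into the bounding box [-3,3] x [0,1]
// and keep only those that fall under the curve. Correct, but wasteful, since
// every rejected point is a discarded random number.
double sampleByRejection() {
    while (true) {
        double x = -3.0 + 6.0 * rnd();   // horizontal position in the box
        double y = rnd();                 // vertical position; the box height 1 is the maximum of target()
        if (y < target(x)) return x;      // under the curve: accept
    }                                     // above the curve: reject and retry
}

// Inverse transform sampling: integrate the pdf to a CDF, invert it, and push a
// uniform random number through the inverse. For the exponential distribution
// pdf(x) = lambda * exp(-lambda * x), the CDF is 1 - exp(-lambda * x), so
// x = -ln(1 - u) / lambda. One uniform sample in, one exponential sample out.
double sampleExponential(double lambda) {
    double u = rnd();
    return -std::log(1.0 - u) / lambda;
}

The rejection sampler throws away every point that lands above the curve, while the inverse transform version turns every single uniform random number into a useful sample, which is exactly why we prefer it whenever the CDF can be inverted.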
I hope it is understandable how amazing this is, because it is actually a simple sampling scheme, you can write down the pseudocode in five or six lines, and it gives you optimal importance sampling. And I emphasize again that this is over multiple bounces: not importance sampling one BRDF, but whole light paths. There are different variants of Metropolis light transport. The original is the Veach-style Metropolis; this is the one that was published in 1997. It is a great algorithm. It has different mutation strategies, meaning different strategies for changing the current light path into a new one in a smart way, not randomly. The problem is that almost no one in the world can implement it correctly. It was published in 1997, and the first viable implementation came in the Mitsuba renderer, implemented by Wenzel Jakob, around, I think, 2010. So just a few years ago. The original Metropolis light transport is also attributed to Eric Veach. No one in the world could implement it. I honestly don't know what was going on, because he published it in 1997 and it took at the very least 13 years for the first super smart guy to implement it correctly. I don't know what he was doing in the meantime. Maybe he was laughing at humanity, that no one is smart enough to deal with this, and maybe we don't deserve this algorithm. It's not for the faint of heart; it's a really difficult algorithm. Yes, he's working for Google, that's true. After the PhD, did he go immediately to Google? He's basically working on AdWords, how to get more money out of advertisements. It pays, it definitely pays well, and who knows: if Eric Veach is working on it, there's going to be some good stuff in there, I guarantee you. But I have to say that his face looked quite delighted when he got the Academy Award just recently for work that is at the very least 15 years old. It's still used all over the industry: multiple importance sampling, path tracing, Metropolis, it's all over the industry. The Veach-style Metropolis is really difficult. Fortunately, there are also smart people at my former university, namely Csaba Kelemen and László Szirmay-Kalos. They came up with a simplified version of the algorithm, which is almost as robust but is actually quite simple to implement. It is also implemented in SmallPaint. It is called primary sample space Metropolis light transport. It was implemented by one of my students from a previous year's rendering course, and it is in SmallPaint, so you can give it a try. Basically, it does a complicated-sounding but otherwise simple mapping from an infinite dimensional cube (the infinite dimensional cube meaning arbitrarily long vectors of independent, uniformly generated random numbers), and these random numbers are somehow transformed into light paths. What the algorithm does is this: there is a probability that I compute a completely new light path, and if I don't hit this probability, then I'm going to stay around the current light path and explore nearby. What does it mean practically? If I find this super difficult light path from the other room to here, a really bright light path, the algorithm will know that, okay, I'm just going to add slight perturbations to this light path and stay around here. And sometimes it will start to look for completely random samples again. There's also a visualization video on YouTube.
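To make this more concrete, here is a minimal C++ sketch of the primary sample space idea: fresh random vectors as large steps, small perturbations as small steps, and a Metropolis acceptance test. The pathContribution toy target, the mutation radius and the dimension count are hypothetical placeholders; this is a sketch of the scheme, not the SmallPaint implementation:

#include <cmath>
#include <cstdlib>
#include <vector>

double rnd() { return rand() / (RAND_MAX + 1.0); }

// A light path in primary sample space is just a vector of uniform random
// numbers; a deterministic mapping turns it into an actual path.
using PrimarySample = std::vector<double>;

// Toy stand-in: in a renderer this would map the random vector to a light path
// and return its brightness; here it is a bright bump on the 2D unit square so
// the sketch stays self-contained.
double pathContribution(const PrimarySample& u) {
    double dx = u[0] - 0.5, dy = u[1] - 0.5;
    return std::exp(-50.0 * (dx * dx + dy * dy));
}

// Small mutation: jitter every coordinate a little and wrap back into [0, 1).
PrimarySample perturb(const PrimarySample& u, double radius) {
    PrimarySample v = u;
    for (double& x : v) {
        x += radius * (2.0 * rnd() - 1.0);
        x -= std::floor(x);                      // wrap around to stay in [0, 1)
    }
    return v;
}

// One Metropolis step: propose either a fresh random path (large step) or a
// nearby path (small step), and accept it with probability min(1, f(y)/f(x)).
void metropolisStep(PrimarySample& current, double& fCurrent, double pLarge) {
    PrimarySample proposal;
    if (rnd() < pLarge) {                        // large step: forget everything, explore globally
        proposal.resize(current.size());
        for (double& x : proposal) x = rnd();
    } else {                                     // small step: stay near the bright path we found
        proposal = perturb(current, 0.01);
    }
    double fProposal = pathContribution(proposal);
    if (fCurrent <= 0.0 || rnd() < fProposal / fCurrent) {   // Metropolis acceptance
        current = proposal;
        fCurrent = fProposal;
    }
}

int main() {
    PrimarySample current(8, 0.5);               // 8 "path dimensions", arbitrary starting point
    double fCurrent = pathContribution(current);
    for (int i = 0; i < 100000; ++i)
        metropolisStep(current, fCurrent, 0.3);  // 30% large steps, 70% small steps
    // After many steps the chain spends its time where pathContribution is large.
}

In a full implementation both the accepted and the rejected proposal are splatted into the image with appropriate weights, and the start-up bias mentioned below also has to be handled, but the core loop really is this short.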
If you take a look, you will immediately understand what is going on. And here is some literature about these algorithms. Now, Metropolis is also a sampling scheme, so you can implement it together with path tracing or bidirectional path tracing, and therefore this is also an unbiased and consistent algorithm. And it is very robust; it is tailored for really difficult scenes. So if you have a scene with a lot of occlusions, difficult-to-sample light sources, difficult-to-reach light sources, use Metropolis. But if you have an easy scene, this is not going to give you much, because Metropolis is a smart algorithm: it takes longer to compute one sample than a path tracer, and if this smart behavior does not pay off, there may be scenes where Metropolis is actually worse than a path tracer. So if you have an outdoor scene with large light sources and environment maps that you hit all the time, don't use Metropolis. It doesn't give you anything. Path tracing would give you better results, because it can dish out more samples per pixel, because it's dumb, and it parallelizes even better, and only the number of samples matters in this case. And there may be algorithms that take this into consideration: what if we had an algorithm that could determine whether we have an easy scene or a difficult scene, and would use naive path tracing or bidirectional path tracing for easy scenes, and Metropolis light transport for difficult ones? This would need an algorithm that can somehow determine whether the scene is easy or hard, and that's not trivial at all, but behind this link there is a work that deals with it. I would also like to note that Metropolis light transport is unbiased, but it starts out biased. What this means is that I'm running a Markov chain that will give me optimal importance sampling, but this Markov chain also evolves in time. I have to wait and wait and wait, and it will get better and better estimations of where the bright paths and the dark paths are, and this takes time. This effect is what we call start-up bias. Now, what do we get for all this? We'll see plenty of examples. For instance, on caustics it's even better than bidirectional path tracing; for caustics you get almost immediate convergence. Now what about this scene? This scene was rendered with LuxRender. Here you have not glass spheres but some kind of prism-material spheres, because you can see a pronounced effect of dispersion, and you can see volumetric caustics. There is a participating medium that we are in, and these caustics are reflected multiple times and refracted multiple times. Let's say that this is a disgustingly difficult scene: the only light source there is, is actually this laser that comes in from the upper left. Let's try to render such a scene with the different algorithms we have learned about. If I start a path tracer, this is what I get after 10 minutes. The high-scoring, bright light paths are not the highest-probability light paths, and most of the connections towards the light source will also be obstructed, so it is very difficult to sample with path tracing. Bidirectional path tracing is better, but if I get this after 10 minutes, I don't know how long it would take to render the actual scene. And if you run Metropolis, it will find the light paths that matter, the ones that actually need to be sampled.
And this is the simplified version, PSSMLT, and the number next to it is the ratio of these small perturbations to large perturbations. Sorry, the opposite: a large number means that most of my light paths are going to be random, so with 75% probability I'm going to do bidirectional path tracing and 25% Metropolis. And if I pull this probability down to 0.25, then most of the time I'm going to do Metropolis sampling, I'm going to explore nearby. And you can see that this renders the scene much, much faster. So this is definitely a very useful technique to have. Now, I've made this animation just for fun. This is the primary sample space Metropolis light transport algorithm with only small mutations, just very small adjustments to the light paths, and this is how an image converges with these small steps. You can see that the caustics converge ridiculously quickly. Now let's take a look at one more example. Take a look at this: most of the scene is still noisy, but the caustics are completely converged as we start out. Why? Because they are really bright, and this is exactly what Metropolis is going to focus on. So it is even better on caustics: something that takes a brutal amount of samples with a normal path tracer is going to be immediately converged with Metropolis. This is the first, I think, 10 minutes of rendering with Metropolis on a not-so-powerful machine. So it seems that we have solved everything. We're looking good. We got this. But I will show you a failure case; we actually still have problems that we couldn't solve. This is a sophisticated scene that is, for some reason, in some sense even harder than the previous scenes, and it just doesn't want to converge with primary sample space Metropolis. I'm just rendering and rendering and there are still fireflies. If I have really large, really bright noisy spots, it means that I have light paths that have a ridiculously low probability of being visited, and that means that my sampling strategies are not smart enough. This is a classical, longstanding problem in global illumination. Metropolis is not a solution for this, it is still not good enough, but there are techniques that can give you really smooth results on ridiculously difficult scenes like this. And I will also explain during the next lecture why this is so difficult, because it doesn't seem too intuitive, does it? But I will explain it to you during the next lecture. Thank you very much.
Now, before we start with the algorithms, one more disclaimer: these results are coming from scientific papers. If you come up with a new method, you want to show that this method outperforms existing methods in the scenes or in the setups that you have tried. Some people are very open about the limitations of their techniques, because if I have a technique that's better than the best technique out there on this scene, that's great, but it doesn't mean that it will be better on all possible scenes. Some people are very candid about the limitations of their algorithms, and some of them are not so candid. But with time, as people start to use an algorithm, these possible corner cases, or just simply difficult cases, come up. So what do I mean by this? If you see great results, there's an algorithm, wonderful results, it's the best thing ever: okay, but always keep a slight doubt about whether this algorithm is robust enough. Would it always work? When would it not work? Don't just extrapolate from one case; there may be drawbacks that are not so clear when you first see the algorithm. Now, mathematical details will again mostly be omitted. What we are interested in is the motivation for each algorithm, what the key idea is, what the advantages and disadvantages are, how the results look, where you can access implementations, where you can try them, and for most of them some additional literature. If you think that wow, this is a really great algorithm, I would like to know more, then there will be links; you click them and you can read either the paper or some writing about it. So let's get started. Path tracing, from 1986. Super old stuff, but this is the very first and the easiest way to wrap your head around global illumination. You start your rays from the eye or the camera, you bounce them around the scene, and if you would like to earn some style points, after every bounce you also trace shadow rays towards the light source. This is next event estimation, and it usually lowers your variance. Then you end up somewhere, you compute all these light paths, and jolly good. You don't do any simplifications to the integrand, you exhaustively sample all possible light paths, there's no interpolation, no tricks, no magic. So this should be an unbiased and consistent algorithm. Unbiased: the error is predictable; I know that if I add more samples, there's going to be less error. And I know that sooner or later the image is going to converge, because I am sampling all possible light paths there are; it is impossible that I would miss something. Now, there may be corner cases, really difficult but fortunately well understood corner cases, where there are contributions that you may miss; I will discuss this during the next lecture. What are the advantages? It's simple, it's very easy to implement, and, I didn't write it there, it also parallelizes well. Why? Because it's a dumb algorithm. It doesn't do anything tricky, it doesn't build super complicated data structures. You just put it on the GPU and you dish out as many light paths per second as possible. What is a common problem that people encounter with this?
For instance, caustics converge very slowly, because caustics are usually made of light paths that are extremely improbable to be sampled, and you would need to compute many, many samples in order to hit these caustics often enough to clean them up. Onwards: bidirectional path tracing, from 1993. What is the motivation behind this one? Well, imagine a scene where this is your camera on the left and you have a light source enclosed in this object, which is for now, for the sake of the experiment, a black body. It's not a glass light bulb or anything like that, it's a black body, so whichever part of the container you hit, you won't continue your light path. Now you start a path tracer. What do you do? You start tracing rays from the camera, and it is not too likely to hit the light source. It's not a point light source, it's an area light source, so it is possible to hit it, but it's not very likely. After the previous lecture you would say: no problem, next event estimation. What do I do? I don't wait until I hit the light source; I send out shadow rays after every bounce and get some of the energy of the light source, the direct contribution. Great, but the problem is that this also doesn't work, because most of the connections would be obstructed: after this very first bounce I cannot reach the light source, because the black body contains it. After the second bounce I also cannot connect to the light source. So again, even with next event estimation, most of my samples are wasted. We are tracing random rays, it is very unlikely to hit the light source, and even if I connect to the light source, it is very unlikely that the connection will be unobstructed. What is the solution? In bidirectional path tracing, I am not starting only one light path from the eye: I start two. One from the eye, as with regular path tracing, and I also start light paths from the light sources, which is called light tracing, and I try to combine these two techniques into one framework. So I take one, or a given number, of bounces from the eye, a given number of bounces from the light source, then I connect these subpaths together and pretend that I just built this whole light path instead. And with this I have a much better chance to sample these light sources, because I have the opportunity to get out of that small zone that is otherwise difficult to hit from the eye. Now let's see the difference between the two techniques. These are taken after 10 seconds for the very same scene, and you could say that there is a huge difference for this indoor scene between the two. So it's definitely worth looking into. Now, what is actually difficult about bidirectional path tracing: theoretically it's very simple, there is not one light path, there are two, and I connect them in all possible different ways. What you should take into consideration is that this is actually two Monte Carlo processes. One Monte Carlo process is when you start out from the eye and you hit a diffuse or a glossy object; then you start to importance sample it, importance sample the BRDF. This means that I take the likely paths more often.
Now, if you start a light path from the light source, then what you are sampling is actually the distribution of the light source itself, because regions that are visible from the light source are sampled extensively with light tracing; you're always hitting them, they are in front of you. And that's a completely different sampling distribution. So you can imagine it as if you had two different Monte Carlo processes that sample the very same integrand. One Monte Carlo process has some variance and the other has some other variance, so different regions of the path space, and also different regions of the image, converge quicker with light tracing, and different regions converge quicker with standard path tracing. And I would like to combine these two techniques together. This is entirely non-trivial. Variance: I've written noise in there to be more intuitive, but we're talking about variance, noise comes from variance, and variance is an additive quantity. This means that if I have two Monte Carlo estimators of given variance and I just add their samples together and average them, then I also average the error of the two. And that doesn't give me a great result, because there are some regions that are sampled well by light tracing and there are regions that are sampled well by path tracing, and I cannot just cut out the good parts from each sampling technique, because the errors would be averaged. And this can be solved in a meaningful way, in a way that is actually proven to be optimal in some sense, by a technique called multiple importance sampling. Multiple importance sampling was brought to us by a person called Eric Veach in his landmark thesis, which is full of beautiful, beautiful works; bidirectional path tracing is one of them. And if I remember correctly, last year he got an Academy Award for his work; this is basically the technical Oscar, if you will. In his acceptance speech it was really funny: he has a daughter, and his daughter had taken a look at his thesis, which is hundreds of pages of heavy integral calculus, and she asked him whether people actually read this huge tome of knowledge. And he could finally say that yes, people actually do read that. We read it like the Holy Bible. Multiple importance sampling is among his discoveries, and it is maybe, it's a bit subjective, the most powerful technique in all of rendering. I will show you plenty of examples to convince you that this is so. So, on the left, and let's forget about the middle example for now and just compare the left and the right: you can see that there are many artifacts, many of these fireflies, that can be suppressed by this technique. I can unify multiple sampling techniques in a way that wherever one of them does really badly I can just forget about it, and I take only the best samples for each region. Let's take another look, which is maybe even better. This is what we call a Veach pyramid. It is created with bidirectional path tracing, and the numbers below each image mean that we have taken a different number of steps from the light source and from the eye. So in every image you see one given combination of bounces. If you had path tracing, you would get ten or so images, not a pyramid: one image would be the first bounce, the second image the second bounce, the third image the third bounce.
For bidirectional path tracing you have a pyramid like this, because you subdivide the paths by the number of bounces from the eye and the number of bounces from the light source, so it is now a two-dimensional thing. And you can see that some of the effects are captured really well in some of these images, and there are other images which are absolutely terrible and really noisy. For instance, take a look at the two sides: these two sides mean that I am hitting either the camera or the light source by accident. If you have a small light source, which we actually do have here, then this is a relatively low-probability event, and if it is a low-probability event, most of my samples are going to be wasted and I'm going to have a noisy image, not a well-converged one. So on the sides I have really low-probability events, and these are samples that I really don't want to use. Imagine that I would add all of these images together and average them: I would get plenty of noise from the noisy ones. Now, if you take a look at s = 1, t = 5, you can see that we have caustics in there, and the caustics are almost immediately converged. For caustics, I definitely want to use these samples, and not the ones in, for instance, s = 0, t = 6, where there are also caustics but they are really noisy; that technique is not systematically looking for caustics, it just happens to hit them, but it is not good at sampling them. And I don't want to average these together. What I want is to give a large weight to s = 1, t = 5 on the caustics and just grab that into my image, and forget about the other contributions. Doing this in a mathematically sound way is not easy, but Eric came up with a really good and surprisingly simple technique for how to do it. Now look closely at the image. This is naive bidirectional path tracing, without multiple importance sampling. And now you will see what happens when we add multiple importance sampling. Look closely. See the difference? There are many noisy images that were almost completely shut down, because they were not really good at sampling certain parts of the space of light paths. Some images are not good at anything at all; take a look at the two sides. And there are images from which I can take caustics, for instance. Like s = 5, t = 1: it seems to have been even better at sampling caustics, because s = 1, t = 5 was also pretty good, but it was shut down by the other technique that was even better. So this is an amazingly powerful technique for creating even more converged images if you have multiple sampling strategies. You can also play with it: it is implemented on Shadertoy, the nice classical Veach scene with light source sampling and BSDF/BRDF sampling (and it doesn't matter whether you say BSDF or BRDF in this case, but you remember the difference). So you can play with it, and I encourage you to do so. It is lots of fun, and you will see which light transport situations are captured well by which sampling technique and how to unify them in a way that everything looks converged almost immediately. And also, what does a good engineer do? Well, a good engineer is obviously interested in the problem.
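For reference, the weighting scheme Veach proposed, the balance heuristic, can be written down in two lines. Here f is the contribution of a light path \bar{x}, p_s is the pdf with which sampling strategy s generates that path, and n_s is the number of samples taken with strategy s:

w_s(\bar{x}) = \frac{n_s \, p_s(\bar{x})}{\sum_k n_k \, p_k(\bar{x})}, \qquad
\langle I \rangle = \sum_s \frac{1}{n_s} \sum_{i=1}^{n_s} w_s(\bar{x}_{s,i}) \, \frac{f(\bar{x}_{s,i})}{p_s(\bar{x}_{s,i})}

A strategy that assigns a tiny pdf to a path, one it would only generate by accident, automatically gets a tiny weight for that path, which is exactly the shutting down of the noisy pyramid images you just saw.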
So I just sat down and implemented the same thing in a simple 1D example, to make sure that everyone really understands what is going on. This is a simple Monte Carlo sampling problem in 1D: I have a function that I want to integrate, if I remember correctly a Gaussian, and I would like to sample it with two different techniques. So these are two different Monte Carlo sampling processes, and I want to take only the best samples in order to get an approximation with the least variance. There are multiple ways of combining them together, and there is also naive averaging, which just averages the error; that would give you back all of those noisy images from the sides. I also write out the exact Monte Carlo estimators for the different multiple importance sampling estimators. So take a look: it is now part of SmallPaint, you can run it, and it is super simple and hopefully super understandable. I think it is less than 100 lines of code. So, what we now know about bidirectional path tracing: definitely better convergence speed, especially in scenes where you are not likely to hit the light sources, so especially in indoor scenes. You also get quicker convergence for caustics, because you have sampling strategies that are very efficient for them. Caustics are usually visible from the light sources, so you sample them very often, and there is going to be at least one estimator that captures them well. So if you use MIS, multiple importance sampling, you have caustics covered very quickly. Now, it is definitely not easy to grasp and it is definitely not easy to implement, so it requires quite a bit of effort, even if it sounds intuitive. It is, but it is not easy. This is also a brute force method: it samples all possible light paths, and therefore it is also unbiased and consistent. Here is some more literature on bidirectional path tracing, and even better, there is a nice comparison coded up on Shadertoy as well. So when you are at home, just fire it up and you will see the difference evolving in real time on your GPU on an indoor scene.
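In the same spirit as that 1D demo, and explicitly not the SmallPaint code itself, here is a tiny C++ sketch that integrates f(x) = x^2 on [0, 1] with two sampling strategies and combines them with the balance heuristic. The integrand and the two pdfs are arbitrary illustrative choices:

#include <cmath>
#include <cstdio>
#include <cstdlib>

double rnd() { return rand() / (RAND_MAX + 1.0); }

// Integrand on [0, 1]; its true integral is 1/3.
double f(double x) { return x * x; }

// Strategy A: uniform sampling, pdf pA(x) = 1.
// Strategy B: linear pdf pB(x) = 2x, sampled by inverse transform x = sqrt(u).
double pA(double x) { return 1.0; }
double pB(double x) { return 2.0 * x; }

int main() {
    const int n = 100000;               // samples per strategy
    double estimate = 0.0;
    for (int i = 0; i < n; ++i) {
        // One sample from each strategy, each weighted by the balance heuristic
        // w_s = p_s / (pA + pB), so a strategy gets a small weight where it is bad.
        double xA = rnd();
        double wA = pA(xA) / (pA(xA) + pB(xA));
        estimate += wA * f(xA) / pA(xA) / n;

        double xB = std::sqrt(rnd());
        double wB = pB(xB) / (pA(xB) + pB(xB));
        estimate += wB * f(xB) / pB(xB) / n;
    }
    std::printf("MIS estimate: %f (exact: %f)\n", estimate, 1.0 / 3.0);
}

The balance heuristic automatically down-weights whichever strategy is bad in a given region, which is the whole promise of MIS.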
Let's talk just briefly about the PBRT architecture. PBRT is not exactly the renderer that we are going to use, we're going to use LuxRender, but LuxRender was built upon PBRT, and therefore the basic structure remained completely intact. This is a really good architecture, and you will see that many of the global illumination rendering engines out there use the very same structure. So we have a main sampler-renderer task that asks the sampler to provide random samples. The sampler you can imagine as a random number generator. We need lots of different random numbers: which pixel we sample is usually chosen deterministically, going from pixel to pixel, but the displacement within the pixel is random, because we don't sample pixels only at the midpoint like in recursive ray tracing; we take random samples nearby and use filtering to sum them up in a meaningful way, and this requires random numbers, which come from the sampler. You also send outgoing rays on the hemispheres of different objects, and you need random numbers for this too. So these random numbers arrive in the sample, and this sample you send to the camera, and the camera gives you back a ray. You tell the camera: please give me a ray that points through this pixel, and the camera gives you back a ray which starts from the camera position and points exactly there. Now all you need to do is give this ray to the integrator, and the integrator tells you how much radiance is coming along it. What you can do after that is write it to a film, and this is not necessarily trivial, because you could just simply write it to a PPM or PNG file and be done with it. In contrast, what LuxRender does is it has a film class, and you can save different contributions in different buffers. So you could, for instance, separate direct and indirect illumination into different films, different images, and in the end sum them up; or maybe you could say that you don't want caustics in this image, and then you would just throw that buffer away. So you can do tricky things if you have a correctly implemented film class. Okay, so LuxRender, as I have been saying, is built upon PBRT and uses the very same architecture. This is how it looks: it has a graphical user interface, and you can also manipulate different tone mapping algorithms in there, different image denoising algorithms, and you can even manipulate light groups. This is another tricky thing with the film class. Basically, it means that you save the contributions of different light sources into different films; by films you can imagine image files. So every single light source has a different PNG file, if you will, and the final image comes up as a sum of these individual films. But you could say that one of the light sources is a bit too bright, I would like to tone it down; normally, if you want to tone it down, you would have to re-render your image, because you changed the physical properties of what's going on.
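Boiled down, the loop this architecture describes looks something like the sketch below. Every class here is a placeholder stub standing in for the corresponding PBRT/LuxRender concept, not their actual interfaces; it only shows the shape of the pipeline:

#include <cstdio>
#include <string>

struct Vec { double x = 0, y = 0, z = 0; };
struct Sample { double px = 0, py = 0; };                      // jittered pixel position (and more in reality)
struct Ray { Vec o, d; };

struct Sampler {
    int remaining = 16;
    bool hasMoreSamples() { return remaining > 0; }
    Sample next() { --remaining; return Sample{0.5, 0.5}; }    // would be random numbers in reality
};
struct Camera {
    Ray generateRay(const Sample&) { return Ray{}; }           // ray through the jittered pixel
};
struct Integrator {
    Vec radiance(const Ray&, const Sample&) { return Vec{}; }  // how much light flows back along the ray
};
struct Film {
    void addSample(const Sample&, const Vec&) {}               // filtered write into one or more buffers
    void writeImage(const std::string& name) { std::printf("wrote %s\n", name.c_str()); }
};

// The main loop: sampler -> camera -> integrator -> film.
void renderTask(Sampler& sampler, Camera& camera, Integrator& integrator, Film& film) {
    while (sampler.hasMoreSamples()) {
        Sample s = sampler.next();
        Ray ray = camera.generateRay(s);
        Vec L = integrator.radiance(ray, s);
        film.addSample(s, L);
    }
    film.writeImage("output.png");
}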
Now, you can do this if you have this light groups option, because the contributions are stored in individual buffers, so you can just dim one of these images, add them all up together, and you get the effect of that light source being a bit dimmer. You can, for instance, completely turn off the sunlight, or a television that you don't want in the scene (it sounded like a good idea, but it wasn't), and you can do this without re-rendering the scene. You can operate all of these things through the LuxRender GUI. Now, before we go into the algorithms, let's talk about algorithm classes: what kinds of algorithms are we interested in? First, we are interested in consistent algorithms. Consistent means that if I use an infinite number of Monte Carlo samples, then I converge exactly to the right answer; I get back the exact integral of the function. Intuitively it says: if I run this algorithm, sooner or later it will converge. It is important to note that no one said anything about when this sooner or later happens. So if an algorithm is consistent, it doesn't mean that it is fast, it doesn't mean that it is slow; it can be absolutely anything. It may be that there is an algorithm that is theoretically consistent, so after an infinite number of samples you would get the right answer, but it really feels like infinity, and after two weeks you still don't get the correct image. There are algorithms like that, and theoretically that's consistent, that's fine, because you can prove that it's going to converge sooner or later. The more difficult class, the one many people seem to mess up, is unbiased algorithms. What does that mean? If you just read the formula, you can see that the expected error of the estimation is zero, and we have to note that this is completely independent of n, the number of samples that we have taken. The expected error of the estimation is zero. It doesn't mean that the error is zero, because it's independent of the number of samples; it doesn't mean that after one sample per pixel I get the right result. It says that the expected error is zero. I will give you many intuitions for this, because it is very easy to misunderstand and misinterpret: in statistics there is a difference between expected value and variance, and this doesn't say anything about the variance, it only tells you about the expected value. So for instance, if you are a mathematician and think a bit about this, you could say: if I have an unbiased algorithm and I have two noisy images (you render something on your machine, I render something on my machine, that's two noisy images), I could merge them together, I could average them, because they are unbiased samples and it doesn't matter where they come from. I would add these samples together and get a better solution. We will see an example of that. My favorite intuition is that the algorithm has the very same chance of over- and underestimating the integrand. So if I try to estimate the outcome of a dice roll, where you can roll from 1 to 6 with equal probabilities and the expected value is 3.5, then I have the very same probability of saying 4 as I have of saying 3. So it's the same chance to under- and overestimate the integrand. And I'll give you my other favorite intuition, the one journalists tend to like the best: it means that there is no systematic error in the algorithm.
The algorithm doesn't cut corners, and if there are errors in the image, then they can only be noise, and this noise comes from not having enough samples; if you add more, you're guaranteed to get better. Now let's take another look at this really nice intuition: I can combine two noisy images together. This means that I should be able to do network rendering without actually using a network, which sounds a bit mind-boggling. I really like the parallel to a famous saying attributed to Einstein from long ago, when they were talking about sending electromagnetic waves and about the telephone, and people could not grasp the idea of a telephone. He said: imagine a super, super long cat. The tail of the cat is in Manhattan and the front of the cat is in New York, and if you pull the tail in Manhattan, she says meow in New York. And he asked the people: is this understandable? Yes, this is understandable. Okay, perfect, we're almost there. Now imagine that there's no cat. And this is the exact same thing here: network rendering without an actual network. Well, okay, mathematical theory is fine, but let's actually give it a try. What I did here is I rendered this interior scene, and this is how it looks after two minutes. It's really noisy, right? Then I ran ten of these rendering processes and saved the images ten times. So I didn't run one rendering process for long; I ran many completely independent rendering processes for two minutes each, and then I merged the images together, which means I averaged them. Basically, this means you could do this on completely independent computers that have never heard of each other. And now let's take a look. This is the noisy image that we had, and now let's merge ten of these together. This is what we get. Look closely. Look at that. One more time: this is the noisy one after two minutes, and this is merging some of these noisy images together. It is unbelievable that this actually works. So if you have unbiased algorithms, you can expect this kind of behavior, and you don't need sophisticated networking to use your path tracer on a network, because you don't need the network at all, and this is really awesome. Now, if you don't add any kind of fixed seed to your computations, then you're computing completely independent samples, and it doesn't matter whether a sample is computed on the same machine or a different machine. If you have some kind of determinism, then it may be possible that the same paths are computed by multiple machines, and that is indeed wasted time, but otherwise it works just fine. Now let's practice a bit. Ah, there's a question: what is the difference between one picture rendered for 20 minutes and ten pictures rendered for two minutes each and then combined? Nothing. In terms of samples, nothing. The only difference is that you need to fire up the scene on multiple machines, so if there are, say, 10 gigabytes of textures, then it takes longer to load them on multiple machines and maybe to transfer the data, but if you think only in terms of samples, it doesn't matter where they come from. Okay, let's practice a bit. We have different techniques, and this is how the error evolves in time.
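The merging step really is just a per-pixel average. A minimal C++ sketch, assuming each render is stored as a flat buffer of pixel values:

#include <vector>

// Merge N independent, unbiased renders of the same scene by plain averaging.
// Each render may come from a different machine with a different random seed;
// because every sample is unbiased, averaging the images is equivalent to
// having computed all the samples in one longer run.
std::vector<double> mergeRenders(const std::vector<std::vector<double>>& renders) {
    std::vector<double> merged(renders[0].size(), 0.0);
    for (const auto& img : renders)
        for (size_t i = 0; i < merged.size(); ++i)
            merged[i] += img[i] / renders.size();
    return merged;
}

With a biased algorithm there is no such guarantee: averaging the outputs also averages whatever systematic error each run carries.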
Now, the intuition for consistency means that the error tends to zero over time, so if I render for long enough, the error is going to be zero. Is this black one a consistent algorithm? Nope, because it converges to the dashed line here, not to zero. What about the other two? Are they consistent or not? Okay, their error seems to converge to zero. Now, are these techniques biased or unbiased, and which is which? What about this one, the darker gray: is it biased or unbiased? If we use the intuition that for an unbiased algorithm, rendering for longer means the image is guaranteed to get better, or at least not worse, then this darker one is definitely not unbiased, because it is possible that I'm rendering for 10 minutes (that's this point, for instance), I say okay, I almost have a good enough image, I render for another five minutes expecting it to get better, and then I get this: maybe a completely garbled image full of artifacts and errors. That is entirely possible with biased algorithms. No one said it's likely, but it is possible, so you cannot really predict how the error will evolve in time. And if you take a look at the other two lines, you can see that they are unbiased algorithms, so as you render for longer, you are guaranteed to get a better image.
Okay, before we start, I will show you some of the SmallPaint contest assignments from previous years. Is this visible? I think I should pull some of the curtains. How about that? So, you can see that even with the SmallPaint program you can make incredible, incredible scenes. Thanks for these. So this is the state of the art lecture. Basically, what we are interested in, in this and the next lecture, is starting from the very first algorithm that was ever created to solve the rendering equation, a classical path tracer, up to the most sophisticated works, some of which came out less than a week ago. I won't go into deep mathematical detail for most of these techniques. What I would like you to know is the basic idea and intuition behind each method and why we do the things we do. The deep mathematical details will be there in the form of links, where you can look behind the curtain and see what exactly is going on. Now, before we start with the state of the art part, there are a few things we need to discuss. One: dispersion. We have talked about indices of refraction. The index of refraction for different materials, what was it? It was a number. So in every code, in every program, in every theory, we use numbers. Well, in reality, indices of refraction are not numbers: they are, in fact, functions. What does that mean? They can be functions that depend on the wavelength of the incoming light, and what that means exactly is that there are materials that refract incoming light of different colors into different directions. That's quite profound, because you will see the beautiful effects of dispersion in a second, and there are also some supplementary videos that you should take a look at. This is a good example of it: a prism. You can see that the incoming light is white, and the prism breaks this white incoming light down into all the possible colors there are. Another good example of this is rainbows. So whenever you are on a family trip and they ask what you're looking at, and you accidentally don't say rainbows but dispersion, they will want to put you in an asylum. But don't worry: you are correct, scientifically, and that's all that matters. You can also see a not-so-beneficial and not-so-beautiful effect of dispersion; it is called chromatic aberration. This means we have a camera lens that is possibly not of the highest quality, and it can introduce artifacts like this, because different colors of light are refracted into different directions and you don't get the sharp image that you are looking for. Now, this is dispersion rendered inside LuxRender, so you can see that with physically based rendering you can actually capture this effect. And you can also render diamonds. So if you have a fiancée and you would like to buy a ring, but you are broke because you're a university student, then you can just render one, and you can render one with dispersion. But this only works if you have a nerd girlfriend, because then maybe she will be happy about it; most people aren't, and I speak from experience. You can also see this really beautiful effect on the old, old Pink Floyd album cover, The Dark Side of the Moon. There are also some videos about this on the internet, rendered with LuxRender. Take a look. Now, the first question is: is the index of refraction of glass constant?
Well, let's look it up. Obviously we may have glasses that are made and manufactured in different ways; most of them are not completely clear, they are some kind of mixture, so there are different kinds of glass. But let's just pick one randomly from a database that gives you indices of refraction. And you can see that it is actually not flat, it is not a constant; something is happening in the function. This means that there are glass types that have dispersion effects, even if only slightly, because you can see that between the minimum and the maximum there is not such a large difference, but there is something. So you could say that at least this kind of glass introduces some degree of dispersion. So let's take a look. What do you think about this image? Does this caustic have any kind of dispersion effect or not? Is it a bit more colorful around the edges, or is it completely white? Looks a bit red? Could be. Give me one more opinion: it looks like a rainbow or something? It might be a rainbow, but it may be significantly smaller, so maybe you would have to zoom in really close to see the rainbow. So this is up for debate. We could see that the IOR seems to be non-constant, and therefore there should be a dispersion effect. Some artists claim that they can spot the difference between a physically based spectral renderer, even for materials like this, and simple RGB rendering, where you cannot render these dispersion effects correctly. Whether you can see it is up for debate, but science says that yes, even if it is slight, there is a difference. If you'd like to know more about dispersion, there is this wonderful series called Cosmos: A Spacetime Odyssey. Has anyone heard of this before? Raise your hand. Okay, a few of you. It is hosted by the magnificent Neil deGrasse Tyson, and you should absolutely watch it. So everyone who hasn't watched it yet, I'd like to hear your excuse, or at least I'd like to hear that you will go home and watch it. There is an episode that is mostly about dispersion, and you will know all about dispersion if you watch it. Okay. Now we have another question, because we have written an RGB renderer: if you look at the source code of SmallPaint, everywhere you just see RGB, RGB, RGB. How do we write a correct physically based renderer? And even before that, how do we even represent light in the visible spectrum? A good answer is to introduce a function that describes how much light is carried at different wavelengths. This is a continuous function that we call a spectral power distribution, and you can see that at these lower wavelengths there is not too much light carried, and at the higher wavelengths there is more. You can put this representation into your renderer, and a naive solution is the following: you pick a randomly chosen wavelength and you trace the ray into the scene using this wavelength. If you do this, you are doing another kind of Monte Carlo integration, because you have added one more dimension of integration, and this dimension is over wavelengths: you are statistically taking random samples of the rendering equation for a given wavelength, a given color. And then you need to sum it all up somehow to get a sensible solution.
There is more about this in PBRT, Chapter 5.
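A naive single-wavelength sampler, as just described, might look like the following C++ sketch. The traceAtWavelength and monochromeToRGB helpers are assumed placeholders (they are not defined here), and real spectral renderers use smarter strategies than one uniform wavelength per sample, but this is the basic Monte Carlo idea:

#include <cstdlib>

double rnd() { return rand() / (RAND_MAX + 1.0); }

// Assumed helpers: trace the path for pixel (px, py) at a single wavelength
// (indices of refraction, albedos and emission are then looked up as functions
// of lambda), and convert that monochromatic radiance into an RGB contribution.
double traceAtWavelength(int px, int py, double lambdaNm);
struct RGB { double r, g, b; };
RGB monochromeToRGB(double radiance, double lambdaNm);

// One spectral sample: pick a wavelength uniformly in the visible range and
// treat it as one extra dimension of the Monte Carlo integral.
RGB spectralSample(int px, int py) {
    const double lambdaMin = 380.0, lambdaMax = 730.0;   // visible range, in nanometers
    double lambda = lambdaMin + (lambdaMax - lambdaMin) * rnd();
    double pdf = 1.0 / (lambdaMax - lambdaMin);          // uniform wavelength pdf
    double L = traceAtWavelength(px, py, lambda);
    return monochromeToRGB(L / pdf, lambda);             // divide by the pdf, average many of these per pixel
}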
What a wonderful day we have today, and what a wonderful time it is to write a path tracer. So why don't we get started? What we are going to be looking at is a program called SmallPaint, which is a small path tracer in effectively 250 lines of code, and it contains everything that we have learned during this course. We're going to be able to compute soft shadows, anti-aliasing, Monte Carlo integration, and even quasi-Monte Carlo sampling, which basically means low-discrepancy sampling. This version of the program is going to be able to compute refraction, color bleeding, and caustics, and in the end the binary you compile from the code can be compressed into less than 4 kilobytes. This is how the end result looks. It has a beautiful painterly look, which actually comes from a bug, and you can also see that the light source up there, the whitish-looking sphere, is, you could say, perfectly anti-aliased. In order to achieve this with a recursive ray tracer and no global illumination algorithm, you would need to compute the very same image at a much larger resolution and then scale it down to a smaller image like this. This anti-aliasing effect you get for free if you compute Monte Carlo path tracing. The question is how this is exactly done, and now is the best time to put everything we have learned so far to use. Let's get started. We have a completely standard vector class. It is a three-dimensional vector; it has its own constructor, the classical operators that you would expect, and we also have a dot product and a cross product. It is also possible to compute the length of this vector. So nothing too exciting here, but we definitely need to build on a solid vector class. Now, the representation of a ray: a ray has an origin and a direction, and if you take a close look at the constructor, you can see that when you initialize a ray with a direction, this direction is normalized. The reason is that when we compute dot products between these vectors, most of this information needs to be directional information; we are not interested in the magnitude of the vector, only in its direction. A good way to get rid of problems where you would initialize your ray with directions that are not of unit length is to normalize this input in the constructor, so you will never have headaches about incorrect results where you have no idea what is really happening. What is the representation of an object? Well, an object has a color, denoted by cl. This is actually very sloppy notation; you can imagine it as the albedo, but it is not a double, not a number, it is actually a vector, and the reason is that we need to define the albedo as how much light at different wavelengths is being reflected and how much is being absorbed by the given object. Objects may also have an emission; if they have some non-zero emission, then they are light sources. And by type we have an integer that specifies what kind of BRDF we have. It is also important to have an intersection routine and another function that can compute the normal of the object. These are of course virtual functions; we don't define them for the abstract object, but they have to be implemented in the classes that inherit from it. Let's take a look at the sphere. The sphere has this c and r: c is the center of the object and r is the radius. The constructor is trivial, and we have the intersection function.
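For reference, a minimal sketch of the vector and ray classes just described might look like this. Member names and the exact operator set are assumptions, not the verbatim SmallPaint code:

#include <cmath>

struct Vec {
    double x, y, z;
    Vec(double x = 0, double y = 0, double z = 0) : x(x), y(y), z(z) {}
    Vec operator+(const Vec& b) const { return Vec(x + b.x, y + b.y, z + b.z); }
    Vec operator-(const Vec& b) const { return Vec(x - b.x, y - b.y, z - b.z); }
    Vec operator*(double s)     const { return Vec(x * s, y * s, z * s); }
    double dot(const Vec& b)    const { return x * b.x + y * b.y + z * b.z; }
    double length()             const { return std::sqrt(dot(*this)); }
    Vec norm()                  const { return (*this) * (1.0 / length()); }
};

struct Ray {
    Vec o, d;                                               // origin and direction
    Ray(const Vec& o, const Vec& d) : o(o), d(d.norm()) {}  // direction normalized on construction
};

The later snippets in this walkthrough assume these two little classes.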
Now, hopefully you remember the quadratic equation that we need to solve, but if you take a good look, you will see that a is missing from here, and the question is: why is that? The answer is that we are taking the dot product of d, the direction vector of the ray, with itself, and since this vector is normalized, of length 1, this a will always be 1. After that we can deal with the discriminant. The discriminant is normally b squared minus 4ac (remember, a is 1 here, so it's omitted), and it sits behind a square root, but the square root is completely omitted here, which looks like a mistake. It is not; it is essentially an optimization step, because you will see that we are testing whether the discriminant is less than zero. If it's less than zero, then we don't need to deal with this equation at all, because the solutions of the quadratic equation only exist in the plane of complex numbers, and that's not a valid t, not a valid distance at which we would intersect the sphere. If this is not happening, then we can compute the square root of the discriminant. Why only after the test? Because if the discriminant is bigger than zero, then taking the square root is not going to change that, so we can postpone the square root calculation until after the test. Note that square roots are really, really expensive, so we can save a lot of computation time by omitting this calculation when we can. There is a really nice square root hack in the source code of Quake 3, which is, by the way, open source; take a look at how people hack together functions to make them work faster than they should, because square roots are super expensive and there are some really interesting tricks to speed them up. We have the plus and minus terms, and the division by 2a is again postponed. That's another interesting question: why is this postponed? You can see that sol2 is divided by 2, and sol1 is also divided by 2, but only after the test. It is possible that if solution 2 is bigger than epsilon, then we take the first expression after the question mark; if not, we look at the second expression, which is another test, and if the answer is no for that as well, then we return 0. This means that we don't have any hits, or the hits are behind us, and we are not interested in intersections that are behind our ray. There is a possibility that we encounter this, and in that case I don't want to waste my time dividing these solutions by 2, because I'm not going to use them. Why am I splitting hairs here? That's an important question: why do we need to optimize so much? Because if you grab a profiler, a program that shows you which functions you spend most of your time in, it will show you that more than 90% of the execution time is spent in the intersection routines. So you have to have a really well-optimized intersection routine; some programs have replaced these expressions with assembly in order to speed them up even more. So how do we compute the normal of a sphere? Very simple. What we have here is p minus c. What does that mean? If I have a minus b, that is a vector that points from b to a. So what this means, look at the figure here, is that for a circle, this vector points from the center towards the given point on the sphere.
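Putting those two tricks together, the intersection routine looks roughly like the following. This is a re-sketch using the Vec and Ray classes from the earlier snippet, not the verbatim SmallPaint code:

#include <cmath>

// Ray-sphere intersection with the optimizations just described: a = dot(d, d)
// is dropped because ray directions are normalized (so a = 1), and both the
// square root and the division by 2 are postponed until we know we need them.
double intersectSphere(const Ray& ray, const Vec& center, double radius) {
    const double eps = 1e-6;
    Vec oc = ray.o - center;
    double b = 2.0 * oc.dot(ray.d);
    double c = oc.dot(oc) - radius * radius;
    double disc = b * b - 4.0 * c;      // discriminant with a = 1
    if (disc < 0.0) return 0.0;         // complex roots: no hit, sqrt never computed
    disc = std::sqrt(disc);
    double sol1 = -b + disc;            // the farther root (times 2)
    double sol2 = -b - disc;            // the closer root (times 2)
    // Prefer the closer positive root; divide by 2 only for the root we actually return.
    return (sol2 > eps) ? sol2 / 2.0 : ((sol1 > eps) ? sol1 / 2.0 : 0.0);
}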
Now, this p minus c is also divided by r, because you could imagine that you have a sphere that is not of unit radius, and if it is not of unit radius, then this normal vector would not have unit length. You could call the normalize function that we have in our vector implementation, but that also contains a square root, so it would be much better to have something faster. Well, if you just divide by the radius of the sphere, you immediately get a vector of unit length, so in the end we get the correct result by simply dividing by r. Excellent. Now we have a perspective camera here. Hopefully you remember it from the first lecture; we are basically just copy-pasting the expression here, we have derived it rigorously and analyzed how exactly it works. The simple intuition is that we have an x and a y as input, which basically means: give me the pixel with this displacement, and what it gives back is the world space coordinates of this pixel. Uniform sampling of a hemisphere: what is this for? If we encounter a diffuse object, what we would like to do is send a ray out on the hemisphere above this point. We need to sample this hemisphere uniformly; the diffuse BRDF is one over pi, or rho over pi if you take the albedo into consideration, and you need a transform in order to do this. There is a reading behind this link on how it works exactly: we are drawing uniform samples on a plane, which is simple, and then we are projecting them onto the hemisphere. That is basically all there is to it. What about the trace function? As you can see here in the first line, this code says that there is a maximum depth. Now, clamping at a maximum depth value is not really optimal, because whatever number you put in there, the higher-order bounces are going to be completely omitted. The real solution is Russian roulette path termination, which we fortunately also have: after a depth of an arbitrary number like five, you start the Russian roulette routine, which basically says that there is a probability of stopping the light path right there; we generate a random number and compare it to this probability. If we don't hit this probability, then we continue our path, but we multiply the output, the contribution of this light path, by the factor that we specified in one of the previous lectures. This was implemented by Christian Mahacek, so kind thanks to him. And you can see that later we are going to use this RR factor to multiply the contribution of a ray. Now, what about the intersection routine? This is definitely not the best way to do it, but it is sure as hell the easiest way to do it. We have this t, which is going to be the intersection distance: how far we are from the start of the ray, how far away this intersection is exactly. The id is basically which object we hit, and then we iterate through all of the objects in the scene, and what we are interested in is calling the intersection routine of each. This returns a t, how far the intersection is, and I am interested in the intersection with the smallest such number, meaning the closest intersection, and also one that is larger than epsilon, because if I tolerated zero, that would mean that self-intersections are accepted, and then every single object that I am standing on would be its own first intersection. I am not interested in that; I know where I am, I just want to know where I am continuing my path. If there is no intersection, then we return; there is zero contribution.
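As a hedged sketch of that brute-force loop, assuming the Obj and Ray types from before and a std::vector of object pointers as the scene (the real program stores the scene a bit differently):

    #include <vector>

    // Find the closest intersection in front of the ray; returns the object index
    // in 'id' (or -1) and the distance in 't' (or a huge number if nothing was hit).
    void closestIntersection(const Ray& ray, const std::vector<Obj*>& scene,
                             double& t, int& id) {
        const double eps = 1e-6;     // reject self-intersections at t close to 0
        t = 1e20;
        id = -1;
        for (size_t i = 0; i < scene.size(); ++i) {
            double d = scene[i]->intersect(ray);   // 0 means "no hit"
            if (d > eps && d < t) {                // keep the closest hit in front of us
                t = d;
                id = static_cast<int>(i);
            }
        }
    }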
Where is the intersection in world space? We denote this by hp, which means hit point: we started the ray at its origin and traveled along its direction by this amount t, so this is where we ended up, and the origin of the new ray is going to be this hit point. What is the normal going to be? Well, we just grab the object that we intersected and ask for the normal with the given function. What is the returned radiance? We simply add the emission term, the emission on all three wavelengths; there is a magic multiplier there, disregard that. Then we continue evaluating the second part of the rendering equation: the first part is the emission and the second is the reflected amount of light. Let's continue with the inside of the trace function. If we hit an object with a type of one, then this is a diffuse BRDF. The next functions compute the next random numbers for the low-discrepancy Halton sampler, and the direction is going to be a completely uniform random sample on the hemisphere above this point. What we have here is this N plus the hemisphere function; this is intuition, not exactly what is happening. I have just shortened the code slightly in order to simplify what is going on here; the code that you will download has the real deal in there. Then we compute the cosine term, very straightforward, and in tmp we instantiate a new vector that is going to hold the result of the recursion, so the radiance gathered by the subsequent samples that we shoot out on the hemisphere is accumulated in this tmp. Now is the time for recursion. We pass the ray and the scene to the trace function. The ray is not the current one, it is the new one: we set up the new hit point and the new direction, and this is what we pass to the trace function. We increment the depth variable because we have computed one more bounce. Tmp is the variable where we gather all this radiance, and we pass every other parameter that is needed to compute one more bounce. The color is then going to contain the cosine term and all the radiance collected from the recursion, and we multiply it with cl.xyz, which is basically the BRDF. So this is the right side of the rendering equation for a diffuse BRDF. This is multiplied by 0.1, which is just a magic constant. Now, what about a specular BRDF, what if we hit a mirror? Very simple: we compute the perfect reflection direction (you can see the ray.d in there), we again set up a variable to collect the radiance, and we are not doing anything else; we just add the radiance as it gets reflected off of this mirror. Then we compute the subsequent bounces, which are stored in this tmp, and this is what we add to the radiance. What about a refractive material? Well, we have every bit of knowledge we need for this, because essentially this is the vector version of Snell's law. What does that mean? The original Snell's law that we derived is in 1D, so it only gives you one angle, but if you are in 3D, you are interested in angles in two different dimensions. This is nothing but the extension of the very same law to a higher dimension. Where is this implemented exactly? You can see the cosine of theta 2. Note that n1 and n2 are treated in a special way, because one of these media is always going to be air, therefore one of the indices of refraction is always going to be 1.
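To make the diffuse case a bit more concrete, here is a hedged sketch of a uniform hemisphere sampler and a simplified diffuse bounce, reusing the types from the earlier sketches. RND() is a plain stand-in for SmallPaint's Halton sampler, and rotateToNormal() is my own helper for turning the local hemisphere sample towards the surface normal; the real code does this with its own orthonormal basis routine.

    #include <cmath>
    #include <cstdlib>

    static const double PI = 3.14159265358979323846;

    // Stand-in for the low-discrepancy Halton sampler: plain uniform numbers in [0,1).
    double RND() { return (double)std::rand() / RAND_MAX; }

    // Uniform sample on the hemisphere around +z: draw two uniform numbers
    // "on a plane" and project them onto the hemisphere.
    Vec hemisphere(double u1, double u2) {
        double r = std::sqrt(1.0 - u1 * u1);
        double phi = 2.0 * PI * u2;
        return Vec(std::cos(phi) * r, std::sin(phi) * r, u1);
    }

    // Rotate a local hemisphere sample so that it lies around the surface normal n.
    Vec rotateToNormal(const Vec& s, const Vec& n) {
        Vec a = std::fabs(n.x) > 0.9 ? Vec(0, 1, 0) : Vec(1, 0, 0); // anything not parallel to n
        Vec t = n.cross(a).normalized();
        Vec b = n.cross(t);
        return t * s.x + b * s.y + n * s.z;
    }

    // The diffuse branch of trace(), schematically: new direction on the hemisphere,
    // recurse, then weight the gathered radiance by the cosine term and the albedo
    // (SmallPaint additionally multiplies by the small constant mentioned above).
    //
    //   Vec d       = rotateToNormal(hemisphere(RND(), RND()), N);
    //   double cost = d.dot(N);
    //   Vec tmp;
    //   trace(Ray(hp, d), scene, depth + 1, tmp, ...);
    //   clr = clr + tmp.mult(obj.cl) * cost * 0.1;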
Back to the refractive case: the rest is just copy-paste. And again you can see that the square root is missing; we postpone it until after the test on cosine theta 2, because if that is not larger than 0, then we are not going to need this variable at all, so we can again postpone it until after the test. What about the direction of the outgoing ray? Well, this is just copy-pasted from the formula that we derived before, as simple as that. Obviously we again need the recursion, because if we go inside a glass sphere, then we compute the refraction, so we will be inside the sphere. What does that mean? For one, we have to invert the normal, because we are inside, so the normals are flipped. And again we call the trace function, which is the recursion, because we are also interested in the higher-order bounces. Onwards to Fresnel's law: what is the probability of reflection and refraction when rays hit refractive surfaces at different angles? Implemented by Christian Hathner, so a big thanks to him. It is very simple; you can see that it is exactly the same as what we have learned in the mathematics. So this is the R0 term, the probability of reflection at normal incidence, and we are interested in the square of that. And note that you don't see both n1 and n2 here; this is because one of them is always going to be air or vacuum, so it has an index of refraction of one. Now, what about the final probability of reflection? It also comes from the formula; we have every bit of information we need, so we just put in this term with the cosine attenuation. How does the main function look? Well, we have some wizardry with C++11 lambda functions, but basically this is just a shortcut in order to be able to add a new sphere or a new plane to the scene in one line of code. Spheres are given by their radius, position, color (by color we obviously mean albedo), emission and type. Type means what kind of BRDF we have: a diffuse, a specular or a refractive BRDF. For planes we have position, normal, color, emission and obviously type, so what kind of material we have. So using just one line of code you can add a new object and specify every piece of information that you need for it. We also add the light source, we specify the index of refraction for the refractive BRDFs, and we specify how many samples per pixel we would like to compute. Onwards to the main loop: we have two for loops that iterate through the width and the height of the image plane. The vector c is color, which is again very sloppy; what it actually means is the radiance that we compute. We instantiate a ray. What is going to be the origin of the ray? This is going to be 0, 0, 0, so this is where the camera is placed. What is going to be the direction of the ray? Well, we connect this ray to the camera plane, we specify which pixel we are computing with i and j, and then we add this weird random number to it. What this means is actually filtering. In recursive ray tracing, what you would do is send the ray only through the midpoint of a pixel and that's it: you compute one sample per pixel. In Monte Carlo path tracing you are computing many samples per pixel, and they don't have to go through the midpoint of the pixel; you sample the area of the pixel. And this gives you the anti-aliasing effect for free if you use it correctly. What is going to be the direction of the ray? Well, this is again the same a minus b.
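The Fresnel term described here is the Schlick approximation. A hedged sketch, assuming that the other medium is air or vacuum with an index of refraction of 1, as stated above; the (1 - cos)^5 falloff is the standard Schlick form, and the exact constants in the real code may differ slightly:

    // Probability of reflection for a refractive surface, Schlick style.
    double fresnelReflectance(double n, double cosTheta) {
        double R0 = (1.0 - n) / (1.0 + n);
        R0 = R0 * R0;                              // reflectance at normal incidence
        double c = 1.0 - cosTheta;                 // cosine attenuation term
        return R0 + (1.0 - R0) * c * c * c * c * c;
    }

    // Typical use in the refractive branch: reflect with probability R and refract
    // otherwise. Because the sampling probability matches the Fresnel weight, the
    // two cancel and no extra weighting is needed (the same idea as Russian roulette).
    //   if (RND() < fresnelReflectance(n, cosTheta)) { /* trace reflected ray  */ }
    //   else                                         { /* trace refracted ray */ }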
Back to that a minus b in the main loop: b is the origin of the ray and a is the camera coordinate, so it is pointing from the origin to the camera plane, and we normalize this expression to get a ray direction of unit length. Now we obviously call the trace function: the number of bounces is 0, and we pass every piece of information that needs to be known in order to compute these bounces, so we provide this initial ray, the scene and everything else. Obviously we also pass the c, and this is going to collect all the radiance there is in the subsequent bounces. And then, after this recursion is done, we deposit all this energy, all this radiance, into the individual pixels. Then we divide by the number of samples, because if we didn't do this (you remember the one over n multiplier everywhere in Monte Carlo integration), then the more samples you compute, the brighter the image you would get, and this is obviously not what we are looking for. At the very end we create a file in the PPM file format, where you can easily write all your contributions. We also start a stopwatch in order to measure how long we have been tracing all these rays. Very simple, very trivial, and when we are done we close the file, it has the image in there, and we also write how long the rendering algorithm has been running for. And basically that's it. That's it: this is effectively 250 lines of code that can compute indirect illumination, caustics and every global illumination effect, and it can compute images like this. This is one student submission from previous years, absolutely gorgeous; this is the fixed version of SmallPaint where there are no errors in the sampling. Another one from Michal Kama; this actually looks, I don't know if you are into the music band Boards of Canada, but this looks exactly like one of their album covers. Love it. Really cool. And also Sierpinski triangles from Christiane Kusla. You can find the link for the code in there, so take a crack at it. Just try it, build different scenes, try to understand what is going on in there, try to mess the code up: I wonder what happens if I don't normalize this vector? Play with it. It is a really small, concise and really understandable path tracer, so take your time and play with it. It's lots of fun and you can create lots of beautiful, beautiful images with global illumination. Thank you.
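Before moving on, here is a hedged sketch of that outer loop: the per-pixel jitter, the division by the number of samples, and a minimal PPM writer. It reuses the earlier types and RND(); camcoord() is a hypothetical stand-in for the perspective camera mapping from the first lecture, trace() is the routine discussed above, and the clamping and timing of the real program are omitted.

    #include <cstdio>
    #include <vector>

    // Hypothetical camera mapping: pixel (x, y) -> point on the camera plane in world space.
    Vec camcoord(double x, double y, int width, int height);

    // The path tracing routine discussed in this lecture; declared here only so the
    // sketch is self-contained.
    void trace(Ray ray, const std::vector<Obj*>& scene, int depth, Vec& clr);

    void render(const std::vector<Obj*>& scene, int width, int height, int spp) {
        std::vector<Vec> pix(width * height);
        for (int j = 0; j < height; ++j) {
            for (int i = 0; i < width; ++i) {
                Vec c;                                   // accumulated radiance for this pixel
                for (int s = 0; s < spp; ++s) {
                    // Jitter the sample inside the pixel area: this is the "filtering"
                    // that gives anti-aliasing for free.
                    Vec a = camcoord(i + RND() - 0.5, j + RND() - 0.5, width, height);
                    Ray ray(Vec(0, 0, 0), a);            // camera sits at the origin, d = a - 0
                    Vec sampleClr;
                    trace(ray, scene, 0, sampleClr);
                    c = c + sampleClr;
                }
                pix[j * width + i] = c * (1.0 / spp);    // the one over N of Monte Carlo integration
            }
        }
        // Minimal PPM output (no clamping or gamma, unlike a real program).
        FILE* f = std::fopen("out.ppm", "w");
        std::fprintf(f, "P3\n%d %d\n255\n", width, height);
        for (const Vec& p : pix)
            std::fprintf(f, "%d %d %d ", (int)p.x, (int)p.y, (int)p.z);
        std::fclose(f);
    }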
Let's talk about assignment 3. This is the fun stuff. There is not a lot of writing to do in this one; assignment 2 is where you need to do quite a bit of mathematics, and here it is only the fun stuff. The deadline for this is going to be well after the previous assignment, so it is not like I am putting its deadline right next to the previous one; after you have completed that, you will still have enough time to do this one. So, lots of blah blah blah in there. Part one: please download this file, and in there you can see the implementation of my simple Monte Carlo integrator that we have played with. Put together a function, any function that you like, and integrate it with the Monte Carlo integrator. First, write it down in some way in LaTeX and put it in a PNG or PDF or whatever, so we can see which function you chose. Then be the mathematician, or be the engineer: do the actual calculation through analytic integration, or just open Wolfram Alpha and let it do the hard work. And then be the Monte Carlo person and do the actual Monte Carlo integration for this function, and see whether you get the same result. This is how you can prove that your calculations are correct. Now, the pro version of the same thing is to modify the code to be suitable for higher-dimensional integration. Create any function, polynomials, cosines, exponentials, whatever function you would like, and do multi-dimensional Monte Carlo integration. This is literally a one-line change in the code, and then you can integrate higher-dimensional functions with this very small C++ program. And if you feel like a pro, you can also evaluate the speed of convergence: how far are you from the solution after 10 samples, 1,000 samples, 10,000 samples? Just plot the results. And if you feel adventurous, you can also snatch my code from this rendering program, SmallPaint (this is the one with the painterly look): you can snatch my code for the Halton low-discrepancy series, sample a one- or two- or however-many-dimensional function with the Halton series, and see how well stratified the samples are. Second part, and this is the even more fun part: there is going to be a scene contest. We have this Monte Carlo path tracer implemented in 250 lines; read through the code, try to understand what is going on (we are going to talk through it anyway), and just render an interesting scene. Put together a full scene with a given number of objects and see what you can do with it. If you go to this website, you will be able to see a gallery of the results from previous years, and some of these guys and girls have made amazing, amazing artistic works with the path tracer. So there is going to be a contest: submit your result, and make sure to generate completely converged images, so no noisy images, converged ones. Don't try this in the last five minutes; this takes time, but the results are worth it. And as in previous years, we will again make a gallery of the submissions this year, so you can be proud of your own work after you are done with this course and show your friends how well you have done. The e-mail subject is going to be the same as before, only the number of the assignment changes, and I am very excited to see your results, even in the middle of the night.
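As a hedged illustration of the first part of the assignment (not the code from the downloadable file), here is a tiny Monte Carlo integrator over an axis-aligned box: the 1D case and the multi-dimensional case really only differ in how the sample point is drawn, which is the kind of one-line change mentioned above. The function names and the test integrands are my own choices.

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <functional>
    #include <vector>

    double rnd() { return (double)std::rand() / RAND_MAX; }

    // Uniform Monte Carlo integration of f over the box [a_k, b_k] in each dimension:
    // average the samples, then multiply by the volume of the integration domain.
    double mcIntegrate(const std::function<double(const std::vector<double>&)>& f,
                       const std::vector<double>& a, const std::vector<double>& b,
                       int n) {
        double volume = 1.0;
        for (size_t k = 0; k < a.size(); ++k) volume *= (b[k] - a[k]);
        double sum = 0.0;
        std::vector<double> x(a.size());
        for (int i = 0; i < n; ++i) {
            for (size_t k = 0; k < a.size(); ++k)
                x[k] = a[k] + rnd() * (b[k] - a[k]);   // uniform point in the box
            sum += f(x);
        }
        return volume * sum / n;
    }

    int main() {
        // 1D check: the integral of 2 sin^2(x) on [0, pi] should be pi.
        double one = mcIntegrate([](const std::vector<double>& x) {
            return 2.0 * std::sin(x[0]) * std::sin(x[0]);
        }, {0.0}, {3.141592653589793}, 1000000);
        // 2D example: the integral of x*y on [0,1]^2 should be 0.25.
        double two = mcIntegrate([](const std::vector<double>& x) {
            return x[0] * x[1];
        }, {0.0, 0.0}, {1.0, 1.0}, 1000000);
        std::printf("%f %f\n", one, two);
        return 0;
    }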
I just met one of my former students at a conference, and he immediately told me that he has the fondest memories of the rendering course, because he was once late with his assignment and thought that he needed to work on it all night to get it done. So I got an email from him with the results at 3 am, and he was very delighted to see that five minutes later, so five minutes after 3 am, he got an answer with something like "passed". At first he thought, oh my god, I messed up, did the mail even go out, because who answers email at that hour? So he went back five minutes later, checked the mail again: no, okay, wonderful, wonderful. Okay, that's basically it, and you can send it to my address. Yes.
Quick question: we haven't really discussed Russian roulette yet, so when do we stop tracing a ray? Well, what we did so far is a hard cutoff: we said that there is a maximum number of bounces that I am going to compute, and the rest of it I am not interested in. The problem is that this is a biased solution, and we are missing some energy in the image, because if I computed more subsequent bounces, I would accumulate more radiance, so I would get a perhaps brighter, but more faithful to reality, image. We can do much better than that. Namely, there is a technique that can compute an infinite number of bounces, an infinite depth, and that boggles the mind, because think about what is really happening: if I wanted to compute physical reality exactly, I would need a maximum depth of infinity, and then I couldn't even finish one sample per pixel, because I would have to bounce the ray indefinitely. But there is a mathematical technique that gives you the result as if you had computed an infinite number of bounces. This is again statistics and probability, which is usually very difficult to wrap one's mind around, but we can actually solve this problem. So how can we overcome this? What we are looking for is an estimator that converges to the expected value of the integral. Okay, that's fine, I am looking for the expected value part. But having an estimator with the right expected value is one thing; there is not only the expected value, there is also the variance. So I may have multiple estimators, and what I am looking for is one that has the lowest possible variance. What I can do is this: step by step, at each bounce, I decide whether I will terminate the path, so I would stop the light path right there, or whether I will continue. But if I continue, I multiply the collected radiance by something, and the question is what this something should be. I would like to relate this to, for instance, Fresnel's law. With Fresnel's law we could compute the probability of reflection and the probability of refraction. Take a glass window: with some probability the light continues through the window, and with some probability it gets reflected. Now, what I can do is run many samples and add them together, or I can do something else: I don't run many samples, I hit the window once, I compute that there is, say, an 80% chance of refraction and a 20% chance of reflection, and I send out only one ray in each direction, but I multiply each by the relative probability of that event. So I am not tracing 10,000 rays; I send out one ray multiplied by 0.8 in one direction, and one multiplied by 0.2 in the other direction. Yes, if I computed more and more samples I would get a less noisy result, but statistically this is sound, which means that it converges to the expected value of the integral. And Russian roulette does the exact same thing, but it gives you an infinite number of bounces: with a given probability I stop, and with a given probability I continue, but I multiply the collected radiance by a factor, and this factor, in the Fresnel example, was the probability of that event. What does the algorithm look like? I choose a random variable, let's call it xi, on [0, 1], and with a given probability, let's call it p, I continue the light path after hitting something; at every bounce, I roll the dice.
And if I hit this probability, I continue my light path, but I multiply the collected radiance by something. Instead of just giving you the end result as you would see it in a textbook, I will try to show you the thought process of how someone can put this together. I need to multiply by something, and I don't know yet what this something should be; we will find out together. And if I don't hit this probability, then I terminate the light path, so you can imagine this as if I continued the light path but multiplied everything it would gather, the whole collected radiance, by 0. I spoiled the 0 for the second question mark, so that's a shame, but you would have found it out in a second anyway. So I am looking for an expected value: the L_i with the hat is an estimator, and on the right side is the actual L_i. Whatever happens in the middle is some magic, but the constraint is that the expected value of the estimator should be the same as the original quantity, the actual incoming radiance. There is a probability of continuation, and if I don't hit this probability, then I stop. The stopping part is trivial: if I stop, I multiply this term by 0. So imagine that I would continue the light path; I am not going to waste time tracing it to infinity, because it is multiplied by 0 anyway. Now the question is: what is the other question mark? What I know is that on the right side I want to get L_i. So forget the second term: what do I need to do with this expression on the left in order to get L_i on the right side? Raise your hand if you know the answer. I want you to take a few seconds and think about it. What do I need to do to get L_i from this expression? The rest is multiplied by 0, so that part doesn't matter. Raise your hand if you know. Maybe, maybe. Yes, please. Yes, that was it: I need to kill the p, because I don't want a p in there. So there is going to be a fraction, and the denominator is going to be p. With this I killed the p, but there is no one left in there, and I want someone in there, and that someone is L_i. So I kill the p with my fraction, and in the numerator there is going to be L_i. If I do this, then what I am doing is statistically sound. Let me try to give you the intuition again, because this takes time to wrap your head around. It is almost like the Fresnel example: what you could do is send out 800 rays in one direction and sum them up, or what you could also do is send out only one ray and multiply it by 800. No, I would not get the same result, but I would get the same expected value, and over time the variance around this expected value shrinks if I do this many times. So this is the intuition behind the whole thing. What is a good choice for the p? Because this has been a free parameter so far; what should I put in there? Well, within reasonable limits, I could say it almost doesn't matter: you could put many sensible choices in there and it would work. But quickly, let's review the cases where it would not work. Obviously, there are two very poor options. If you put p equals zero, then this means that you never continue your path, you always stop, which is obviously not great. What if I say p equals one? Well, this means that I always continue, I never stop. You could say that mathematically this is sound, but you could never finish even one sample per pixel.
If you are of that persuasion, you could say that, in theory, you have a machine that never stops; it doesn't make too much sense if you are looking for a practical solution. Now, anything in between the two is completely fine. The only difference, since I have shown you that the expected value is the same as the actual quantity that I am looking for, is the variance. So the estimator oscillates around the very same number, but the magnitude of the oscillation depends on this choice. And what you can prove, although it is actually very easy to visualize, is that a good choice for the p is usually something that samples brighter paths for longer and terminates darker paths faster. This is the same idea as matching the blue function with the green bars: I want to reconstruct the brighter regions more faithfully than the darker regions, because this is what gives a smaller error. So what you can plug in there is, for instance, the albedo of the material. If you have a really bright white wall, you would want to continue with a very high probability, but if you have a really dark object, like the curtains on either side of the room, you would want to stop with a much larger probability. So this is how Russian roulette works. We will also code this, so in the next lecture you will see the whole thing that we studied here in code.
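As a small preview of that code, here is a hedged sketch of how such a Russian roulette step can look inside a path tracer (not the exact SmallPaint routine): after a few guaranteed bounces, the path is terminated with probability 1 - p, and surviving paths are reweighted by 1 / p so the expected value stays the same. Using some albedo-like brightness of the hit object as p is one of the sensible choices mentioned above; maxComponent() in the usage comment is a hypothetical helper.

    #include <algorithm>
    #include <cstdlib>

    double rnd01() { return (double)std::rand() / RAND_MAX; }

    // Returns the weight to apply to the rest of the path, or 0 if the path
    // should be terminated. 'brightness' is some scalar measure of the albedo,
    // for example the maximum of its three components.
    double russianRoulette(int depth, double brightness) {
        const int minDepth = 5;                 // always compute the first few bounces
        if (depth < minDepth) return 1.0;
        double p = std::min(0.95, brightness);  // continuation probability, capped below 1
        if (rnd01() > p) return 0.0;            // terminate: as if multiplied by 0
        return 1.0 / p;                         // survive: reweight so the expected value is unchanged
    }

    // Usage inside trace(), schematically:
    //   double rr = russianRoulette(depth, maxComponent(obj.cl));
    //   if (rr == 0.0) return;           // stop this light path
    //   ... the gathered radiance is multiplied by rr ...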
Let's talk about low-discrepancy series. What we have been doing so far is random sampling: I have a random number generator, it generates samples, and these are the samples that I am going to use. And there is a natural intuition that we could perhaps do much better than that. Because what I am looking at is the hemisphere, and I am shooting samples onto the surface of this hemisphere, and I could do this deterministically. What if I have an algorithm that doesn't generate random numbers, but makes sure that if I have 100 samples, these samples are well distributed over this hemisphere? If you do this, you may get much better convergence and much better looking results. So below you can see samples generated by a random number generator in 2D, and up there the Halton sequence, which is what we call a low-discrepancy series; what this means is that it is not completely random, but it tries to fill the space reasonably evenly. Now, this is not trivial. Your first thought might be that you could just take a grid and put points on the grid, and then you would have samples that are really well distributed. This you can do in 2D, in one line of code, just by spacing the points evenly. But there are mathematical results that tell you that this is absolutely terrible in higher dimensions, so if you have higher-dimensional spaces, then this is not well suited. The most commonly used sequences are the Halton series, the Sobol series and the van der Corput series. You can do this low-discrepancy sampling in many different ways. One thing this buys you is a more even distribution of the noise, because you are sampling these hemispheres in a reasonably stratified way, so it cannot really happen that one side of the hemisphere is sampled almost exhaustively and the other one is completely neglected. So you get images with a noise distribution that looks better. That's a plus. But what is even more important is that this is deterministic. If you are rendering an animation, imagine completely random sampling: on frame number one you distribute your samples, then comes frame number two and you distribute your samples in a completely different way. So the noise looks like this on frame one and can be completely different on frame two, and until you have converged perfectly, you will have these issues that we call temporal flickering or temporal aliasing, because if you take 25 of these frames every second, the noise is wildly different from one to the next; you have computed different things in every frame. The Sobol series and all the low-discrepancy series help you with that, and this is one reason they are loved in production: in subsequent frames you compute the very same sample pattern. So these are the advantages. Okay? Disadvantages? Well, the disadvantages can also be significant: it is often not trivial to implement such a thing correctly. If you take a look at this image, these walls are not textured: this one is a single shade of green and this one a single shade of red, if you will. It should not be rendered like this at all; this is a buggy image. I implemented the Halton sampler, and the problem I encountered is called correlated dimensions.
And this is a serious problem that you can encounter. I will not go into the details, but you mess up just one small detail and you can get an image like that. Well, this is actually a delightful way of failing. I don't know about you, but many of my programming errors in these calculations look quite like this. So if you make a mistake in global illumination rendering, even your errors are prettier than in other fields.
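Going back to how such a sequence is generated: the Halton sequence is commonly built from the radical inverse, where the sample index is written in a given base and its digits are mirrored around the decimal point, with a different prime base per dimension. A hedged minimal sketch (production implementations typically also scramble the digits, precisely to avoid the correlated-dimension artifacts shown above):

    // Radical inverse of index i in the given base: write i in that base and
    // mirror its digits to the other side of the decimal point.
    double radicalInverse(unsigned int i, unsigned int base) {
        double result = 0.0;
        double f = 1.0 / base;
        while (i > 0) {
            result += f * (i % base);
            i /= base;
            f /= base;
        }
        return result;
    }

    // The 2D Halton point with index i typically uses bases 2 and 3:
    //   x = radicalInverse(i, 2);  y = radicalInverse(i, 3);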
Let's take a look at how the exact algorithm looks. I have a recursive function, and the first thing I check is whether I have reached the maximum depth that I would like to render. This means that if I say I will trace at most 5 bounces, then this is going to be 5: did I reach this number of bounces? Yes? Okay, then just stop and return a black color. Then what I am looking for is the nearest intersection. You remember from the previous lecture that this means parametric equations that we solve for the ray parameter t. I am intersecting a lot of objects, and I am only interested in the very first intersection. If I didn't hit anything, then I return a black color, because there is no energy coming in along this ray. Now, if I have the intersected object, I am interested in the emission and the material of this object. The emission means that if this is a light source, then it has a non-zero emission, and the material can be retrieved as well: diffuse, glossy or some more complicated multi-layered material. This I also store. What's up next? Well, I would like to construct a new ray, because I am going to trace the next bounce. This new ray starts wherever I hit the object: if I hit the table, I create a new ray that starts from the table, and I set the outgoing direction according to circumstances that we are about to work out. What we have here says random unit vector on the hemisphere above the point where the object was hit, and that sounds like the diffuse case to me. So I generate a random unit vector on this hemisphere, and this is going to be the outgoing direction. Now let's put together the elements of the rendering equation. I have the cosine theta, which is the light attenuation. I have the BRDF term, and it seems that the cosine theta, the light attenuation, has been folded in here next to it, and there is the albedo of the material: how much light is absorbed and how much is reflected. And then what I do is call the very same function that you see in line number one; so this is a recursive function, and I start the same process again with a new ray, a new starting point and a new direction. In the end, once I have traced a sufficient number of rays, I exit this recursion and I collect the result in this variable called reflected. And then comes the elegant representation of the rendering equation: the emission, the Le, plus the integrated part, which is the BRDF times this reflected term, which contains all the recursive bounces. So this means that I shoot this ray out on the hemisphere, there are going to be many subsequent bounces, and I add up all this energy into the reflected incoming light. So this is the pseudocode. This is not something that you should try to compile or anything like that; this is what we will code during the next lecture. It is just a schematic overview of what is happening, and it is actually very clear: we shoot the ray, we bounce it around the scene, and then we hopefully hit the light source at some point. Even if we hit the light source we continue, but hitting the light source is important, because this is where the emission term comes from. Let me show you what is going on if we don't hit light sources. So this Le is the emission term on the left side here, and we add it to the end result at every recursion step.
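Here is a hedged transcription of that pseudocode into compact C++, reusing the Vec, Ray, Obj, closestIntersection, hemisphere, rotateToNormal and RND sketches from earlier. Like the schematic on the slide, it glosses over the one over pi in the diffuse BRDF and the sampling pdf (the importance sampling discussion later shows how these factors are handled), so treat it as an illustration of the structure, not a finished renderer.

    #include <vector>

    // Schematic recursive path tracer, following the pseudocode discussed above.
    Vec tracePath(const Ray& ray, const std::vector<Obj*>& scene, int depth, int maxDepth) {
        if (depth >= maxDepth) return Vec(0, 0, 0);          // reached the bounce limit

        double t; int id;
        closestIntersection(ray, scene, t, id);              // nearest hit along the ray
        if (id < 0) return Vec(0, 0, 0);                     // nothing hit: no energy this way

        const Obj& obj = *scene[id];
        Vec hp = ray.o + ray.d * t;                          // hit point in world space
        Vec N  = obj.normal(hp);

        // Diffuse case: the new direction is a random unit vector on the hemisphere.
        Vec d = rotateToNormal(hemisphere(RND(), RND()), N);
        double cost = d.dot(N);                              // cosine light attenuation

        Vec reflected = tracePath(Ray(hp, d), scene, depth + 1, maxDepth);

        // Rendering equation, schematically: emission + BRDF * cosine * incoming radiance.
        Vec Le(obj.emission, obj.emission, obj.emission);
        return Le + obj.cl.mult(reflected) * cost;
    }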
Now, the fundamental question is: what happens if we have a long light path that never hits the light source? We are using completely random sampling, or maybe some smart importance sampling, but then we never get this emission term. What does this mean? That the radiance we get back from the program is going to be zero. The corollary of this is that you only get radiance, you only get an output, from light paths that hit the light source. If you don't hit the light source, you don't know where the light is coming from, so you return a black sample. And this is obviously a really bad thing, because you are computing samples and samples and samples, perhaps on your GPU, but they don't return you anything. So a very peculiar fact about naive path tracing is that if we have a small light source, the convergence of your final result is going to be slower. Why? Small light source, more variance, slower convergence, because we need the random rays to hit the light source, and if it is small, we rarely hit it. Exactly. The relative probability of hitting the light source is lower for a small light source, all the way up to the extreme where we have a point light source. And if we have a point light source, we will see that we are in trouble, because what I would expect from my path tracer is to return something like this, if we imagine a point light source in here; but this is not what we end up with. I would expect it to return the correct result, but many people have reported on many forums on the internet: hey, I implemented it, but this is what I got, it doesn't work at all. All this Fresnel's law, Snell's law, total internal reflection, Monte Carlo integration, for a black image. I mean, I could generate this with five lines of C++; why do we even bother? We get nothing. Why is that? Point light source, black image. Why? Yes. Exactly, exactly. A point represents a location in mathematics; it does not have an area. So technically, hitting a point light source is impossible, because this is the same thing that you would study in statistics: if you have a single number on a continuous scale, what is the probability of hitting exactly this number? Zero. That's the point: it has no surface area, it is infinitely small, we cannot hit it. So this is the reason for your black image, and if you read the forums on the internet, you will find plenty of this. Now, we could also sum up our findings in internet meme style, if you will: if you would like to compute path tracing with a point light source, without the technique that is called next event estimation, then you would expect a wonderful image, but this is what you are getting instead. The first question is obviously: how do we work around this? What we can do is that every time we hit some object in the scene, a diffuse object or anything that is not a light source, we compute the direct effect of the light source on this point in the scene. So this is a schematic to show what is going on: I start from the viewer, I hit this sphere, and I don't just continue tracing the new ray outwards, I also connect this point to the light source and compute the direct illumination. This is the schematic for path tracing without next event estimation, and this is with next event estimation: at every intersection, I connect to the light source. In this case, this first connection is actually occluded.
It is blocked by the ball here, and at the third bounce you get some contribution from the light source. The question is, how do we do this exactly? Well, this was the topic of assignment 0, and the formula that you saw in assignment 0 is exactly what you should be using. What was in there? We were interested in how much radiance a spherical light source emits in one direction, so there was a term with the 4 pi: you need to divide by the surface area of the sphere, which gives the division by 4 pi, and there is the attenuation term, the one over distance squared, the same as in the law of gravitation or the law of electric fields. It means that the further away I am from the light source, the less light arrives. This is a really good technique for multiple reasons. One of the reasons is that you get contributions from every bounce when computing the light. Before I proceed, I would like to point out that here we are talking about this Le, the emission term: we are adding parts of this emission term at every bounce. If I hit P1, I add something; if I hit P2, I add something; if I hit P3, I also add something. But when I then hit the light source itself by chance, I don't add the emission term anymore, because I would be adding it twice. So the one Le that you would normally add when you happen to hit the light source is now distributed over the individual bounces. Why is this great? One, you can render point light sources, because the direct effect you can compute analytically even though you can never hit the light source itself by chance. Two, you will have less variance, because it is no longer the case that I either hit the light source or I don't: I statistically always get a contribution from the light source unless the shadow ray is occluded. So I am adding many samples with small variance, not playing a lottery with one sample where you either win or get nothing. I can therefore lower the variance, which means that my images converge faster. And the other thing is that, because there are contributions from every bounce, I can separate direct and indirect illumination. A lot of people do this in the industry, because the movie industry is nowadays using path tracing. I cannot say that as a sweeping, all-encompassing statement, but, for instance, this movie is now made with global illumination. Why do they mostly use path tracing? Because it looks insanely good and it is very simple. It took them more than 20 years to replace their old systems, which they really liked, and now they are using global illumination; it has taken a long time, but the benefits of global illumination are now too great to ignore. And what happens is that they get a physically based result, but this is not always what the artist is looking for. If you have worked together with artists, then you know they will say: okay, you have computed a beautiful image, but I would like the shadows to be a bit brighter. The engineer says: well, this is not possible, I can compute what happens in physical reality and that's it. But the artists are not interested in physical reality; they are interested in their own thoughts and their own artistic vision, and they would like to change the shadows. You could technically make one of the light sources brighter, and then the shadows would get brighter, but then the artist says: hey, don't change anything else in the scene, just change the shadows.
And then you could pull out your knowledge of the rendering equation and see that the radiance coming out of a point depends on everything around it, so you cannot just make something brighter without the nearby things also getting brighter; you cannot circumvent that. What you can do with next event estimation is generate an image from the first bounce: you get one image into which you deposit the radiance that you measured at P1. That's one image. Then you create another image which only contains the second bounce, P2, and upwards. So you have multiple images, and you could technically just add all of these images together with simple addition and you would get physical reality. But if the artist says, I want stronger indirect illumination, then you grab this buffer, this image that holds the second and higher-order bounces, and you can do some Photoshop on it, or whatever you want, without touching any of the others. So you have a nice separation of direct and indirect illumination, and in the movie industry they love it; they are playing with it all the time. Later you will also see algorithms that behave differently on the direct illumination and on the indirect illumination, and you can only do that if you separate these terms. So let's see path tracing with next event estimation. I have the very first bounce, and before I continue my ray, I send the super classical shadow ray to the light source: I choose a point on the light source, and I add this direct contribution of the light source at this point. And then I continue. Let's go over the terms; sorry, we use many terms for the very same thing, and this is why I write all of them down, because if you read the forums, if you read the papers, you will see these terms and they all mean the same: explicit light sampling, next event estimation, the very same thing. So I continue my ray, and I again connect to the light source with a shadow ray, and I continue on and on. Imagine that this third one is an outgoing ray that actually hits the light source by chance. If it does, I don't add the emission term there, because I already did in the previous step. This is very important. Now, you have seen the results for the point light source: nothing versus something that looks pretty good. But even if you have a reasonably big light source, like this large area light, I told you that you also get a variance suppression effect. This is some small number of samples per pixel, I think two, maybe three samples per pixel, which means that I grab one pixel and I send three rays through it. Now, this you can do in two different ways, and if you start to use renderers, you will see how this happens. Some renderers render tiles: they start with a few pixels, and if you say you want 1,000 samples per pixel, they take one or four or however many threads you have on your machine, work on that many pixels, and shoot more and more samples through them; after they get to 1,000 samples, they move on, and they show you really well-converged pixels. And what we call progressive rendering is the opposite: you pick one pixel, you shoot a ray through it, but only one, then you go to the next, and then to the next. You see an image that has some amount of noise, and progressively you get less and less noise.
So what you see here is a progressive render, first without next event estimation: we only get contributions if we happen to hit the light source, and if we don't, we get a black sample. Now look closely, this is with next event estimation. There is a huge difference: such a simple technique can speed up the rendering of these images by orders of magnitude. You can also play with this program, by the way; it is implemented on Shadertoy, so when you read this at home, just click on the link and play with it. It's amazing.
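Here is a hedged sketch of what one such next event estimation step can look like for a spherical light, following the assignment-0 style formula quoted above (emission divided by the 4 pi surface area term and by the squared distance, with a cosine term at the receiving surface). It reuses the earlier types and the closestIntersection helper for the shadow ray; the sampling of the light's surface and the exact constants are simplified compared to a production implementation, and the result is still to be multiplied by the BRDF of the surface.

    // Direct contribution of a spherical light (center lc, radius lr, emission le)
    // at surface point hp with normal N. Returns black if the shadow ray is blocked.
    Vec directLight(const Vec& hp, const Vec& N, const std::vector<Obj*>& scene,
                    const Obj* light, const Vec& lc, double lr, double le) {
        Vec toLight = lc - hp;
        double dist2 = toLight.dot(toLight);
        Vec dir = toLight.normalized();

        // Shadow ray: is anything other than the light itself in the way?
        double t; int id;
        closestIntersection(Ray(hp, dir), scene, t, id);
        if (id >= 0 && scene[id] != light && t * t < dist2) return Vec(0, 0, 0);

        double cosTheta = N.dot(dir);                        // receiver cosine term
        if (cosTheta < 0) cosTheta = 0;
        double area = 4.0 * PI * lr * lr;                    // surface area of the light sphere
        double contrib = le * cosTheta / (area * dist2);     // emission / area / r^2
        return Vec(contrib, contrib, contrib);
    }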
Let's talk about importance sampling, because so far we have always been talking about uniform distributions. We will see that it is usually not a great idea to sample an arbitrary function with a uniform distribution. What I am usually after is this: I have a function that I would like to reconstruct and I have a fixed sample budget, and from this budget, say x samples or x samples per pixel, I would like to get the best estimation possible. Now, we have already written up the formula for importance sampling: this was when I divided by the p of x, because I don't only have the f, I also take the p into consideration, and there I can plug in an arbitrary sampling distribution; it can be a uniform distribution, it can be a Gaussian distribution, it can be many things. Now, take a look at this. I would like to integrate this function, the blue line. It is a spiky function, and imagine that the green bars are the sampling distribution that I actually use. It doesn't look like a good idea. Why? Because the green bars are too high on the right side and too low in the middle. Why is this a problem? It is not representative of the actual function. Exactly: the sampling distribution has to represent the function that we would like to sample. Why? Well, give it a few slides. If the function takes high values in some regions, that means that if I miss out on the reconstruction of those regions, my error is going to be high. So if there is a Gaussian-like or spiky function, I would want to put more samples where the spike is, because that is where a large error can come from. If I can reconstruct that region well, I am doing much better than if I were sampling the parts that have very small values, the flat regions that are almost zero. So let's put more samples in the regions where the function is larger, and if we do this correctly, then what we are doing is called importance sampling. What we are looking for is that these green bars should match the blue function. Again, I am looking for the expected value of f over p; I divided and multiplied by p of x in the expected value formula, and the question is what the p that I plug in should be. It can be the uniform distribution, or it can be an arbitrary distribution. What would be a good distribution? Usually one that is proportional to the function: where the function is large, the sampling distribution has to sample that region often, so it should also be large; where the function is small, it should also be small. And we can also say that in regions where the function is zero, I don't want to put any samples at all, because there is nothing to reconstruct there, the area under the function is zero. We will later deal with how to construct distributions like that, but for now, imagine that I have in my hand a distribution that represents the function well. Now, this should give me quite a bit of an advantage, because otherwise it is not worth the effort. So this is a rendered image with no importance sampling; look closely, and now you see the result with importance sampling. Both were running for the same amount of time, and this is the difference that you get from simple importance sampling. It means: wherever there is more light, I will put more samples.
And in the darker regions, I am more economical with my samples. Let's take a look at another example: you can see how noisy this region next to the car is, and with importance sampling, this is handled much better. Now we are finally at the moment where we can attempt to solve the rendering equation. This infinite-dimensional singular integral equation, this problem child, is so difficult that at first it seems hopeless to even try. But now it seems that we have every tool we need in order to solve it. So, again, the intuition: the first term after the equality sign means that there are objects that are light sources, if you will, and this I have to account for. But this is not the only source of light: as these objects emit light, there will be other objects that reflect this light. This means that I receive an amount of light and I also reflect an amount of light, taking into consideration the light attenuation and the BRDF, which encodes the material properties of the object at hand. Now let me relate this to Monte Carlo integration. Again, the formula: I am sampling f over p, and this is the estimator for integrating f of x from a to b. Now, what is f? The f is what you see up here on the right, this whole thing, sorry, just the integrand. And p will be something that we choose. So I just substitute the very same thing on the right side: incoming light times the BRDF times the light attenuation factor, and there is the p, which is now the sampling probability for the outgoing direction. This means that I hit an object and I have to make a choice: which outgoing direction should I go for, where should I continue this light path? This is going to be one direction on the hemisphere. So this is the Monte Carlo estimator for the actual integral. Let's imagine that we are trying to solve this for a diffuse object. A diffuse BRDF is rho over pi. Earlier it was one over pi; why, and how can a BRDF be just a number? A perfectly diffuse material means that all possible outgoing directions have the same probability. If I hit this table, if it were perfectly diffuse (we talked about the fact that it is actually glossy), then whatever the incoming direction, I hit it somewhere and the outgoing direction can be anywhere on the illumination hemisphere, all with the same probability. What does rho mean? Rho is the albedo of the material, because if I just say one over pi, this means that every ray that comes in has an outgoing ray; this object would be completely reflective, it wouldn't absorb anything, and most objects are not like that. This wavelength-dependent absorption is what we can represent with rho. Now, how does the equation look? I just substituted rho over pi for the BRDF, so it seems that we know everything in there except the incoming radiance. So what do we do with the sampling distribution? When we hit this diffuse object, we send out samples and try to collect the incoming radiance, the Li, with this sampling distribution. The question is: in this case, what would be a good sampling probability, what function should we use to sample the hemisphere? What we said is that this p, the denominator, should be proportional to the numerator. Now, the Li we don't know; this is a part that we cannot really estimate in advance.
Because I would have to send out many samples on this hemisphere to know exactly how much light is coming in, but by the time I know how much light is coming in, I have already done the sampling, so at that point the sampling distribution is of no use to me anymore. So this part we leave out of the importance sampling; this we cannot account for as of now. But this rho over pi times cosine theta we can deal with. So let's imagine a sampling distribution which is cosine theta over pi. The goal of this is that these terms kill each other: I have a cosine theta in the numerator and in the denominator, and the same with the pi, so only this part remains. I could technically also put the albedo of the given material into the sampling distribution, but let's stay general for now. So in the end I have this very simple expression. Look at this: this is going to be the solution of this infinite-dimensional integral. What it says is that I am going to send samples out on this hemisphere and I am going to average them, because we divide by N, and that's it. And then, if you do something like this and you let it run for long enough, you can render beautiful images like the ones you will see in a moment.
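Here is a hedged sketch of one standard way to draw such a cosine-weighted hemisphere sample (sample a unit disk uniformly and lift the point onto the hemisphere, which yields a pdf of cosine theta over pi), together with a comment on how the diffuse estimator simplifies; this is illustrative and not necessarily how SmallPaint does it.

    // Cosine-weighted direction on the hemisphere around +z.
    // The resulting pdf is cos(theta) / pi.
    Vec cosineSampleHemisphere(double u1, double u2) {
        double r = std::sqrt(u1);
        double phi = 2.0 * PI * u2;
        double x = r * std::cos(phi);
        double y = r * std::sin(phi);
        double z = std::sqrt(1.0 - u1);          // this is cos(theta)
        return Vec(x, y, z);
    }

    // For a diffuse BRDF (rho / pi), the estimator term
    //   (rho / pi) * Li * cos(theta) / pdf
    // with pdf = cos(theta) / pi collapses to rho * Li:
    // the cosine and the pi cancel, exactly as described above.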
We have run into a problem. We wanted to integrate the function 2 sin squared of x from 0 to pi, and through engineering or through mathematics we realized that the result should be pi. What we did is write code that integrates this through Monte Carlo integration, and we got one instead. So there is some problem, some insufficient knowledge that we have to remedy in some way. So why don't we take a look at another example, which will reveal what we are missing. Let's integrate this unbelievably difficult function, f of x equals x, from 1 to 5. Obviously the antiderivative is x squared over 2, and the brackets show that we have to substitute from 1 to 5. What we get in the end is 12. Now, let's do Monte Carlo integration. Let's pretend that we cannot integrate this function analytically like we just did. I take three completely random samples of this function. What does that mean? I evaluate f of x at 1, and since f of x is x, I get 1. I evaluate it at 3 and I get 3, and at 5 I get 5. So I have three samples now, and what I do is simply average them: 1 plus 3 plus 5 over 3. The result is 3. But it shouldn't be that, right? Because the result through analytic integration is exactly 4 times that. So something is definitely wrong with this Monte Carlo integration scheme. What we know is that 3 is exactly one quarter of 12, so we see that there is a difference of a factor of 4. And if you take a closer look at the integration domain, you will see that 4 is exactly the size of the integration domain: we are integrating from 1 to 5. So, just empirically (this is one angle from which to look at the problem, and you will see multiple angles; this is more like the engineering way of solving things), you don't know how to derive the full and correct solution, but you see that there is a factor of 4, and 4 is the size of the integration domain. Well, why don't we multiply by that and see what happens? And obviously it works. If we multiply by the size of the integration domain, we get the result we are looking for. So let's change the code: I multiply the previous average by the size of the integration domain, which was from 0 to pi, and obviously I get pi as a result, which is the correct solution for the previous integral. Now, this is great and looks very simple, and apparently this technique seems to work, but we still don't really know what is happening here. So we should use some black magic, or mathematics if you will, to see what is really going on. Imagine that we sample a function with a uniform distribution on 0 to pi. What does that mean? I have an interval from 0 to pi, I generate random numbers on it, and every single number has the same probability. So this distribution looks like one over pi, regardless of the parameter x, because it doesn't matter which part of the domain I choose, it has the same probability. Now, what we are essentially doing is integrating the function f of x multiplied by this sampling probability. Why? Because imagine that some regions of the function had zero probability of being sampled: imagine that I am integrating from 0 to pi, but I only take samples from 0 to 2. Then there is a region of the function that I am never going to visit, and I don't integrate that part. That's one intuition.
The other intuition is that if I draw samples not with a uniform distribution but with a different one, then in the average that I compute, some regions of the function would be over-represented, because I have a higher chance of sampling those. So what we are really doing is multiplying this f of x with the sampling probability p of x. Now, this p of x is, in this case, one over pi, the uniform distribution, which is obviously a constant, so it can be pulled out of the integral. And in the end we have the integral of the function divided by pi. But this is not what I am looking for; I just want to integrate the function itself, so I need to make this pi disappear. I have this one over pi multiplier: what do I need to multiply with to get only the function? What should the question mark be? Pi. Excellent, exactly. So I just killed this one over pi multiplier, which comes from the sampling distribution. And if you take a look at it, yes, this is also the size of the integration domain. So this is a slightly more rigorous way to understand what is going on: through a derivation, not just empirical observation. What should I multiply with? We now know a bit more about what is happening: I have a sampling distribution that I need to cancel. With the one over pi multiplier in there I got 1 incorrectly, and if I use this scalar multiplier that I am looking for, I get the correct solution. Let's examine the whole thing a bit further, from different angles; I would like to show you how to approach the same problem from multiple different angles. So, a super quick probability theory recap. We have an expected value; this is what we are looking for. What is an expected value? An expected value means that there are values of something and there are probabilities of getting these values. Let's take the expected value of a dice roll. How does it work? I can roll from 1 to 6, and all rolls have the same probability, one sixth. So the values are 1, 2, up to 6, and the probabilities are all the same, one sixth. And if I sum this up, it says that the expected value of a dice roll is 3.5. This means that if I need to guess what the next roll will be, then this is the best value to guess in order to minimize the error from the expected outcome. Now, if we would like to compute the expected value of something, this means that I take the values that this something can take and multiply each with the probability of that event. For instance, it is impossible to roll a seven with the dice, so theoretically you could put a seven in there as the something, but it would have zero probability and therefore would not show up in the sum. This is the discrete case. For the continuous case, we don't really need to do anything very serious, we just change the summation to integration: we are not using a discrete sum, we are integrating continuous functions and using continuous probability distributions. Now, let's introduce some notation. What I am looking for is the expected value of this function f of x after n samples, because in Monte Carlo you add more and more samples to get a more faithful representation of the integral. What this means is that f is the something and p is the sampling distribution. What we can do is create a discrete sum that takes samples of this function and then multiplies by the size of the domain, and obviously, since we are taking a sum, we need to divide by n.
Because the more samples we take from the function, the larger the sum gets — so this division by N is the averaging part. Now, always keep looking at the relevant quantities: the expected value of f(X) means that inside the integral I multiply f by the sampling probability, and on the right side, in the Monte Carlo estimate, I sample the very same quantity that sits inside the expectation. So if I am looking for the expected value of f(X), then I sample f(x). If you look at it, this is of course only an approximation — it is not exactly the integral we are looking for. But there is a multitude of theorems showing that if you could use an infinite number of samples, you would approach the actual integral. Most courses on Monte Carlo integration show you different ways of proving this, but that is not what we are interested in; we will just take it as given. It is actually very intuitive why this happens: remember the sine wave that we sampled with all those random points — you could see that if you have a lot of samples, you get a good estimate of the area under the curve.

Now let's try to use different sampling distributions; in a few minutes you will see why this is a good idea in some cases. I would like to integrate f(x), and I now apply a transformation that is essentially the identity: I didn't do anything to f(x), I multiplied it by p(x) and then divided by p(x) — almost like multiplying and dividing by the same scalar — so I get the very same thing. But if I want to write this as the expected value of something, it looks a bit different, because now f over p is the "something" and p(x) is the sampling probability. So what we have is the expected value of f/p, and the question is: what is the Monte Carlo estimator for this? We concluded on the previous slides that we sample the very same quantity that appears inside the expectation, so I will be sampling f/p — not just f, but f divided by the arbitrarily chosen probability distribution. There are some good readings on how to do this well and why it is useful, so if you would like to know more, please read some of these documents. They are really well written, and that's a rare thing, because I have seen lots of not-so-well-written guides on Monte Carlo integration; it took me a very long time to find something of a quality that I would rather hand out than write myself. A small sketch of this estimator follows below.

Now let's solve the earlier example with this formula: f/p times p, so I am still integrating only f. The function we wanted to integrate is 2 sin²(x), and 1/π is the sampling probability — the uniform distribution over [0, π] — and we get exactly the integral of the original function. I am looking for the expected value of f/p, so in my code I will sample f/p. Let's put this into source code. If you look here, I now divide by the sampling probability, which is 1/(b − a); here that means 1 over π minus a, and this a should have been 0 in this case, so I apologize for that discrepancy in the code.
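A minimal Python sketch of that estimator, under the assumption that we can evaluate the pdf and draw samples from it; the names `mc_integrate_is`, `pdf` and `sample` are illustrative and not taken from the lecture code:

```python
import math
import random

def mc_integrate_is(f, pdf, sample, n=100_000):
    """Importance-sampled Monte Carlo: draw x from `sample()` (distributed
    according to `pdf`) and average f(x)/pdf(x); the expected value of that
    ratio is exactly the integral of f."""
    acc = 0.0
    for _ in range(n):
        x = sample()
        acc += f(x) / pdf(x)
    return acc / n

# Uniform sampling on [0, pi] is the special case pdf(x) = 1/pi:
f = lambda x: 2.0 * math.sin(x) ** 2
pdf = lambda x: 1.0 / math.pi
sample = lambda: random.uniform(0.0, math.pi)
print(mc_integrate_is(f, pdf, sample))   # should come out close to pi
```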
I put the 2.5 in there because if a is always 0, you may write code that works for integration from 0 to something but not from 1 to something — so this is a useful thing to check once you have implemented it. But again, I apologize, this a should be 0 here. If you compute the actual result that you are looking for, you get π. So the first term in the sample, on line 36, is f, and after the division we have f/p. Wonderful, so this works, and from multiple angles we now understand how exactly this thing is working.

Now, if you write a good Monte Carlo integration routine and you solve the rendering equation with it, what you will see is that as you add more samples, you first get a really noisy image, and then, as you add more and more samples, this noise slowly cleans up. If you think back to my previous lecture, we talked about over- and under-estimations of the integral, and this is exactly what shows up in images. We are sampling a function — we are interested in the radiance — and as I add more and more samples, before I converge, I will get values that are larger than the actual intensities and values that are smaller. This is what shows up visually as noise. So what you are always looking at is the samples-per-pixel metric: when you have a noisy image, you want to know how many samples per pixel were used, and if it is still noisy, you need to add more samples. (A tiny numerical demonstration of this follows at the end of this part.) There is also a visualization of the evolution of the image after a few hundred and then 100,000 samples. Depending on the algorithm, there are multiple ways of solving the rendering equation: you can have smarter algorithms that take longer to compute one sample because they do some clever tricks, which means you need fewer samples per pixel to get the final image. The first algorithm you will study is actually the naive path tracing algorithm, and it usually needs a tremendous number of samples to compute an image; but since it is a simple algorithm, you can use your GPU or CPU to dish out a lot of samples per pixel every second.

Now, a bit of a beauty break: this is what we can get if we implement such a path tracer — this one was rendered with LuxRender. And here is a more recent example. Who recognizes what this is? Just raise your hands. Okay, quite a few people, excellent. This is actually a material from Game of Thrones — and please, no spoilers for me. This is topical because Game of Thrones is running at the moment; obviously we all love the show. There is also skin being rendered, so there is tons of stuff in there, and you can solve all of this with a simple path tracer. We will put together the theoretical part in the second half of this lecture and then implement it in the next one. When I see renders like this, what I feel is only comparable to religious or spiritual wonder. It is absolutely amazing that we can compute something like this using only mathematics — these very simple things that I have shown you. The other really cool thing is that we write these algorithms and create products that use them, and these are given to world-class artists who are just as good at art as we are at engineering. They give it their best to create more and more beautiful models, and we can work together to create images like these. It is absolutely amazing.
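Before the next lecture starts, here is a tiny, self-contained numerical demonstration of the samples-per-pixel intuition from above — a toy script, not the renderer itself; the sample counts are arbitrary:

```python
import math
import random

def estimate(n, a=0.0, b=math.pi):
    """Uniform Monte Carlo estimate of the integral of 2*sin^2(x) over [a, b]:
    average the samples and divide by the sampling pdf 1/(b - a)."""
    s = sum(2.0 * math.sin(random.uniform(a, b)) ** 2 for _ in range(n))
    return (b - a) * s / n

# The over- and under-estimation ("noise") shrinks roughly as 1/sqrt(n),
# just as the noise in a rendered image shrinks with more samples per pixel.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, abs(estimate(n) - math.pi))
```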
Okay, welcome to today's rendering lecture. This is going to be unit 4, and it will have two parts: the first part will be spatial acceleration structures and the second part will be tone mapping. I will hold the next three lectures — this one and two more — and then the lecture will be handed over again.

So, spatial acceleration structures. Where are we? In the rendering pipeline, as shown in the first lecture, we start with a 3D scene, perform some kind of light simulation, and generate an image out of it that is then displayed. Spatial acceleration structures are central to the light simulation because they increase the efficiency of ray shooting. As you heard last time, ray-based methodologies from geometric optics are mainly employed to enable photorealistic rendering, and in doing so you have to shoot a lot of rays — usually on the order of millions to billions. If you can cut down the computational cost of this procedure, you gain significant speedups. To summarize: the Monte Carlo method uses ray shooting to sample the integrand of the rendering equation, as shown last time. Usually you have to compute the closest intersection with the scene, which is equivalent to computing local visibility: how far does a ray travel through the scene before it hits its first object? This is very expensive for a large number of scene objects, because if you start with one ray and want to check whether it intersects any of the scene's triangles, then with millions of triangles each ray has to check all of them to find which one it hits first. If you also have millions of rays, you see that this is a quadratic explosion, and you will not converge to a high-quality image in any reasonable time.

The naive approach is to simply test the intersection with each object — the objects are usually triangles, but they could also be non-linear surface patches or whatever else you want to use. If you go through all the objects one after the other and check which one is closest, you have to visit every object, so it is a linear approach: the complexity is O(n). A better approach is to reorganize all the objects in your scene, say the triangles, into some kind of spatial hierarchy, so that I know that, say, the left half of this room contains these triangles and the right half those triangles. Then, if I have a ray that I know only travels through one half of the room, I can immediately discard half of the triangles in my scene and never intersect against them. This approach — it is a bit more sophisticated than that — leads to sub-linear complexity; eventually it gets close to logarithmic.

This is a very old topic; it popped up very soon after ray tracing came into use, so many methodologies have been investigated, and two main techniques are considered the state of the art: KD trees and bounding volume hierarchies. A KD tree subdivides the space itself. Your scene sits in a surrounding three-dimensional space, and you cut this space into pieces, as you can see in the example on the right-hand side. Here the space in which the objects reside is just a square and each object is just a point, and you see that with recursive subdivision of the space, you group the objects together into spatially local volumes.
The right side gives you the subdivision of the space itself and where the objects lie within it, but each split of the space can also be seen as the construction of a binary tree. You start off with the root node, which is the whole space, and then you try to find some kind of good cut through the space so that approximately half of the objects lie in one half and half in the other. It doesn't make sense to start with the whole volume and then separate off a very small part of it, because every ray has to start its traversal of the tree at the root node, and then it faces the decision: am I in the big volume, or do I have to check the small volume too? If you have a lot of small volumes, this becomes inefficient again. So what you want is to place the criteria that let you skip a lot of triangle intersections as far up in the tree as possible. In this example, the first cut, the vertical one through the whole space, subdivides the objects approximately in half: half of the objects are left of the cut, half are right of it. In 3D this would be a cutting plane through the volume, but it is the same procedure. Then you recursively subdivide the two sub-volumes generated by the first cut, again trying to put half of the objects on each side, and you continue this procedure until you have one object per volume. Of course, you can also terminate earlier: if you are okay with having, say, 100 triangles in each leaf node of the tree, then you have to check all those 100 triangles whenever you enter that subspace.

The main advantage you gain is that for a ray through this volume you can do very quick checks against the subspaces. All the subspaces here are rectangles — in a 3D volume they would be boxes — and you can do a very quick intersection test against a box (a sketch of such a test is shown below). If you know you are not going to intersect a box, which is one test, but there are thousands of triangles inside it, then you can immediately discard all those triangles from your real intersection tests. You only check triangle intersections in the boxes that you have verified you actually intersect, and you can imagine that if there are huge areas that you never intersect, you gain a lot of speed because you avoid unnecessary work. So KD trees subdivide the space, and the objects live in the sub-volumes.

The other approach, bounding volume hierarchies: there you group the objects themselves together. You start with the triangles and say: I put nearby triangles into groups. Then you again build up a tree structure, but this tree structure now depends on the triangles — the fundamental unit is a triangle, not a subspace of your whole scene volume. Now, both have advantages and disadvantages, otherwise you would only ever use the better one. KD trees are usually faster for traversal on the CPU — I mean multi-core CPUs here — but they usually have a larger number of nodes, and they have duplicate references. If we go back to this example: here we have points, and a point does not have a spatial extent. But if you imagine that you have triangles and you cut through the whole volume, then it can happen that you cut through triangles. Then you have two possibilities: either you just add the triangle to both volumes, so you get duplicate references,
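The "very quick intersection test against boxes" mentioned above is commonly done with a slab test. Here is a rough Python sketch, assuming axis-aligned boxes and a precomputed reciprocal of the ray direction; the function name and argument layout are illustrative, and the degenerate case of an exactly axis-parallel ray is ignored:

```python
def ray_box_intersect(origin, inv_dir, box_min, box_max):
    """Slab test: a ray hits an axis-aligned box iff the parameter intervals
    in which it lies between the two slabs overlap on all three axes.
    `inv_dir` is the componentwise reciprocal of the ray direction."""
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv_d          # entry into this slab
        t1 = (hi - o) * inv_d          # exit from this slab
        if t0 > t1:
            t0, t1 = t1, t0
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False               # the slab intervals never overlap: no hit
    return True

# Example: a ray from the origin along +x against the unit box.
print(ray_box_intersect((0, 0, 0), (1.0, 1e9, 1e9), (0.5, -0.5, -0.5), (1.5, 0.5, 0.5)))
```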
which means your triangle discarding is less efficient, because you have to check against this triangle whether you are in the left or in the right half. Or you cut the triangle itself and add one half here, one half there — but cutting a lot of scene content is computationally expensive, so this degrades the performance of the KD tree construction. Bounding volume hierarchies, on the other hand, are very popular for GPUs and many-core architectures, like the Xeon Phi for example. They got more attention in recent research, because most of the current work tries to implement spatial hierarchy construction on GPUs or other highly parallel architectures. They are also easier to update. Imagine you have a moving object inside your scene: a KD tree cuts the whole volume apart, and if an object moves from one sub-volume to another, you have to update the whole KD tree, because you don't really have a grasp on at which level you have to edit it. Bounding volume hierarchies, on the other side, group objects together, so there you have the option of simply ignoring the dynamic complication. Say you have two objects A and B that are close together; you generate your bounding volume hierarchy, so they are grouped together at some level of the tree. If they then move apart, the grouping is not influenced — the only thing that happens is that the bounding volume that holds both of them gets larger and larger. What happens is that your spatial hierarchy gets more inefficient, because a lot of empty space appears between objects A and B, and rays that travel exactly through the gap between them still have to check both A and B. If you were to update your bounding volume hierarchy to acknowledge that they are now spatially separated, they would be put into different branches of the tree, at a different level. But you don't have to do that: in bounding volume hierarchies, dynamic scenes merely degrade your performance, they don't invalidate your whole hierarchy. In KD trees, if something moves from one sub-volume to another, you have to update this throughout the tree, and this can be quite complicated, because searching through the tree hierarchy can be very costly. Another advantage of bounding volume hierarchies is that every object is in only one tree leaf — this follows naturally from how the tree is constructed. A negative point, though, is that the nodes can spatially overlap. If you put two triangles that are close to each other into different nodes of the bounding volume hierarchy, you still generate a box around each of them to do a fast intersection test; but if the triangles are, say, right next to each other, the simple boxes will overlap. So a bounding volume hierarchy can be inefficient if you generate a lot of boxes whose contents overlap to a large extent.
Some detail on bounding volume hierarchies. You take the objects, say the triangles, and group them together somehow. There are a lot of ways to do that — it is a combinatorial explosion, so you cannot just test each possibility and check which one is the best. And usually this is scene dependent: you don't know beforehand which bounding volume hierarchy will give you the best performance. It could be that, through the light propagation, light only very rarely enters, say, one half of the room, because there is a wall with only a small hole. If you do not know that, you treat both halves with the same priority and put them very high up in the tree hierarchy; but if light only rarely travels into that half of the room, you could make one huge node for that half and spend all your detail on the part where something actually happens. So the scene usually dictates what kind of hierarchy is optimal. But it doesn't make too much sense to take this into account, because if you have to run the light simulation to know the optimal spatial hierarchy, then you have already done the light simulation. So you need some kind of heuristic that works for general scenes, and you build a hierarchy that optimizes this heuristic. The most popular one is the surface area heuristic, where you compute a cost for the whole hierarchy and try to find the hierarchy with the lowest cost.

Here, just to quickly show the formula — you can read about it in detail in the references on the last slide of the lecture — you sum up two components: the cost of the inner nodes of the tree and the cost of the leaf nodes. As we already know, the objects, i.e. the triangles, sit in the leaf nodes of the tree; all the intermediate nodes are just groupings, from fine to coarse, and do not contain content. They just say: if I hit the bounding box of some intermediate node, it tells me "here, my next level consists of these two bounding boxes, continue with them." Then you check the next two bounding boxes inside that volume, and you continue recursively until you reach all the appropriate leaf nodes — those that lie along your ray. There is an inner cost associated with getting from one bounding box to the bounding boxes that lie inside it, and there is the cost of the leaf nodes, which is the actual intersection cost of the triangles themselves. In the formula, one constant is the cost of checking which child bounding boxes to continue with through the tree, and the other is the cost of the triangle intersections in the leaves. The heuristic itself enters via the surface areas of the nodes, because the main assumption is that rays lie randomly in your scene: you don't know beforehand in which directions light will travel, so you just assume a random ray distribution and ask how probable it is that you hit certain objects. Objects with a small surface area are less likely to be hit, larger ones more likely. What you want are groupings that have a high chance that you actually hit something in them, or that let you exclude a lot at once. This is expressed as a ratio of surface areas: A(N) is the surface area of node N — the node's bounding box has a certain surface area, and this dictates how probable it is that a random ray hits that bounding box. (A reconstruction of this cost formula is given below.)
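A reconstruction of the surface area heuristic cost in the form it is usually written in the literature — the exact constants and notation vary from paper to paper, so treat this as a sketch rather than the slide's exact formula:

```latex
% Surface area heuristic (SAH) cost of a whole hierarchy (one common form):
C \;=\; C_{\text{node}} \sum_{n \,\in\, \text{inner nodes}} \frac{A(n)}{A(\text{root})}
   \;+\; C_{\text{tri}}  \sum_{l \,\in\, \text{leaves}}     \frac{A(l)}{A(\text{root})}\, N(l)
```

Here A(n) is the surface area of node n's bounding box, N(l) is the number of triangles in leaf l, C_node is the cost of one box traversal step, and C_tri the cost of one triangle intersection; the ratio A(n)/A(root) plays the role of the probability that a random ray which hits the root also hits node n.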
Then you have the surface area of the root — the level above — so I have my own bounding box at a certain level of the tree, and the node that contains me is a larger bounding box that at least has the extent of my current one. What you want is to minimize this cost, so you want a large surface area for the containing node but a small surface area for your current bounding box, because this means you can exclude a lot of the volume. If you are travelling through a huge bounding box and want to decide where to continue, then the smaller the continuation box is, the more descriptive it is about where the scene content sits. The same goes for the final leaf nodes where the triangles are. So you try to build the whole hierarchy so that it optimizes this cost: at every level you decide what is the best ratio you can achieve, and this tells you how the grouping has to be done. There are also different heuristics in the recent literature that take more information about the scene into account — for example, the surface area heuristic not only assumes a random ray distribution in the scene, it also assumes that rays are infinitely long, i.e. that they travel through the whole scene and are not blocked by objects. More sophisticated heuristics take this into account, and there are references on this.

Question from the audience: if I have a dynamic scene, can a bounding box get larger? Yes — if one object moves away, a leaf node's box can grow beyond its parent, and you have to account for that: you have to propagate this information up the tree, otherwise traversal would fail, so to say, because if a ray does not hit the parent node it would never reach the leaf. But the way of propagating is clear: it is just a merging of boxes upwards until everything is contained again, even with the dynamic update (a small sketch of such an upward refit is shown below). With KD trees this is not so easy, because the space itself is subdivided, so you have to somehow determine where the object moves to, into which other part of the tree — and that is not simple, because it could move into a leaf node whose region was split off at the very top level of the tree. To find the other leaf node your object moves into, you may have to go up and down the whole tree, which is much more costly and much more complicated. With a bounding volume hierarchy you just propagate upwards until everything is contained again.

Now, the surface area heuristic is just that, a heuristic, but it is still expensive to compute the optimal tree for it. There is not necessarily a unique solution with the minimal surface-area-heuristic cost, but there is at least one. And since this is expensive, there are also methods that approximate it: not building the hierarchy with the optimal cost, but one that is good enough for the purpose. Usually this is a trade-off: the more time you invest in building your spatial hierarchy, the better its quality gets, and in turn the more efficient the light simulation is. If you don't spend much time building your hierarchy, you get bad quality and inefficient ray traversal during the global illumination simulation, so your actual rendering takes longer; if you spend more time on the hierarchy, it has better properties for ray traversal.
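As an illustration of the "propagate upwards" answer, here is a minimal sketch of a bottom-up BVH refit. The node layout (`is_leaf`, `left`, `right`, `triangles`, `bounds`) is a hypothetical data structure chosen for the sketch, not any particular library's API:

```python
def union(b1, b2):
    """Union of two axis-aligned boxes, each given as (min_xyz, max_xyz)."""
    return ([min(a, b) for a, b in zip(b1[0], b2[0])],
            [max(a, b) for a, b in zip(b1[1], b2[1])])

def bounds_of(triangles):
    """Tight box around a list of triangles (each a list of 3D vertices)."""
    pts = [p for tri in triangles for p in tri]
    return ([min(p[i] for p in pts) for i in range(3)],
            [max(p[i] for p in pts) for i in range(3)])

def refit(node):
    """Bottom-up refit of a BVH after objects have moved: recompute every
    node's box as the union of its children's boxes (or of its triangles'
    box for a leaf). The tree topology is kept, so the hierarchy stays
    valid but its quality may degrade over time."""
    if node.is_leaf:
        node.bounds = bounds_of(node.triangles)
    else:
        node.bounds = union(refit(node.left), refit(node.right))
    return node.bounds
```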
So your lighting simulation is more efficient and faster, but you see that there is some kind of trade-off, and where the sweet spot lies usually depends on how complex your light simulation is. If I only want to trace, say, a thousand rays, the cost of that is very low, so I can live with a very approximate hierarchy — the hierarchy quality can be quite bad, but because I shoot so few rays I will not feel the difference much. But if I shoot ray counts in the billions, then even a small increase in the optimality of the hierarchy gives significant gains in rendering time.

What you see in this graph — I show it just so you get a feeling for the different methods out there; the reference to the actual paper each method comes from is right next to it — are different methods for generating bounding volume hierarchies with the surface area heuristic. On the x-axis you have the number of rays that your lighting simulation will shoot: the further you go to the right, the more complex the light simulation is, the more quality you want in the final rendering, the deeper you go into reflection and refraction levels, things like that. On the y-axis you see how many rays the lighting simulation can trace per second, so the higher up you are, the faster your lighting simulation. And now you have to find a trade-off. SBVH, the blue line, constructs very good spatial hierarchies, but it is also very slow to build. That means that for lighting simulations which only use a few million rays the overall performance is bad, because most of the time is spent building the spatial hierarchy — in that regime it takes longer to build the hierarchy than to do the actual rendering, which doesn't make too much sense. But if you go into the billions of rays to compute your final image, it starts to pay off, because you get very high traversal performance: in this example, on this hardware, 400 million rays per second. HLBVH, on the other hand, is a method to quickly get a spatial hierarchy that is not very optimal. You see that for simulations with only a few million rays you already get close to its final performance, around 200 million rays per second, and you are much faster than SBVH there. But the more rays you shoot, the more you are hurt by the missing optimality of your hierarchy, and there is a crossover around 10 giga-rays where SBVH actually becomes better than HLBVH. In this paper they propose another method that is faster to construct — you see it as the green dotted line — which already gives a significant performance increase even for smaller simulations: already at 100 million rays you are better than HLBVH, and you quickly get close to the performance of SBVH. So this paper shows that they found a very good intermediate method that is only a bit less optimal than the previous state of the art. I advise you to look into this paper; you will see a lot of interesting things there — how to port BVH construction to the GPU, parallelization issues, and other smart tweaks. As for literature: in PBRT it is chapter four.
And since this is inherently a geometric problem — you want to know where the triangles are in the scene — the same hierarchies can also be used for collision detection. For collision detection, if you want to know whether two objects could collide, they have to be spatially near each other; so if I already know from the bounding boxes of the tree that they are far apart, I can ignore them and skip computing the exact intersection between them. There are several papers here: the work of Ingo Wald more or less started this whole business with his thesis, and I also give some recent papers that usually look into how to do this fast on the GPU, which is more or less the current trend. There are also upcoming works that do the same on Intel's many-core architecture, the Xeon Phi. Good, this concludes the first part of this lecture. Are there any questions? If not, then I'll continue with something completely different.

I mean, this is a very technical topic; if you want to implement it, you have to look into the papers anyhow, because I cannot lay out all the coding issues here — it would be super boring. On the other hand, while the surface area heuristic in itself has proven worthwhile, there are a lot of different approaches: an approximation of this partial problem, an approximation of that one. Many papers focus on different partial problems within the whole research problem. Going through a lot of the literature is also suboptimal, because due to the rapidly increasing hardware capabilities the turnover is quite fast: things that were super smart approaches, say, four years ago do not cut it anymore, because GPUs now have completely different functionality and can do certain things more efficiently. So this has been a rapidly developing topic for years; if you want to implement it, have a look at the current literature. There are a few standard papers, like the one by Ingo Wald, which have lasting contributions, but in between there are mostly small optimizations focused on things that are perhaps not relevant anymore. Okay, good. Let's leave it at that.
Now to the second part — it is quite different: tone mapping. Before, we looked at optimizations for the light simulation; now we look at the very end of our rendering pipeline, the issue of showing the image output on a display. The problem is that a light simulation outputs radiance: how much light travels along one direction coming from a small surface patch. You collect radiance with your camera and record it, but your display expects RGB values. The radiance can carry some color information — you could, for example, trace the R, G and B channels independently, or you can do something with more fidelity like spectral ray tracing, where not only radiance is carried along the ray but spectral radiance: radiance at a certain wavelength of light. You can see this effect in prisms, for example, where white light is split into rainbow colors; if you do not perform spectral ray tracing there and just assume "white" radiance, you cannot simulate this effect, because the refraction — the ray geometry — changes with the wavelength associated with the ray: the refraction angles differ, and this causes the split into the rainbow colors.

So in some way you have radiance as output, and to show it on displays or to print it, you need to convert your images from radiance to RGB. There is an inherent problem with that, because the radiance values of a light simulation have a huge range — they try to simulate real-world physics, and in the real world you have a huge difference between dark and bright. For example, take a surface at ground level in the Earth's atmosphere that is illuminated either by the sun or by the moon: the difference is a factor of about 800,000. Now imagine that the patch on the ground is either white or black: this causes another difference in the reflected radiance of approximately a factor of 100. So if you want to do a general light simulation in which illumination by sun and moon should be possible, and white and black surfaces should also be possible, you have to cope with a ratio between the darkest and the brightest values of about 80 million.

Is it relevant to do that — can people even perceive such differences? Yes, they can. Differentiating dark from bright, especially under very bright and very dark conditions, is a highly relevant feature of our world. Imagine a caveman in the woods at night: it is very useful to see small contrast differences that could indicate predators. So there was evolutionary pressure to develop a visual system that can take advantage of this huge dynamic range of radiance values. How is this built up? The receptors in our eyes undergo chemical bleaching when they are hit by light particles, and this can be regulated biochemically to enable adaptation over a range of about two orders of magnitude — so just by regulating the biochemical properties of the photoreceptors, the eye can adapt to a dark-to-bright difference of 100. The pupil size gives you another order of magnitude, a factor of 10, and neural adaptation is more or less the signal processing: what to actually do with the changing signal as your receptors get bleached. All in all, the dynamic range of human vision is approximately 100 million.
That means we can in fact perceive the dynamic range of realistic conditions in our atmosphere. So the output of a light simulation, if it takes this into account, can have a factor of about 80 million between dark and bright — and then you want to show this on a display. The technology of a standard display gives you a dark-to-bright ratio of approximately 1000, and if you use 8-bit encoding of your values, you only have 256 values per 8-bit channel. That means our display technology is immensely inadequate for showing realistic scenarios. In a way this is also good, because you don't get blinded by your display, but we have to account for it somehow: just taking the radiance output and converting it directly to an image carries some problems with it.

Question from the audience: why 8 bits, when image files usually have 24 bits of information? Because you usually split those into the different color channels: 8 bits for R, 8 bits for G, 8 bits for B. Roughly speaking, each channel tells you how much red, green or blue light is at this pixel, and each goes up to 256 levels. Of course, for the whole color gamut you have more values, but if you just look at the dark-to-bright ratio of a single color, this is approximately what you have.

Tone mapping is the set of methods that was developed to overcome this problem. The output of a light simulation, as already said, is high dynamic range because of the real-world dynamic range of brightness values, and display devices usually have a low dynamic range, so you need to compress the range of the output somehow. This is referred to as either tone mapping or tone reproduction — these are the names you find in the literature. There are two sub-issues here. One is range compression: how to convert high dynamic range luminance to low dynamic range luminance; this is the content of this lecture. Afterwards you still have luminances, but there are standardized color spaces in which images are stored — you don't store your own home-brew formats that take certain wavelengths and give luminance values for those. That part is covered in a different lecture about color; you see the lecture number here, and I refer you to it if you want to know more. Here I only explain range compression to some extent.

Now a graphic example. Here we have a single bright light source with no ambient light in the scene, which means that all contrast comes only from this one light source via global illumination. If you photograph the scene with a short exposure, this is what you get: all the dimly illuminated parts disappear completely into black, but you see details in regions that were previously too bright and led to overexposure of your camera — here, for example, you now see the outline, the silhouette, of the bulb. A very long exposure leads to overexposure of most of the image: you no longer see any detail close to the lamp or on directly illuminated surfaces, but the previously dimly illuminated objects in the background are now well perceivable. So this is what you get if you take only small parts of your dynamic range and map those to an image. What could you do to combine all this information into one image? The most trivial thing you can imagine is to just divide by the maximum of the scene.
That gives me nice luminance values from 0 to 1, which I can map to whatever range I want. The problem is that the maximum is usually not indicative of the whole scene — it is usually extremely bright — so when you divide by the maximum, the only thing you see is a single bright reflection close to the light bulb. If you clamp instead — say I pick a range of luminances I am interested in, everything below is clamped to 0 and everything above my chosen maximum becomes completely white — you run into exactly the problem that becomes obvious with high dynamic range: parts are underexposed or parts are overexposed. So clamping doesn't get you there either; you have to take the whole range into account, you cannot just ignore parts of it. One approach that does give nice results is exponential mapping: you assume an exponential distribution of your luminance values and rescale accordingly, so very bright values get scaled down and low values get scaled up, while the very bright spots — the values that get scaled the most — are still accounted for. The bright values at the bulb become some reasonable white, but they no longer dominate the whole scene. (A small sketch contrasting these three simple mappings is included a bit further below.)

A more sophisticated approach is the Reinhard tone mapper, developed by Reinhard et al., and this is the one example we present in this lecture. As you can imagine, there are many tone mappers: everyone tried some approach that works very well for a certain subset of scenes, some methods were optimized for parallel hardware or certain hardware architectures to gain more speed, some presented rough approximations that work in real time. So there is already a wide selection of tone mappers to choose from, and the Reinhard operator is one of the most common — if you look into open-source or commercial rendering software, it is usually one of the things that is implemented.

Some additional information to put this into context: there is an approach in digital photography where you take multiple exposures of the same scene with different exposure times and combine them into one image; this is called high dynamic range photography, and it usually uses similar methodologies — the Reinhard tone mapper can also be used for HDR photography. But if you type "HDR photography" into Google, you usually end up with images like this, and that is not what tone mapping is about. Tone mapping is a perceptually and physically validated approach that gives you realistic impressions of the scene; it is not an artistic effect. It is not that you apply tone mapping, look at the image, say "I would like more contrast in there" and then tweak your tone mapper — that is not its intended use, and the HDR photography community to a large extent overuses this capability. This photograph here, for example, is a completely botched use of HDR range compression. Things you can see that are not correct: the halos around the balloons — why would the sky suddenly be brighter around the silhouettes of the balloons? This effect is present in the visual system to some extent, but not to this extent. Some people think this is nice HDR photography, but it is simply wrong. If you like your images to look like that, then that is an artistic decision, not tone mapping.
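A small sketch contrasting the three simple mappings discussed above (divide by the maximum, clamp a window, exponential mapping). The exact exponential formula varies in the literature; this is just one plausible variant, applied to a NumPy array of luminances:

```python
import numpy as np

def scale_by_max(lum):
    """Divide by the scene maximum: a single extremely bright value pushes
    everything else toward black."""
    return lum / lum.max()

def clamp(lum, lo, hi):
    """Map a chosen luminance window to [0, 1]: everything outside the window
    is under- or overexposed."""
    return np.clip((lum - lo) / (hi - lo), 0.0, 1.0)

def exponential(lum):
    """Exponential mapping: compress relative to the average luminance, so
    bright outliers are scaled down the most but are not discarded."""
    return 1.0 - np.exp(-lum / lum.mean())
```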
You can call it contrast enhancement if you like. The colors in this image also get screwed up because they are now oversaturated. This is just a warning: the tone mapping you encounter for light simulations should not look like that — if it does, something has gone wrong in how you applied it.

Question from the audience: was this photo taken with a single exposure? It is a combination of different exposures, and it was combined with a tone mapping approach — you have to do the combination somehow. As said before, you could divide by the maximum, or clamp, but tone mapping usually gives you a more realistic result. And why is it obvious that multiple exposures were combined here? Because at the very left of the left balloon you can still see some detail in the woods behind it, while at the same time you can more or less look directly into the sun without it blowing out half the image. That means a huge dynamic range has already been compressed here — but it should not look like this. It should look like this other image if it is done correctly: this would be the correct treatment of this photograph. It looks realistic, it does not oversaturate the colors, and it only has a very light halo around the silhouettes, which is an effect that is actually present in the visual system. This is what you should aim for.

Now about tone mapping itself. There are two big classes: the global tone mappers and the local ones. Global tone mappers use a mapping function that converts the radiance at a certain pixel into, say, an RGB value if that is the color space you select. This mapping function is uniform — by which I do not mean that it produces a uniform output value; you don't get an image with a single color. The function itself simply takes the radiance at a pixel as input and outputs an RGB value, and this same function is used for all pixels of the image. More complex methodologies are the local tone mappers, which take not only a single pixel into account but also its neighbors. This is perceptually motivated, because, as you saw before, the contrast and brightness adaptation of your eye — in the photoreceptors, for example — happens locally: a single photoreceptor adapts to different brightnesses, so tone mapping in the human eye is a local process. There are reasons to employ both. Global tone mappers are fast because they have a single mapping function, so you can execute it in parallel on each pixel, which makes them perfectly usable for GPU approaches, for example. But you incur some loss of detail, because you cannot ask locally: is this already a dark patch of my scene, so I can enhance the contrast more here, or am I at a dark-bright boundary where I should not do that as much? Local tone mappers allow exactly this — local contrast enhancement — but they are slower, because you not only look at a pixel but also at its neighborhood, and the neighborhood grows quadratically as you enlarge it, so you incur a different complexity in your problem.
As I said before, the tone mapper we will look at was developed by Reinhard et al. and presented at SIGGRAPH 2002, so it has had some time to prove itself, so to say. It is widely used — most light simulation products and also HDR photography tools have it — and it has both a global and a local variant; both were presented in the paper. Here you see the difference between a global and a local approach. Regarding the question about convolution: usually convolution uses a fixed kernel, so your kernel function is always the same; here it can be a non-linear kernel. So it is not linear convolution in the sense of one fixed kernel, it is a non-linear mapping — in that sense it is not really convolution, but the implementations can be done in that spirit if you want it to be fast, and parallelization strategies that work for convolution also work here. I will go into the local approach later, but you usually have to adapt the feature size: either you do multiple passes with different discretizations of your kernel size, or you take one huge kernel and vary the kernel function.

Okay, here is the difference between the global and the local approach — I'm not 100% sure how well it is visible on this projector — but you can see, for example, that the local approach has much more contrast in the mosaics in the church. The global one already works very well and is faster, but it does not give you all the possibilities that your eye would give you.

Now the steps — this is just a short outline, and I will explain what is meant by these formulas. First, the global version: you have one mapping function from the luminance values of the whole image to a color space. The first thing you do is compute some kind of average of the image, because you do not know whether this is inherently a very dark image or a globally very bright one — you have to set a baseline, so to say. This is done with the log average. You don't just sum up and divide, because that would give a disproportionate weight to the large values (they are roughly exponentially distributed, you can imagine); the log average takes care of that: you average in log space and then convert back by exponentiating. Then you have an approximate average brightness of the image you are looking at, and you map it to a middle gray value: a is what you define as middle gray — you can vary this, and it also depends on what output range you have available. This sets the bar: everything above this middle gray will end up brighter, everything below will end up darker. Then you take the middle gray value and scale the input relative to it, so the input is no longer in some arbitrary range but relative to middle gray. Finally, you compress the high luminances (a code sketch of these steps follows below). Remember the division by the maximum from before, which gave you just a single bright spot in the image: that is because the high luminances usually dominate the scene but cover only a very small extent of it.
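A compact sketch of the global operator as just described — the L/(1+L) compression curve is discussed in detail right after this. The key value 0.18 and the epsilon are common defaults, not values quoted in the lecture:

```python
import numpy as np

def reinhard_global(lum, a=0.18, eps=1e-6):
    """Global Reinhard-style operator on an array of luminances:
    1) log-average luminance of the image (the baseline),
    2) scale so that the log-average maps to the key value `a` (middle gray),
    3) compress with L/(1+L), which squeezes high luminances into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(eps + lum)))   # log-average luminance
    scaled = (a / log_avg) * lum                   # luminance relative to middle gray
    return scaled / (1.0 + scaled)                 # compress the high luminances
```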
Imagine looking at the sun: it is very small compared to the whole sky dome, but it dominates it, because it has immensely higher brightness values than the surrounding blue of the sky. So what you do here is map your scaled luminance values to the final output by compressing the high luminances. In the lower right you see the mapping function: a huge range, say from 4 to 10 on the x-axis, is mapped to a rather small part of the y-axis, which means that a lot of large luminances are compressed into a small range of, say, RGB values. At the same time, the small luminances in the input image are mapped to a larger extent on the y-axis: you can see that the slope of the function starts out nearly vertical, steeper than the 45-degree line that would be a one-to-one mapping, and then flattens out into compression. So what this mapping does is compress the high luminances and enhance the low ones. The output luminances then lie in a predefined range from 0 to 1 and can be mapped to whatever color space you like.

Now the local version — the one that takes the neighbors into account — starts off similarly. You compute the log average to estimate the average brightness of the scene, then you map the values to middle gray, so the value 1 is now the middle gray that I defined. But now I do not compress the luminance with one uniform function over the whole image; I locally adapt this function by looking at the neighbors. The local average, call it V, now depends not only on the coordinates in the image, x and y — the pixel I am at right now — but also on some kind of scale. The scale expresses how uniform the brightness distribution is there: am I at, say, a silhouette, where half of the neighborhood is very dark and half is very bright, or am I in a homogeneous region? In homogeneous regions your eye adapts to that brightness over a larger area, and once it has accommodated to it, it can differentiate details better. You have perhaps experienced this when looking at the moon: if you glance at it quickly it is just a bright spot, but the longer you look at it, the more details you can discern. That is your eye adapting to it, it happens at local spots on the retina, and this is what is simulated here. So for every pixel position x, y you compute a local scale, with this local scale you compute a local average V, and this tells you how much range compression will happen at this location.

How do you get this scale? The scale is the extent of an area around the pixel where the brightness does not change too much. In this example you have a colorful church window that is illuminated from outside, and three different examples of how you could place the influence region of your scale. The Reinhard tone mapper does this by computing averages over two radii and looking for the scale at which they show a certain behavior. In the example at the very top, you look at the pixel in the center of the two concentric circles.
So this is the pixel I want to tone map now, and if I put two small concentric circles around it, I see that the inner area — the smaller circle — is quite smooth in its distribution of brightnesses, but the outer circle is very smooth too. There is no significant difference between the small and the large circle, which means that my scale is too small: I could enlarge my circles even more and still retain smoothness. The center example shows the correct choice: the inner disk has smooth values, approximately the same brightness everywhere, but if you take the outer circle, which intersects the window, you suddenly see very large brightness values inside its footprint. So the small circle is smooth but the surrounding area is not, and this is what you want to achieve when you determine the local scale. The larger the local scale, the larger the patch of roughly equal brightness you are looking at, and your eye would adapt to that — which means that for this pixel you can enhance the contrast, because your eye would do the same. The other failure case, which you should avoid, is making the disks too large, as shown in the bottom example, because then the bright values are already inside the inner disk; that scale would be too large. Once you know this scale, you know how large the reach is within which you compute your local average, and this then allows you to locally compress your range: if the neighborhood is very smooth you compress less, and if you are next to brightness discontinuities — at the verge of crossing from a bright to a dim region or vice versa — you do the opposite.

Question from the audience: is the ratio between the larger and the smaller radius constant? Exactly — that is the approach they take, and then you have to find this scale. They also show an implementation of how this can be computed efficiently, because trying all possible circles or disks is not feasible, so you need some kind of approximation that gives you a good estimate. Another question, about the image borders: I don't know exactly how they handle this, but I would guess they simply ignore everything that is outside the image and cope with what is there. And this is natural: in the corners of the image you have less neighborhood, so you have less information about what the surroundings look like, and you get, say, a less optimal tone mapping there.

Here is another difference, zooming into the image from before: as you can see, the figures on the wall have much more contrast, and this is because the brightness values there are relatively constant. If you compare this with, say, the window next to it: the wall has a very uniform brightness even if the colors differ, and color variation is still a low range — as you saw before, between sun and moon illumination the factor was 800,000, between black and white only about 100. Whether a region is directly or indirectly illuminated matters much more than mere color variations of the surface. So here the tone mapper, in its local version, correctly detects that the brightness values are quite uniform in this region, and it enhances the contrast. This concludes the Reinhard tone mapper (a rough sketch of the scale selection just described follows below), but there are also other tone mapping approaches: SIGGRAPH 2002 was something of a tone mapping year, because three tone mapping approaches were presented there.
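Before looking at the other two approaches, here is a rough sketch of the local idea just described. This is a simplification, not the paper's exact center/surround formula; the Gaussian scales and the smoothness threshold are assumptions, and it relies on SciPy's `gaussian_filter`:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reinhard_local(scaled_lum, a=0.18, eps=0.05, sigmas=(1, 2, 4, 8, 16, 32)):
    """For every pixel, grow a Gaussian 'center' neighbourhood as long as it
    agrees with a larger 'surround' (i.e. the brightness there is still
    roughly uniform), then divide by that local average instead of using a
    single global compression curve."""
    stopped = np.zeros(scaled_lum.shape, dtype=bool)
    v = gaussian_filter(scaled_lum, sigmas[0])            # smallest-scale average
    for s_center, s_surround in zip(sigmas, sigmas[1:]):
        center = gaussian_filter(scaled_lum, s_center)
        surround = gaussian_filter(scaled_lum, s_surround)
        smooth = np.abs(center - surround) / (a + center) < eps
        v = np.where(smooth & ~stopped, center, v)        # enlarge where still uniform
        stopped |= ~smooth                                # contrast edge reached: freeze
    return scaled_lum / (1.0 + v)                         # local range compression
```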
One of those was the bilateral filter — perhaps you have already heard of bilateral filtering from computer vision. You can imagine it as a smoothing that does not smooth over the edges: if there is a large difference in your image, it does not smooth across it, so you can also call it edge-preserving smoothing. You can imagine that this is conceptually similar to the scale estimation of Reinhard, because the scale estimation looks for where the brightness values differ a lot — these are, say, my brightness edges in the image — and then it does not try to propagate contrast enhancement over such an edge; the bilateral filter achieves the same thing because it simply stops the filtering process at edges. Another approach, by Fattal: gradient-domain processing. Imagine an image with only a luminance channel of a single color; you can then see your image as a height function, where bright spots correspond to high peaks and dim regions to low values. What they do in this work is look at the gradients — how steep the slopes in the image are — and compress the gradient range. This preserves the low gradients, the small slopes which are the details in the dimly illuminated regions, while the very high slopes are the ramps up to the bright spots; so if you reduce the high gradients, you again get the behavior that you only reduce the range in non-uniform regions. These are three approaches that do approximately the same thing — they share the underlying idea but take different routes. Reinhard leans more on what is used in photography, with exposure concepts you can borrow; bilateral filtering directly uses a well-established filtering paradigm from signal processing, but it is most likely quite sub-optimal from a perceptual point of view; gradient processing allows a very fast implementation, and perceptually, from what I have been told, of these three works it was the best. The speed tricks that you perhaps needed 13 years ago are not really relevant now, so you don't have to take the approximations that were highly relevant back then into account anymore — you can just use something less efficient that gives you a nice result out of the box. If you check Wikipedia — the German version has a long list — they list approximately 20 different variants.

As for the literature: Reinhard not only started this, he also continued researching it, and he is seen as one of the go-to experts where tone mapping is concerned; he wrote a book about it, on high dynamic range imaging, and you can also look at these three papers. If you want to see how the topic developed, just look up the Reinhard paper in the ACM digital library — you can do this from here, from the university network, and you get immediate access to the whole digital library. There you can click on "cited by" and you get the long list of papers that referenced the original work by Reinhard; you can sort them by year and see what different approaches were used. Good, that concludes the tone mapping part. Are there any questions?

Question from the audience: if we know which method was used, can we also invert it to some extent? You are inherently losing information. The one place where you lose precision for sure is when you take, say, 24-bit values and encode them into 8-bit RGB values: then you have a quantization of your brightness values, and that is unrecoverable, it is simply lost.
The other thing is that if you use local approaches that take neighborhood information into account and then do different things in different regions of the image, then you can have some kind of ambiguity if you want to reverse the process: it could be that in the original data you had large brightness values and a small scale, and therefore a strong range compression, or the range was already small and less compression was used. So you do not have the full information about how the local approach adapted to certain parts of the image; you would have to infer this somehow, I would guess with something like an optimization approach, where you require that locally you get the same scale that you assumed during reconstruction, and then you can recover your signal, but you will not be able to recover it exactly. With global approaches it should be possible. And if you combine the global one and the local one, as Reinhard does? If you look here, the local version has this expression: the scaled luminance is divided by some kind of local average. In the global version the first two steps are identical, but then it uses only the value at the very center to compute this average, so the global version is a special case of the local version; there is not a separate thing to implement. Do you combine the global picture with the local picture? No, no: the luminance output is what you get, so for each pixel location you get a scaled luminance between 0 and 1, either from the global approach or from the local approach. Both immediately give you the output luminance, so there is no combination. Further questions? Okay, then thank you for your attention; this concludes this rendering lecture.
Next time, you will see something that was hiding behind many of the observations in the assignment. What does the complexity of the ray tracing algorithm depend on? It depends on the resolution: the bigger the image, the longer it takes, got it? It is exponential with respect to the depth, at least this implementation is, because if you shoot out two rays there is always a branching, and then this is going to be exponential. So we have taken into consideration resolution, we have taken into consideration depth. But we haven't taken into consideration how many objects there are in the scene. And if you start running the same ray tracer on a huge scene, because you don't want to see spheres, you want to do ray tracing like real men do, then what you do is implement a function that can load triangle meshes. And then you grab some nice triangle mesh that you have seen somewhere, load it into your ray tracer, you are very excited, you run the ray tracer, and you don't get anything in your whole lifetime. If you load something with millions of triangles, which is not much nowadays. Why? Someone help me out. Too many triangles? That's true, but why does it take so long? Because you have to do a lot of intersection tests. Exactly. So if I have one million objects, I have to do one million intersection tests for every single ray. That's too much, it's just way too much. So what we can do is some kind of space partitioning, which starts with simple optimizations: for instance, I really don't care about what is behind me, because I'm going to intersect something that is in front of me, so whatever is behind me, I can immediately throw all of those polygons out. That's immediately half of it. And if you use smart tricks and, the key idea, smart data structures, you can go from linear complexity (one million objects, one million intersections) to logarithmic complexity, which is amazing because the logarithm after a point doesn't really increase much anymore. And you will learn about techniques that make you able to compute this intersection with one million objects with about four or five intersection tests on average. Obviously it depends on the distribution of the triangles and all of that, but on average you can do it in four or five intersections instead of one million. So it's a huge, huge speed-up. This is going to be in the next lecture. And again, it seems that I have been lying to you all along regarding something else as well, because I told you that we are measuring radiance for the rendering equation. Now, radiance I cannot really display on my monitor. What can I display on my monitor? RGB values. So there has to be some transformation that goes from radiance to RGB in a meaningful way. This process is called tone mapping, and Thomas is going to tell you all about tone mapping as well. You can do it in a number of different ways, and a good tone mapping method really breathes life into your rendered images. Now, we haven't talked about filtering; this is a bit more sophisticated. In recursive ray tracing, you shoot one sample through the midpoint of the pixel, and that is where you compute your result. With Monte Carlo integration we are going to have many samples, so we are going to have a metric that is called samples per pixel, and these samples will not go through the midpoint of the pixel.
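A minimal sketch of what such per-pixel sampling might look like; the trace() call is a hypothetical placeholder for the renderer, and the plain averaging corresponds to the simplest possible filter, a box filter:

    #include <random>

    // Hypothetical stand-ins for the actual renderer.
    struct Color { float r, g, b; };
    Color trace(float imageX, float imageY) { return {0.0f, 0.0f, 0.0f}; }  // dummy: the real code shoots a camera ray here

    // Average several jittered samples spread over the pixel's surface instead of
    // shooting a single ray through the pixel midpoint.
    Color renderPixel(int px, int py, int samplesPerPixel, std::mt19937& rng) {
        std::uniform_real_distribution<float> offset(0.0f, 1.0f);
        Color sum{0.0f, 0.0f, 0.0f};
        for (int s = 0; s < samplesPerPixel; ++s) {
            float x = px + offset(rng);       // random position inside the pixel, not the midpoint
            float y = py + offset(rng);
            Color c = trace(x, y);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        sum.r /= samplesPerPixel; sum.g /= samplesPerPixel; sum.b /= samplesPerPixel;
        return sum;                           // edges get averaged, which is where the anti-aliasing comes from
    }

Fancier filters (tent, Gaussian) simply weight these samples by their distance to the pixel center instead of averaging them uniformly.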
These samples are going to go through the surface of the pixel, like random samples over the surface of the pixel, and we are going to integrate radiance over the whole surface. Now, you can do this in different ways, because you have many samples over the pixel surface and you can take them into consideration in different ways; this is what we call filtering, and different filtering methods will give you different results. The interesting part is that you get anti-aliasing for free if you do the filtering well. Because in a ray tracer you shoot one ray through the midpoint of the pixel, your images, unless they are of super high resolution, are going to be aliased: a completely straight line is going to be pixelated, the edges are going to be pixelated. What can you do? Trivial things like supersampling: split one pixel into four smaller pixels, compute the rays through all of them and average. That's the trivial method; it gives you anti-aliasing by supersampling, but it is super expensive. I mean, you have HD resolutions, and you have to bump this up by a factor of four; too much. There are better solutions: you can get this essentially for free if you do the filtering well. So this is what filtering is about. Thomas is also going to talk, and this is not one lecture, this is the next three lectures, about participating media. What is this about? Well, in our simulation so far we have assumed that rays of light only bounce off surfaces. But in real life there are not only surfaces, there are volumes: smoke, haze, many of these effects where a ray of light does not really hit an object but just the smoke, and gets scattered. And if you do your simulation in a way that supports such a participating medium, then you can get volumetric caustics. And that's amazing, because I have just shown you the ring, and whatever other kind of caustics you look at, you will think of them as 2D things that you see on the table, on this diffuse material that diffuses the radiance back to you. So you would think that caustics and shadows are planar, that they are 2D things. But they are, in fact, volumes. The shadows exist not only on a plane, they have a volume, because the set of points that are occluded from the light source is not on a plane, it is in 3D. And you can get volumetric caustics and volumetric shadows with participating media, because there is a medium in there off which light can scatter, and therefore you will see these boundaries. You can also get god rays, a beautiful phenomenon in nature, if you compute participating media. You can also get something like this. This is an actual photograph, just to make sure that you see the difference: the first ray is traversing air or vacuum, and the next ones pass through a participating medium, which gives you this scattering effect. And another example of god rays, while apparently we have this "do not disturb" piece of paper, so there is something going on in this room; you'd better not enter, who knows what you would see. And you can get not only such pronounced effects, but also more subtle ones: you can feel that there is some haze in this image, even though it is not so pronounced. Now, we don't stop there, because don't just think of smoke and the atmosphere. You can just look at your own skin if you would like to see a participating medium.
Now, this is a phenomenon called subsurface scattering. It means that some of the things you would think of as solid objects, as surfaces, are in fact volumes. Your skin, for instance: a portion of the light goes through your skin. And we don't simulate that, because when we hit the surface of the object, we bounce right back. If we write a simulation that makes us able to go inside these objects, then we have a simulation with subsurface scattering, and we can account for beautiful, beautiful effects like this. These are simulations. On the left side you can see, probably, marble; there is subsurface scattering in marble. It seems heavily exaggerated to me, or there is a really, really strong backlight, but this is not a surface anymore: at the nose of the lady, a lot of the radiance actually gets through the nose. This is one more example. It is not so pronounced, not so exaggerated, but you can see this jade dragon clearly has some subsurface scattering. Look at the optically thin parts, like the end of the tail: you can see that it is much lighter, and this is because some of the light is going through it. The optically thick parts, like the body of the dragon, show less subsurface scattering, so you can see that they are a bit darker. It's a beautiful phenomenon, and we can simulate it. And look at this one. Absolutely amazing. It doesn't just look amazing; it is incredibly awesome that we can write computer programs that compute this in a reasonable amount of time. So, an absolutely beautiful phenomenon. Let's look at this as well: a fractal with subsurface scattering. I mean, how cool can someone get, fractals and subsurface scattering? It's like two of the best foods mixed together; it has to be something awesome. And another example, a beautiful jade dragon with just a bit of subsurface scattering. So that's going to be it for today. The next three lectures are going to be held by Thomas; these are all the exciting things that are going to be discussed. And then we will complete the Monte Carlo integration: I will tell you how it works exactly and how to use mathematics to see through these things, and then we will write our global illumination program. Thank you.
Excellent. So this was the hit-or-miss method. Why hit or miss? Because the ball that I throw is either below or above the function. Now, what we will actually use is the sample mean. The sample mean is different: I would like to integrate a function, and I can take samples of it. Samples here mean that I have f(x), I can substitute a number and evaluate the function there. So I don't know the integral, the function is too complicated, but I can evaluate it: at 0, at 0.15, at 0.2, and at other places like that. How do I compute the actual integral of the function from these samples? Well, we will take a look through an extremely difficult example, which is integrating x from 0 to 1. Let's solve this with multiple different methods. What does the mathematician do? Find a primitive function. What is the primitive function of x? x squared over 2. All we have to do is substitute 1 and 0, and therefore we get one half. So I know that I am looking for 0.5. What does the engineer do? The engineer knows that this is a linear function, therefore the integral is going to be the area of a triangle. What are the lengths of the triangle? The base is 1, because I am integrating from 0 to 1. The height is also 1, because if I go 1 to the right, I go 1 upwards as well, because this is x. So the area of the triangle is the base times the height over 2, and this is 0.5 again. Now we have the mathematician and the engineer; what does the Monte Carlo guy do? The Monte Carlo guy didn't study any of this mathematics at all, so he cannot do any of these. What the Monte Carlo guy is going to do is take samples of this function. So I evaluate f(x) at 0.2. How much is it at 0.2? Well, obviously 0.2; this is as simple an example as possible. What about 0.4? Well, at 0.4 it is 0.4. And so on. So I have taken four randomly chosen samples from this function, and this is called the sample mean: it means averaging. Let's take the average of all of these, and the average of all of these is exactly 0.5. So this gives me the actual, perfect result for an integral that I could otherwise not solve. Now, we can code this very easily, in just a few lines of code, and most of the excess lines are only there because of printing and whatnot, so you can see how small this is. The actual function I am interested in is the double f, and f(x) = x, so it's not really that difficult. What is the output of this program? After many samples I get very close to 0.5, up to quite a few digits. So this works really, really well. But there is something really interesting about this. If I draw one sample for this integral, then I have an overestimation of the result. Why? Because I'm looking for 0.5 and I have 0.87. What about 10 samples? Is this an overestimation or an underestimation? Ten samples. I wasn't paying attention to the actual sample values, because I was already thinking about the one-million-sample case. Damn it. Okay, so the question is: is 0.61 more than 0.5? To a good approximation. Is this 0.61 more than what? More than what? 0.5. Yeah, exactly. So this is an overestimation. Excellent. What about 100 samples? Let's see the output: it's an underestimation, somewhat below 0.5. Perfect, that's an underestimation. Okay, what about 1000 samples? Maybe an underestimation? Yes, it's an underestimation again. Okay, so this is also an underestimation. And this is a weird behavior, right?
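The little program mentioned above is not reproduced here, but a minimal sketch of the same sample-mean estimator could look like this (the seed and the sample counts are arbitrary):

    #include <cstdio>
    #include <random>

    // The function we want to integrate on [0, 1].
    double f(double x) { return x; }

    // Sample-mean Monte Carlo estimate of the integral of f over [0, 1]:
    // average the function values at uniformly random sample positions.
    double estimate(int sampleCount, std::mt19937& rng) {
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < sampleCount; ++i)
            sum += f(uniform(rng));
        return sum / sampleCount;
    }

    int main() {
        std::mt19937 rng(42);
        // Watch the estimate wander above and below the true value 0.5,
        // with smaller and smaller deviations as the sample count grows.
        for (int n : {1, 10, 100, 1000, 10000, 100000})
            std::printf("%6d samples: %.6f\n", n, estimate(n, rng));
        return 0;
    }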
It is weird because I have overestimations and underestimations of this integral, but in the end the deviations are going to get smaller and smaller. So this almost looks like a sine: if you like analysis, the convergence looks something like sin(x) times x. Is it? No, because that would get larger and larger; it is more like sin(x) over x, a sine that starts out with large deviations, a large amplitude, and then gets smaller and smaller. This is how the convergence of Monte Carlo estimators goes, and this we call, by the way, stochastic convergence: it means that the estimate can be above or below the integral, but as we add more samples it gets closer and closer. Let's have another example: let's integrate the function two times sine squared of x. You can see the function is not constant anymore, so the result will depend on where we sample it. What if, by bad luck, all the samples land in the same region over and over again? There is a probability of such behavior, and yes, this can happen, but it has a very low probability, because why would you hit the same region over and over again? And you can also do smart things like putting a grid on the function and sampling within that. But what you will see later is that we will have unbiased estimators, and this means that you can expect the error to shrink over time; this will be a couple of lectures down the line. Is everything fine with this? Good, that was a pretty remarkable question, and that is exactly how it goes. Okay, what does the mathematician do? Look for a primitive function. Excellent. What is the primitive function of sine squared of x? It is one half of x minus sine times cosine. Let's do the actual substitution, and we get pi altogether. What does the engineer do? Well, these are not triangles anymore, so you'd better look it up on Wolfram Alpha, and you will get something like this, and the result is again pi. So wonderful, engineering works. Okay, what does the Monte Carlo guy do? The Monte Carlo guy doesn't know Wolfram Alpha, doesn't know mathematics, doesn't know anything, but he has his 21-line C++ program. Let's take samples of this. What are we looking for, what was the end result? It was pi. Okay, so let's substitute this function, where the double f is now sine squared of x, and I also have this multiplier of 2 in line 35. On the right side you can see that this is what I was looking for; this is what we have changed. Now, just one more time: what am I looking for, what would be the perfect result? Pi. Okay, excellent. And I run this program, and it starts out maybe pretty well, 3.6, okay. And as I add more samples, I get one. Not pi. I get one. Okay. So I have been lying to you, I have been lying to you all along: this doesn't work at all, and we don't have the slightest idea why it doesn't work. That's one of the most important lessons of this course. Not because of this particular thing, who cares; you can study this thing and sort it out. But if you have a difficult problem, you start out trying to understand it with your intuition. You don't start throwing multidimensional integrals everywhere; you start out thinking about what is going on. There is a diffuse interreflection, there is scattering in the atmosphere, how does it look? You use your intuition, and your intuition can get you very far.
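For reference, the behaviour just described, the estimate settling at 1 instead of pi, can be reproduced with a sketch along these lines; this is not the 21-line program from the lecture, and why it converges to 1 is exactly what the upcoming lectures resolve:

    #include <cmath>
    #include <cstdio>
    #include <random>

    const double PI = 3.14159265358979323846;

    // The new integrand: 2 * sin^2(x), to be integrated from 0 to pi.
    double f(double x) { return 2.0 * std::sin(x) * std::sin(x); }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uniform(0.0, PI);   // samples over [0, pi]
        for (int n : {10, 100, 1000, 10000, 100000, 1000000}) {
            double sum = 0.0;
            for (int i = 0; i < n; ++i)
                sum += f(uniform(rng));
            // Plain averaging, exactly as in the previous example...
            std::printf("%8d samples: %.6f\n", n, sum / n);
            // ...and the printed values settle around 1.0, not pi.
        }
        return 0;
    }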
So in the integration of this f(x) = x, the intuition of the sample mean could get us the perfect solution. But there may be more complicated cases where your intuition fails. This doesn't mean that intuition is not useful, but it means that it can only take you so far. If you hit a barrier like this that you cannot get through with intuition, then that is the point when you start using mathematics: you start to evaluate what is going on, you start to look at the details. So use intuition to get an idea of what is going on, and then, if you run into obstacles, use mathematics to sort out the details. That's one of the most important lessons for when you go out there and try to study really complicated theories. So this doesn't work; I have been lying to you all along. How can we sort it out? Well, after the commercials we will know a bit more. The commercial will come in the form of Thomas, because he is going to travel to Japan for a half-a-year-long research project. He has a few lectures left, three of them in particular, and he has to hold them now, because he is going to take the plane afterwards. So the next three lectures are going to be held by Thomas. And I mean, the timing is a bit suboptimal, because I have to cut this lecture in half, but at least you know how Monte Carlo integration works, and he is going to tell you more about this. Then we will complete this unit: after the three lectures from Thomas, I come back and we finish this lecture, and we will know how to write a global illumination program, because this is exactly what we're going to do. I have implemented a complete global illumination renderer, and I think it is beautiful: it can compute beautiful indirect illumination and caustics, and I think it is 250 lines. It's readable, it's understandable, and many, many people have learned how to do global illumination from this program. So after the three lectures from Thomas, I finish this unit; that's one lecture. The lecture after that is going to be a code walkthrough: we are going to look through the code that I have written, how it works, how Fresnel's law is inserted, where I use it, how I do all these things. You will see everything in code. It's going to be very practical.
Let's go to Monte Carlo integration. I promise you something: if you learn what Monte Carlo integration is, you will never, ever in your life have to be afraid of integrals anymore. Never, I promise you, I give you my word. This is a simple method to approximate integrals. Basically, what we are looking for is this: we would like to integrate a function, and we can take samples of this function. What does that mean? We will check it out in a second. We take samples of the function and we would like to reconstruct the integral from them; if we do this, this is what is called Monte Carlo integration. It was founded during the Second World War by Stanislaw Ulam and his co-workers during the Manhattan Project, the atomic bomb project. They had unbelievably difficult integrals to solve, and they had to come up with a numerical solution in order to at least approximate them; this is what they came up with. There are two different kinds of Monte Carlo integration. I have this function f(x) and I would like to integrate it from a to b; this is a definite integral. What I can do is hit-or-miss Monte Carlo or sample-mean Monte Carlo. In 99.9% of the cases we use the sample mean, but just for the intuition, and to visualize what is going on, I will show you the hit-or-miss version as well, so we can see how we can take samples of this function. Let's take a look at this. This is the recipe for a wonderful Wiener Schnitzel; well, this is the recipe for Monte Carlo integration. You draw the function that you have on a piece of paper. You enclose it in a box whose size you know, and let the area of this box be V. You throw lots of random points onto this paper, and for every single point you determine whether it is above or below the function. Then you have a magical formula: you use this formula, and you get the integral. The more points you have on the paper, the better. I compute the ratio of hits, the points below the curve of the function, compared to all the samples that I have. How does it look? More or less like this. This immediately gives you the intuition: the reds are above the function, the blues are below the function. I would like to know the ratio of blues to all samples, because this gives you exactly what the integral means, the area below this curve. If I were on a summer holiday, I could have some beers and get the crazy idea to go up on top of my house and imagine that I have a pool of water; I would start throwing beach balls into this pool, and after doing this for long enough, I could approximate the value of pi. It sounds like black magic, but provided that the balls are small enough and I am patient enough, this can happen. What is the recipe? Let's go through it. Let's draw a unit square somewhere; the area of this square is going to be one. Let's draw a quarter of a unit circle inside this box; it is also of unit radius. Now we start throwing these points, and we compute the ratio: how much is inside and how much is outside. We multiply the result by four, and then we get pi. Now how is this? This doesn't sound like it makes any sense; this is black magic, and yet it works. Let's take a closer look at why. I would like to compute the integral, the area below this function, which is one quarter of a circle. What is the area of the circle? R squared times pi. R is one, so a quarter of it is pi over four. So what we are approximating here is pi over four: when I solve this integral, the result I get is pi over four.
What we need to do with this in order to get pi is multiply it by four. Sheldon Cooper would be proud of all of us. That's all there is to it. What if we have a function that is not 2D? This also works for multidimensional functions; you will actually compute such a thing in the next assignment, and it will be absolutely trivial. This is important, because the rendering equation is infinite dimensional, so whatever technique we use has to take care of high-dimensional functions somehow.
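A minimal sketch of the hit-or-miss recipe for pi described above; the point count is arbitrary:

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<double> uniform(0.0, 1.0);

        const int total = 10000000;
        int hits = 0;
        for (int i = 0; i < total; ++i) {
            double x = uniform(rng);
            double y = uniform(rng);
            // Inside the quarter of the unit circle? (below the curve y = sqrt(1 - x^2))
            if (x * x + y * y <= 1.0)
                ++hits;
        }
        // The hit ratio approximates the area pi/4; multiply by four to get pi.
        std::printf("estimate of pi: %.6f\n", 4.0 * hits / total);
        return 0;
    }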
And of course, if we have some super long light paths that are combinations of these, then obviously the ray tracer, or the recursive ray tracer, cannot take this into account. Why is that? That's the big question. Let's go back to the illumination equation and imagine that I hit a diffuse surface. What do I do? I tried to emphasize this earlier, but I will emphasize it again: I take the perfect reflection direction. It doesn't matter whether the surface is diffuse or specular, I take the perfect reflection direction. But if I do this, I have no idea about the surroundings of the object. I have no idea what is, for instance, above this diffuse plane. If there is some red object there, I don't shoot a ray towards it in order to get some indirect illumination, so I will have no idea about the surroundings of this object. Now, if I switch to global illumination, there is this integration, and part of the integrand is the incoming light, the incoming radiance. And the way I can integrate this over the hemisphere is basically by sending samples out in every direction in this hemisphere. If I do this, then I will know about the surroundings of the object: if there is a red wall or a red object nearby, then I will have samples of the incoming light, and therefore it will appear in the color of the object. This is fundamental; this is the important way to understand why ray tracers are missing these effects. Now, let's talk about the real deal, the real, physically based BRDF models. How does a diffuse BRDF look? It looks like this. So f_r is the BRDF; omega and omega prime are the incoming and outgoing directions; x is a point on the object. These are probabilities. Now, this is weird, because I am used to formulas with variables in them, and this is just a number. What do I do with this number? It's 1 over pi; does this even make sense? Can someone help me out with this? So, this 1 over pi means the following: if this probability distribution is a constant, remember that this is the distribution of the possible outgoing directions. Imagine the scenario up here, where you have an incoming direction. If I have a completely diffuse material, it means that it diffuses the incoming light in every direction, so all possible outgoing directions on the hemisphere have the very same probability. And if they have the very same probability, then this should be a number, the whole BRDF should be a constant, because whatever directions I specify here, I get the same value. And I can scale this 1 over pi with rho, which is the albedo of the material, because not all materials reflect all light; in fact most, if not all, of the materials we know absorb some amount of light. This is again a number, and it can be wavelength dependent, because it depends on how much you absorb on the red channel and how much on the blue channel. It could even be zero, and then you have a black body, something that absorbs everything. So the albedo is going to give the color of the object, and we specify it between 0 and 1. Okay, the next question is: is this a probability distribution function? Of course it is. Why? Because it integrates to 1. There are some other rules that we're going to disregard for now.
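As a side note that is not on the slides: in the literature this normalization is usually checked together with the cosine term of the rendering equation, which is one way to see where the 1 over pi comes from; the simpler rectangle argument that follows in the lecture captures the same idea. In LaTeX form:

    % Energy conservation for the constant (Lambertian) BRDF f_r = rho / pi,
    % integrated over the hemisphere Omega with the cosine-weighted measure:
    \int_{\Omega} \frac{\rho}{\pi} \cos\theta \,\mathrm{d}\omega
      = \frac{\rho}{\pi} \int_{0}^{2\pi} \int_{0}^{\pi/2} \cos\theta \,\sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi
      = \frac{\rho}{\pi} \cdot 2\pi \cdot \frac{1}{2}
      = \rho \le 1 .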
With respect to being a probability distribution function: how much does it integrate to, and is that integral 1? Why? What does the engineer say? Well, 1 over pi integrated from 0 to pi: what does that mean? I have a rectangle with a height of 1 over pi and a width of pi. What is the area of the rectangle? Multiply the two sides, a times b: a is pi, b is 1 over pi, multiply the two and you get 1. So this is indeed a probability distribution function; good to go. What about specular BRDFs? These are what describe mirrors. How can I write such a BRDF? It's a bit trickier, because it is fundamentally different from diffuse materials. Why? A mirror does not diffuse the incoming light in all possible directions; only one outgoing direction is possible. I see only one thing in the mirror, not a mixture of everything like on the walls. So one outgoing direction is going to have a probability of 1, and every other choice has zero probability. And this is indeed a probabilistic model; it can be described by a delta distribution. A delta distribution means that one guy has a probability of 1 and everyone else has zero, so it's like elections in a dictatorship. Is this a probability distribution function? It is, although I could argue about that for a while, and I'm going to talk a bit more about it later. But let's say for now that it is, because it is one for one outgoing direction and zero for everybody else, so we have the one that we're looking for. And there are also glossy BRDFs. We haven't really been talking about these; in my first lecture there was some BRDF which was called "spread" on one of the images, but I asked you to forget that term immediately. Glossy is a mixture of the two: it is not like a mirror, but it's not like a completely diffuse material either, so there is some view dependence. Diffuse materials are completely view independent, mirrors are completely view dependent, and glossy sits in between. It is possible that there are some glossy materials in this scene. Can you find them? Raise your hand if you see at least one. Many of you, okay. Yes? The cupboard? The cupboard, excellent. Yes, anything else? Do you mean this? No? The floor? No, the glass of the cooking stove, the stove top? Oh yeah, exactly, that's also glossy. So there are many examples; I think the better question would be what is not glossy in this scene. And the table you are sitting at is also glossy: it is a bit view dependent, it is not a mirror, but it's not completely diffuse either, and it also shows the caustics a bit, so it is not purely diffuse. Okay. The next question: it looks good, but the mathematician asks how accurate this is. We have these two images. One of them is generated by means of global illumination, by solving this equation, and the other one is a photograph. Do you know which is which? Raise your hand if so. Okay, one person. Two. Okay, I'm going to spoil it and tell you which one is the photograph and which one is the rendered solution. Look at this part; this is the difference that you can see, for instance, because this is an actual box that the guys put together at Cornell University, and in the photograph you can see not only the box but also what is next to the box, whereas in the global illumination image these surroundings are not modeled, just the Cornell box itself. So that is how you can tell.
Yes, in that sense this can be distinguished from a photograph. But if you look at the actual scene, it is very beautiful, and if everything is perfectly implemented, then this is so close to physical reality that it is literally indistinguishable. So this is really amazing, that we can do this. Whatever you see out there in the world, we can model with this equation. There are exceptions, because there are wave effects such as diffraction and the like, but these are rare; I mean, there are butterflies that look the way they look because of interference and such effects. But 99% of what you see can be modeled with this equation, and the rest can be handled by more sophisticated methods. So, back to the previous question: what is the dimensionality of the rendering equation? Let's try to think it through and we will see. Just for now, imagine that I shoot a ray out from the camera and I hit a diffuse object. I need to sample this hemisphere exhaustively. This is not how the final algorithm will work, but technically this is what I need to do: all possible outgoing directions have the same probability, so I need to shoot many of these outgoing rays. Now, I will hit more diffuse objects after the first bounce, and I have to exhaustively sample all of those as well, and if I take this other ray, I also have to do this, and so on and so on. Until how many bounces? We concluded previously that we have to take into consideration an infinite number of bounces. So this is definitely very difficult, because the incoming light that I am sampling the hemisphere for is itself given by another rendering equation. Imagine that into this L_i you can insert another one of these equations, but that equation will also contain its own integral and its own L_i, and inside that there is yet another rendering equation. So it is an infinitely long sequence of integrals; therefore, this is infinite dimensional. Now, I told you before that this is also singular. This is not such a bad thing, but it comes from the possibility of specular BRDFs. The specular BRDF is some kind of delta distribution, and delta distributions are not really functions. In signal processing you may have studied this, and the first thing they tell you about it is that it is not a function; it can be defined in terms of a limit. You can, for instance, imagine a Gaussian curve, and you start pushing this Gaussian from two sides, so it becomes a larger and larger, thinner and thinner spike, and you do this until you have an infinitely thin spike. Now, if you check the properties of a function against this, you get something that has little to do with a function. That's a singularity: there is an infinitely quick jump in there. And we need to handle this somehow, because in the integral we can only deal with actual functions. So let's just solve this trivially by handling the specular interreflection explicitly. What does that mean? It means that if you have a specular interaction, you're not going to play with probabilities; you are just going to grab, like in a ray tracer, the perfect reflection direction as the outgoing direction. No probabilities. Now, a beauty break. We have some scenes that are ray tracing in the truest sense, because the image is created by means of ray tracing and there is literally one ray of light being reflected here many times. So, awesome laser experiments with LuxRender.
We will try things out like this, a bit later during the course. And another example. It's amazing what we can do with these algorithms.
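Going back for a moment to the explicit handling of specular interactions: here is a minimal sketch of how that branch might look inside a path tracer, with placeholder vector and material types; this is an illustration of the idea, not the course's actual renderer, and the hemisphere sampler is the simplest (uniform) one:

    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };
    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(const Vec3& a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Perfect mirror direction for an incoming direction pointing towards the surface.
    Vec3 reflect(const Vec3& in, const Vec3& n) { return in - n * (2.0f * dot(in, n)); }

    // Uniform random direction on the hemisphere around the surface normal (rejection sampling).
    Vec3 sampleHemisphere(const Vec3& n, std::mt19937& rng) {
        std::uniform_real_distribution<float> u(-1.0f, 1.0f);
        Vec3 d;
        do { d = {u(rng), u(rng), u(rng)}; } while (dot(d, d) > 1.0f || dot(d, d) < 1e-6f);
        d = d * (1.0f / std::sqrt(dot(d, d)));
        return dot(d, n) < 0.0f ? d * -1.0f : d;   // flip into the upper hemisphere
    }

    enum class MaterialType { Diffuse, Specular };

    // At a surface hit, pick the direction in which the path continues.
    Vec3 nextDirection(MaterialType type, const Vec3& incoming, const Vec3& normal, std::mt19937& rng) {
        if (type == MaterialType::Specular)
            return reflect(incoming, normal);     // delta BRDF: the single possible direction, no sampling
        return sampleHemisphere(normal, rng);     // diffuse BRDF: all hemisphere directions equally likely
    }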
Okay, so we have these two guys in the ring, and we already know how to solve the illumination equation. In the illumination equation we don't measure radiance, we measure intensity, which is not really a unit in physics; it's just some hacked-together thing that happens to work. In the rendering equation we measure radiance, and we have to do some kind of integration, and the more you think about it, the more impossible even the thought of solving this problem sounds. So the first question is: what can I gain by solving this equation? Because I have to be motivated to do it; the result had better look really good in order to give me the motivation and the resources to solve it. This is an image from the first assignment, and we computed it with recursive ray tracing. You can see, for instance, hard shadows. You can also see that this is a reasonably boring image. I mean, it's great compared to the simplicity of the model that we have, but it's not really the greatest. Well, what is missing? Let's take a look, and look very closely. Let's take a look at the very same scene, but not with recursive ray tracing, with a global illumination algorithm, so not the illumination equation but the full rendering equation. Take a look at the difference. Look closely. This is full global illumination. Finally, absolutely beautiful. Let's take another look: this is recursive ray tracing, and this is global illumination. So apparently there are some effects that recursive ray tracing cannot account for. What are these effects? Well, we have talked about indirect illumination, or color bleeding; it is the very same thing. It means that I am hitting two diffuse objects, one after the other. Is this visible enough? Okay, I'm just pulling a bit on these curtains so you can see better. Okay, perhaps a bit better, right? Yes. So these are, in this case, LDDE paths. What does that mean? Everyone knows by now: you start out from the light source, you hit two diffuse objects and you hit the eye. Excellent. Now, indirect illumination is all around us, everywhere, both in the real world and in the better computer games out there, which have approximations of indirect illumination. And you can see that the left image almost looks like Photoshop: it is completely alien to its surroundings, almost as if it didn't take its surroundings into consideration at all. Unless you are standing in the middle of the desert, you would have to have some color bleeding from your surroundings. And this is usually the problem with many Photoshopped images: you just grab a person from somewhere, you put them into another photograph and it looks super fake. Yes, mostly because of the illumination conditions, but even if you try to account for that, even if you try to recolor it to have more or less the same color scheme as the rest of the photograph, you are still missing the indirect illumination effect. And the human eye is very keen at recognizing that: you recognize that something is wrong, but you don't know what exactly is missing, and it is usually the indirect illumination. But there's something else. Let's take a look at this scene with recursive ray tracing. We have refractive materials: for instance this glass sphere on the left, the mirror sphere in the middle, and the completely diffuse sphere on the right. Let's take a look at how the very same scene looks with global illumination. This is the difference.
One more time: recursive ray tracing, and global illumination. Like we talked about before, I can see the difference in the indirect illumination: in the upper left some of the red color is bleeding onto the other wall, and the very same happens with the green wall in the background, and also with this diffuse ball. So even a simple diffuse sphere looks much more interesting and much more beautiful with global illumination. Don't say anything. But I see something else as well, not only indirect illumination; I see another effect in this image that I couldn't compute with ray tracing before. Don't say anything. Raise your hand if you know what I'm talking about. Excellent, almost everyone. Don't say anything. Okay, I'm talking about this. And this. This interesting light effect on the wall and below the glass sphere. So raise your hand again if you know what this is exactly. Don't say anything, because so many people know that you will all have to say it at the same time, after three. Got it? Okay. So everyone: one, two, three, what is this? Okay, and what were the other guesses? Refraction? That's technically true, but that's not what we call the effect. Anyone else? Okay, this is what we call caustics. So what kind of light path is this? This is an interesting one. In this case it is LSSDE, because we start out from the light source, we hit the glass sphere from the outside, then we have refraction, we hit it from the inside, and then we hit some diffuse object, which is either the checkerboard down there or the red wall on the left, and then the eye. And if we have this kind of path, then we are going to have caustics. It's a beautiful, beautiful phenomenon in nature that we can finally account for, and you can see it in many, many places. Now, let's take a look at another example. This is the famous school corridor scene from LuxRender. Okay, we have recursive ray tracing and global illumination. You can see lots of indirect illumination, this reddish light on the floor, and perhaps some caustics, or at least caustic-looking things, in front of the windows. Okay, so next question: what is the definition of shadows again? What we said before is that shadows are regions that are not visible from the light source. Now, an alternative definition of shadows is the absence of light, and this is the definition we will use in global illumination. So you could say that there is no such thing as shadows; a shadow is not something in itself, it is just the absence of something else. If there is less light somewhere, then there are going to be shadows. So this is the definition of shadows in global illumination and in Zen culture. And take a look at this image: we can see some beautiful, beautiful soft shadows. The thing is that you don't even need to do anything to compute them. If I have a ray tracer, what do I do? I shoot out shadow rays from these regions and I try to approximate what fraction of the light source is visible from each point. In global illumination you don't need to do anything: you just solve this equation and out comes physical reality, and shadows are part of physical reality. You don't need to do anything special to obtain shadows. It's not like the bottom-up approach of ray tracing, where you start from a baseline and you add more and more hacks to account for more and more effects.
And with global illumination, you will see that we have one simple algorithm that can give you all of this, and you don't need to account for shadows and caustics and all of these things separately. Another beautiful example of caustics. These are caustics from a point light source; you can tell, for instance, by looking at the shadows: the shadows are hard, so it is likely a small or point light source, and the caustics are very sharp. So caustics behave with respect to large light sources the same way shadows do. And another beauty with caustics. Okay, so let's assess what these recursive ray tracers are capable of doing and what they are not. Well, obviously they cannot compute indirect illumination. Indirect illumination means two diffuse bounces, or possibly more; this you cannot compute correctly, and we will talk about why. And you cannot compute caustics. Well, for caustics I wrote a few slides ago that they were LSSDE, so two specular bounces, and that was because you have to go through the glass sphere. And here I am writing something completely different: I am saying that one specular bounce is necessary and the rest are optional. Is this true, and how can we verify that it is true? In order to find out, I don't even need to say a word; I can just do this. You see the caustics? This is one of the caustics right here. Can you two see it? Yes? Excellent, please take a look. And no one steals my wedding ring; my fiancée is going to kill me. Okay, you two have seen it. Okay. Nice. Beautiful. I was going to say that I would pass it around, but better not. Okay. So apparently rings have caustics. Well, I start from the light source, I hit one specular object, one mirror-like object, then a diffuse one, which is the table, and then the eye, and I have caustics. So LSDE is enough for caustics. There's no need to prove it in any other way; just take a look at physical reality and let it be your touchstone.
How was the Easter break? Nice, but too short? Come on, we discussed this already. What would be an appropriate length? Longer? Two weeks? More weeks? Maybe the entire semester, yes, and we just extend it every time by two more weeks or something. Not bad. Well, I don't know about Austria, but in Hungary people usually go around to their friends and relatives, many people come over, you hang out with other people, and you always have to drink their stuff. So you go there, and we have this drink that's called pálinka. It's something like schnapps, but way stronger, and when I told this to some Austrian people they were like, oh my god, stronger than schnapps, how can that be? Very easily, believe me. So that's how it works. You go to the very first place and you have to drink their home-brewed, awful pálinka; usually it's very awful, and you even have to say something about it, because they are looking at you: what will be the reaction? So you drink it, that's it, but you have to say something good about it. So you say: wow, that's really strong. And most people seem to be satisfied with that, so this is usually what I say. But then you are at, like, the fifth station in ten days, and some people just don't take no for an answer, unfortunately. So this is how it goes. Is it any better in Austria? It's more of a family thing? Okay, yeah, the family part is actually the nice part, so you can decide whether you drink your own stuff or not. But I mean, my fiancée's grandfather attempted to make some brew, some hard liquor, at home, and, well, he did something; something was certainly created in the process. But after tasting it, even the postman didn't want to drink it. I don't know about postmen in Austria, but in Hungary they are really hardy people, so they drink whatever they find, because obviously you don't give the good stuff to the postman, you give him the leftovers, the kind no one else drank, oh, this is really good for the postman, and he's happy with that. And imagine: even the postman didn't want to drink that anymore, and the next time we saw him around the house, he just stopped in front of the main door. We waved to him: hey, come, we have some for you. No, no, no, I'm just going to put the mail in here. Immediately. Okay, so regarding the assignments: you guys and girls have done really well, and I'm very happy to see that. People realized that there is some exponentiality with respect to the depth of the recursion, the depth of the simulation: the deeper you go, the more exponential things become. I mean, it's exponential all along, but you don't notice this at first because it starts off slowly; after, like, 10 to 15 bounces, though, you can see the very telling characteristic of this exponential growth. And many of you recognized correctly that this is because reflection and refraction are sampled all the time: whenever I have a bounce, I compute an intersection, and then there are going to be two rays, perhaps, that continue on their way, because one is the reflection direction and one is the refraction. And this quickly gets out of hand, because for every ray you get two more, and that's the definition of something that is exponential. So, well done.
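To make the "for every ray you get two more" point concrete, here is a tiny sketch that only counts the rays such a branching ray tracer would generate for a single primary ray in the worst case (both reflection and refraction at every bounce):

    #include <cstdio>

    // Rays traced for one primary ray if every bounce spawns both a reflected
    // and a refracted ray: 1 + 2 + 4 + ... + 2^maxDepth.
    long long rayCount(int maxDepth) {
        long long total = 0, raysAtThisBounce = 1;
        for (int depth = 0; depth <= maxDepth; ++depth) {
            total += raysAtThisBounce;
            raysAtThisBounce *= 2;          // reflection + refraction
        }
        return total;
    }

    int main() {
        for (int depth = 1; depth <= 20; ++depth)
            std::printf("depth %2d: %10lld rays per pixel (worst case)\n", depth, rayCount(depth));
        return 0;
    }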
Let's proceed a bit, and I'm going to talk about some more advanced BRDF models that are mostly used with ray tracing. You remember this Lambertian BRDF that you see on the right: it is the scalar product between L and N, the light vector and the normal, and obviously you can scale this with k_d, which is some kind of diffuse albedo. Now, if you put it next to a real image of a diffuse material, it is of course a question what exactly we call a diffuse material and how this BRDF was measured, but let's disregard that and accept that we have this difference between the two. And if you take a good look, it becomes apparent that at grazing angles the simulated diffuse material seems to go completely dark. If you look at the formula up there, this is self-explanatory, because the normal and the light direction can become perpendicular, and then you see this darkness. So there are some advanced BRDF models that try to be a bit more realistic in this regard. One such example is the Oren-Nayar model, which is much closer to what you would measure in real life. But let's note that all of these simplified BRDF models are hacks: this is not what physical reality is. People write up the actual equations that relate to physical reality and try to simplify them in a way that a simple ray tracer can capture. We are going to talk about global effects and what a real diffuse material looks like in a few minutes. So this Oren-Nayar model seems much better, and what's more, it can take into consideration microscopic imperfections in different materials: it has a roughness parameter that can model them. What about specular models? Well, the Phong model, the V dot R one that we have talked about, is not the only way to do it. There is also the Blinn-Phong model, which is a more advanced model that uses H, the half vector between L and V, and it produces different results. I think this image is maybe not the best, because, yes, the highlights are different, but one of the main advantages of this material model is that the shape of the specular reflections can get a bit more elliptic depending on the viewing direction and the surroundings, and here you have the very same circular highlight. So it's not the best example, but you can see that it is different, and it looks more realistic. And we still have to keep in mind that while these are reasonably good models, they are still hacks. There's also the Cook-Torrance model, which is basically a Blinn-Phong-style specular model that can also handle microscopic roughness. Here, with the projector it is maybe not so visible, but you can see that the specular reflection is a bit more uneven: it's not a perfectly round highlight, there are these small imperfections that are characteristic of such materials. This is what this model can capture. And there are some other advanced BRDF models, some of which are easier to understand and implement than it is to pronounce the names of their authors; this is one of those examples. This is some kind of multilayer model where you have a diffuse substrate and a specular coating on top. There are also BRDFs for car paint, where you can have these sparkly effects. So there are many BRDF models that capture given effects.
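As a small illustration of the half-vector idea, here is a sketch of a Blinn-Phong style specular term next to the classic Phong one; the vector type is a minimal placeholder, and all input vectors are assumed to be normalized and pointing away from the surface:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 normalize(const Vec3& v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }
    Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(const Vec3& a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Classic Phong: compare the view vector with the mirrored light direction.
    float phongSpecular(const Vec3& n, const Vec3& l, const Vec3& v, float shininess) {
        Vec3 r = n * (2.0f * dot(n, l)) - l;                    // reflection of L about N
        return std::pow(std::max(0.0f, dot(v, r)), shininess);
    }

    // Blinn-Phong: compare the normal with the half vector between L and V.
    float blinnPhongSpecular(const Vec3& n, const Vec3& l, const Vec3& v, float shininess) {
        Vec3 h = normalize(l + v);                              // the half vector
        return std::pow(std::max(0.0f, dot(n, h)), shininess);
    }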
Okay, what if one would like to play with these? The Disney guys have implemented a program called the BRDF Explorer. You can load many local BRDF models, change the light source positions, and look at the actual BRDFs and their impulse responses. Give it a try. So, we have always been talking about cameras, and we are trying to model real-world cameras. If you have a handheld camera, you will see a setting called the f-stop. The f-stop is related to the size of the aperture, and the aperture is the opening of the camera where the light comes in. You can set it to different values, and you will notice that if you set the f-stop to a high value, then the aperture of the camera becomes smaller, and if it's smaller, less light is let in and more of the image is going to be in focus. And vice versa: if you have a low f-stop setting, then you have a bigger aperture, more light is let in, and more regions will be out of focus, which gives you this depth-of-field effect. Whatever images we have created with our ray tracer so far don't have this depth-of-field effect, but if you want to model a handheld camera, then somehow you have to model this effect as well, because this is how an image is formed in the real world. So this is a nice chart made by photographers showing how exactly these f-stops relate to an aperture size, what the typical settings are, and all these interesting things. And an actual example: take a look at the bottom right, where the whole image is in focus, and as you adjust the f-stop accordingly, you can see at the top left that the background is heavily blurred; this is a more pronounced depth-of-field effect. It would be wonderful to have a ray tracer that can take this effect into account. And this is another, maybe a bit more beautiful and a bit more visible, example: on the left side you can see a very pronounced depth-of-field effect, and on the right close to everything is in focus. Yes? Do we also use the word bokeh in computer graphics? Yes, and there is also a bunch of papers on how to simulate this effect; people even try to compute it in real time. So you have, say, a computer game and you would like to see this bokeh effect: how do you do it? You have to take the depth into consideration; if you know where the objects are, exactly how far away, then you can do a bunch of tricks to get an approximation of something like this in real time. And if you do what I'm going to show you in a second, then you will have the very same effect in your ray tracer: on the left a completely in-focus image, on the right you can again see the depth-of-field effect, especially in the background; the further away you go, the more you see it. So how do we do this? Very simple. Let's skip the text; I just put it here so that people who read this at home will know about it. Most of the time we shoot a ray through the midpoint of the pixel in our ray tracer, and this ray is going to pass through a focal point and then hit an object. What we can do is also take samples from nearby, so not only from this pixel and not only through the midpoint, but from nearby, and shoot all of these rays through the same focal point and compute the very same kind of samples, only from slightly different positions. What we do with these samples is average them, and this is what gives you the depth-of-field effect. So this is already some kind of integration.
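A minimal sketch of this idea of shooting several nearby rays through the same focal point and averaging them; the camera setup, the square aperture and the trace() call are simplifying placeholders rather than the real thing:

    #include <random>

    struct Vec3 { float x, y, z; };
    Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(const Vec3& a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    struct Ray { Vec3 origin, direction; };
    Vec3 trace(const Ray& ray);   // hypothetical: returns the radiance seen along the ray

    // Depth of field for one pixel: jitter the ray origin over a small aperture (here a square,
    // assuming the camera looks roughly along the z axis), aim every jittered ray at the same
    // focal point, and average the results.
    Vec3 depthOfFieldSample(const Vec3& eye, const Vec3& pixelDirection,
                            float focalDistance, float apertureRadius,
                            int samples, std::mt19937& rng) {
        std::uniform_real_distribution<float> jitter(-apertureRadius, apertureRadius);
        Vec3 focalPoint = eye + pixelDirection * focalDistance;   // points at this distance stay sharp
        Vec3 sum{0.0f, 0.0f, 0.0f};
        for (int i = 0; i < samples; ++i) {
            Vec3 origin = eye + Vec3{jitter(rng), jitter(rng), 0.0f};  // nearby starting point
            Vec3 direction = focalPoint - origin;                      // aim at the shared focal point
            sum = sum + trace({origin, direction});                    // a real tracer would normalize this
        }
        return sum * (1.0f / samples);   // objects near the focal plane converge, everything else blurs
    }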
Normally, if you run a plain ray tracer, you get a completely converged image without any noise, without any problems; that image you can consider done. This is a speciality of ray tracers, and it will not be like this with global illumination. But if you add an effect like this, then you may have to wait until more and more of these samples are computed, and the image becomes smoother and smoother. We will talk about this behavior extensively; it is going to be very important. And just one more question: what kind of material model can this be? Obviously this is some quick, perhaps OpenGL, preview, but it is very apparent what I see here. What kind of shading is this? Is it specular? No, there are definitely no specular highlights here. What else? Diffuse, yes, exactly. This is the Lambertian model, and you can also see the effect that it goes completely black at grazing angles.
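That grazing-angle darkening follows directly from the cosine term; a one-function sketch, with the vectors assumed to be normalized:

    #include <algorithm>

    struct Vec3 { float x, y, z; };
    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Lambertian shading: proportional to the cosine between the surface normal and the
    // light direction. As the two become perpendicular (grazing angles), the cosine goes
    // to zero and the shaded value goes completely black.
    float lambertian(const Vec3& normal, const Vec3& lightDir, float kd) {
        return kd * std::max(0.0f, dot(normal, lightDir));
    }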
So let's talk about the assignment. We are going to play with Paul Heckbert's business card ray tracer. This is going to be an enlightening experience, because it is called a business card ray tracer, which sounds really good: the whole code can fit on your business card. Well, you will see that this does not mean that it is well suited for educational purposes, so it's not very easy to understand what is happening in there. In the package I have also included a version that is a bit more eye-friendly. When we get to global illumination, we are going to use a simple global illumination program called smallpaint, which I wrote myself in a way that maximizes both understandability and conciseness. So here is the first part, which is not expected to be so convenient, and then the global illumination program will be much more pleasant. To compile this, use whatever tool you wish; I'm not going to confine you to any given tool. This is a one, maybe two, file ray tracer, and in the zip file I have put a compiler for Windows, which is called MinGW. This is a port of different Unix compilers to Windows, and they work quite well, so you can use this if you are a Windows user and you don't like Visual Studio, you don't have Visual Studio, or you would like to try GCC on Windows. If you would like to use something else, that's fine, and if you're a Linux user, you're a power user anyway, so you can deal with it yourself; but if you need help, just write to me. So what is the practical part? Get this file, and there are going to be instructions in the readme. Let's make images with different maximum depth values; this means how many bounces we are computing. Do this for one to five bounces and see the difference in the output images. The question is: what did you experience visually, and why? There is going to be an observations text file, and that is where you need to write these things down; I'm going to discuss the format on the next slide. And what I would also like you to do is crank the depth variable up to a really large number and see what happens, because maybe interesting things will happen. The question is: what is the dependence of the runtime, the execution time of the algorithm, on this depth variable? Does it change abruptly at a given point or not? And, as usual, there is a set of questions for pros. If you feel like a pro, you should definitely answer these. Sometimes these are really difficult questions, sometimes not so much. When I first held this course, I thought that maybe 10 to 20% of the people would want to try them, because they are really interesting exercises, and to my surprise almost 70 or 80% of the people in the first two years of this course did all the pro exercises. Some of them even came up with exercises of their own, because they thought, yeah, this is so much fun, I changed the code this way, look what I got, look what happened there. And if you find out something amazing, then show it to me, so I can also marvel at it. So, the set of questions for pros: these give plus points for the exam, and if you don't do them, you can still get the maximum number of points for the assignment. Let's take a look at them and try to do them, because it's a really interesting journey. The first one is: what is the algorithmic complexity with respect to this depth variable?
So this means that I can not only measure the execution time of the algorithm with respect to this depth, but I can also write up the complexity of the algorithm with the big O notation. The big O notation is something that tells you the complexity of algorithms with respect to their variables. Let's go back to algorithms and data structures: Dijkstra's algorithm, if I remember correctly, is of quadratic complexity. Is it? I think it is. So it's really favorable. It means that if I have a larger city where I need to find the best route between two points, then the runtime of the algorithm is not going to blow up too badly. If it's n squared, it means that if I double the size of the city, the algorithm is going to run four times as long. So I would like to know the complexity of this algorithm of yours with respect to the depth, in this big O notation. The second pro question is: what could we do to make this more favorable? Whatever weird examples and ideas you have are welcome. And if you did make some change, then tell me the new complexity of the algorithm. And in the regular set of questions: play with the AOV variable. It's very easy to find out what it does. The question is, what did you experience and why? And just a note that there is, I think, a more readable version of the same C++ code in the zip file. The format of the table that I would like to see is the following: we take the different depth values, the maximum number of bounces, and I would like to know the execution time in seconds. This is a text file. But after filling in such a text file, I would like you to plot it with whatever tool you have, I don't mind. If you like gnuplot, use that. If you like Wolfram Alpha or anything else, whatever, I don't mind. And please put a PNG file of the plot in your solution as well. And one more thing: a set of light paths is waiting for you. Please mark on this image where exactly the camera is, and please denote what kind of light paths I have here. And please tell me in a few words whether I see these light paths or not. So for instance, I definitely see a light path that is LE, because I can see the light source: the ray that connects the eye to the light source is definitely accounted for, because I can see the light source. So what about the other light paths? Save it as a PNG or JPEG file. And these are the names of the different files that I would like to see in your submission, and this is how the submission itself should be named. About the deadline, I don't know yet. Apparently, Easter is coming. When I first held this course, I told people that, well, next week there's going to be another lecture. And they said, well, not really, because there's the Easter break. And I come from Hungary, where the Easter break is one Monday. It means that on Monday you get drunk and on Tuesday you go back to work with the hangover. And then they told me that it's not only Monday: next Wednesday is going to be skipped as well because of the Easter break. And I said, well, maybe they are fooling me, but maybe it's true, I don't know. Okay, then the Wednesday after that. And they said, uh-uh, not even that. And I was like, I'm surely being trolled by like 20 people at the same time. And then they told me that the Easter break in Austria is two weeks, at least at the universities. And I was like, that's amazing, because on that one Monday of Easter break in Hungary, everyone is drunk. It's ridiculous.
Like the whole city goes crazy. And in Austria, I imagine the same may happen, but for two weeks. It's an amazing country. Thanks for your attention and see you sometime. I will announce when the next lecture happens. Thank you.
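As a small practical aside for the assignment above: one possible way to collect the depth-versus-runtime table is sketched below. The render() function here is only a stand-in, not the actual code from the package; in practice you would call whatever renders one full image with the given maximum ray depth.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Stand-in for the real renderer: in the assignment, replace this with the
// call that renders one full image with the given maximum ray depth.
void render(int maxDepth) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100 * maxDepth));
}

int main() {
    for (int depth = 1; depth <= 5; ++depth) {
        auto start = std::chrono::steady_clock::now();
        render(depth);
        auto end = std::chrono::steady_clock::now();
        double seconds = std::chrono::duration<double>(end - start).count();
        // This gives the depth vs. execution time table the assignment asks for;
        // redirect the output to a text file and plot it with any tool you like.
        std::cout << depth << "\t" << seconds << " s\n";
    }
}
```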
Now, let's jump back to recursion. If I would like to compute multiple bounces, I need to handle this somehow, and we have talked about this briefly. If I intersect the first object, I need to reflect the ray off of this object in some way. And after the shading is done, the diffuse, specular and ambient shading, then we can trace the light further. This tracing step can be both reflection and refraction. Remember Snell's law and the Fresnel equations; we are going to put them to use in a second. But what I need to tell you is something super weird, although you won't yet feel why this is weird. So, in a recursive ray tracer, not global illumination, not indirect illumination and these goodies that we are going to start with next lecture, but in a ray tracer: if you encounter a mirror, an ideal specular reflector, you will bounce the ray back in the ideal reflection direction, exactly what you would see in the mirror. If it comes in at 45 degrees, it goes back at 45 degrees. And you do the same with diffuse surfaces as well: you continue the ray in the ideal reflection direction. Now this sounds reasonably okay. But when you study global illumination and how the whole thing is really done, how real men compute the rendering equation and images, you will see that this is a super, super huge simplification. I remember the faces of students when they think back to this, when they already know all about global illumination and we talk about simple recursive ray tracing, and I ask them how a diffuse surface is reflecting here. Because they know that in global illumination it is completely natural that a diffuse surface reflects light in every direction with the very same probability. The perfect reflection direction has the same probability as going back in the direction the light came from; all directions have the same probability. And then suddenly our ray tracer says that even a diffuse object is going to be treated as a mirror. So this is going to be super weird, and please remember that I said this when you look at ray tracers again after global illumination. Now how does the recursion work? I hit something, I reflect away. The ray always goes in the perfect reflection direction, and I restart the whole process: I start to trace a new ray that starts on the object. Remember, this is where you can get the self-intersection at t equals 0. I increment this ray depth value to note how many bounces I have computed so far, and I start again. So I start a new ray; I imagine that this object is now the camera and this is where the ray starts from. Is there a question? Okay, so we got everything. How does it look in terms of mathematics? This is the illumination equation without recursion, but now I need to add one more term. Let's quickly recap. The first term is the ambient term. This is there to warm up the completely black shadows; it is basically a bunch of hacks that happens to look good, and we are going to be okay with that for now. Then we scale the amount of incoming light with the material properties, a diffuse and a specular shading model. These are weighted by kd and ks, values that tell you how diffuse or how specular this object is. And not only how specular it is, but also what color it absorbs and what color it reflects. So what is the color of the object, what is the diffuse and specular albedo of the object?
I'm using so many terms not because I'm inconsistent, but because people use all of these terms, and therefore you should be familiar with them. And there's some weird stuff that I have now added, and this is the recursion part. So kt is the transmission coefficient; this is the probability of refraction. Because you remember that I can hit this air-glass interface from different directions, and if I hit it from different directions, then the probability of reflection and refraction is different. So depending on the incoming direction, as you have seen, the laser is stronger in one direction than the other, and we have to account for this with these transmission coefficients. And It is the intensity that is coming from that direction. The kr and Ir are the other way around: if there is reflection, not refraction, then I go in that direction, and I scale this with the intensity of the incoming light from the reflection direction. A quick example: what if I have a glass ball that's blue, some kind of glass that looks almost entirely blue? Then this transmission coefficient is going to describe a color that is blueish, and therefore all the energy that comes through this ball is going to be colored blue. The reflection coefficient can be whatever; we are now interested in the transmission. So this is how I can define materials like that. This is the recursion part, and for this I may need to start multiple rays. If I hit this object and I say, hey, this is a transmissive object, this is glass, what do I do? Because there is usually a positive probability for both reflection and refraction. Do I start two recursions? Do I start two new rays? What do I do exactly? In the assignment that I'm going to talk about, you will see a piece of code that does something, and then you will see the effect of that something. I'm not going to spoil anything. And now a quick introduction to Heckbert notation. This is important because if you know this kind of notation, then you will be able to discuss which ray tracing algorithm can render which kinds of light paths. So, as a baseline, all light paths go between light sources and the eye. If it doesn't hit the lens of the camera, it's not going to be recorded in the image. So every light path is going to be written as L, something, something, E. Or, since this is bidirectional, you can imagine it the other way around and say E, something, something, L. D denotes one diffuse interreflection along the way, S is one specular interreflection along the way. And the asterisk means any number of such bounces, perhaps even zero. So LD*E means that either I hit the eye from the light immediately, or there is one diffuse bounce, or maybe an arbitrary number of diffuse bounces. This is what the asterisk tells you. And we can also denote a choice. The choice means that there is exactly one bounce, either specular or diffuse. With this small but powerful notation, we can discuss all the algorithms there are for rendering photorealistic images. For now, some of this will be intuitive, some of it not so intuitive, because we don't know global illumination yet. First, ray casting means that we hit at most one diffuse object. That's all it can render: no recursion, nothing. I hit one diffuse object, I do the diffuse shading, and bye. Radiosity can compute something like indirect illumination, because multiple diffuse bounces are possible.
So remember the example: the light comes into the classroom through the window, hits the white wall, and then hits the floor, and therefore the floor is not going to be perfectly black. This is called indirect illumination, and radiosity has that covered. Recursive ray tracing is what we are doing for now, with this transmission and reflection thing. What we know is that indirect illumination is definitely not something we can do, because we treat a diffuse object also as a mirror; we just use a different shading function for it. We don't trace rays along the whole hemisphere above the object, even though a diffuse surface collects light from every direction; this is why its look doesn't change if I move my head. We cannot account for that: it would be a high-dimensional integration problem that we are going to solve with the rendering equation. So: at most one diffuse bounce, but you may have as many specular bounces as you need. This is why recursive ray tracers usually show you mirrors and glass balls and reflective things like that, because that is what they are capable of rendering, but not so much more. And global illumination is the full package: an arbitrary number of diffuse or specular bounces. These can also be glossy, whatever kind of complicated material model you have; the D and S can be anything and in any amount. Well, let's take a look at an example with the Heckbert notation. Here we have light paths and they start from the light source. On the right I have something like LDDE. That's exactly what I have been talking about: I start from the light source, I hit a diffuse wall, I hit the diffuse ground, and then I hit the camera. So that's LDDE. Let's take a look at, for instance, LSSE. I start from the light source, I hit the glass ball from the outside, this left glass ball, and then I go inside the ball. There's going to be a refraction at least; let's imagine that there is a refraction. And then I hit it on the other side as well and I come out. So this is two specular bounces, LSSE. So we can denote light paths and understand which algorithms can render exactly what. Now, if we imagine that this image was made with a ray tracer, the question is: what did they do? This is a rather low quality image, but it seems to me that the shadows are not completely black. Therefore, in their shading model, they definitely use which kind of term? Raise your hand if you know the answer. Normally this would be completely black, because I shoot a shadow ray towards the light source and it is going to be blocked by the table, so the intensity is zero; imagine that all possible shadow rays are blocked. But it is still not completely black, because I'm adding a term to it in order to warm it up and make the image appear more realistic. So which term would I be using? This would be the ambient term.
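Before moving on, here is a compressed sketch of how the recursive illumination equation above typically looks in code: I = ambient + Il*(kd*diffuse + ks*specular) + kr*I_reflected + kt*I_refracted, with the recursion stopped at a maximum depth. Everything in it, the types, the stub scene functions, the constants, is made up for illustration; it only mirrors the structure of a recursive ray tracer, not the actual course code.

```cpp
#include <cstdio>

struct Color    { double r, g, b; };
struct Ray      { double ox, oy, oz, dx, dy, dz; };
struct Material { double kr, kt; };          // reflection / transmission weights
struct Hit      { bool found; Material material; };

const int maxRayDepth = 5;

Color add(Color a, Color b)    { return { a.r + b.r, a.g + b.g, a.b + b.b }; }
Color scale(double s, Color c) { return { s * c.r, s * c.g, s * c.b }; }

// Stubs standing in for the real routines of a ray tracer.
Hit   intersectScene(const Ray&)           { return { true, { 0.3, 0.0 } }; }
Color backgroundColor()                    { return { 0, 0, 0 }; }
Color ambientTerm(const Hit&)              { return { 0.1, 0.1, 0.1 }; }
Color directLighting(const Hit&)           { return { 0.4, 0.2, 0.2 }; }
Ray   reflectRay(const Ray& r, const Hit&) { return r; }
Ray   refractRay(const Ray& r, const Hit&) { return r; }

Color trace(const Ray& ray, int depth) {
    Hit hit = intersectScene(ray);
    if (!hit.found) return backgroundColor();

    // Local part: the ambient hack plus diffuse and specular shading.
    Color result = add(ambientTerm(hit), directLighting(hit));

    // Recursive part: reflected and refracted rays, up to the maximum depth.
    if (depth < maxRayDepth) {
        if (hit.material.kr > 0.0)
            result = add(result, scale(hit.material.kr, trace(reflectRay(ray, hit), depth + 1)));
        if (hit.material.kt > 0.0)
            result = add(result, scale(hit.material.kt, trace(refractRay(ray, hit), depth + 1)));
    }
    return result;
}

int main() {
    Color c = trace({ 0, 0, 0, 0, 0, 1 }, 0);
    std::printf("%f %f %f\n", c.r, c.g, c.b);
}
```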
Let's talk about cameras. I'm just going to rush through this because obviously everyone knows about pinhole cameras. The most basic camera model one can imagine is basically a box: you make a super, super small hole in the box and you put a film in it. Some amount of light flows in through the hole and is caught by the film, and therefore you see an image form on the film. We are not that interested in this model; what we are interested in is, for instance, a perspective camera. A perspective camera means that I have the lens of the camera, this plane that you see the teapot on, where the image is going to be formed, and I have a point somewhere, the eye, where the rays start from. So I have this eye and I shoot rays towards the lens of the camera, and I am interested in the continuation of these rays: what objects they hit and what color they are. There are multiple things that I need to specify when creating a perspective camera. This plane has a width, a height, some kind of field of view, and the aspect ratio is given by the ratio of the width and the height. What about the field of view? Well, if you like to play first person shooters, you have probably fiddled with settings like that; the field of view is basically what you see here. Different camera models, or different eye models, have different fields of view. For instance, it is quite different for a horse. If you would like to model how a horse sees something, then the field of view would be much larger, because its eyes are on the sides, so it can almost see what is behind it. And we have a given field of view that we can model here, and this can be changed arbitrarily if you have a good perspective camera implementation. Now let's quickly put together an implementation. What I'm interested in is the following: I'm going to give an x, y pair, and these are pixel positions. Give me the 0th pixel in x and the 5th pixel in y. And this gives me back a world space coordinate that lies exactly on the lens. So I'm subdividing this lens into pixels. I only care about the pixels, because they are going to be computed one after the other: how much light is going through each pixel? Therefore these world space coordinates are what interest me. So if I instantiate such a perspective camera, the height and the width are given, and the field of view with respect to the x axis is also given. The desired pixel positions are xp and yp. What are the domains of these variables? Well, xp and yp are in [0, w] and [0, h]; these are really the pixels, which pixel am I interested in. The field of view can be reasonably arbitrary, but sane choices are in (0, pi). And the field of view with respect to the y direction can be computed from the aspect ratio and the other field of view. This is the end result. And before we try to understand what's going on, let's try to play with it. I do this because usually, if you read literature, math books, whatever, you never see the journey behind things. You get the answer, and this is usually a huge formula that doesn't seem to make any sense. So let's get a bit more experience in how to play with these formulae, how to understand them. For instance, let's forget these tangent terms in x and y for a moment and just play with the fraction. I substitute xp equals 0 and yp equals 0. So what do I have for the x coordinate?
Well, it's 2 times 0 minus the width, over the width. Therefore this is minus 1, and I have the same for y: 0 minus h over h, that's minus 1. So for the (0, 0) pixel I have world space positions of minus 1 and minus 1; this is going to be the bottom left. So far this seems to make some sense. What if I substitute the other end for the pixels? Well, if I have w for xp, then I have 2w minus w over w, and therefore this is 1, both for x and for y. So this is going to be the upper right. And whatever I specify for xp and yp between these two extremal values really gives me the world space coordinates of this camera model. We have forgotten about the tangents; well, let's put them back. I don't know what I just did now, but it's working again. Yes. I wonder why this presenter has like 2,500 buttons. But okay, let's now progress. So I multiply these numbers back with these tangents, and then I can see what this basically gives me: more perspective distortion. The higher the field of view with respect to x, the more perspective distortion I get. Well, this is already a good enough description that I can put into code. In fact, I have already coded it up, and this is a very simple function that does exactly what we have been talking about. Simple as that. If you don't count the prototype of the function, this is basically five lines, and it is still readable, so it could be even less. Not too shabby: a perspective camera in five lines of code. There are also orthographic cameras. This is a large difference from the perspective camera, because the rays of light are parallel to each other and perpendicular to the camera plane. So they don't start from one point looking outwards; they are perfectly parallel to each other and perpendicular to the lens. They also don't meet at the eye, and you can see that the perspective distortion is completely omitted here. So you can see the same scene with the same settings, with an orthographic and a perspective camera, and the realism is completely different in the two. There's another example with LuxRender. In the next image you won't see the environment map in the background, but disregard that, because the implementation of environment maps with orthographic cameras is in a way non-trivial. So: lots of perspective distortion. Maybe you don't notice it, because this is what you're used to, but if you have an orthographic camera, then this is a perfectly distortion-free geometric shape. And back to the perspective camera: this one gives you a significant perspective distortion.
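Since the five-line function itself is not reproduced in the transcript, here is a small sketch of the pixel-to-world mapping we just derived. The struct and names are my own, and the convention for deriving the vertical field of view from the aspect ratio (tan(fovY/2) = tan(fovX/2) * h / w) is one common choice, not necessarily the one in the course code.

```cpp
#include <cmath>
#include <cstdio>

struct PerspectiveCamera {
    double w, h;      // image width and height in pixels
    double fovX;      // horizontal field of view in radians

    // Returns the point on the image plane for pixel (xp, yp). The x result runs
    // from -tan(fovX/2) at xp = 0 to +tan(fovX/2) at xp = w, and similarly for y.
    void pixelToWorld(double xp, double yp, double& x, double& y) const {
        double tanX = std::tan(fovX * 0.5);
        double tanY = tanX * h / w;              // from the aspect ratio
        x = tanX * (2.0 * xp - w) / w;           // the (2*xp - w)/w fraction from the slide
        y = tanY * (2.0 * yp - h) / h;
    }
};

int main() {
    PerspectiveCamera cam{ 640, 480, 1.0 };      // roughly 57 degrees horizontal FOV
    double x, y;
    cam.pixelToWorld(0, 0, x, y);                // bottom-left corner: (-tanX, -tanY)
    std::printf("%f %f\n", x, y);
}
```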
Okay, philosophical question: what is the definition of shadows? Let's not get too philosophical; let's say, for now, only for now, that shadows are regions that are not visible from light sources. And again, if you're on public transport, or if you're bored at a lecture, such as this one, but hopefully not this one, you can take a look at the shadowed regions and you will immediately recognize that these are the regions that are somehow occluded with respect to the light source. Let's take a look at an example. This small red dot on the top is a light source, a point light source, and this black thing is a sphere. Behind it, with respect to the light source, we have an umbra; this is the name of the completely shadowed region. If we are going to shade these points in the ray tracer and I want shadows, then I need to compute whether the point I am shading is obstructed, occluded from the light source, or not. Now, this is very simple to do. Imagine that I would like to shade this point below, on the plane. What I would do is send a ray that I call a shadow ray towards the light source, and what I'm interested in is whether it is obstructed, meaning it hits something before reaching the light source. So the first question is an incredibly difficult question: is this one obstructed or not? That's right, it is obstructed. What about this guy? What do you think? This one is obstructed too. What about these guys? These guys are good. Okay, cool. Now, for now, shadows are a very simple concept. It means that I also have a visibility term that I multiply the intensity with, and this is binary: obviously, the ray either hits an object or it doesn't. That's it, very simple. This intensity, which is not radiance but the hack that we use in ray tracing, the simpler version of things, I am going to set to zero. Whatever shading I have at that point, I don't care, it is in shadow, it's going to be completely black. So this is the simpler version. What about real life? Well, in real life, point light sources don't exist, because a point, by mathematical definition, is of measure zero. It means that it's infinitely small. And something that is infinitely small, which we call a light source, still has a given amount of energy. Well, if you asked Stephen Hawking, he would say that this is the definition of a black hole. So we would have a black hole, and if that happened, we would have much bigger problems than computing shadow rays. That's definitely out of our interest at the moment. So now we have an area light source, and we still have the umbra, because from there none of the rays make it to the light source. But we also have a different region that we call the penumbra, the partially shadowed region. So things are going to get much more interesting. Now I'm going to shoot two shadow rays from the surface towards different regions of the light source. What about these guys? What about the right shadow ray? It's okay. What about the left? It's not okay. Excellent. So this already doesn't seem binary anymore, and this visibility term is going to be continuous. There may be points which are visible from some part of the light source but not from another part, and therefore we have some kind of partial occlusion. And the question is: how do we compute this?
How can we write a program that gives me a beautiful penumbra and not just hard shadows? If we only have the umbra, in the literature this is called hard shadows, and the penumbra gives soft shadows; we're going to see examples of both in a second. So, very simple: let's approximate the area of the light source that is visible from that point, over the whole area of the light source. Let's see an example. But first I'm interested in how to approximate this, because I'm talking about areas, and this sounds like some kind of integration problem, and for now we are not going to evaluate integrals. What we can do instead is shoot a large number of shadow rays and try to approximate this area. The approximation is going to be the following: the number of visible shadow rays over all the shadow rays that I have computed. Example: how is this region going to look? Well, I'm going to shoot 100 shadow rays from this small black dot, and I'm interested in how many of them make it to the light source. What do you think? Out of 100 shadow rays, do 100 hit the light source? Definitely not. About 50? Probably not; it's quite reasonably dark there. So let's say that three of them hit the light source. It's a very simple approximation: I shoot 100 shadow rays, three of them get through, and this fraction is what I multiply the intensity I have computed with. Okay, what about the next region? This is a bit farther out. Out of 100, do 100 of them hit the light source? Definitely not. Half of them? What do you think? Half of them, let's say. Okay, cool. And if we go even further out of the umbra, then I have this white dot, and I'm interested in how many of these could hit the light source. Well, I think there are some regions which are definitely obstructed, but not many, so let's say that 95% of these shadow rays hit the light source. So I can already compute, in a way, soft shadows, not only hard shadows. You're going to see examples of that. And what we have done here is actually Monte Carlo integration. You're going to hear a lot about Monte Carlo integration; it's on every list. I don't know, some teenage people look up the top 10 Billboard list of the best music clips of Lady Gaga and the others. What I do myself, I confess, is look up the top mathematical techniques of our day, and I can tell you that Monte Carlo integration is always on that top list. It's one of the most effective techniques to evaluate integrals, and we're going to study it extensively; it's going to make your life infinitely simpler. Now, a quick bonus question: can such an image be physically correct? Obviously it's a drawing, so it's not correct, but there is something that is so unbelievably incorrect that it needs some attention. Who can tell me what? Yes. That is true, but unfortunately this is a mind reading exercise, so you have to figure out what I thought of. What you said is absolutely true, though. Yes, please tell me. Well, as far as I know physics, the light should bend a little bit inwards around the black object. That's apparently not what I meant, we are very far away from that, but you're absolutely right. So, a hint: it's something in terms of shadows. I would have said something about the shadows in the air. Shadows in the where? Well, the shadows between the object and the ground. Yes. Okay.
What's up with the area of the shadow? The area wouldn't technically be a shadow itself. Which area? This one? No, no, no, the air between the object and the... I don't understand any of this. The area between the surface of the object and the ground, where there is empty space. Yeah. What if I ask about this transition from here to the outside? Imagine the dots that we have: if I put one in the umbra and slowly move it outwards, would I experience such a jump as I see here? No. Why not? Because if I start from the umbra, it may be that I cannot construct any ray that hits the light source, and as I move outwards, this probability continuously increases. There is not going to be a jump where you see an abrupt change, a sharp corner. It is going to be a perfectly smooth gradient, or an almost perfectly smooth gradient, depending on many other physical properties. But this is more or less what I would see, and there's going to be an example of that in a second. So, this is what I have said, for those who are reading this at home. And the next one is a very simple question: what kind of light source do we have in this image? It's a point light source. Excellent. Why? Because I don't see a penumbra, or at least not much of it. Excellent. So this should be a point light source. Technically you could also say that there is a small area light source but only one shadow ray, so you don't do this integration, just one shadow ray per point, and you get something similar. But generally these are point light sources. What about these guys? The left one seems to be a point light source. On the right one I can see this beautiful continuous change, so that is definitely not a point light. But if I take a look at the region below the object on the left, then I can also see some kind of penumbra. So it might be that on the left there is a small area light source that is close to the object, perhaps, and this is why I barely see the penumbra there but I do see it in other places. And here on the right the effect is really pronounced, so this is definitely an area light source. Well, the next observation is that in physical reality we usually don't see perfectly black shadows. If I take a look around this room, I see shadows, I see regions that are lit, and then some kind of umbra and penumbra, but nothing is even close to perfectly black. Because light bounces around the whole room, there are reflections everywhere, so you never have a perfect umbra. Yes, that's true. So there is an effect that we are going to talk about next lecture, and it is called indirect illumination. This basically means that in the ray tracing program we only account for the direct effect of a light source. But in physical reality it is possible that the light comes in through this window, hits this white wall first, then hits the ground in this darker region, and then goes towards the eye. Some of the energy is picked up there, so the effect of this white wall makes these black or dark shadows lighter. And this we cannot compute yet: it is multiple diffuse bounces after each other, we cannot take it into account, we would need to solve the full rendering equation for this. So what we have is direct illumination, and this is where the ambient term comes into play.
What we have been talking about, this ambient term, is basically just adding something to the intensity that I have. Why? Because it warms up the image a bit. Otherwise I would have perfectly black shadows. For instance, for this classroom, I would have an ambient intensity that is some grayish color, and therefore these regions would not be perfectly black; I would add this grayish value to them and they would be a bit more gray. So this is a really crude approximation of indirect illumination, but it more or less works. At least it is an accepted way to cheat back this lost energy in a ray tracer. Yes? I have a question about the Monte Carlo technique, where we cast these shadow rays. How do we determine where on the surface of the light we are shooting the rays? Because it's a surface; how do we pick random points on it? These are the finer details of ray tracing programs. There are techniques that help you pick a uniformly chosen random direction on a sphere, for instance, or uniformly chosen random points. So I choose a random point on the light source and I connect it to the point I am shading: perfectly uniformly chosen random points that I need to generate on the surface of the light source and sample. And there are also optimizations for that. What if a light source has non-uniform radiation? Some light sources are really intense in one direction but not in others; how do I account for that? There are optimization techniques for that too. And now a short beauty break. Well, we like LuxRender a lot, and it seems that some nerds are living their dreams in our program and creating people like this. There are lots of programs that help you achieve these realistic results, and later on we will talk a bit about how realistic skin can be achieved, such as the one you see here. Because skin is not a surface; skin is a volume. Not everyone knows this, but some amount of the light penetrates the surface of the skin, goes beneath it, gets scattered and absorbed maybe even a thousand times there, and may come out somewhere else from your skin. This is why humans in older computer games look really fake and plastic: they don't account for this effect. The newest computer games can compute this, or something like it, in real time, and this is what makes them so beautiful.
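Going back to the soft-shadow estimate from a few minutes ago, here is a minimal sketch of that Monte Carlo visibility computation: shoot many shadow rays towards random points on an area light and take the fraction that gets through. The light description and the occlusion test here are stand-ins invented for the sketch; in a real tracer the occlusion test would be a ray-scene intersection between the shaded point and the sampled point on the light.

```cpp
#include <cstdio>
#include <random>

struct Point { double x, y, z; };

// A simple axis-aligned rectangular area light at height y, invented for this sketch.
struct RectLight { Point corner; double width, depth, y; };

// Stand-in occluder: here it just pretends the left half of the light is blocked,
// so that the sketch runs on its own. Replace with a real shadow-ray test.
bool isOccluded(const Point& p, const Point& onLight) {
    (void)p;
    return onLight.x < 0.5;
}

double softShadowVisibility(const Point& shadedPoint, const RectLight& light, int samples) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        // Uniformly chosen random point on the light source.
        Point onLight{ light.corner.x + u(rng) * light.width,
                       light.y,
                       light.corner.z + u(rng) * light.depth };
        if (!isOccluded(shadedPoint, onLight)) ++visible;
    }
    // e.g. 3 visible rays out of 100 means multiplying the shading intensity by 0.03
    return double(visible) / double(samples);
}

int main() {
    RectLight light{ { 0.0, 0.0, 0.0 }, 1.0, 1.0, 5.0 };
    double v = softShadowVisibility({ 0.3, 0.0, 0.3 }, light, 100);
    std::printf("visibility = %f\n", v);   // roughly 0.5 with this fake occluder
}
```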
Okay, let's quickly study how to compute surface normals, because we are going to need them. If you remember the diagrams, we always have the surface normal in the diffuse, specular and ambient shading. And just a quick remark: in the previous lecture we talked about diffuse and specular shading, and also ambient, and these are perhaps the most important of the simplified BRDF models. You can see them everywhere. So when you are on public transport, you can think about which object is which case; some objects are a mixture of these BRDF models. It is possible that I have a diffuse object with a glossy or specular coating on top of it. And you can move your head around, like I told you before, and see how the look of the object changes. There is a lot of beauty to marvel at. You will also be able to understand, for instance, why people performing stand-up comedy, guys saying humorous things, almost always wear makeup. The makeup artist tells them: yes, you need to wear makeup. And the performer says: I don't want to wear makeup. And the artist says: well, I don't care, you have to, because otherwise you are sparkly. What this means is that they start sweating, and if they start sweating, the skin gets a bit oily, and then it is more specular. So if I turn my head around, it looks a bit different; there are specular highlights. And if you use makeup, then these specular highlights disappear and the whole face becomes almost perfectly diffuse, so it doesn't distract the audience. Light transport is everywhere. So if you ever wear makeup, think about this: specular and diffuse reflections. Okay, but I digress. So, surface normals. I have an implicit equation f(x, y, z) = 0, and I would like to know the normals of this surface. How do we construct a normal? Well, the normal is given by the gradient of the function. And just a quick reminder: the gradient of a function of x, y, z is a 3D vector, and each coordinate of the gradient is the derivative of the function with respect to the corresponding coordinate. Let's see an example. Imagine an elliptic paraboloid. You don't have to imagine it, because here is an image. So this is x squared over a squared plus y squared over b squared, and so on; I'm not going to read formulae aloud. This is how it looks, and if I want to put together something like this, I have to know that a and b control the curvature of this thing in the different dimensions; these are scalar values. Well, let's compute the surface normal of this elliptic paraboloid. This means differentiating the whole equation, for the first coordinate, with respect to x. So let's differentiate with respect to x. Well, x squared becomes 2x, and a squared remains there because it's a scalar multiplier. Does y depend on x? It doesn't, so that term falls off. Does z depend on x? It doesn't either. Therefore the first coordinate is 2x over a squared. What about the second coordinate? That is the function differentiated with respect to y. Does x depend on y? No, so that term is 0. What about y squared? It becomes 2y over b squared. z doesn't depend on y, so that falls off too, and this is the second coordinate. What about the third coordinate, the function differentiated with respect to z? Does the x term depend on z? Someone let me know. Correct.
It doesn't depend on it. Does the y term depend on z? It doesn't either. Well, what is the derivative of the remaining expression? A bit louder, please. Minus 1. It's going to be minus 1. Okay, so we got this: we can construct the surface normal of an elliptic paraboloid. Excellent. So, when we do this intersection routine in ray tracing, I have a ray and I would like to intersect it against every single object in the scene. The question is: what is the first object that the ray hits? Which intersection am I interested in? There may be many; if I look somewhere, I may intersect many different objects, but if things are not transparent, then I only see the first intersection, and that's it. The first should be the closest. This should be easy, because we are using parametric equations: they depend on t, and t is the distance. So what we get as an output is a list of t values: I intersect these objects and I get a list of t's, like 2, 5, 10, minus 2, things like that. The question is: which one do I choose out of this list of t's? Someone help me out. The smallest positive t? The smallest positive t, okay. So the negative ones we are not interested in; like I told you, this is a politics-free zone. I am interested in the smallest positive t. This is more or less true. Negative t's we are not interested in; we have discussed this. And the question is: can we take t equals 0? I'm not asking this because I would be an annoying mathematician; I'm only half mathematician, and I'm among the kinder ones. I'm asking because this is going to happen if you write a ray tracer. Lots of people say: something is not working and I have no idea what went wrong. It is possible that t equals 0, and we need to handle this case. So, just a second: raise your hand if you know what t equals 0 means in an intersection. Okay, excellent. I will ask someone I haven't asked before. Have I asked you before? Yeah, just a minute ago. Okay, who else raised their hand? Okay, yeah, you. So what's happening when t equals 0? We shoot the ray from the surface itself, from the point of the previous intersection. Yes, exactly. So if we have some number of bounces and I get t equals 0, this means that I am exactly on the surface of the object that I am bouncing off of. If I intersect the sphere and bounce the ray back, then in the next intersection routine I am almost guaranteed to get this, up to machine precision; mathematically, I am guaranteed to get t equals 0, because it's a self-intersection. The ray comes in, bounces back from the table, but it starts on top of the table, so there is a trivial intersection at the starting point of the ray. We are not interested in this. So the conclusion is: we take the smallest non-negative, non-zero t. In code, we usually put an epsilon there, a very small number, and we say that if t is at least this number, then I take the intersection, because it is not a self-intersection anymore. Okay, after we settled this, a small beauty break. This is an image rendered with LuxRender, our glorious renderer. We are going to use it later during the course. And some more motivation.
We'll be able to compute images like that. Isn't this beautiful? In the previous image, the background was not rendered, it's just a photo. That's cheating. Well, if you are in the mood to model an extremely high polygon count scene, or get one from somewhere else, people do that too. But it gives you a really realistic lighting of the scene. You can use this in a later assignment to create beautiful images. By the way, there's a course gallery; make sure to take a look, because in previous years people have created some amazing, amazing scenes. Raise your hand if you have seen this gallery. Okay, and from the people who haven't seen it, raise your hand if you're going to take a look after the course. Okay, excellent. I didn't see your hand, what's up? You have not looked, but you have to. Okay, because there are seriously some amazing things in there. I wouldn't say some people should have become artists instead, because that would say something about their knowledge, and that's not the case at all; these students are really smart guys, but they also have some amazing artistic skills. And I'm sure there is an artist inside some of you as well.
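And back to the math for one more moment: the elliptic paraboloid normal we just derived is small enough to write down in code. This is only a sketch of that one worked example; the function name and vector type are mine, and a and b are the curvature parameters from the slide.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// For f(x, y, z) = x^2/a^2 + y^2/b^2 - z = 0, the gradient (the un-normalized
// surface normal) is (2x/a^2, 2y/b^2, -1), exactly as derived above.
Vec3 paraboloidNormal(double x, double y, double a, double b) {
    Vec3 n{ 2.0 * x / (a * a), 2.0 * y / (b * b), -1.0 };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };   // normalize to unit length
}

int main() {
    Vec3 n = paraboloidNormal(1.0, 0.5, 2.0, 1.0);
    std::printf("%f %f %f\n", n.x, n.y, n.z);
}
```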
Okay, so I don't just talk. After being immersed in the beauty of the Fresnel equations and Snell's law, we are going to continue by putting together a new ray tracing program. We know all about air-glass interactions and things like that, but we don't know yet, for instance, what the representation of a ray of light could be. So let's go with this. A ray basically starts somewhere and is going somewhere; that's basically it. This is what I have written here mathematically; it is a parametric equation, and we'll talk about that in a second. So o is the origin, this is where we start from, d is a direction vector, this is where the ray is going, and t is the distance it has travelled. That's basically it. If t is a large number, then the ray has travelled a lot, and if t is one, then it has travelled a unit distance. Now, we are always going to talk about vectors of unit length when we talk about direction vectors. Most vectors are normalized in global illumination anyway, but I would like to state this explicitly, because only then is this t meaningful: if d is of unit length, then t is a scalar, a number, and it tells you the distance travelled. This notation is a bit weird for many people, because this r depends on t, and if you come from the regular math courses, most of what you encounter is implicit equations. An implicit equation of a surface is f(x, y, z) = 0; for instance, this would be the implicit equation of a sphere. And this is an equation, so you can say that whatever x, y, z satisfies it is a point of this sphere, and this collection of points gives you the sphere. Parametric equations don't look like that. With these parametric equations, the x coordinate comes from a function that depends on t, the y coordinate comes from a different, perhaps different function that also depends on t, and I can write the whole thing in vector form. So I'm not talking about x and y separately, but about vectors. The equation of a ray is such an example, the one you have seen above, and we're going to play a bit more with it. The first question is: why are we doing this? Why parametric equations instead of implicit functions? Well, you will see soon enough. We will encounter a problem, and it will be easy to solve with parametric equations; that's the secret. Now let's try to compute the intersection of a ray and a sphere. I cast a ray, and I would like to know which is the first object it hits in the scene. If I have a scene of spheres, then this is the kind of calculation I need to do. The expectations are the following: I have a sphere and a ray, and it is possible that the ray hits the sphere at two different points. Well, if two hit points are possible, then one hit point is also possible; this is essentially a tangent of the sphere, a ray that just grazes its side. This is the rare case, but it still exists. And obviously, it is possible that the ray does not hit the sphere at all. So we have again listed our expectations before doing any kind of calculation, and we will see that this makes things much more beautiful. The solution to this whole problem should be some kind of mathematical object that can give me two solutions, one solution, or maybe no solutions.
If I do the intersection routine and I get anything else, then something is incorrect. So this is what I expect to see: the possibility of two, one, or zero solutions. Well, this is the equation of a sphere: the p's are the points on the surface of the sphere, c is the center of the sphere, and r is obviously the radius. This is the equation of the ray. We have to mix these two together in some way to get an intersection. What I'm going to do is substitute this r(t) in the place of p. What this gives me is (o + td - c) times (o + td - c) equals r squared. This is a big multiplication between the two parentheses, and if I actually carry it out, I see that there is a term with td times td, so something like t squared d squared; there is another term where o minus c is multiplied with td, and this happens twice, once from each side; and the rest is a scalar term, because o minus c multiplied with o minus c is a scalar, there is no t in there. This is already quite interesting, because if we smell this equation, what does it smell like? Raise your hand if you have some idea. Yes, that's correct: it's a polynomial equation of degree 2. Exactly. So that's a quadratic equation: I have a t squared term, a t term, and a scalar term, all equal to zero. What are the coefficients? Well, simple: a is going to be d squared. What about b? The b is going to be 2d times (o minus c), because this is what I multiply t with, and the scalar term is going to be all the rest. This should be very simple to solve: the solutions are the possible t1 and t2 that satisfy this equation. And now the question is: is it possible that this equation has two solutions? Someone help me out; I missed some courses at kindergarten, so I don't know anything about this. Is it possible to get two solutions? I can't hear anything. Okay, excellent. This is the interesting part of a lecture, when the teacher asks something and there are no answers. This can mean two things. One is that no one knows the answer, and the second is that everyone knows the answer and it's so trivial that no one wants to look like an idiot, so no one says anything. I would imagine that maybe this is the second case. So is it possible to get two solutions? Yes, yes. Excellent, okay, cool. Well, it's simple: if this b squared minus 4ac is larger than 0, then the term under the square root is some real number, and therefore t1 is minus b plus this number and t2 is minus b minus this number, so these are two different solutions. Maybe I'm wearing too much gear for this course. Can you hear this? Should I take it off? No? I could take the glasses off and look like an academic. Okay. Well, is one solution possible? I still cannot hear anything. Yes, very good: when the term under the square root equals zero. If this term is zero, then I'm adding zero and subtracting zero, which is the same thing, so there is only one t. Very simple.
And it is also possible that we have no real solution, if the term under the square root is less than zero. Excellent. So this is quite beautiful, because we listed our expectations, the solution indeed needed to be something that can give two, one, or zero solutions, and when we do the math, this is exactly what happens. That is the beauty of the whole thing. Now let's imagine that I solved this equation and got the result that t1 is 2 and t2 is minus 2. What if I told you that these t's mean distances? I'm solving the parametric equation in a way that t means a distance. It means that the first intersection is at two times the unit distance, so two units in front of us. And there could be a solution, which is a classical case for a quadratic equation, where the second solution is minus something. What does that mean? Yes: I think we can dismiss that, because it's behind our eye, so we don't really care about it, do we? Precisely. It is possible that the ray starts in the middle of the sphere, and then this is indeed a perfectly normal thing to happen: there's one intersection in front of us and one intersection behind our backs. Obviously we don't care about that one too much, and if we find a solution like this, we indeed discard it. We are studying computer science and not politics, because if we were studying politics, we would be very interested in what happens behind our backs. This is computer science, so we can discard all this information.
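To tie the derivation together, here is the ray-sphere intersection as a small sketch, including the epsilon trick against self-intersections that was discussed earlier. The types and names are minimal placeholders, not the course code.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance t of the closest valid hit, or a negative value for "no hit".
double intersectSphere(Vec3 O, Vec3 D, Vec3 C, double r) {
    const double eps = 1e-6;               // rejects t ~ 0 self-intersections
    Vec3 oc = sub(O, C);
    double a = dot(D, D);                  // equals 1 if D is of unit length
    double b = 2.0 * dot(D, oc);
    double c = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;           // zero solutions: the ray misses
    double s  = std::sqrt(disc);
    double t1 = (-b - s) / (2.0 * a);      // the two candidate solutions
    double t2 = (-b + s) / (2.0 * a);
    if (t1 > eps) return t1;               // smallest positive t first
    if (t2 > eps) return t2;               // t1 may be behind us (negative)
    return -1.0;                           // both solutions behind the ray origin
}

int main() {
    // Unit sphere at the origin, ray starting at z = -5 looking down +z: hit at t = 4.
    double t = intersectSphere({ 0, 0, -5 }, { 0, 0, 1 }, { 0, 0, 0 }, 1.0);
    std::printf("t = %f\n", t);
}
```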
There is still one thing that we don't know, and it's about angles. We know about probabilities: whatever setup you give me, I can tell you the probability that light gets reflected or refracted. I know the probabilities, but I don't know anything about angles yet. What we need to know is that rays of light slow down. They travel with the speed of light, but they slow down as they enter a medium, because there are atoms, particles in there, and it's more difficult to get through. So light slows down, and the index of refraction tells you by exactly how much. The index of refraction of a medium is given by this fraction: the speed of light in vacuum, which we know, over the speed of light in this medium. This is how we can write it up. Let's look at an example: the index of refraction of glass is 1.5, so we know exactly what the speed of light inside glass is. Well, it's 300 million meters per second in vacuum, and we know this equation for the index of refraction. So I can just reorder it and conclude that light travels at 300 million meters per second in vacuum, but in glass it loses a third of its velocity and goes at only 200 million meters per second. That's a pretty easy calculation, and it's pretty neat. And another absolutely beautiful thing: hopefully you have studied the Maxwell equations and Poynting vectors in physics. Light is ultimately a wave. Here, above and below, you can see some wave behavior, and the rays correspond to these wavefronts. So light can be imagined as rays, if you take into consideration that I would need to compute many of these wavefronts in order to account for the wave behavior. Don't look at the red, only look at the blue. Above is vacuum, and below could be, for instance, glass. You can see that the waves slow down in this medium. And what this means, if we go back to the red lines, is that they are essentially bending, because the wavefronts now look like this. It's very interesting: if you imagine light as a wave, it only slows down, but if you imagine light as a ray, then it bends, it changes direction. I think that's absolutely beautiful. And the question is: why is the light refracting inwards? What I would naively imagine is that it simply continues on its way, with theta t equal to theta i. But it doesn't continue on its way; it bends inwards, because the medium is denser. The question is why. And now it is Khan Academy time. Raise your hand if you know Khan Academy. Awesome, well-educated people. So this is shamelessly stolen from Khan Academy, because it is the best way to describe why refraction works like this. Basically, imagine that you have a large car, and the air-glass interface is now road and mud: the road is the air and the mud is the glass, for instance. Imagine that as you approach the boundary line between the two, the first wheel of the car, the lower left one in this image, enters the mud, but on the other side the wheels are still on the road. This wheel slows down in the mud, but the other side is still going as fast as it used to. So what does the car do? It starts to turn, and you know exactly which way it turns, because this side is going slower and that side is going faster; therefore it turns inwards. I think this is an amazing interpretation of the whole thing.
I think it's also easy to explain with the waves, because when the waves are slowed down, we can see that the wavefronts change, and if you go perpendicular to the wavefronts, you bend downwards. Exactly, like in the previous figure. That's another intuition. And it is strictly intuition, which is why it can be a bit misleading: if you started to actually model rays of light as trucks, you would run into problems. You can also picture it as this part of the wave hitting the medium first, and by that the entire wavefront rotating. That works too. I tend to give multiple ways to interpret things, because different minds work differently; some graphical explanations work better for some people than for others. Okay, so, Snell's law, and we're almost done for today. Snell's law tells you at what angle refracted rays continue their path, and it is given by this expression: the sines of the angles relate as the velocities, as the reciprocals of the indices of refraction. Okay, let's do the air-glass example from the previous image, and let's state our expectations before we go. I'm interested in the relation of theta i versus theta t. How much is theta i? 60 degrees. Okay, excellent. How much is theta t, in degrees, roughly? Something around 35. Okay. So the light is refracted inwards, therefore theta t must be less than theta i. Let's compute the equation and see if this works out, and if it doesn't, we're going to call out the physicists. Let's just reorder some things and put in the indices of refraction and the incoming light angle that we know. Just some very simple reordering and we are almost there. If we actually compute the sine of 60 degrees, we get this. We could also carry out the division, but at this point I'm not interested in the sine of theta t, I'm interested in theta t itself. So I invert both sides of the equation by taking the arcsine: theta t is the arcsine of this. And if I convert everything back to degrees, I get theta t equal to 34.75 degrees. So whoever said 35 was very close to the actual result. Not to forget that there are different kinds of glass; there are multiple ways of manufacturing glass, and they have somewhat different indices of refraction, more or less the same, but still different. But we can see that this is in really good agreement with what we see in real life. Well, what did we say? Theta t should be less than theta i, and 35 is definitely less than 60. So again, physics works, and physicists are smart people. And just another example: if you think about the car example, or whichever example you like better, you will hopefully immediately see that if you follow the yellow arrow going back from the water to the air, the ray is going to bend away from the normal. Now, whoa, hold it right there. What is happening? I don't see any refraction whatsoever, right? It seems that if I go beyond, how much is this in degrees, 50 degrees, exactly, then something fishy happens at 50 degrees. Well, I'll tell you the name of what is happening, and then we're going to compute whether it is possible or not. And if it's not possible, then our math sucks.
But let's see. What we call this is total internal reflection. There is a critical angle, and beyond this critical angle there is no more refraction; only reflection happens. There are many examples and many applications of this, and this is one of the more beautiful examples. So let's compute what's going on here. What I know is the indices of refraction and the angle that we just dealt with, and something interesting should happen here. And something interesting already happened: I just plugged in everything I see on this image, and I get this, which is awfully, horribly, terribly wrong. Someone please help me out: why is that? Yes: the sine cannot be bigger than one. Exactly. The range of the sine is between minus one and one, at least according to my experience. So it says that the sine of an angle is more than one, which is mathematically not possible; it says that there is no such angle. What would be the angle of refraction if I used 50 degrees? The formula says something that mathematically doesn't make any sense. So the math already suggests to you, if you use the right numbers, that this total internal reflection is going to happen. Let's try to compute the critical angle. I just reorder things; this is hopefully the critical angle that I will be computing. Well, if this theta one is relatively small, then there is going to be a refraction. Then there is this critical angle, on the second figure, at which the refraction is at 90 degrees, the refracted ray grazing along the surface. And beyond this critical angle there is only reflection. Now let's compute it. Theta two is 90 degrees here, so what I want to know is the theta one at which the refraction is at 90 degrees. I put in the 90 degrees explicitly and solve for theta one; that's going to be the critical angle. If I actually do the computation with the 90 degrees, then its sine is one, so this is n2 over n1. Well, I'm still not interested in the sine of this angle, I'm interested in the angle itself, so I take the arcsine of both sides of the equation, and this is the definition of the critical angle. If you look up "critical angle" on Wikipedia, you will find the very same formula. But the most interesting thing is that you can derive it yourself, and it is not a huge derivation; it's very simple. This is where the 90 degree refraction happens. So what is our expectation for this critical angle? Let's look at reality again. I'm just trying to hint without telling you the solution. Okay, let's try it without hints: what could the critical angle be here? Raise your hand if you know the answer. Sorry, for pedagogical reasons I will ask someone I haven't asked before. Let's see if it's correct. Something smaller than 50 degrees, because that was only... Exactly. The usual answer I get is 50 degrees, because that is where I see total internal reflection. But total internal reflection means that after some point there is only reflection; it doesn't mean that this is exactly that point.
If I tried 60 degrees, I would also see only reflection, but that doesn't mean that 60 degrees is the critical angle; the critical angle is somewhere before that. What is your answer? Yes, I was thinking that if, say, 50 degrees were the critical angle, then we would see both a reflection and a refracted ray travelling horizontally. Exactly: that is what you saw on the figure. At the critical angle you see exactly this, so here we are already over the critical angle, and it has to be less than 50 degrees. From here it is very simple: let's just substitute the indices of refraction. 41.81 degrees. Physics works and we are still alive. So that's basically it for today. We have used reality to be our judge: we are not just writing formulae on paper and being happy about how much of them we can understand or memorize. We put everything to use, and you will see all of this in C++ code, not so long from now. So that was the introductory part, and I'll see you next week.
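A similarly small sketch of the critical angle computation, again assuming a glass with index 1.5 against air; it also shows how Snell's law breaks down above the critical angle:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double n1 = 1.5;  // glass (assumed index, as in the lecture)
    const double n2 = 1.0;  // air

    // Total internal reflection can only happen when going from the denser
    // medium into the less dense one (n1 > n2). At the critical angle the
    // refracted ray grazes the boundary: sin(theta_c) = n2 / n1.
    const double theta_c = std::asin(n2 / n1) * 180.0 / pi;
    std::printf("critical angle = %.2f degrees\n", theta_c);  // ~41.81

    // Above the critical angle, Snell's law would require sin(theta_t) > 1:
    const double theta_i = 50.0 * pi / 180.0;
    const double sin_t = (n1 / n2) * std::sin(theta_i);
    std::printf("sin(theta_t) at 50 degrees = %.3f (no such angle)\n", sin_t);  // ~1.149
    return 0;
}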
Now, we have an image from physical reality: an interface between air and glass. What I see here is that there is both reflection and refraction. In case not everyone knows the terms: reflection is Reflexion and refraction is Brechung, with my horribly broken German. Thank God this course is not in German, everyone would cry. But you guys and girls have it easy, because in Hungarian you would say "visszaverődés" and "fénytörés", so I think reflection and refraction are much better than that, or at least much more convenient. Question: which effect is stronger? Raise your hand if you think that reflection is the stronger effect. You are asking this about this particular example? Yes. So I can see that the refraction has a more pronounced effect than the reflection here, and we are going to be able to quantify this in a moment. What we can do is write up the vectors that we have been talking about for a case like this. This is the direction towards the light source. There is a surface normal, and it points upwards. This is the direction in which the light is reflected, and this is the direction in which it is transmitted; we don't have a dedicated vector letter for that one. And we have the corresponding angles. Wonderful. So let's take a look at a simplified version of the Fresnel equation; this simplified version is called Schlick's approximation. It is not such a complicated thing. What it gives me is the probability of reflection: R of theta is the probability of reflection. As I look at the image, I am interested in the probability of reflection versus refraction, because I imagine that the probability of refraction is higher in this case, and I would like to be able to compute it in my computer program. So, a quick look: R of theta is the probability of reflection, and this is important to remember, because during our calculations I will forget it approximately 15 times. So may I ask your name? Lisa? Lisa, okay. Well, if I ask you what R of theta is, you will tell me that it is the probability of reflection. Yes, exactly. Because seriously, I will be forgetting it all the time. R0 is the probability of reflection at normal incidence. This means: if the light were coming from directly above, what are the chances that it gets reflected? R0 can be given by this expression, where n1 and n2 are the indices of refraction; we will have examples with that too. But let's quickly go through this and see if physics makes any sense. For an air-to-medium interface, the index of refraction of air is essentially 1, and the other index belongs to the medium that we go into, for instance glass here. Before we do that: T is the probability of transmission. Obviously, if the light is not reflected (we forget absorption for now), then it is refracted. It's as simple as that, so if I add up these two probabilities, I get 1. So let's play with it. R at 0 degrees is R0. Why? Because the cosine of 0 degrees is 1, so on the right side I have 1 minus 1, and the second term is killed by that zero. Therefore I am left with R0. And R at 0 degrees, where theta is the angle of the incoming light, means that the light comes from directly above. What is that?
This is the probability of reflection at normal incidence. So this is basically the very same thing: if the light comes straight down like this, what is the probability of it bouncing back off the glass? What's up with 90 degrees? Well, at 90 degrees the cosine of theta is 0, therefore both terms remain, and the whole thing is going to be 1. So the probability of reflection at 90 degrees is 1. To build some intuition, imagine a super crowded bus in the morning, and you just cannot fit in there. How do you get in, if you don't care too much about the health and comfort of the other people? You just run in there and hopefully they will make some space. I have the best probability of getting in if I run towards them head on. If I were running in from the side, it is very likely that they would just push me back; there is a high chance that I would be reflected. So I want refraction: I want to get on the bus. And as I raise this angle away from normal incidence, there is more and more probability for the ray to bounce back. So far this seems to make some sense. But is it still reflection at exactly 90 degrees, and not just the ray grazing along the boundary itself? You have to think in terms of limits. What is the probability at 89 degrees? The ray is almost surely reflected, and if you keep raising the angle, you approach a probability of 1. So there is a continuous transition: near normal incidence there is a high probability of refraction, and as I go towards 90 degrees there is a higher and higher probability of reflection, and we define the 90-degree case as reflection because the ray is moving along the boundary and never enters the glass. By the way, that's a great question, I was thinking about this too. So, let's say that the index of refraction of glass is 1.5, and let's compute this thing quickly: R0 is 0.5 over 2.5, squared, and I do the very same substitution in the rest of the equation. But before I plot this, what do I expect from the plot? That's another important mathematical principle: do this all the time. Before you compute the result, state what you would expect from the result, because this gives you a much higher level of understanding. Well, I'm interested in R of theta. What does R of theta mean? The probability of reflection. Excellent, please note it again. So the probability of reflection at 0 degrees, I would say, is something very low; I have written here that R of 0 is less than 0.1, so less than 10 percent. If I come from directly above, refraction is likely: I'm very likely to get on the bus if I run at the people head on. What's up with, for instance, 60 degrees? Well, we know exactly what happens at 60 degrees, because that is the incidence angle in our image, and we can see that at 60 degrees there is a chance for both reflection and refraction, and the reflection chance is higher. Higher than at normal incidence, I mean, but refraction is still clearly stronger, as you can see in the image. We are going to compute this, and we are going to let nature be our judge of whether the calculation is correct or not. So, 60 degrees converted to radians.
That's more or less 1 radian. So my expectation is that R of 1 is something around 0.2, meaning roughly a 20% chance of reflection and an 80% chance of refraction. That seems to be in line with what I see here, but it is just my expectation. And what we have been talking about, pi over 2, means a 90-degree angle, and I would expect R there to be 1. Converted to radians that is about 1.57, so R of 1.57 I expect to be 1. Let's put all of these together and do what engineers do all the time: open Wolfram Alpha and try to plot this. I imagined that R of 0 is less than 0.1, getting on the bus easily; well, R at 0 is indeed less than 0.1, so far so good. R at 1 should be less than 0.2, this is the 60-degree case with both reflection and refraction; well, R at 1 is less than 0.2, so checkmark. And R at around 1.57 is indeed around 1. So apparently physicists are smart people, and physics makes sense. But there is something fishy about this plot. This is correct, I mean, what we see here is in line with our expectations, but there is still something fishy about it. Raise your hand if you know what it is. Okay, I'll give you a hint: this plot is R of theta, which is the probability of reflection. What happens if I just extrapolate past pi over 2? What would I measure at, say, 2? It keeps going upwards; it would be well above 1. I don't know about you, but I don't know about probabilities that can be larger than 1. So this would give me some fishy results if I substituted values like that. Let's try to shed some more light on it. What if I have a vacuum-vacuum interaction? So below, I don't have glass anymore, I have vacuum. Well, the index of refraction of vacuum is 1, so let's just substitute 1 for both indices. Then R0 is going to be 0, and I only keep the second term, 1 minus cosine of theta to the fifth. Why? Because the R0 in front is 0, and the 1 minus R0 factor is 1 minus 0. Okay, engineering mode: what do we expect from this plot? I have vacuum, or air if you wish, and vacuum again, and I start a ray of light. What will happen: reflection or refraction? Raise your hand if you know. There should be no reflection. Exactly. Why? Because there is no change of medium. The definition of vacuum is, again, nothing: there is nothing in there, nothing that could reflect this ray of light back. In vacuum we expect rays of light to travel indefinitely; there is no way they could be reflected. So if there is only refraction, then this R of theta should be constant zero. Well, let's plot it. The formula looks like 1 minus cosine theta to the fifth, which is already fishy, but let's take a look. I expected it to be zero, and it looks like this, which is not constant zero by any stretch. So the question is: what went terribly wrong here? And the answer is that Schlick's approximation is exactly that, an approximation. It is good for interfaces between vacuum or air and something, not between something and something. It works well, and it is a quick approximation, but it's a bit limited in use, whereas this is the original Fresnel equation, which is much more expensive to compute. So let's give that one a crack: what would the full Fresnel equation say about a vacuum-vacuum interaction?
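For reference, here is a small sketch of Schlick's approximation as described above (R0 computed from the indices of refraction, plus the (1 - cos theta)^5 term), evaluated for the air-glass case with n = 1.5:

#include <cmath>
#include <cstdio>

// Schlick's approximation of the Fresnel reflectance.
// theta is the angle of incidence measured from the surface normal.
double schlick(double n1, double n2, double theta) {
    const double r = (n1 - n2) / (n1 + n2);
    const double r0 = r * r;                        // reflectance at normal incidence
    const double c = 1.0 - std::cos(theta);
    return r0 + (1.0 - r0) * c * c * c * c * c;     // R0 + (1 - R0) * (1 - cos theta)^5
}

int main() {
    const double pi = 3.14159265358979323846;
    // Air-glass interface, n = 1.5 as in the lecture: R0 = (0.5 / 2.5)^2 = 0.04.
    std::printf("R(0 deg)  = %.3f\n", schlick(1.0, 1.5, 0.0));        // 0.040
    std::printf("R(60 deg) = %.3f\n", schlick(1.0, 1.5, pi / 3.0));   // ~0.07, below 0.2
    std::printf("R(90 deg) = %.3f\n", schlick(1.0, 1.5, pi / 2.0));   // 1.000
    return 0;
}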
Well, I substitute n1 = n2 = 1, the index of refraction of vacuum, and I get the very same expression back, just with the n1 and n2 factors gone. Then I use a trigonometric identity, which says that the square root of 1 minus the sine squared of something is the cosine of that something, and I substitute it in for these terms. What I am left with is the cosine of theta minus the cosine of theta. So how much is this expression exactly? Zero. Exactly, and this is what I was expecting. So apparently physicists, again, are smart people.
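The lecture doesn't spell out which form of the Fresnel equations is on the slide; a common unpolarized form averages the s- and p-polarized reflectances, so here is a sketch under that assumption. For n1 = n2 = 1 both numerators vanish and it returns exactly zero at every angle, which is the vacuum-vacuum behaviour that Schlick's approximation failed to reproduce.

#include <cmath>

// Fresnel reflectance for unpolarized light, averaging the s- and p-polarized
// terms. Returns 1 (total internal reflection) when Snell's law has no solution.
double fresnel(double n1, double n2, double theta_i) {
    const double sin_t = (n1 / n2) * std::sin(theta_i);
    if (sin_t >= 1.0) return 1.0;                          // total internal reflection
    const double cos_i = std::cos(theta_i);
    const double cos_t = std::sqrt(1.0 - sin_t * sin_t);   // the identity used above
    const double rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t);
    const double rp = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i);
    return 0.5 * (rs * rs + rp * rp);
}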
Let's go with a simplified version of the whole thing: we're going to talk about simplified BRDF models. First, the ambient BRDF. How does it look? On the left side I see I, which is an intensity. What is this exactly? Well, no one really knows, because we have neither radiance nor anything very physical here. This is going to be a simplified version of the whole rendering equation, basically a bunch of hacks: something vastly simplified that doesn't really have a physical meaning, but it works, it's simple, and it's a good way to understand what's going on. So the intensity that we measure is the product of an ambient coefficient of the object, which depends on the object and means something like the color of the object, and the ambient intensity of the scene, or of the light source. Later on we're going to talk about why this is interesting. Here is an example: we have a blue object, and it's the same color everywhere. Why? Because this term doesn't depend on anything; there is just one coefficient multiplied by the intensity of the scene. That's ambient shading. What else is there? There's the diffuse BRDF. What we compute is a diffuse coefficient, the diffuse color, the diffuse albedo of the object, times the dot product of L and N. This is what we did before; diffuse objects look like this. Please raise your hand if you have ever implemented any kind of diffuse Lambertian shading model in graphics. Okay, excellent, great. Just one more thing: this diffuse coefficient is at the very least RGB, so it says how much light is not absorbed on every different wavelength, because I cannot describe colors with one number; so at the very least RGB, or a continuous spectrum. Just for the background. And now the image is looking better, because I can more or less see where the light source is from this diffuse shading. There's also a specular BRDF. What I compute is V dot R times a specular coefficient, where V is the vector pointing towards the viewer and R is the reflected light direction. There are going to be examples of that; for now, just so you see the formula. And there's an m in there, which is a shininess factor; in the next assignment you will play with this yourself, so for now I will keep what it does exactly a secret. This is how the specular highlights look. And if I add up all of these, ambient and diffuse and specular, I get a more complex-looking model, something that approximates physical reality. I simply add all these terms up. Okay, now I have something like this here, and I have on purpose removed the light source from this image, but probably everyone can tell where the light source is expected to be. Raise your hand if you know where it should be. Okay, cool. Where should it be? Exactly: above the spheres, and that is exactly where it is. So these material models are descriptive in the sense that I get images that have some physical meaning, that resemble physical reality. Well, let's take a look at an actual example. The question is: what would this region look like, the one that I marked, if this pixel existed in the real world? Would it look the same if I moved my head around in reality?
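As a rough sketch of the ambient plus diffuse plus specular sum described here, the function below follows the classic Phong-style arrangement, where the shininess factor m enters as an exponent on V dot R. The lecture keeps the role of m a secret for the assignment, so treat the exact placement of m and of the light intensity as assumptions rather than the slide's exact formula.

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Simplified, non-physical shading: ambient + diffuse + specular.
// All direction vectors are assumed to be normalized.
// ka, kd, ks are the ambient/diffuse/specular coefficients of the material.
double shade(double ka, double Ia,              // ambient coefficient and ambient intensity
             double kd, double ks, double m,    // material parameters, m = shininess
             double Il,                         // light source intensity
             const Vec3& N, const Vec3& L,      // surface normal, direction to light
             const Vec3& V, const Vec3& R) {    // direction to viewer, reflected light
    const double diffuse  = std::max(0.0, dot(L, N));
    const double specular = std::pow(std::max(0.0, dot(V, R)), m);
    return ka * Ia + Il * (kd * diffuse + ks * specular);
}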
And that sounds like a trick question. Well, let's say that this part is purely diffuse; I don't see any specular highlights in there. The diffuse term is L dot N: the light direction times the surface normal. Does it change if I move my head? How do you answer a question like this? You don't only need to see what is in an equation, you have to be aware of what is not in there. Raise your hand if you know the answer. It's very apparent to many of you. Yes: it does not change if I move my head. The specular highlight might move, but the diffuse part does not. And we know from experience that it does not change: the walls look the same as I move around. I'm not talking about shapes, I'm talking about colors, and they don't change. The mirror, however, does change. The mathematical reason for this is that the view direction is not in this equation: I can change the view direction all I want and nothing will change in the diffuse term. This is a general mathematical trick, or principle, that you can use in a number of different situations: don't just look at which variables are in there; try to think of the variables you would imagine to be there and ask why they are missing. That's also information: not only what is there, but what is missing, is valuable information. So what about these regions? These are specular highlights, and they are described by the specular term, V dot R: the viewing direction times the reflected light direction. Let's actually compute what's going on. I am interested in the intensity, this fake, non-physical something, at this point, where this is the light vector and this is where it points. The reflection is ideal, so the light is reflected in this direction, and this is where I am standing, just for example. So I'm interested in V dot R. This is going to be a cosine, and there is a small angle between V and R; the cosine of a small number is large, close to one, so the dot product is large. Therefore this point is bright, and it is indeed bright. And the question, which is now very easy to answer, is: does it change if I move around? Obviously it does, because V is in the equation, and if I change it, the result is going to be different. For the specular BRDF, this point is bright only from certain directions. Just one of my favorite intuitions about this V dot R, because otherwise these are just letters: it measures how much I am standing in the way of the reflected light. So, a life lesson: if you can't find the water droplets on the floor after having a shower, move your head around, because that's a specular reflection. If the windshield of a car is too bright and you just can't take it anymore, move your head around. This connects to the physical reality around us. Good tips, in case you didn't know that you need to move your head around; now you know. Okay, so this is a point where we can stop for a second and marvel at how beautiful things we can create with such simple equations. And the rendering equation is going to be even more beautiful than that, infinitely more beautiful. And there is some additional beauty to think about when you look at images like this. Okay, how would I shade this point? Is it diffuse? Is it specular?
Why does it look the way it does? If you have nothing better to do, you can think about these things on public transport. Let's call this thing the illumination equation; it is the simpler version of the rendering equation. Now, what is in there? Most of it is familiar: there is an ambient shading term, then there is the diffuse L dot N, there is the specular V dot R, and we add all of these together. And we multiply the sum by the amount of incoming light, because if there are no light sources in the scene, there is no light, the light is not coming from anywhere, and the whole thing is multiplied by zero; if there is a bright light source, things get brighter. So we multiply by this incoming light. What is important to know is that this accounts only for the direct effect of light sources. This sounds a bit esoteric at the moment, but a few lectures down the road we are going to talk a lot more about indirect illumination and goodies like that. Here it is neglected, and the ambient term is used to make up for it; you will see examples of this in the next lecture. This is a crude approximation, but it's still beautiful, it's easy to understand, and it serves as a stepping stone towards solving the real rendering equation. But we are not done. One thing is that if there are multiple light sources, the scene is expected to be brighter, so I compute the whole thing for every light source: there is going to be a sum in there, and the index of the sum runs over the light sources. Basically, I just didn't want to overcomplicate the formula. But still, something is missing; this is not done. I arrive at a point, I compute the ambient, diffuse and specular shading, and I am still not done. Let's discuss how ray tracing works and we will find out why. The first thing is that what you see here is non-trivial, because what you would naturally imagine is that you start shooting rays from the light source, and then some of the rays would make it to the camera, to your eye, and most of them won't. So we go with a simple optimization: we turn the whole thing around and start tracing rays from the camera. Because if I start tracing from there, I can guarantee that I only deal with rays that are not wasted, since I am not interested in the light rays that never make it to the camera. If I start from there, none of the computation is wasted. So how do we do this? There is this camera plane, and we will discuss how to construct such a thing. We construct rays through this camera plane, because what I am interested in is the projection of the 3D world onto this plane: this is what you will see on your monitor. So I shoot rays from this camera and I intersect them with the objects that are in the scene. I want to know where the ray stops, which object it hits. So the second step is intersection with the scene objects. I have to realize that the ray hits this sphere; I stop there and compute the basic shading terms, the ambient, the diffuse and the rest. But I don't stop there: I am also interested in where the light is reflected, so I need to continue from here. This ray may be reflected or refracted, and I need some kind of recursion in order to account for that. The recursion works in the following way: I stop at this point where I hit the ball, the sphere, and what I do is imagine that this is now the starting point of a new ray.
And I shoot this ray outwards and start the ray tracing algorithm again. So this is how the recursion works, and this is what was missing from the formula. What follows is just the text version of what I have said, for those who are reading this at home. Reflections and refractions will be handled by this same recursion.
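A minimal structural sketch of this recursion; the scene-query helpers (intersect, shadeLocal, reflectedRay) are hypothetical placeholders with stub bodies, not the lecture's renderer:

#include <cmath>
#include <optional>

struct Vec3 { double x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, direction; };
struct Hit  { Vec3 position, normal; };   // material data would live here too

Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Placeholder scene queries: a real renderer would intersect actual geometry
// and evaluate the ambient/diffuse/specular terms from the previous slides.
std::optional<Hit> intersect(const Ray&)          { return std::nullopt; }
Vec3 shadeLocal(const Hit&, const Ray&)           { return {0.1, 0.1, 0.1}; }
Ray  reflectedRay(const Hit& hit, const Ray& ray) { return {hit.position, ray.direction}; }

// The recursion described above: trace a ray, shade the hit point locally,
// then restart the whole algorithm from the hit point along the reflected
// direction, up to a fixed depth.
Vec3 trace(const Ray& ray, int depth) {
    if (depth == 0) return {};                     // stop the recursion eventually
    const auto hit = intersect(ray);
    if (!hit) return {};                           // the ray leaves the scene
    const Vec3 local   = shadeLocal(*hit, ray);    // direct illumination at the hit point
    const Vec3 bounced = trace(reflectedRay(*hit, ray), depth - 1);
    return add(local, bounced);                    // refraction would add one more branch
}

int main() { trace({{0, 0, 0}, {0, 0, 1}}, 5); }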
Now there's another fundamental question: what makes the difference between different materials? And the other question is: how do we model it? Well, different materials reflect incoming light into different directions, and they absorb different amounts of it at different wavelengths. That's the answer. We are going to talk a lot about this, but here is an example with some different material models. For the specular case, there is one incoming direction and there is exactly one possible outgoing direction. That's it; this is what always happens. A mirror is like this, because I see exactly the reflection of myself and nothing else in the mirror. For a diffuse surface, for one incoming direction there are many possible outcomes in many possible directions, and this is what gives a diffuse surface; we are going to see examples of that. The slide writes "spread"; please forget this term, let's call it glossy instead, because that's what it is: a mixture of the two. So these are some basic material models that we are going to see in our renderers later on. Now, to formalize this, let's create a function, a probability density function, with three parameters, so a three-dimensional function. One variable is the incoming light direction, another variable is a point on the surface, and what I'm interested in is how much light flows out from this point in different directions. A bit more formally: f_r is going to be this function. The incoming direction and the point in space are what I have, and I am interested in the outgoing directions: what is the probability of the different outgoing directions? This is how we write it formally: omega is the incoming direction, x is the point in space that we choose, and omega prime is the outgoing direction. And this we call the BRDF, the bidirectional reflectance distribution function: a very complicated name for something that is very simple. Now, what about materials that don't reflect all incoming light? There are some materials that transmit some of it, for instance glass, water, gemstones and such. It could look like this: above you can see some BRDFs, and below you can see the transmitted part, light that is not reflected but goes through. There are materials that let light through; here's an example, and everyone has seen windows and things like that. The question is, just as a physical question: why are these objects transparent? Yes, they transmit the light, but what is happening here exactly? So, a bit of physical intricacy, starting with the most fundamental question: what is inside an atom? And the best answer is: nothing, because an atom is 99 percent empty space. If you imagine the whole atom as the size of, for instance, a football field, then the nucleus is a small grain of rice in the middle of the field. And the electrons are also very small things, like small grains of rice, orbiting the nucleus from very far away, out at the sides of the football field. And in between there is nothing, absolutely nothing. So the more interesting question would be: why is not everything transparent? I mean, there is essentially nothing in there that would divert or absorb the light; everything should just go through. Why is not everything transparent, not only glass, but everything? And the reason is absorption.
So these electrons are orbiting the nucleus, and what is essentially happening is that electrons can absorb photons. Photons, if you imagine light not as rays or waves but as particles, are the basic particles of light. So electrons absorb photons, and if they do, they go from an inner orbit, a lower energy level, and jump to a higher energy level. It's basically you after lunch: you eat something, you get more energetic, you get more jumpy. So the electron jumps to an outer orbit, a bit further away from the nucleus. It has absorbed the light, so the light doesn't go through, and this is why most things are not transparent. But then the question is: why is glass transparent? And the answer is that these orbits, these different energy levels around the nucleus, are so far apart that in the visible light spectrum, if the electrons absorb a photon, they don't get enough energy to jump to the next orbit. This is why most of the light goes through these kinds of materials. And the interesting thing is that this is only the case for the visible light spectrum. There is another spectrum which is absorbed: if you have a higher-energy spectrum, it may give enough energy for this electron to jump to a different orbit. And we can easily figure out which spectrum that is, because we use glass for a number of different beneficial things. For instance, you cannot get sunburnt if you are inside the house and you have your windows closed, and we wear sunglasses in order to protect our eyes from something. So, is there someone who can tell me what this spectrum is? Exactly, just a bit louder: ultraviolet. Ultraviolet is a spectrum with a higher amount of energy, and if it is absorbed, then this jump is possible. So this is why it is absorbed. Just some physical intricacies. So, light may get reflected: if we have a material that most of the time reflects light, we describe it with a BRDF, where the R, the reflectance, is the interesting part. And if the material transmits light, we have the BTDF, the bidirectional transmittance distribution function. As an umbrella term for both of these, the basically-whatever-happens term, we have the BSDF, the bidirectional scattering distribution function. I am not saying this because it is lots of fun; I am saying it because you are going to find these terms in the literature all the time. So a BSDF covers both things that reflect and things that transmit. Okay, what are the properties of BRDFs? And after this, we will suddenly put together something beautiful, very rapidly. First, there is Helmholtz reciprocity. It means that the direction of a ray of light can be reversed. Mathematically, it means that I can swap the incoming and outgoing directions and I get the same probabilities: the probability of going from here to there is the same as the probability of coming from there to here. If I look at things from both sides, I get the same probabilities; that is often useful in physics. Positivity is self-explanatory: a probability cannot be less than zero. For every outgoing direction there is either some positive probability or zero; nothing else is really possible. Formally this is how it looks, and it makes the mathematicians awfully happy. And there is energy conservation, perhaps the most important property.
An object may reflect or absorb incoming light, but it is impossible for more light to come out than the amount that comes in. Obviously we have light sources and things like that, but here we are talking strictly about material models. So this means that if I integrate this function over all possible incoming directions, taking into consideration the light attenuation we have talked about, the reason why it is so hot at noon and so cold at night, then I am going to get one or less. If the integral equals one, this kind of material reflects all the light that comes in; if it is less than one, some amount of light is absorbed. Okay, we are almost at the rendering equation. Generally what we are going to do is pick a point x, and this direction is going to point towards the camera, or my eye; it basically means the same thing, it is just an abstraction. And what I am going to do is sum up all the possible incoming directions from which light can arrive at this point, and I am interested in how much of it is reflected towards my direction. And let us not forget that objects can emit light themselves; we add this to the reflected amount. So, as intuition: the light exiting the surface towards my eye is the amount that the point emits itself, if it is a light source, plus the amount that it reflects from the incoming light that comes from the surroundings. And this is how we can formally write it, with this beautiful integral equation. Let's tear it apart and see what means what. This is the emitted light: light from point x going towards my eye. How much of it? The amount that is emitted at point x towards my eye; if it is a light source, like that one, then I definitely have this amount. And then there is the amount of light that is reflected; let's see what is going on there. This part is what I just told you, and this is the integration, the interesting part. I am integrating over omega prime, so over all possible incoming directions: the hemisphere that you saw on the previous image. A hemisphere is basically one half of a sphere. We integrate over a hemisphere and not over a full sphere because of the cosine term: if the light comes from directly above, the cosine of 0 degrees is 1, and as I rotate this light source around the point towards 90 degrees, the cosine goes to 0, so there is going to be no throughput if the light comes from that direction. Anything beyond that would be negative, and we don't deal with those cases. So this is why I am integrating over a hemisphere. Some light is coming to this point from different directions, and what I am interested in is how much of this light is reflected towards my eye. This is the incoming radiance, multiplied by the BRDF and the light attenuation. That's it. This is still a bit difficult, still a bit convoluted, so first we are going to train ourselves like bodybuilders do, on smaller weights: we are going to create an easier version of this. Because apparently this is terribly difficult to solve: if you sit down and try to solve it for a difficult scene, where you have objects and geometries and different BRDFs, different material models, you will find that it is impossible to solve analytically. And one of the first problems is... yes, please?
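For reference, this is the hemispherical form of the rendering equation being described, together with the energy conservation condition, written in the usual notation (omega prime for the incoming direction, omega for the outgoing one, x for the surface point); the exact symbols on the slides may differ slightly:

% Energy conservation of the BRDF (it reflects at most all incoming light):
\int_{\Omega} f_r(\omega', x, \omega)\, \cos\theta' \, d\omega' \le 1

% The rendering equation: outgoing radiance = emitted + reflected radiance
L(x, \omega) = L_e(x, \omega)
  + \int_{\Omega} f_r(\omega', x, \omega)\, L(x, \omega')\, \cos\theta' \, d\omega'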
This equation is just for one point? So we look at one point and then we compute this quantity there? Yes, and here comes the catch. I am interested in how much light is going towards my eye from this point. How much is it? Well, it depends. If I turn on other light sources, this point is going to be brighter, because the radiance coming out of this point depends on its surroundings: is the window open, are the curtains drawn or not? So this point x depends on this other point y, for instance, and on all other points. Then we could say: let's not compute x first, let's compute this point y first, because then I will know x. But this y also depends on x, because how bright it is on one side of the room also depends on how bright it is on the other side of the room. So there is some recursion in there, and if you don't think outside the box, this seems impossible to solve, because you don't know where to start. This integral is also hopeless to compute in closed form, because there may be arbitrarily shaped objects in the scene, and that makes the integration immensely difficult. And the integral is infinite dimensional. Later you will see that computing one bounce, this x that I have been talking about, is okay, but I need to compute multiple bounces. I start tracing rays from the camera and ask how much light is entering the lens of the camera, and one bounce is not enough. Are two bounces enough? Say something. It's not enough. Okay, but maybe three is enough? Is three enough? It's not enough. Okay, well, you guys are very picky. Are ten bounces enough? Okay, why not? Because there is still some amount of energy left: if I continued this light path, I would encounter other objects, and I don't have any knowledge of them yet. We need to compute an infinite number of bounces; even a thousand is not enough. And this rendering equation, as written, gives us one bounce. If I want to compute the second bounce, there is going to be another integral, another rendering equation, embedded in there, and this goes on infinitely. This is the biggest equation in the whole universe. It's impossible to solve exactly. And it is often singular, I will later show you why, so even if you wanted to integrate it numerically, you would run into trouble. So this is by far too difficult. This seems impossible, and apparently, at this point, we cannot solve it. So this is the end of the course: we have an impossible problem, there is no reason even to try, goodbye, see you never, because there are not going to be any more lectures. No: in order to understand what's going on, we are first going to put together a simple version of this equation that we can understand, and we will work our way up. There is another formulation of the rendering equation, and we are not going to deal with it too much. You can imagine this other version as moving points around. There is a light source at p3 and there is the sensor at p0, and this is one example light path. What I am doing is not stopping at one point and integrating over all possible incoming directions, because that is what I did with the original formulation. Instead, I take one light path, I compute how much light is going through it, and I add that to the sensor. Then I move this p2 around a bit, I compute the new light path and how much is going through, and I move p2 around again. So imagine it moving everywhere, and imagine p1 moving everywhere too: all these points are moving everywhere.
And I compute the contribution of this light source to the sensor. So this is another kind of integration. I'm not going to go through it in detail. What is interesting is that there is a geometry term in there, which describes the geometric relation between different points and the light attenuation between them. I'm not going to deal with this too much; I just put it here so that, if you are interested, you can chew your way through it. In the literature it is often written this way.
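The lecture doesn't write this formulation out, but the standard three-point (surface area) form it alludes to looks roughly as follows, with V the visibility term and G the geometry term mentioned above:

% Three-point form of the rendering equation (integration over surface points
% rather than over incoming directions):
L(p_1 \to p_0) = L_e(p_1 \to p_0)
  + \int_{A} f_r(p_2 \to p_1 \to p_0)\, L(p_2 \to p_1)\, G(p_1 \leftrightarrow p_2)\, dA(p_2)

% with the geometry term
G(p_1 \leftrightarrow p_2) = V(p_1 \leftrightarrow p_2)\,
  \frac{|\cos\theta_1|\,|\cos\theta_2|}{\|p_1 - p_2\|^2}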
Okay, so let's jump into the thick of it. What do we measure in a light simulation? A quick recap from the last lecture; this is going to be just a few minutes. First, radiant flux. This is the total amount of energy passing through a surface per second. What does that mean? I imagine some shape anywhere in space, and I count the amount of energy that passes through this shape every second. What is its unit? Watts, or joules per second. That's it. And this is apparently not enough; it is not descriptive enough to create light simulations. Please raise your hand if you know why. Okay, well, let's take a look. It says that we measure the amount of energy passing through a surface per second. So when we measure a high radiant flux value somewhere, we don't know whether we measured a lot of energy passing through a small surface, or whether we imagined a large surface with just a bit of energy passing through: both give the same radiant flux. So this metric is ambiguous; it's not good enough. And this image is just here to help you picture what is really happening. So, what is the solution for the time being? Let's compute the flux per unit area. This we call irradiance. Per unit area means that we don't imagine an arbitrary shape: we norm by square meters, so I explicitly say how big the surface is, and whatever goes through that is what I'm interested in. Okay, well, unfortunately this is still ambiguous, and the reason is that we haven't taken into consideration at what angle the light comes in; you will hear about this in a second. It matters whether you get a lot of energy spread over a big angle or a small amount of energy concentrated in a small angle, and irradiance cannot tell the difference. So let's remedy this by also norming with the angle: per unit solid angle. So besides the square meters, we also divide by steradians. What does that mean? Steradians are basically angles in multiple dimensions. On paper, if you draw a triangle, there is only one angle to take into consideration, but if I want to look at, for instance, you, it matters in which direction I turn my head, both sideways and up and down; if I were looking over here, I wouldn't be seeing you. This is what we need steradians for: multiple directions. Next question: so this was radiance, normed by square meters and normed by steradians. Why is this still not good enough? Raise your hand if you know the answer. Well, nothing: it's fine as it is. There are going to be questions like that, so make sure to think them through; I think last year someone almost fell out of their chair. This is fine; you can build light simulations with it. Okay, so how do we do the actual light simulation? What I'm interested in is how much light exits the surface at a given point. I pick a point in space and a direction, towards my eye, and I ask how much light is coming from there. The solution is obviously the Maxwell equations. Why? The Maxwell equations tell you how electromagnetic waves behave, and light is an electromagnetic wave in a given spectrum, around visible light, which, as you heard in the last lecture, is roughly from 400 to 730 nanometers. Well, apparently some people are overly excited about the Maxwell equations.
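To summarize the three quantities from this recap in the usual radiometric notation (the slides presumably use something equivalent):

% Radiant flux: total energy per second through a surface
\Phi \;[\mathrm{W}] = [\mathrm{J/s}]

% Irradiance: flux per unit area
E = \frac{d\Phi}{dA} \;\left[\frac{\mathrm{W}}{\mathrm{m}^2}\right]

% Radiance: flux per unit projected area per unit solid angle
L = \frac{d^2\Phi}{dA\, d\omega\, \cos\theta}
    \;\left[\frac{\mathrm{W}}{\mathrm{m}^2\,\mathrm{sr}}\right]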
Myself included. Well, I don't have a tattoo like that; I'm reserving that spot for the rendering equation at some point. Let's see about that. But unfortunately this doesn't work; hopefully Thomas has said some things about this. The basic principle is that since light behaves on the scale of nanometers, we would need a simulation on the scale of nanometers, and that's intractable. That's the simple way to put it. And the solution is going to be the rendering equation. If you would like a tattoo of an equation, I would definitely propose the rendering equation; you will see how beautiful it is. But at this point we are not ready to digest all of it, so let's have some theory before that. This is the trivial part. Okay: the scalar product. The scalar product is a number: on the left side I have two vectors, on the right side I have a number. The scalar product of vectors A and B is the length of A times the length of B times the cosine of the angle between the two vectors. In this course, even if I don't say anything about the length of a vector, a length of one is assumed; almost every single vector is going to be normalized. And if they are normalized, then the lengths of A and B are one, so the scalar product is simply the cosine of the angle between the two vectors: the scalar products are going to be cosines of angles. Okay, some notation; this is what you are going to see in many of the figures in the literature. What's going on? x is the point of interest: this is where we compute some quantity. V is the direction towards the viewer; it's flipped on purpose in this figure, and I'm going to fix that in a second. So if I have this projector above me and x is there, the V vector would be pointing towards me. N is the surface normal. L is the vector pointing towards the light source: if I were at this point, this L vector would point towards, for instance, that light source. R is the reflected ray direction. This means that I have a point, I have a light source, light is coming towards that point, and R is where it gets reflected: L flipped about the surface normal. So, again, an example: there is the projector, this is the point x, this is where the light comes from, and this is the reflected direction. You will see examples of all of these. And theta_i and theta_r are the incident and reflected angles. Because we are going to be computing scalar products and other things with these vectors, it is important that all the vectors we are talking about start from the same point. This is why, in the figures, you are always going to see this point x with vectors pointing outwards from it: these are the vectors I can use for computations. Just one more important thing: this is the mathematical definition of R, how you compute the actual reflected vector. I think you have done this before in previous courses; there is one with some basic ray tracing, though unfortunately I don't remember the name. But even if you haven't seen it, you will see here how it works. Let's talk about light attenuation, and let's be practical, with some examples. The sun shines onto a point of the surface from directly above. How much of the energy of this ray does the surface receive? Well, this is something like diffuse shading.
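The lecture refers to the mathematical definition of R without it being reproduced here; the standard formula, assuming L points from the surface point towards the light and N is a unit normal, is R = 2 (N . L) N - L. A small sketch:

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflect L about the unit surface normal N. Both vectors point away from
// the surface point x, as in the figures: L towards the light source, R along
// the reflected ray. Standard formula: R = 2 (N . L) N - L.
Vec3 reflect(const Vec3& L, const Vec3& N) {
    const double d = 2.0 * dot(N, L);
    return {d * N.x - L.x, d * N.y - L.y, d * N.z - L.z};
}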
So, I am going to compute a dot product between L and N, where L points towards the light and N is the surface normal. Well, it seems to me that L and N are the very same direction in this scene, so the angle between them is 0 degrees, and the cosine of 0 is 1: there is no light attenuation at all in this case. So, let's take another example. The sun is around here, this is the light vector, and you can also see R, just as an example; this is where the light is reflected. I compute this diffuse shading formula again, L dot N. Now there is some angle; let's say that it is 45 degrees. The cosine of 45 degrees is 1 over the square root of 2; the square root of 2 is about 1.41, so 1 over 1.41 is around 0.7. So there is some light attenuation if the sun is located here. And what about the extreme case, where the sun is at almost a 90-degree angle? Well, the cosine of 90 degrees is 0, so this means that there is a huge amount of light attenuation. And this is the reason why the hottest part of the day is noon, when the sun is exactly above us; after that, if you don't take anything else into consideration, it only gets colder and colder, and this is why it is so cold at night. So we can neatly model this light attenuation with a simple dot product, which is the cosine of the angle between these vectors.
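A quick numerical check of the three cases above:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // Diffuse (Lambertian) attenuation factor cos(theta) = L . N for unit vectors.
    const double angles_deg[] = {0.0, 45.0, 90.0};
    for (double a : angles_deg) {
        std::printf("attenuation at %2.0f degrees: %.3f\n",
                    a, std::cos(a * pi / 180.0));   // 1.000, 0.707, 0.000
    }
    return 0;
}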
We are going to start with the most difficult thing in the entire semester: how to pronounce my name. Okay? Can you please find the switch for this spotlight? It's a bit hard to see. Ah, this one? Yes, thank you. So, I am from Hungary, and this is pronounced "Károly Zsolnai". Károly is essentially the equivalent of Karl in German or Charles in English; for Zsolnai there is often no equivalent at all, so I am sorry about that. Károly: if you imagine it as an English word, you more or less forget the L and pronounce it with a Y-like sound at the end. So I'd like to hear some examples. Károly. Okay. Károly. Yes, excellent. A bit louder? Károly. Wonderful. Now comes the hard part: Zsolnai. Hungarian is a weird language where Z and S together form a single letter: if you take a look at the Hungarian alphabet, there is a letter Z, there is a letter S, and there is a third letter that is Z and S together. Pretty ridiculous, isn't it? So it is pronounced "Zsolnai", and the Zs is the difficult part. Zsolnai. Yes. Zsolnai. Wow, are you Hungarian? No, not really? Parents, maybe? Okay. Zsolnai. Yes. Great. Is there someone I have forgotten, or does everyone know what's up? Okay. So, this is what we're going to be doing. The amazing thing is that when you see images like this on the Internet, sometimes it's difficult to tell whether it is computer graphics or a real photograph. This is one of those examples, and this is another, and this is the work of amazing engineers and amazing artists. We are going to be talking about how to compute images like that. If you look at this closely, and when you download the slides at home you will see it, even the dust on the lens is modeled: here you can only see some small splotches, but you can actually see pieces of dust on the lens of the camera, and this is computed with a computer program. By the end of the semester you are going to know everything about how this is computed, every single pixel. Just a few things about organization. There are going to be assignments, and they make up 40% of your grade. Most of them will have theoretical, pen-and-paper parts about understanding what's going on in nature, and there will also be programming exercises, but they are not really heavy programming exercises: it's mostly using programs, understanding what they are doing, and maybe modifying them here and there. You are not going to write huge rendering engines and things like that, so don't worry about this. The remaining 60% is an oral exam, which takes place after the semester, with me. It is a friendly discussion about what you have learned, or a not so friendly discussion if you haven't learned anything, but that has never been the case; I'm just kidding. It takes place with me, but you can choose: if you would like to have the exam with Thomas, that's also fine. But I would like to note that I am an engineer.
And he's a brilliant physicist. So if you get to choose who you have to deal with, I would choose the engineer, I don't know about you. Just a suggestion. And there can be some kind of individual contribution: if you find errors on the slides, if you add figures ("hey, I don't like this figure, I've drawn something better than that"), if you find bugs in the programs that we work with, or if you can extend them in any way, you can get bonus points. And this applies to basically any kind of contribution you make. This is the book that we're going to learn from. It tries to cover everything, and at some points I will say: please open the book if you would like to know more about this. But whatever I tell you here is going to be enough for the exam. It's not going to happen that I ask, "hey, why haven't you read, I don't know, page 859, don't you remember that?" This is not going to happen. The book is there to augment your knowledge: if you would like to know more, it's a great place to look. It has a website, it can be bought at different places, and there are sample chapters, so before buying you can take a look and decide whether you like it enough or not, which is pretty cool. Let's start with what you shouldn't expect from this course; I'll just run through it. There are not going to be rigorous derivations of every imaginable equation. There are courses that are a never-ending infinite loop, like in programming: definition, theorem, corollary, definition, theorem, lemma. Raise your hand if you've been to a course like that. I'm not going to tell anyone. I've had a lot of these courses and I've had enough, so I'm trying to do it differently. There are not going to be endless derivations, and there is not going to be an endless stream of formulae without explanation either. There will be formulae, but with explanation: we're going to play with all of them, and you are going to understand their core meaning. At the same time, please don't expect to understand everything if you open, for instance, the LuxRender source code. That is something like a half-a-million-line project, one of the best renderers out there, and there are many really good renderers. You will not understand every single thing that is in there, but you will understand how the light transport part works, as thoroughly as possible. And the most important thing, which I've put in bold because this is what students love: you don't have to memorize any of the formulae. I will never ask you to recite a formula from the top of your head; I don't care. If you're an engineer at a company and you need to solve a problem and remember something, what do you do? You Google it and look it up. It's not important to remember things; it's important to understand things. So if you look at a formula, you have to understand what is going on. That's intuition, and that is what I would like you to have as much of as possible, but you don't need to memorize any of these. Now, what you should expect from this course is to learn how to simulate light in a simple and elegant way. This is going to be a surprise at first, because things are going to look complicated, but by the end we're going to derive really simple solutions that can be implemented in 200 lines of C++.
So these 200 lines can compute something that is almost as beautiful as what you have seen here. I have written this piece of code, and every theorem that we learn about, you are going to see in that code. In fact, there's going to be an entire lecture on code review: we'll go through this renderer and see, here is Schlick's approximation, there is Snell's law, there is this and that. Everything you learn here you are going to see in code; it's not just theory floating in the air. You will know why nature looks the way it does in real life, and you will wonder how there can be so many beautiful things that you have never really noticed looking the way they do. You will also know most of the state of the art in global illumination. This means that, yes, we will start with algorithms from 1968, and we will end with algorithms from this year, from two weeks ago, or from the next few weeks, because SIGGRAPH, the biggest conference with the best of the bunch, is coming up in the next few weeks, and I'm going to read through the papers and update the materials to the very latest works. Another really important thing is that you will be able to visualize and understand complicated formulae in a really intuitive way. I would like you to learn something that is not only light-transport specific: you will be able to use this knowledge wherever you go, whatever kind of mathematical problems you have, and you will see this from the very first lecture. And the most important thing is that you will see the world differently. There are lots of beautiful things in nature and you won't be able to stop looking at them. You will perhaps even like taking the train and public transport a bit more than before, because there are so many intricate, delicate things to see that you haven't seen before: you've looked, but you haven't seen them. Stay tuned.
Friendly greetings to everyone, my name is Károly Zsolnai, and I promise you that the pronunciation of my name is going to be the most complicated thing in this entire talk. This piece of work is a collaboration between Activision Blizzard, the Universidad de Zaragoza and the Technical University of Vienna. The quest here was to render images with really high quality subsurface scattering, in real time, on commodity hardware. To render photorealistic images, we populate a scene with objects, add a camera and a light source, and start tracing rays from the camera to determine the incoming radiance. Even though there is a large volume of research going on on how to do this efficiently, it is still a really time-consuming process. What's more, this figure shows light transport only between surfaces, meaning that we suppose rays of light propagate only in vacuum. If we extend our program to support participating media, we can render volumetric effects like smoke, haze and many others, and also translucent materials such as skin, plant leaves, marble, wax and so on. However, this extension bumps up the dimensionality of the integral we need to solve, making the process even more time consuming. But the reward for this is immense. Here on the left you can see how our skin would look without subsurface scattering. It is indeed a very important factor in the visual appearance of many translucent materials, so it is not a surprise that the motion picture and gaming industries are yearning for a real-time solution. There are, fortunately, simplified models to render subsurface light transport in optically thick materials. What we do here is take an infinite half space of a chosen translucent material and shoot an infinitesimally thin pencil beam at it from above, at normal incidence. This beam penetrates the surface of the material and starts to attenuate as it becomes more and more submerged in the medium. During this process the photons undergo many scattering events and eventually exit somewhere away from the origin. Counting up the photons exiting at different distances, we can build a histogram that we call the diffusion profile and denote by R_d. This is an actual simulated diffusion profile, and this is what it looks like if we look at it from above. Another important bit of preliminary knowledge is that we can use these diffusion profiles directly by convolving them with an input irradiance map to add subsurface scattering to it as a post-processing step. This is how the result looks after the convolution is applied. Now, this is remarkable, because we don't have to run a fully ray-traced simulation with participating media. However, these signals are stored as images, so normally this means that we compute a 2D convolution between them. Unfortunately this is very costly, but there are techniques to reduce this problem to several much cheaper 1D convolutions. One example is d'Eon's excellent technique. He takes into consideration that in a homogeneous and isotropic medium, the diffusion profiles are radially symmetric, therefore it is possible to take a 1D slice of this profile, as shown below, and try to fit it with a sum of Gaussians, which are individually also radially symmetric. This means that we can use a cheaper set of 1D convolutions instead of using the 2D profile directly. This is an example input signal and the results with d'Eon's technique with different numbers of Gaussians, compared to the true diffusion kernel.
It is important to point out that, due to the mathematical properties of Gaussians, this technique requires one horizontal and one vertical convolution per Gaussian. These are 1D convolutions. This also means that if we'd like to obtain high quality subsurface scattering, we need at least 4 Gaussians and therefore 8 convolutions. This is not suitable for most real-time applications. However, it is a really smart idea and hasn't really been improved since 2007. And honestly, when we started this project, we didn't think anyone could realistically come up with something better. Our quest was nonetheless to obtain high fidelity results with a separable kernel using only 2 convolutions, which is marked with green up there.

Visualizing the SVD of the diffusion profile, it is clear that the signal is non-separable: it is not possible to write this 2D function as a mere product of 1D functions. However, the semi-log plot tells us that the higher ranked singular values decay rapidly, meaning that most of the information here is not random. It has a lot of structure, therefore a rank-1 approximation sounds like a good starting point. The plan was to treat the diffusion profile here on the right as a matrix for which we compute the SVD. Here you can see the one singular value that we're taking and the corresponding left and right singular vectors. We then compute one horizontal and one vertical convolution using these singular vectors to reconstruct the diffusion kernel and obtain the output. This is the input, this is the rank-1 SVD reconstruction, and this would be the ground truth. We can see that the separable SVD approximation is indeed looking very grim; there is a world of difference between the two. So wow, this is surprising, especially considering that the Eckart–Young theorem teaches us that the SVD is the best reconstruction in terms of the Frobenius norm, which corresponds here to the RMS error. This is the absolute best reconstruction we can obtain with respect to the RMS error. Very disappointing. Here is d'Eon's algorithm, the one with the 1D slice fitted with one Gaussian, which means the same number of convolutions and hence the same execution time as the rank-1 SVD, and here is the ground truth. This is how the SVD looks on a real-world scene compared to using the true kernel. Looking at it again in our disappointment, we noticed that the SVD yields an overall darker image, therefore the reconstruction is not energy conserving.

A new idea came up, perhaps the last one we would try before putting the project on ice and calling it a day: what if we solved a minimization problem where the reconstructed kernel would still be as close as possible to the diffusion profile, but would also be energy conserving? This should definitely be a viable option. And the results are dreadful. Horrendous, I just don't know what to say. Look at the nose: somehow the input irradiance signal shows up as a really nasty ringing artifact, and we have the same around the ear. We visualized the actual kernel on a disc of light to see what went wrong, and yes, we see that it is indeed dreadful, nothing like the simulated diffusion kernel. But we hadn't the slightest idea why this happened. Visualizing the kernel itself in a simple 1D plot and staring at it for a while, it looks like a great separable approximation with respect to the RMS error.
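Here is a minimal sketch of the rank-1 SVD idea, using a synthetic radially symmetric kernel as a stand-in for the measured diffusion profile; none of this comes from the actual implementation. It shows how the separable kernel is built from the leading singular vectors and why "best in the RMS sense" does not by itself guarantee an energy-conserving, visually faithful result.

```python
import numpy as np

def radial_kernel(size=65, falloff=0.15):
    """Toy radially symmetric stand-in for a diffusion profile R_d (not measured data)."""
    r = np.hypot(*np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1])
    k = np.exp(-falloff * r) / (1.0 + r)
    return k / k.sum()

K = radial_kernel()

# Rank-1 approximation via the SVD: keep only the largest singular value.
U, S, Vt = np.linalg.svd(K)
u = U[:, 0] * np.sqrt(S[0])   # would be applied as the vertical 1D pass
v = Vt[0, :] * np.sqrt(S[0])  # would be applied as the horizontal 1D pass
K_rank1 = np.outer(u, v)

# Eckart-Young: this is the best rank-1 fit in the Frobenius (RMS) sense...
rms = np.sqrt(np.mean((K - K_rank1) ** 2))
print("RMS error of rank-1 SVD:", rms)

# ...yet it is not energy conserving in general, which shows up as a darker image.
print("true kernel energy:", K.sum(), "rank-1 energy:", K_rank1.sum())
```

In a renderer, u and v would be used exactly like one Gaussian term in the previous sketch: one vertical and one horizontal convolution pass over the irradiance.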
Most of the energy of the signal is close to the origin, and the optimizer tries to reconstruct these details as closely as possible. Please note that the kernel plots are deceiving: these signals indeed have the same amount of energy, but the tail of the fit extends really far away, and this makes up for the seemingly lower energy of the signal. This very delicate thing took a week of my life, and I kind of want it back.

So what if we minimized not the RMS error by itself, which forces the optimizer to concentrate on the origin of the signal where the energy spike is, but added a guide function, an envelope that tells the optimizer to care less about the regions close to the origin and focus a bit more on far-range scattering? This is the fit we originally had, and this is a very simple distance-weighted guide function I had in mind. Imagine that we now have a more general model, for which we used k = 0, a constant envelope, to obtain those horrendous results. I now tell the optimizer to use k = 1, which gives higher weight to the regions further away from the origin. This is what we obtain. Very intuitively, we have a signal with the same amount of energy, as if we pushed it down at the top to neglect the origin and added this energy to the tail of the signal, focusing on the reconstruction of far-range scattering. And we can even go to k = 2, which essentially squishes the signal a bit more to emphasize far-range scattering at the price of neglecting sharp close-range details. Back to the original fit; let's weight it by distance a bit by plugging k = 1 into the optimizer. Almost there. Okay, let's go to k = 2, a bit more emphasis on far-range scattering. Now this looks remarkably close to the ground truth. This is the journey behind the guided optimization technique, which is separable, requires only two convolutions, and is one of the techniques we propose for applications with strict real-time constraints.

We also propose another technique, which we have mathematically derived and for which I admit not having an intuitive story. So before I dare show you the next slide, take a big breath, and let's go. Sorry, this is how it looks, and what is remarkable about it is that it follows a completely different paradigm. What we are aiming for here is not to make our kernel close to the original diffusion kernel, but to make the result of the convolution the same. It is almost like minimizing the L2 distance of the resulting convolved images, not the kernels themselves: optimizing for images, not kernels. This is impossible to solve in the general case, so in order to accomplish it, one needs to confine the solution to a class of input irradiance signals, input images, where it works well. In our derivation, we plugged in 1D signals as inputs, which means that the technique should behave analytically on signals like this. And the most remarkable thing is that this mere rank-1 separable approximation is truly analytic for this class of signals, meaning that it mimics the effect of the true kernel perfectly. Let's take a look at a practical case; of course, not all signals are 1D signals. This is our result with the analytic preintegrated kernel, and this is the ground truth. Very close to being indistinguishable. Furthermore, it is really simple to implement, and it has a closed-form solution that does not require any kind of optimization procedure.
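Returning to the guided optimization for a moment, the toy sketch below illustrates the idea with NumPy and SciPy. The (1 + r)**k guide function, the initialization, and the synthetic profile are assumptions made purely for illustration, not the paper's actual objective or parameters; the point is only how a distance-dependent envelope shifts the fit from close-range detail toward far-range scattering as k grows.

```python
import numpy as np
from scipy.optimize import least_squares

def radial_kernel(size=33, falloff=0.3):
    """Toy stand-in for the simulated diffusion profile R_d (not measured data)."""
    r = np.hypot(*np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1])
    k = np.exp(-falloff * r) / (1.0 + r)
    return k / k.sum()

def guided_fit(R, k=1.0):
    """Fit a separable kernel a(x)a(y) to R with a distance-based guide weight.
    The weight (1 + r)**k is a hypothetical envelope: larger k emphasizes
    far-range scattering; k = 0 recovers the plain least-squares fit."""
    size = R.shape[0]
    r = np.hypot(*np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1])
    w = (1.0 + r) ** k

    def residuals(a):
        return (w * (np.outer(a, a) - R)).ravel()

    a0 = np.sqrt(np.maximum(R[size // 2, :], 1e-8))  # initialize from the central row
    return least_squares(residuals, a0).x

R = radial_kernel()
for k in (0.0, 1.0, 2.0):
    a = guided_fit(R, k)
    err = np.sqrt(np.mean((np.outer(a, a) - R) ** 2))
    print(f"k = {k}: unweighted RMS of separable fit = {err:.6f}")
```

With k = 0 this reduces to an unguided least-squares fit of a separable kernel, which is exactly the behavior described above: all the effort goes into the energy spike near the origin.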
One more interesting detail for the more curious minds: this technique is analytic for a greater class than only 1D signals, a class that we call additively separable signals.

So what about artistic editing? The preintegrated technique is great, but it does not offer any kind of artistic control over the output. The guided approximation requires optimization, but in return it also offers some degree of artistic freedom over how the desired output should look. We also have a technique that simply uses two separable Gaussians of different variances, one each for close-range and far-range scattering, giving full artistic freedom in adjusting their magnitudes. Note that these two Gaussians are not the same as in d'Eon's approach with two Gaussians, as we do not use the radially symmetric 1D slice directly. A real-world example: this is the input irradiance, this is heavily exaggerated far-range scattering, this is heavily exaggerated close-range scattering, and this is a more conservative, really good-looking mixture of the two.

Wrapping it up: the SVD is great for applications that can afford higher-rank reconstructions; the kernel preintegration is a simple technique that is analytic for additively separable signals; guided optimization is a more general version of the preintegration that can be conveniently tuned with one parameter; and the manual approximation gives many degrees of freedom to artists, while its accuracy is quite reasonable, comparable to four Gaussians with previous techniques. These are different techniques with different levels of scientific rigor and different target audiences, ranging from scientists to artists working in the industry. Even though we used examples with skin to demonstrate our techniques, it is important to point out that they work for a variety of translucent media, such as plants, marble, the materials in the still-life scene, and milk. The most important take-home message from this project, at least for me, is that it is entirely possible to do academic research together with companies and create results that can make it into multimillion-dollar computer games, while also producing proven results that are useful for the scientific community. Thank you.
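As a closing illustration of the two-Gaussian artistic control described above, here is a small Python sketch under stated assumptions: the sigmas, blending weights, and normalization are placeholders chosen for the example, not values from the shipped implementation. It blends a narrow and a wide separable Gaussian blur, which is what gives separate knobs for close-range and far-range scattering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def artist_sss(irradiance, near_strength, far_strength,
               near_sigma=2.0, far_sigma=10.0):
    """Artist-controlled mix of two separable Gaussians (a sketch of the idea,
    not the production shader): a narrow blur for close-range scattering and
    a wide one for far-range scattering. gaussian_filter applies each blur as
    sequential 1D convolutions, so the cost stays separable."""
    near = gaussian_filter(irradiance, sigma=near_sigma)
    far = gaussian_filter(irradiance, sigma=far_sigma)
    mix = near_strength * near + far_strength * far
    # Keep the overall energy roughly constant regardless of the chosen weights.
    return mix / max(near_strength + far_strength, 1e-8)

# Toy irradiance: a bright square patch (placeholder data).
img = np.zeros((128, 128))
img[48:80, 48:80] = 1.0

subtle = artist_sss(img, near_strength=0.8, far_strength=0.2)
dramatic = artist_sss(img, near_strength=0.2, far_strength=0.8)
print(subtle.mean(), dramatic.mean())
```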