Hashim Ishfaq Asks: Why can't we see thick-film interference (Walter Lewin lectures)?

I was working along the Walter Lewin lecture on thin-film interference, and he says we cannot see interference in thick films because there are so many colours in the visible spectrum for which constructive interference can be observed; no one colour dominates, so you'll see white light. This is in the lecture, at around the 56:30 mark. He tells us to figure this out by comparing the colours at a thickness of 0.1 mm to those at a thickness of 133 nm.

I am not sure how exactly I could show this, but I know the formula that relates the phase differences for constructive interference is

    2π(2d/λ) + π = 2πn

The π on the left-hand side is there because only one light ray is phase-shifted upon reflection; the other one is not. I made λ the subject of the equation and inputted the scaled thicknesses, 0.1 mm (10^5 nm) and 133 nm, to get a function for λ:

    λ = 2d / (n − 1/2)

From what I can see, there are more integer n values for the larger value of d. So that means more wavelengths of light can constructively interfere, according to my understanding. Whereas with 133 nm, there is only one integer value (n lies between 0.88 and 1.165) that allows for perfect constructive interference.

I'm only taking into account perfect integer phase differences, but even imperfect ones have consequences. For example, at a thickness of 133 nm, 650 nm light has a phase difference of 2.2π and dominates the light intensity, so the film looks predominantly red. Also, the graph for destructive interference would look pretty much the same; I'm really confused about this too, because there'd be a lot of overlap. So what's the nuance here?

Wiesel Asks: Similarity Measure of Simulated Time Series vs Observed Time Series

In my work I have an observed time series and simulated ones. I want to compare the light curves and check for similarity, to find out which simulated curve fits best, respectively which parameters simulate the light curve best. At the moment I do it with the cross-correlation function from NumPy, but I am not sure that is the best option, because the light curve with the highest cross-correlation coefficient does not always look like the best fit compared to other simulations with a lower CC coefficient. Is there another way to measure similarity? I read something about the chi-square statistic, but I am not sure how it works and how it could be applied to my problem.

The observation data I use is not evenly binned, so I used the interpolation function of SciPy. I thought about using Savitzky-Golay smoothing. Should I also smooth the observation data, or would I lose true features of my data?

At the moment I am using a brute-force method to try out all possible parameters and simulate the corresponding light curve. The problem is that this takes a lot of time with 20 parameters. The parameters are more or less dependent on each other, so I can't use a least-squares fit method, because there are multiple possible minima. Is there a simple method that I overlooked, or is a restricted brute-force fit my best option? In the picture below you'll see one plot with the simulation and the observation data.
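On the chi-square idea mentioned in the time-series question: given per-point measurement uncertainties σ_i, the statistic χ² = Σ((obs_i − sim_i)/σ_i)² penalizes point-by-point deviations, so unlike a raw cross-correlation it is not fooled by a curve that merely has a similar shape at the wrong level. A minimal sketch, using made-up toy arrays (none of the names or data are the asker's):

```python
import numpy as np

def chi_square(observed, simulated, sigma):
    """Chi-square distance between an observed and a simulated curve.

    All three arguments are 1-D arrays on the same time grid;
    sigma holds the per-point measurement uncertainties.
    """
    residuals = (observed - simulated) / sigma
    return float(np.sum(residuals ** 2))

# Toy example: pick the simulation with the SMALLEST chi-square,
# rather than the one with the largest cross-correlation coefficient.
t = np.linspace(0.0, 1.0, 200)
observed = np.sin(2 * np.pi * t)
sims = {"good": np.sin(2 * np.pi * t) + 0.01,   # small offset
        "bad":  np.sin(2 * np.pi * t + 0.5)}    # phase error
sigma = np.full_like(t, 0.05)

scores = {name: chi_square(observed, s, sigma) for name, s in sims.items()}
best = min(scores, key=scores.get)
print(best)  # "good"
```

If the σ_i are realistic, a fit with χ² per data point near 1 is consistent with the noise level, which also gives a rough absolute quality scale that cross-correlation lacks.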
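The resampling step the asker describes (uneven bins, then SciPy interpolation), plus optional Savitzky-Golay smoothing, might look like the sketch below. The array names, noise level, and window settings are illustrative assumptions; smoothing does suppress real features narrower than the window, so comparing the curve before and after is prudent:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import savgol_filter

# Hypothetical, unevenly sampled observation (stand-in for real data).
rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 10.0, 300))
flux = np.sin(t_obs) + rng.normal(0.0, 0.1, t_obs.size)

# Resample onto the even grid the simulations use.
t_grid = np.linspace(t_obs.min(), t_obs.max(), 500)
flux_even = interp1d(t_obs, flux, kind="linear")(t_grid)

# Optional Savitzky-Golay smoothing: window_length must be odd and
# greater than polyorder.  Wider windows smooth more but flatten
# narrow real features (eclipses, flares), so inspect both curves.
flux_smooth = savgol_filter(flux_even, window_length=21, polyorder=3)
print(flux_even.shape, flux_smooth.shape)  # (500,) (500,)
```

A common compromise is to smooth only for visual comparison and parameter screening, but compute the final goodness-of-fit statistic on the unsmoothed data so that no true variability is thrown away.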
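On "is there a simple method that I overlooked": one commonly used middle ground between an exhaustive grid and a local least-squares fit is a global optimizer such as SciPy's differential evolution, which searches the whole bounded parameter box and so tolerates multiple local minima. This is only a sketch: the three-parameter toy model below is a placeholder for the real 20-parameter light-curve simulator, and the bounds are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the real light-curve simulator.
def simulate(params, t):
    amp, freq, phase = params
    return amp * np.sin(freq * t + phase)

t = np.linspace(0.0, 10.0, 200)
true_params = (1.5, 2.0, 0.3)
observed = simulate(true_params, t)

# Cost function to minimize (could equally be the chi-square above).
def cost(params):
    return np.sum((observed - simulate(params, t)) ** 2)

# One box constraint per parameter; the optimizer samples the whole
# box, so it is far less likely to get stuck in a local minimum than
# a gradient-based least-squares fit started from one guess.
bounds = [(0.1, 5.0), (1.0, 3.0), (-np.pi, np.pi)]
result = differential_evolution(cost, bounds, seed=1, tol=1e-10)
print(result.x)
```

For 20 correlated parameters the search is still expensive, but unlike a full grid its cost grows with population size and iterations rather than exponentially with dimension.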
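Returning to the thin-film question: the integer-counting argument can be checked numerically. The sketch below assumes the condition λ = 2d/(n − 1/2) from the question and a visible band of 400–700 nm, ignoring the film's refractive index as the question does:

```python
def constructive_wavelengths(d_nm, lo=400.0, hi=700.0):
    """Visible wavelengths (nm) with 2*pi*(2*d/lam) + pi = 2*pi*n.

    Solving for lam gives lam = 2*d / (n - 1/2); the extra 1/2 comes
    from the pi phase shift at one of the two reflections.
    """
    out = []
    n = 1
    while True:
        lam = 2.0 * d_nm / (n - 0.5)
        if lam < lo:          # wavelengths only shrink as n grows
            break
        if lam <= hi:
            out.append(lam)
        n += 1
    return out

thin = constructive_wavelengths(133.0)   # 133 nm film
thick = constructive_wavelengths(1e5)    # 0.1 mm film
print(len(thin), [round(l) for l in thin])  # 1 [532]
print(len(thick))                           # 214
```

The thin film picks out a single wavelength (532 nm, green), while the 0.1 mm film admits a couple of hundred constructive wavelengths spread across the whole visible band, which is why the reflected light looks white.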