I read the first book (not including the prequels) in Isaac Asimov’s Foundation series, and thoroughly enjoyed it. I was more than a little trepidatious about Asimov’s trilogy simply because classics can be hit or miss, and I am not that big into space operas. I should not have been worried, because this first book was quite easy to get lost in.
Foundation (book 1) is the story of a dying empire, but you wouldn’t know that on the face of it. The empire has thrived for some 12 thousand years with prosperity and (I think) peace across the galaxy. The only hint that the galaxy is doomed comes from a mathematician who has become an expert in the fictional field of psychohistory. Psychohistory proposes to use the data of the present and of history to project what will happen, to various levels of probability. The Foundation is the creation of this mathematician, Hari Seldon, who suggests we need an Encyclopedia Galactica to hold all our knowledge of the arts and sciences. It is not merely a love of culture that Seldon clings to; it is the conviction that an extended dark age will come to pass across the galaxy, and only with the Foundation can we hope to lessen (not stop) its severity. That is, 30,000 years of darkness, which with Seldon’s help we may shorten to 1,000 years.
Big picture, I loved the premise and the writing, even if the writing is a little bland. Our characters don’t have much depth. The story is presented less as a coherent narrative than as a progression of linear short stories (or novellas) as we begin to fall into the dark ages. As some reviewers have noted (on Goodreads), Asimov doesn’t do much to build his characters or allow them to evolve, nor does he do much to convince the reader to feel invested in the Foundation. While I love a good character-driven narrative, I personally still loved this one. I think the reason it worked was my inherent love of science and history. Asimov assumes the reader will have an appreciation for how necessary these aspects of our society are. Naturally, I latched on to the need to preserve scientific thought, and that alone really got me excited about the premise.
Fast forward to today, some 70 years after the book was published, and I can’t help but see similarities to the current Climate Crisis. Anthropogenic warming is incontrovertible. Denying that is like denying the covid vaccine, the moon landing, or that 2+2 is 4 (is it though?). Heat waves, droughts, and floods are happening at record rates across the world. In 1990 the IPCC predicted temperature rises between 1.5 and 4.5°C by 2050; we are at 1°C. They also predicted 30 to 50 cm of sea level rise; we are at 20 cm. That was 30 years ago. We’ve seen deniers of covid despite the reality staring them in the face, and the problem is exacerbated with climate change because it is even harder for people to see; it requires a modicum of forethought.
Even if we stopped producing greenhouse gases entirely, the negative effects would continue throughout the rest of the century. We have a global problem that is irrevocably damaging the planet, and even in the best of situations, the outcome is dire. Without decisive action, the issue will magnify and eventually become irreversible; imagine trying to terraform Mars overnight, because the longer we wait, the more our problem becomes one of that caliber. Much like in the book, we have leaders who don’t much care about what the future holds because they don’t have to live in it. It takes present consequences to prompt action in the series, so at what point does that happen in the climate crisis? I fear only when it’s far too late.
The purpose of this blog is to explore an idea I had about how we can understand the astrobiology of Titan’s early system, specifically in its ocean. Not a lot is known about Titan’s early history because its surface is very young (at most 1 Gyr). Nevertheless, loose constraints have been proposed for Titan’s history, and from those, there is the potential to explore the deeper implications for the habitability of Titan. Dragonfly is planning to visit Titan in the 2030s primarily to investigate its potential to harbor, or foster, life. Titan’s conditions make it rich in organics, and when those are mixed with liquid water, biomolecules like amino acids form. I’ve talked about this before, but here I want to think about a different aspect of Titan’s habitability: its ocean.
Titan has great conditions for the origin of life. It does not have great conditions to sustain it. In the melt of its impact craters, liquid water may persist for decades, even centuries, but it won’t last. Its ocean is likely beneath a ~100 km thick ice crust, so it is unlikely that life at the surface could make it down. Ongoing work has shown that transport is possible, but it’s still limited. However, it has been hypothesized that Titan’s shell hasn’t always been so thick. A thinner shell would likely be more prone to overturning (like, say, Europa’s), which would suggest a great deal of mixing. Unfortunately, we don’t even know if Titan had its large reserve of organics during this time. If it did, the higher impact rates likely facilitated a great deal of mixing. We just don’t have the data needed to know this for sure, and we may never. Nevertheless, we have made a great deal of progress at expanding the picture of Titan’s past, and despite the limitations, I think we can make that picture a little clearer.
A while back I wrote a blog post about Titan’s likely history of outgassing driven by its evolving interior. This is likely the biggest hurdle in constraining Titan’s history, but it is necessary for a good estimate of organic production. Therefore, the first step in getting that estimate is to consider the various processes that could have instigated the outgassing of methane. This can likely be modeled by assessing the stability of methane and/or other volatiles in the interior. Once the most likely causes are constrained, we next have to consider their timing. The evolution of the core is something that can be modeled, and it can be constrained at least to a range of possible scenarios. This might lead to a range of potential outgassing profiles, but as with anything, it will provide limits to work from. I recognize that these cannot be constrained absolutely, but we can create reasonable, evidence-based scenarios to work from. This could be a project unto itself: categorizing the possible range of outgassing events in Titan’s history. From there, it’s a photochemical problem.
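To make the idea of an "outgassing profile" concrete, here is a minimal Python sketch of how one might parameterize episodic outgassing events. Everything here is hypothetical: the event times, amplitudes, and decay constants are placeholders, not constrained values.

```python
import numpy as np

def outgassing_profile(t, events):
    """Toy methane outgassing rate (arbitrary units) as a sum of
    episodic pulses, each decaying exponentially after its onset.
    `events` is a list of (onset_Gyr, amplitude, decay_Gyr) tuples."""
    rate = np.zeros_like(t)
    for onset, amp, tau in events:
        mask = t >= onset
        rate[mask] += amp * np.exp(-(t[mask] - onset) / tau)
    return rate

# One hypothetical scenario: an early core-driven pulse plus a late episode.
t = np.linspace(0.0, 4.5, 451)            # time since formation, Gyr
events = [(0.5, 1.0, 0.3), (3.5, 0.4, 0.2)]
rate = outgassing_profile(t, events)
total = np.trapz(rate, t)                 # integrated outgassing, arbitrary units
```

Sweeping over many such event lists would give the "range of potential outgassing profiles" to hand off to the photochemistry step.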
Photochemical Production and Deposition
Models exist to predict the production rates of various organics under today’s conditions. Of course, to understand production through time, these models would need to account for changing solar radiation and atmospheric conditions. The former is likely the easier to constrain. The latter would likely entail using available information to predict how the atmosphere would behave under various methane loads. For example, Tobie et al. (2006) explain how the initial outgassing would saturate the atmosphere and soak the surface with methane. Models can be used to predict the thermal profile of the atmosphere, which would affect its thickness. Therefore, it should be possible to arrive at an estimated atmospheric profile to model production under. This process would be repeated at different points in time (or possibly designed to evolve with time) and, if feasible, performed for a range of outgassing profiles.
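As a toy illustration of the "changing solar radiation" piece, one could scale a present-day production rate by the Sun’s evolving luminosity, e.g. using the Gough (1981) approximation for the faint young Sun. Treating production as linearly proportional to total flux is a big simplification (photochemistry cares about the UV band specifically), so this is only a sketch.

```python
import numpy as np

def relative_solar_luminosity(t_gyr, t_now=4.57):
    """Gough (1981) approximation: solar luminosity relative to today,
    with t measured in Gyr since the Sun formed."""
    return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_now))

# Toy assumption: organic production scales linearly with solar flux.
production_today = 1.0                    # arbitrary units
t = np.linspace(0.5, 4.57, 100)           # Gyr since solar formation
production = production_today * relative_solar_luminosity(t)
```

A real treatment would fold this into a photochemical model along with the evolving atmospheric profile, rather than a simple linear scaling.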
A more difficult task may be predicting deposition rates. This has been attempted to some extent (e.g., Lara et al., 1993), but it is a process that is not well understood (or at least that is my impression). Furthermore, slight fluctuations in atmospheric conditions, likely finer than we can constrain, may affect the process; ultimately, we will have to make the best estimate possible.
Impact Cratering and Surface Overturn
Cratering rates are one of the better understood aspects of Titan’s history. With this, we can use existing models (or develop our own) to predict how much overturn occurs for impactors of varying size. This will allow for estimates of total mass transfer over time. Alternatively, it may be as simple as estimating the rate of complete surface turnover. This may be one of the easier tasks. We take a given shell thickness and impact it with impactors of a range of sizes. Each impactor can be estimated to overturn some area of the surface. A Monte Carlo approach (if I am remembering that correctly) could then be used to predict how long it would take for the entire surface to be overturned.
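Here is a minimal, highly idealized sketch of that Monte Carlo idea in Python: a flat grid stands in for the surface, every impact overturns a fixed-size circular patch at a random location, and we count impacts until every cell has been overturned at least once. A real version would draw from an impactor size-frequency distribution and convert the impact count to time via the cratering rate; all numbers here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 100                     # toy surface: n x n grid cells
overturn_radius = 5         # cells overturned around each impact point
grid = np.zeros((n, n), dtype=bool)
yy, xx = np.mgrid[0:n, 0:n]

impacts = 0
while not grid.all():
    # Random impact location; mark the overturned circular patch.
    cx, cy = rng.integers(0, n, size=2)
    grid |= (xx - cx) ** 2 + (yy - cy) ** 2 <= overturn_radius ** 2
    impacts += 1

# `impacts` is one Monte Carlo estimate of how many same-sized impacts
# are needed to overturn the whole surface once; repeat for statistics.
```

Repeating this over many random seeds (and realistic impactor populations) would give a distribution of complete-turnover times rather than a single number.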
Chemical Evolution in Titan’s Ocean
With that, we would have an estimate of the organic material transferred, and from there, we can predict the rate of change and the abundance of organic material. This is necessary information for understanding 1) how likely life is to arise in this environment and 2) how sustainable an environment it is through time. In my imaginary proposal, this would be less a dedicated project and more a large-scale overview of the habitability of the environment using existing information about the evolution of organics at these conditions. As for sustainability, the abundance of organics may decide whether life can thrive given the resources available.
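As a cartoon of the sustainability question, one could treat the ocean as a single box where organics are supplied at some rate and lost (consumed, destroyed) at a rate proportional to their abundance. The supply term would ultimately come from the overturn estimate; the units and rates below are entirely made up for illustration.

```python
import numpy as np

def organic_abundance(t, supply, loss_rate, m0=0.0):
    """Box model dM/dt = S - k*M, solved in closed form:
    M(t) = S/k + (m0 - S/k) * exp(-k*t)."""
    eq = supply / loss_rate
    return eq + (m0 - eq) * np.exp(-loss_rate * t)

t = np.linspace(0.0, 4.0, 200)                       # Gyr
m = organic_abundance(t, supply=1.0, loss_rate=2.0)  # arbitrary units
# The ocean approaches a steady-state abundance of S/k = 0.5 units;
# whether that level could sustain life is the habitability question.
```

A time-varying supply (from the outgassing and overturn estimates) would replace the constant S, but the steady-state intuition carries over.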
I recognize that this post contains both a lot and very little at the same time. This may well be an impossible thing to accomplish, and many would be unlikely to consider it a valuable use of resources given all the unknowns. I still think it presents a fascinating problem. Like many big problems, it would necessarily be incremental, either performed by many, or slowly by one. I’m positing this because I’m finishing my PhD (hopefully) in a year, and I need to start thinking about post-doc ideas. This problem stuck out to me. This blog post is my first attempt to really think through it and put pen to paper (if very loosely). This is an embarrassingly outlandish idea riddled with problems, so hopefully there is something here I can work with, because I am honestly terrified to discuss it with my lab tomorrow.
Way back in March I wrote about my upcoming work at LPSC. I recommend you check that out for a full introduction, but I will give a quick rundown before I get into the major details of this post. Pluto’s crust is thought to be entirely water ice. However, the extremely low temperatures allow other ices to be stable on the surface. The primary ices are carbon monoxide (CO), methane (CH4), and nitrogen (N2). We know ice can be very malleable on geologic timescales (e.g. glaciers on Earth, craters on Europa), but the rates of modification vary based on the ice parameters and the conditions of the surface. Water ice on Pluto is extremely rigid and strong, but the same cannot be said for the other ices. They are more prone to viscous relaxation (i.e. flattening due to gravity), and at least nitrogen is known to cycle from the surface into the atmosphere (losing some of its supply in the process). My work posits impact craters as a means to constrain the ways in which these ices contribute to the modification of craters. As I discuss in my previous post, the two processes primarily affecting impact craters are escape erosion (as with nitrogen) and relaxation. The extent to which these modify a crater depends on what ice the crater formed in. My work uses the degradational state (how shallow the crater is) to constrain what type of ice the crater must have formed in, and we use the surficial compositional data to test (or, more aptly, constrain) what the ice is made of. This gives us information about the volatile content in the region of the crater and the history of these volatiles.
In my previous post, we concluded that H2O, N2, and CH4 ices are the most likely for craters to form within. In my figure I show a series of possible scenarios. 1) A crater forms in a very thick layer of N2. Over time this would easily relax (flatten), and the N2 is almost certainly lost as well, leaving no trace today. 2) The same situation occurs, but in CH4 ice. This is both stronger and less likely to escape away. Nevertheless, it will still relax on the timescale of the solar system (i.e., the age of Pluto), so we would expect to see a crater formed in CH4 ice that is shallower than expected. 3) Imagine scenario 1, but the crater dips into the bedrock H2O ice, creating a crater formed in an upper layer of N2 ice with a base of water ice. The water ice is too strong to relax, and the N2 will likely be lost. Therefore, we are likely left with a crater that appears to be formed in H2O but is significantly degraded. 4) We reimagine scenario 3 with CH4 instead of N2, and the upper layer doesn’t escape. In fact, we are left with a pristine crater that appears to be formed in CH4 ice. Then, 5) (not shown) we have the standard scenario of a crater formed in pure water ice, which would be unlikely to modify at all.
As the title suggests, I am working on finalizing this work and compiling it into a publishable manuscript. I had completed most of the work by LPSC, but there were two major steps left. 1) I needed to figure out how small to go in the craters I measured; I only measured down to 15 km craters because of time constraints. 2) I needed to add a step in the code to remove the terrain slope (large-scale topographic variations that craters likely impacted into). This would essentially put the terrain at ~0 km and leave only the crater topography. I did this when I measured Titan impact crater depths, but I didn’t for Pluto due to time constraints. This is a major step because it requires me to redo all the measurements I’ve done. This isn’t hard, just tedious. At 8 profiles for each crater with over 300 craters, that’s thousands of crater profile measurements. Alternatively, I could use the profile positions (assuming I saved them) that I measured for each profile and automate it to find the height at those same positions. I would love some feedback on that idea, but right now I am planning to do it all manually. I have added the step in the code to remove the terrain slope, which leaves the step of processing the craters again (one way or another).
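The slope-removal step itself is conceptually simple. Here is a toy Python version of the detrending idea on a synthetic profile (my actual code differs, and the crater shape and fit window below are made up for illustration): fit a line to the terrain outside the crater and subtract it, leaving the background near 0 km.

```python
import numpy as np

def detrend_profile(distance, elevation, crater_mask):
    """Remove the regional slope from a topographic profile.
    Fit a line to the terrain *outside* the crater (where crater_mask
    is False) and subtract it, leaving only the crater topography."""
    fit = np.polyfit(distance[~crater_mask], elevation[~crater_mask], 1)
    return elevation - np.polyval(fit, distance)

# Synthetic example: a sloped background with a bowl-shaped crater.
x = np.linspace(-30, 30, 601)             # km along the profile
background = 0.02 * x + 1.0               # regional slope
crater = -0.5 * np.exp(-(x / 8.0) ** 2)   # toy crater shape
elev = background + crater
mask = np.abs(x) < 15                     # flag the crater region
flat = detrend_profile(x, elev, mask)     # background now sits near 0 km
```

If the saved profile positions exist, the same fit-and-subtract could be rerun automatically at those positions rather than remeasuring by hand.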
Sadly, we are not done yet. After presenting at LPSC, my conclusions prompted me to reconsider a major assumption of mine. That is, are the surficial composition measurements reflective of the underlying ice? The surface is covered with material, including surficial ices. I posit in this work that there are only so many possible ways a crater can form, and its shape speaks to the type of ice needed for that shape to be viable. The compositional data is intended to act as further confirmation, with limited reach. Nevertheless, I wanted a way to demonstrate this is a fair assumption, so I started to consider what type of measurements I could take to test this claim. Let’s take another look at my figure above.
My figure highlights in orange where we would expect to measure the highest amounts of the ice the crater is formed in. In the areas of the orange strips, this is expected to be covered, at least in part, by surficial deposits. My work at LPSC considered the crater composition from rim to rim. That is, we would expect to include some of the purest and some of the most biased regions of the crater. If I want to test whether these measurements are reflective of the crater’s ice layer, I should be able to measure the composition in each region and show the rims are richer in the predicted ice than the floor of the crater. With 300+ craters, the big question is how do I do that? I could map precise regions in ArcGIS, but that would take a great deal of time; still, it is likely the most accurate approach. The alternative, which I am currently working on, is to import the data into MATLAB. There I can automate the process and take measurements in regions of set sizes around the rims (10%, 20%, etc.). Except I don’t know how wide to make these regions. Nevertheless, I am currently in the process of doing this. The other option is to process the eight topographic profiles and mark where I want the rim and floor to be measured. This would take about as much time as in ArcGIS. I wish I could just do that with the points I use to take the depth measurements, but the region where I tell the code to look for the peak rim is not necessarily what I would consider the entire rim. Why? Simply put, profiles get really weird.
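To make the automated option concrete, here is a Python sketch of a rim-annulus versus floor measurement on a synthetic composition map. The annulus width (here ±20% of the rim radius) is exactly the knob I don’t know how to set, so treat it as an assumption, and the map, sizes, and values are all invented for the example.

```python
import numpy as np

def region_means(comp_map, cx, cy, radius_px, rim_width=0.2):
    """Mean composition on the rim annulus vs. the crater floor.
    Rim = normalized radii within +/- rim_width of the rim crest;
    floor = the inner half of the crater. Sizes are assumptions."""
    yy, xx = np.indices(comp_map.shape)
    r = np.hypot(xx - cx, yy - cy) / radius_px   # radius in rim units
    rim = (r >= 1.0 - rim_width) & (r <= 1.0 + rim_width)
    floor = r <= 0.5
    return comp_map[rim].mean(), comp_map[floor].mean()

# Synthetic check: a map that is richer in "crater ice" near the rim.
n = 201
yy, xx = np.indices((n, n))
r = np.hypot(xx - 100, yy - 100)
comp = np.where((r > 30) & (r < 50), 0.8, 0.2)   # rim-enhanced band
rim_mean, floor_mean = region_means(comp, 100, 100, 40)
```

Looping this over crater centers and radii from the existing catalog would give rim-versus-floor statistics for all 300+ craters without any manual mapping, which is the appeal over ArcGIS.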
Now, here I am. The main purpose of this post is to think through what I am doing and request feedback. Although, after writing this, I am beginning to think the best option is to map these regions in ArcGIS but only do it for the largest 100 craters (or some sufficiently large sample size). I don’t necessarily have to test all the craters, but in this scenario I might want to focus on the craters richest in N2 and CH4, seeing as these are the ones where the assumption applies. Now I am very disheartened because I’ve spent days working to do this process in MATLAB, and I am seriously considering switching to ArcGIS because I literally talked myself into it. Let me know what you think!
I’m a fan of String Theory, but I came to this book more than two decades after it was written. Because of that, one thing bugged me throughout: how much has actually changed in one of the most fringe areas of physics? The book starts out with a recap of basic physics (i.e. quantum vs. relativity). The problem is I am familiar with all the ideas explored in this book. I had read all but Brian Greene’s newest book, Until the End of Time, before this, and that, coupled with all the other material I’ve consumed, made the recap feel more distracting than anything. While I am a big proponent of constantly reconsuming things, especially ideas outside your realm of expertise, this book is necessarily less well developed than everything that has come since. The sign of a good scientist and author is learning to communicate better with time. It isn’t particularly bad, but it was easy for me to zone out.
Then we get into string theory. Even here, I was familiar with most of the major ideas. I was hoping to leave this book with a better appreciation of the finer details of the theory, but I found it was most effective at communicating the broad ideas. The finer details were really hard to get through and failed to make a lasting impression. I feel like this book would have been a much more positive experience if I had read it earlier in life because it would have been an excellent introduction to the field and a precursor to Greene’s own follow-up book, The Fabric of the Cosmos. I do think it is time I return to Greene’s other book, The Hidden Reality, which was the first book of his I ever read; that was with very little background.
If you’re interested in learning more about this theory, I highly recommend watching the Loose Ends video I posted at the top of this post. For a brief review, string theory proposes that the smallest things in nature are tiny vibrating strings of energy, where the vibration of each string is what defines the type of particle it is (e.g. quarks, neutrinos, electrons, etc.). These strings can perfectly reproduce our current model of particle physics, but it comes at a cost: 1) it requires the existence of many more dimensions, and 2) it suggests all of our particles have a twin symmetric particle. Why don’t we see these other dimensions? They are small and folded in on one another. If you have a problem with the idea of tiny dimensions, I found it helpful to remember that our current 3D space used to be much, much more constricted before it began to expand. They don’t say this explicitly, but my mind figures perhaps the process of expansion only applied to the 3 dimensions we see. I wonder what Greene would say to that logic? Take it with a grain of salt. This theory is fundamentally mathematical, and we have yet to show it experimentally.
String Theory’s true success is in connecting Quantum Mechanics with General Relativity, because the math of the two fundamentally disagrees (I think in particular situations, like a black hole, with strong gravity in very small spaces). The true beauty, as Greene suggests, is not that it necessarily needs to be a description of reality; it is that String Theory proves the two laws are reconcilable. It may be that this is not the true description of our reality. Nevertheless, it shows that a connection can exist. Now, is it worth believing? That’s where things get really complicated.
The theory itself I love, despite my issues with the book. It’s a fascinating concept with compelling motivations. Many Goodreads reviewers seem to approach string theory with a level of cynicism: some dismiss it because they struggle to understand it; others because it reaches into the currently unknowable. However, there is a strong argument to be made for using the information we have available to best describe the nature of the universe. As we strive to improve these descriptions, we push ourselves forward in hopes that they can be improved further. That may or may not happen. The problem I have with opponents of this theory is that they seem comfortable dismissing a theory that may very well be the nature of reality simply because the physics is so difficult to constrain. Such a mindset will merely ensure that what is currently unknowable remains unknowable.
The Large Hadron Collider was hoped to provide indirect evidence for String Theory. The energies and technology needed to observe strings are far outside our wheelhouse, but string theorists had hoped the energy at the LHC would be enough to produce the larger byproducts of the theory, the symmetric particles that we have yet to observe. This did not happen. However, string theorists had already noted it may be more difficult to reach the energies needed than those achieved with the LHC. The simplest explanation for why string theorists were unable to make a fixed prediction of what energies are needed to produce the predicted particles is that there is a large array of possible configurations of string theory. At one time, the number was small enough to brute-force the process, but we now recognize far too many solutions exist to truly test them all. It is, in that way, currently unfalsifiable. Nevertheless, we are brought back to the point I made before: it is still the best way we have to describe reality.
If you are interested in this topic, you could read this book. It’s worth noting most people I see enjoyed this book much more than I did. However, there is an ample supply of more recent resources you can pursue too, or you could read the book and follow up with the most recent discussions available. Here are some of the resources I sought out. The first video I posted at the top of the blog was a fantastic discussion about the history and current state of String Theory hosted by Greene at the World Science Festival in 2019. Sean Carroll did a discussion with Greene, where Brian Greene put his bets on String Theory being a real description of reality at a 50/50 shot (obviously an off-the-cuff comment). This was a great casual discussion. Another episode of Sean Carroll’s podcast had a more formal, string-theory-specific discussion as well. Lastly, Greene discusses String Theory, black holes, and other topics with Leonard Susskind (one of the founders of String Theory) in late 2020 on the WSF YouTube channel.
Of these, if you are coming in blind, I would recommend you check out the WSF YouTube discussion first. If you’re someone more familiar with it, you may find the other resources interesting too. Lastly, there is, of course, Greene’s adaptation of this book on PBS, which I have not watched, but I will soon.
Thank you to NetGalley for a copy of this book in exchange for a fair and honest review.
The Disordered Cosmos is probably the best book I have read all year. The book starts by focusing on cosmology and particle physics, giving a broad background. Then it evolves into a focused discussion of the author’s primary research, one area being Dark Matter. In this way, it works well as a science book. She gives a good background of the science in a way that I think really helps get the reader interested in what she does and in the cosmos. This is common in science writing, especially in cosmology. I found her writing as good as, if not better than, that of many people who write popular cosmology books. I have noticed some reviewers complain because they find this section difficult to get through, but I would urge you not to be turned away because of this. There seems to be this assumption that if you can’t understand everything in a book then it isn’t worth reading. Well, I’ll tell you a little secret: no one understands moderately advanced topics in science the first time they’re exposed to them. It takes time, and part of that process means being willing to get confused. You’re likely to still leave this book with a better appreciation for the science than when you started. If you’re interested in pursuing it further, then you can, and if not, that’s okay too. This is still meant for the average reader.
I think what really makes this book shine is when it transitions into a larger conversation about race in science. She starts with a discussion of the science of blackness, for example focusing on melanin. She uses ideas in space physics to study blackness, giving a new perspective on what it means to be black. The decision to do this is both fascinating and an effective transition from the cosmological discussion to the broader sociological discussions in the book. She goes on to discuss life as a scientist. She explores what it means to be a scientist, especially for her as a queer agender black Jewish fem scientist. In doing so, she explores how discrimination and racism have integrated themselves into the institutions of science and the process of science itself. Then she goes on to talk about the ways in which it needs to be improved. One of the major ideas she explores is the interconnectedness of everything. As a physicist, she is able to take this to a quantum level, but it extends far beyond that. Everything we do in science is influenced by the society we live in, including the colonial and racist mindsets within said society. If we do not acknowledge how we interact with our science, then we will continue to do flawed science. Part of that means ostracizing other voices, which leads to the low number of scientists who are black or who challenge the traditional gender binary.
For those who are interested, there was a recent(ish) paper specifically on this topic in AGU Publications titled “Double jeopardy in astronomy and planetary science: Women of color face greater risks of gendered and racial harassment” (Clancy et al., 2017). It discusses just how prominent an issue this is within our (the planetary science and astronomy) community. Furthermore, if you are interested in exploring more books on science, gender, and race, I would direct you to the list of books Dr. Chanda Prescod-Weinstein says inspired her in the writing of her book.
Now I could go on and on about this book, but I think really the best bet for you is just to pick it up and read it. I recommend it for everyone. While it may be somewhat esoteric in its science, I think you are seriously depriving yourself if you do not give it a shot. If you decide to pass on it because of the science, you would also be missing out on its more nuanced conversation about science, representation, and the black experience in science. Read this book!
This is an introduction review for what I cover in my upcoming LPSC oral presentation (2555).
Anyone familiar with Pluto knows it has a diverse range of geologic features. I show just a few of these in the figure above. The range of terrains that exist on Pluto diversifies its surface, giving it a unique beauty, even compared to other worlds like Mars or Venus (that’s a science fact). I cannot help but compare these images to the picture of Pluto I had in my mind growing up. That picture was defined by the episode of the Magic School Bus where the Friz takes the kids on a journey through the solar system. At the end, they visit Pluto (considered the last planet in the Solar System at that time). It was this dark, desolate place similar to the Moon or Mercury.
New Horizons shattered that picture in my mind, and I think the difference between the two is part of why I love Pluto so much. It proved to be much more than I ever expected. Why is that? Perhaps it was a lack of imagination. The team at the Magic School Bus failed to consider how significantly volatiles might modify the surface.
Much of what we see on Pluto’s surface is in fact driven by volatile processes. Anyone moderately familiar with Pluto is likely to know about Sputnik Planitia, the giant lake or sea of N2 that traps heat and produces beautiful convection cells. There are other lakes, dendritic networks, and glacial flows. All of this exists because the volatiles methane, carbon monoxide, and nitrogen are unstable at Pluto conditions. This range of modification can be studied and quantified on an individual basis, but on a global basis as well. That is, we can consider how it shapes Pluto’s crater population, which can work as a reference point for how much degradation is occurring.
The focus of this post will be on Stern et al. (2015)’s work looking at how Pluto’s volatile inventory can be responsible for the loss of craters. The figure shows how the crater population count (R) decreases under erosion alone and under erosion with relaxation. What does this mean? When we think erosion, we think slow degradation driven by the removal of material, usually through the application of a physical force. Stern et al. are referring to the loss of material to the atmosphere, I think by sublimation. Pluto is thought to have lost a great deal of volatiles over time; they enter the atmosphere and get sputtered away by various means. To be clear, Pluto’s bedrock is water ice. Stern et al. are referring to craters that form in thick layers of volatile ice, specifically N2. These structures will fade away, and with them, the crater itself. Similarly, craters that form in N2 (or even CH4) ice will relax on the timescales that Pluto has existed. Relaxation will flatten craters while retaining their rims, but those rims can easily be lost.
I want to take this discussion a bit further because it suggests we can make predictions about the types of craters we should expect to find. Obviously, craters formed in pure water ice should be retained. Craters formed in sufficiently thick N2 ice (as thick as a crater depth) may relax on the order of thousands of years (millions at the most). Craters formed in thick CH4 ice would not be as conducive to relaxation. At the temperatures observed on Pluto, we might expect this to happen on the order of billions of years. We may see some craters entirely degraded (or flattened), but there may be some that are partially relaxed. CH4 is also less volatile than N2, so the structures would not be as susceptible to loss. This implies we should expect to see craters that are rich in methane and partially degraded, likely because they formed in methane ice.
There are other in-between states to consider as well. What about craters that form in a thick layer of N2 or CH4 that is still not so thick that the crater can fully relax? In this instance, the crater base would be made of water ice. This base would uphold the other ice, preventing the crater from relaxing. However, N2 ice is likely to still erode away. Therefore, we will likely see partially degraded craters depleted in N2 and CH4 (beyond surficial deposits) because they are the remnants of a dual-ice crater. With CH4, the ice will be less conducive to erosion, so we might expect to see fairly pristine craters rich in CH4 because they have a water ice base.
These predictions have been tested by New Horizons. We have elevation data and compositional data as well. The compositional data is surficial, so there are limitations to what we can say about a crater. However, it offers first-order constraints on what is present and what type of ice a crater may be formed in. We have used this data to study the degradation of craters on Pluto and relate it to the volatile abundances in their regions. In doing so, we can constrain the regional volatiles and the history of volatiles on Pluto.
Check out our talk at the 52nd LPSC to see our results!
When was my last research update? I think it was in November, so let’s take a moment to catch up. If you’re curious, I did use a Dice-QNG, but I got the “pick myself” option.
Pluto research at LPSC
I began the year updating my Pluto remote sensing project results for an LPSC abstract with Catherine and Dr. Veronica Bray. This was a bit of a bumpy ride because the process wasn’t as streamlined as I had hoped. I had to do a good deal of editing to process the data again. I am not sure how much of it was past mistakes versus me not remembering exactly the process I had set up; I am 85% sure it was the former. Nevertheless, I spent a couple weeks getting it ready and streamlining the process to produce more results. The results I had on my poster were not complete due to a mistake during the project, which left me rushing to finish. Nevertheless, I persisted. I realized partway through processing the results that I did not need to process all of the data; I only needed enough to represent the greater population. In the end, I got a good amount of data; it was enough to write the abstract and apparently get me a talk. I am not particularly excited about another virtual talk.
Paper Drafts and Updates
In December, I got my paper out to my co-authors. I got feedback in early to mid January, after my LPSC abstract was submitted. In addition to the feedback from co-authors, I presented to the Dragonfly astrobiology group, who had some recommendations. There were no major revisions from the co-authors. However, the Dragonfly group and Catherine recommended I amend my results to include a smaller concentration, 1 ppt (0.1%) HCN, the basic idea being that may be how low the actual HCN content is. My co-authors also recommended I update one of my temporal graphs to have consistent color bar limits, but that was much easier.
I was a little nervous about the addition. It shouldn’t be a difficult thing to accomplish, but it’s always a hurdle to feel finished and have to dive back in. I got the SF2 (mushy layer model) results fairly quickly, and the only other step was the 2D heat transfer model. When I approached this, I decided to reproduce my other results too. I had made an assumption on my first go-around that did not get a great response from the co-authors. They didn’t push back against it, but they were confused. Ultimately, I decided it was better not to use that assumption. It’s not worth fixating on what the assumption was, but since I was reproducing my results, I figured I would do so at a slightly higher resolution.
This proved unnecessarily difficult. I don’t think the results appear particularly smoother. What’s more, I had to lower the resolution again for the 100 ppt melt of 250 m thickness because the melt reached 250+ ppt as it froze, which is outside the boundaries the model uses to approximate the amount frozen into the ice. This was resolved by using a lower resolution, simply because the melt did not become as concentrated in larger increments. There was another issue of results not being convertible to a MATLAB .mat file. I didn’t understand why originally, but now I recognize the file size was too large, not because of my increment size or my melt sheet size (although the latter plays a role), but because all of my time steps were saved, magnifying the file size. A larger melt sheet has more time steps, which led to my largest melt sheet files continually not working. Clearly, I figured it out. Yay me.
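For anyone curious, the fix can be sketched like this (a hypothetical Python stand-in for my MATLAB workflow; the function and numbers are made up): keep only every Nth time step before saving, and the file size drops accordingly.

```python
# Hypothetical sketch: the model's full output keeps every time step,
# which inflates the saved file. Keeping every Nth step (plus the final
# state) preserves the freezing history at a fraction of the size.

def subsample_steps(steps, keep_every):
    """Keep every `keep_every`-th time step, always retaining the last one."""
    kept = steps[::keep_every]
    if steps and kept[-1] != steps[-1]:
        kept.append(steps[-1])
    return kept

# e.g. 10,000 saved snapshots thinned before writing out a .mat file
full_history = list(range(10_000))   # stand-in for model snapshots
thinned = subsample_steps(full_history, 10)
print(len(full_history), "->", len(thinned))  # 10000 -> 1001
```

Since a larger melt sheet means more time steps, thinning like this keeps the biggest runs from ballooning the fastest.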
One additional change I made was to make the HCN concentration (x) axis logarithmic. The upper and lower profiles are now distinct in the figure. Unfortunately, this may prove somewhat problematic: the profiles are not as distinct across the initial concentrations used. It is hard to say whether this is a limitation of the model (it is at very low thermal gradients) or an actual characteristic of HCN. The other result figure is updated as well with constant limits on the color bars, and that definitely improved the figure. One of the coauthors expressed dislike for the jet color map. I meant to change it, but I forgot. It just doesn’t seem worth recreating the figure for that.
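To illustrate why the log axis helps (toy numbers, not my actual data): on a linear axis, the low concentrations bunch together near zero, while a log axis spaces them out evenly.

```python
import math

# Made-up concentrations in ppt spanning two orders of magnitude.
concs = [1.0, 10.0, 100.0]

# Fraction of the axis separating the two lowest values:
linear_span = (concs[1] - concs[0]) / (concs[2] - concs[0])  # ~0.09 of the axis
log_span = (math.log10(concs[1]) - math.log10(concs[0])) / (
    math.log10(concs[2]) - math.log10(concs[0])
)  # exactly 0.5 of the axis

print(round(linear_span, 3), log_span)
```

On a linear axis the 1 and 10 ppt curves share less than a tenth of the plot, which is why they sat on top of each other before the change.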
I have made corrections based on some comments in the paper, but I need to finalize them. I had hoped to get to that by the end of January, but the figure updates took more time for the reasons I mentioned above. I am going to try to get it done today and tomorrow. I would say Friday, but I need to grade on Friday. I also need to get this out so we can do one more round of edits before sending it for publication. I also need to switch my attention back to Pluto because I have about a month to make a substantial update to that project. I think that is enough time; with the MATLAB process streamlined, it is mostly tedious work (i.e., mapping, extracting crater depths, etc.). Lastly, I need to sit down and write a review of something Rick requested as part of my comps exam follow-up. I am going to aim to do that at the group meeting following the deadline for my LPSC presentation submission. I don’t know when that is. I don’t even have an email (that I know of) telling me I got a presentation.
This is an ongoing post of research updates during the month. Updates are provided every few days, and you can easily reach the update by clicking the link in the calendar.
11/4/20 – Plans for the month and election update
I have quite a few things to accomplish this month. Let’s break it down.
With my first draft finished, I need to move forward with the Pluto results. I think I would like to have this mapped by our next meeting (11/18/20) and have the depth profiles by the end of the month (12/2/20). Then we can set up a meeting with our collaborator in December to decide how to move forward.
Catherine and I have a meeting on 11/13/20 with McGill, and I need to prepare for that. That will include reviewing notes from last time and preparing for updates from the team. What I hope to bring is 1) my own results and plans to move forward and 2) a proposal for an experiment idea we discussed before: an experimental apparatus to model an organic compound freezing in water to test the SF2 model.
On that note, I need to think about how to move forward. That means adapting the SF2 model to work with an alternative chemistry, ideally an amino acid that is denser than water. The first step is reviewing the material we have on how it behaves in water, as we did with HCN. At the same time, I would like to update my results for my paper while it is being edited by me and my coauthors. I have the SF2 results; I need to process them to get a new set of fits (extrapolation) and run the thermal model. I think I can do this in the amount of time the edits will take. However, I would like feedback on whether this is necessary. If yes, let’s aim for final results by the end of the month. Similarly, I would like to have my initial research on an alternative chemistry done by the end of the month. That paves the way for implementation in December.
In the same vein, I need to put together a presentation for my lab on some topics discussed in my PhD comps. This is related to chemistry and the water-organic relationship, so I think the two goals go hand in hand. The goal is to present these results in December.
Lastly, I need to make a poster of my results. I think I will just adapt my presentation. I am not sure what the timeline is for AGU, so I’ll decide on this later in the month.
In other news, one of the worst scenarios is playing out in the election. That is, Biden’s path to victory is a slow one dependent on absentee ballots. Trump has already screamed fraud and called for them not to be counted. If this isn’t a crime, I don’t know what is.
11/18/20 – Grading, editing, and last-minute modeling
Earlier this month we had the Dragonfly meeting. That was a long yet fascinating experience. I look forward to following future meetings. I’m in a bit of a unique position: I joined Titan research at the end of a long mission (Cassini), and now I get to watch the making of a new one. It was at times a little technical for me, but I still enjoyed the experience. Naturally, I wish it had been in person, not only for the lovely trip to Baltimore (or wherever it would have been), but because it’s difficult to stay focused behind a screen with no one around to see you distracted.
I’ve also been working on the second draft of my HCN paper. Catherine had some great advice (naturally) on how to improve the paper and the results which involved a few more model runs. Luckily, I had already been running more models since I was finalizing the paper, so I was in a position to update my results right away. Unfortunately, things did not go as planned.
When I was finalizing my first draft, I tabulated my variables and constants in my model. Naturally, I validated all the sources I had listed for the values in my code. It was then that I noticed I had one of my values off, or my source had changed. I updated it, but it didn’t seem like a major update, so I didn’t expect it to change much. However, after looking at the new data I had run while I edited my first draft, I realized it was a fairly significant offset. Cue full system check.
I did several things. First, I reran all my codes for new results. What I found was that updating my values had streamlined the code. It ran faster and worked at extremes it had struggled with before. This means, if correct, I may not have to extrapolate. Second, I needed to make sure the change was because I updated my values and not because of a major mistake I had made in the code. This meant I tried to recreate my old results with the old code. As hard as I tried, I couldn’t do it. I wouldn’t say I tried everything, but I felt I could only afford so much time for this. Alternatively, I downloaded Jacob’s fresh code and input the values as my current sources documented them. It reproduced my most recent results. This means, while I may not be able to recreate my original results (which concerns me), I can at least rest assured that the change isn’t a mistake I made in some other area of the main code. All I can say is, my parameter values are accurate, and I can reproduce these results from scratch.
I have some ideas about which key change caused the results to shift. However, without investing more time in troubleshooting, I can’t say for sure. One thing I have failed to do, largely because of confusion, is set up Git/GitHub, which saves various versions of your code for reference. This would have made troubleshooting a much easier process.
For now, I have been reproducing all my results, and more, to the point that I think I won’t need to use extrapolation. However, I am pushing my luck when it comes to finalizing this second draft by Friday. I very well may be able to finalize my SF2 model results today and begin the heat transfer model, although I have doubts. It seems very likely this is going to take a few more days. If I don’t get this done by Friday (I have lab Thursday too), I think next Wednesday is achievable.
I have not even mentioned that my AGU poster is due Friday. That will be done today, perhaps tomorrow, so I have time for feedback from Catherine.
PS: my power was out when I woke up this morning, which meant I, one, woke up late despite planning to write this, and two, had to write this on my phone.
October is, objectively, the most wonderful month of the year, and it felt like it came and went faster than any other month. October began in a rush to submit my DPS presentation. I was not that concerned. I had just presented my research in July (which felt more recent than it was), and the virtual platform opened up how we give the presentation.
The final stretch leading up to the 9th (the submission of my presentation) was to produce some final results for my project. This required sacrifices. That is to say, I had to settle for simplifications rather than continually strive for perfect results. Catherine made it clear: at some point, I have to settle for what I have and move forward. This meant I used the results of the SF2 model for lower concentrations and extrapolated for the higher concentrations, since getting a full profile at higher concentrations was a big hurdle. I think this was a good fit because the minimal data I got for 50 and 75 ppt matched up well with it. The next step was the 2D thermal model. This was mostly effective. I had to download an updated version and modify it for Titan, and in the process, I struggled to get the model to take the higher-order fit of the HCN-water phase diagram. That is to say, I had to use a lower-order fit that is less precise. Lastly, I struggled with the time steps being output, because the results I presented show a thin liquid level when the melt should be entirely frozen. What’s more, the timescales are half the length, if not shorter, of what other predictions give for models of this size. These are all things I need to improve moving forward.
In terms of the presentation, I was frustrated with the DPS setup (going in, and after). However, I intended to make use of the prerecorded method they used. I regularly film and edit YouTube videos. This has not only prepared me with easy editing techniques but also trained me to be fairly comfortable talking to a camera. I debated trying to record all at once or breaking it down slide by slide. Slide by slide, I could perfect the conversation, but I risked sounding rehearsed; all at once, I risked making mistakes or running over time. I opted to go slide by slide. This was not effective. I got burnt out very quickly, and I found I would never be satisfied with what I said. So I stopped and recorded all the way through. I did that one time, and it was fairly good, though too long and with several mistakes. Rather than rerecord, I decided to give it the YouTube treatment and piece together a concise and continuous conversation with abrupt cuts throughout. This is a common occurrence on BookTube. I remove mistakes often, and I often have a bad stutter. I also do it when I want to trim down excess conversation. I had to make sacrifices to trim this down, and I did so fairly easily. It is a tedious process but one I am fairly fluent at. I am curious to hear people’s thoughts on that approach, especially as it fits into a professional setting. For the slides, I exported them as images and input them into my video editor. In retrospect, I could have had higher resolution slides by recording my screen during the presentation, because I could not control the resolution of the exported slides. I am a little sad that I have a poster for AGU, because I would have liked to do this again with this knowledge, rather than a poster.
As we shifted to the actual conference, I also began to write up the results of my work for the manuscript I started earlier this year. I finished that on time, and it wasn’t that hard to actually write. The tough part was sitting down and writing. Once started, I find putting my thoughts to paper fairly easy. I had adapted my PhD proposal into the paper at large earlier in the semester to the point that I only needed my results. I also needed to finalize a couple tables, but that was easy enough.
Check out my November research update (hopefully an ongoing report) for my plans moving forward!
In the meantime, enjoy some October bike ride photos.
I am beyond grateful to be offered the opportunity to review this book. I just recently finished one of Carroll’s older books, and it is one of my favorites of the year. I know this book is already out. Nevertheless, the copy I was granted expires on the 31st of December, so I intend to finish it before then to provide feedback for the copy I received. When I reviewed From Here to Eternity, I tried to review each part of the book. I think the result was a bit of a mess; it was also a lot of work. Here, I will stop after each chapter to very briefly summarize his points and discuss how effective it was as a chapter. Summarizing will help me get a sense of how well I really understand it. Basically, I’m blogging my entire experience with the book. When I’m done, I’ll summarize my thoughts above my blogging experience (right after this).
I absolutely adored this book. I am so grateful to NetGalley for providing me with an e-ARC of this. I didn’t even realize it was already out, and I ended up using the audiobook (also amazing) to read the book. I still am happy I got the ARC because I may not have read it otherwise. I have only just started on NetGalley. I am a fan of Carroll, so I wanted the chance to review his newest book early. Even if it was already out, I may not have read it without the ARC because that was really the biggest motivator (the need to provide a review).
Otherwise, I might have read a different book by him because I was honestly very afraid of this book. The first time I saw the synopsis (prior to finding it on NetGalley), I read “quantum mechanics” and thought this was not for me. I have never understood it and was unlikely to start trying now. Then, with the added incentive, I decided to give it a try. Dear Sagan am I happy I did. I left this book feeling as though I actually understand quantum mechanics. Then add on the extra benefit of being beyond fascinated, intrigued, and excited by his discussion of Everett’s Many-Worlds hypothesis. I go in depth in my thoughts on that in my live blogging, where I responded after each chapter. I would refer you there, jjoshh.com, if you are interested in reading that.
All in all, this book did everything I want from a science book. It challenged my fundamental way of thinking, all while remaining clear and structured. What’s more, it doesn’t shy away from the tough parts of science, yet it doesn’t hinge on the reader having an expert-level understanding to follow along. I highly recommend this book and Sean Carroll (and his podcast Mindscape). This will probably be one of my top 10 books of the year. 5/5 stars
I will probably do a review on my channel as well, but that will be in a week or so when I have time.
Rating Break Down
Writing Style: 10/10
Content: 10/10
Structure: 10/10
Summary: 9/10
Engagement: 10/10
Enjoyment: 10/10
Comprehension: 8/10
Pacing: 9/10
Desire to Reread: 10/10
Special: 10/10
Final Rating: 4.785/5
Note: each rating is weighted based on personal importance (see blog for more details).
The book is already out, so I should be okay to quote it. Lastly, I am reading this via the e-ARC in conjunction with the audiobook (on Scribd). The audiobook is narrated by Carroll himself, and it is very well done. If you haven’t already, check out his podcast, Mindscape, where he gets guests to discuss leading topics in science. I mention it here because the first thing I noticed was how much the audiobook was like listening to this podcast. It feels natural and well performed.
Carroll uses the Prologue of this book for a very simple purpose. He is here to talk to us about quantum mechanics, but before he does that, he has to make us care. He takes a subject that, I suspect, most people assume is resolved, and explains why what we think we know is wrong. What’s more, he hints at how he intends to make us look at quantum mechanics in a brand new way. He does it in a way that highlights how skilled a science communicator he is, and it gets me beyond excited to dig deeper into this book.
Part One: Spooky
In Chapter 1, his first step is to explain exactly what quantum mechanics tells us, generally speaking, and where it sits within the realm of physics. Basically, it is a foundational chapter. He discusses how quantum mechanics compares to classical mechanics, in how we go from a world of precise reality to one of probability. He sums it up as follows: “What we see when we look at the world [through quantum reality] seems to be fundamentally different from what actually is.” Quantum mechanics works similarly to classical mechanics; that is, the system is set up and left to evolve. The difference comes with the act of measuring. The fundamental problem addressed in this chapter is that quantum theory, as it currently exists, doesn’t explain how reality works, only that it is how it is.
The concept seems simple enough, and his background feels like a good description of what quantum mechanics is. In Chapter 2, Carroll takes us on a journey through how this all came to be understood. He tries to make his point, stated in Chapter 1, that there is something missing in our understanding. Carroll explains the difference between epistemology, the state of our knowledge, and ontology, the state of reality. Essentially, there are ways of getting to the result without fully understanding how we got there. I get a little lost as he transitions to thinking about QM in a different manner. He treats the idea of a wave function as reality, where everything is literally a wave, and when we observe it as otherwise, we aren’t observing a fact of reality, simply a piece of reality lacking a bigger picture. The impression I get from this is that the problem with QM isn’t an ontological one but an epistemological one.
I can’t pinpoint exactly how he goes from each point to the next, but I find his explanation overall effective. I’ve never quite understood what it meant to be a wave function. Now I think I do. Waves aren’t just a construct; they are a fact of reality, where reality acts fundamentally differently than we perceive it in classical mechanics. That is, the universe is as much in a state of superposition as the quantum particles that make it up. That leads Carroll to the idea of Many Worlds, where the many worlds are simply an extension of quantum theory. “The potential for such universes was always there,” and each world is a realization of one of those possibilities. This may be the best explanation of the many-worlds theory that I’ve ever read (not a cosmologist). What’s more, Carroll doesn’t hold back that this could be wrong, and he takes the time to address other possibilities.
Chapter 3 felt like an introduction to quantum mechanics. Carroll provides the reader with the history of the science that led to our current understanding. He concludes by explaining how the scientific community came to the understanding that quantum mechanics is fundamentally probabilistic despite many attempts to assign it a deterministic nature. It was a fine review, but I found myself wondering what the point of it all was until he spelled it out: they never really explored the implications. Overall, I can’t help leaving the chapter unsure what it means to be probabilistic. Perhaps that is the point; I just wish I could, as a reader, have ascertained his point without him spelling it out.
Carroll is an apt storyteller and science communicator. He uses Chapter 4 to explore probabilities further, or more specifically, the nature of uncertainty. It seems the most important thing to understand is that the wave and uncertainty descriptions are not broad descriptions that paper over gaps in knowledge. The physics that governs this world is fundamentally different from the rules of classical mechanics. I’ve got a background in that area, and it makes sense to me.
He finishes his discussion by focusing on the nature of what it means to be a wave. It was probably the most difficult material he has covered yet, but still easily understood. He gets into a conversation on spin that feels esoteric and a bit over my head. Luckily, he doesn’t leave us stranded; he uses the information to guide us in our understanding. The wave nature is a confirmed fact: the act of measuring quite literally appears to alter the wave-like nature of a “particle.” I think he explains it better than I can, but it is fascinating.
Chapter 5 is what feels like a concise discussion of the nature of entanglement. It is a doozy. I’m here reviewing the material trying to make sense of what Carroll is saying, but I am having a tough time. It seems entanglement is when two electrons share the same spin state. The trick is, their spins are in superposition, and they don’t consolidate until measured. Once one is measured, the other is guaranteed to be measured the same way too. What I don’t get is how we know this isn’t a correlation; why must it be entanglement?
If a photon is used to force particle A into a fixed spin, that doesn’t change the spin of the particle it is entangled with; it only passes that entanglement on to the photon used to change the spin. That suggests a shared dynamical relationship, not an intrinsic entanglement. I have to assume there is an independent way of identifying them as entangled.
My initial impression at the start of the chapter was that entanglement is the way the wave function of the universe (or of these two particles) is intertwined. That is more than a coincidental correlation. All that is to say, the chapter is complicated, and I hope it becomes clearer later in the book.
Part 2: Splitting
Chapter 6 was easier to read. It discusses the nature of decoherence and its implications for the Many-Worlds hypothesis. I can’t say I left the chapter absolutely convinced, but it was a much more compelling story to read. Now we are getting into the nitty-gritty.
Chapter 7 tackles the nature of probability and its effect on the multiverse. I think the first very compelling point was how it doesn’t feel like we live in a many-worlds universe, but the same was once said about the earth rotating or the earth orbiting the sun. Sometimes, our intuitive senses aren’t enough. I found this chapter immensely fascinating. The nature of probability means all that can happen does happen. Now I’ve heard that before, but I’ve always wondered what the realistic effect is on the macro scale (vs. micro/atomic).
If the position or spin of an electron can be in superposition, what difference does that make to the classical physics of the world? I still don’t really know, but one fantastic point Carroll makes is about how we discuss probabilities. Say we use a random number generator; our interpretation of its output will vary. If we assume the RNG is quantum (which Carroll’s actually is), then a string of 16 spin directions (1/0) will produce a world for every possible line of 1’s and 0’s. In some of those worlds, Carroll’s use of this list in his discussion would be directly affected by unlikely results like all 1’s or weird patterns. It’s fascinating to think of the different directions his book and life would take in those scenarios. It’s debatable how big an effect it would have, but it’s a substantial example of a direct influence of these quantum superpositions on the macro world.
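As a toy sketch of the point (with an ordinary pseudorandom generator standing in for a true quantum RNG, so this is an assumption, not Carroll’s actual setup): 16 binary spin measurements correspond to 2^16 branches, and the all-1’s branch is just as real as any other.

```python
import random

n = 16                  # spin measurements, each recorded as 0 or 1
branches = 2 ** n       # under Many-Worlds, every outcome is realized somewhere
all_ones = "1" * n      # a "weird" branch, no less real than the rest

# The single branch we happen to experience:
observed = "".join(random.choice("01") for _ in range(n))
print(branches, observed)
```

With 65,536 equally likely branches, a handful of them inevitably contain the striking patterns that would have nudged the discussion in a different direction.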
Carroll finishes the chapter exploring how we might differentiate between more likely scenarios. This part highlights my biggest problem with the book, which is my inability to comprehend the more esoteric discussions. That said, Carroll continues to keep us grounded by walking through each piece such that I leave understanding (I think) the points he is trying to make. Unfortunately, I don’t have the time to study what he’s saying to fully appreciate every step along the way.
The fascination continues in Chapter 8 as Carroll begins to attack, head on, the question of whether the Many-Worlds perspective is (1) the most logical conclusion and (2) really science. The quintessential simplicity of the theory is that anything else would have to add on to or change the laws of quantum mechanics as we understand them. Basically, if you want to deny the existence of an infinite number of worlds, you have to complicate our own. As far as Occam’s razor is concerned, that just doesn’t work. Then as far as science goes, it is said that a theory must be falsifiable, and one cannot deny that the laws implying the many-worlds perspective are entirely falsifiable, simply by disproving the laws of quantum mechanics.
Chapter 9 is dedicated to the opposing theories that have been proposed to counter the Everett Many-Worlds interpretation. I thought it was a great overview and comparison. To be fair, we have multiple theories condensed into one chapter versus 2/3 of the book devoted to Everett’s view, but I thought Carroll did a good job defending against them. Granted, I may struggle to explain this myself without further review.
What we got in Chapter 10 is really what I’ve been waiting for all along. He talks about the implications for us. He delves into the questions of free will, consciousness, and whether these quantum processes can really be assumed to extend to the choices we make. He makes a compelling case that it is unlikely that our choices are in fact quantum; that is to say, the processes that govern them are probably not probabilistic. Nevertheless, he talks about ways we might introduce such randomness into our decisions. We can use quantum number generators to help make decisions, ensuring multiple versions of ourselves, however minor the differences.
Now, I came to this revelation last month, and ever since, I’ve been striving to make decisions by it. Right now, I’ve used it to decide which books I read (or the order in which I read them). This may be minor, but books can have profound effects on us. I can imagine a world where I read one book and not another and it seriously affects me. This book is a prime example of that. I may expand on this discussion in another post, but I’ll summarize with how exciting I find this all to be. The ability to actively create multiple versions of one’s self is so enthralling to me.
Part 3: Spacetime
In Chapter 11, Carroll begins to explore what this actually means for reality. That is, where are the other worlds, and how are they connected to us? My understanding is that these states all coexist in the quantum realm, but there is something about our entangled selves that experiences the physical laws of our specific reality. However, the others can be thought of as there, experiencing reality slightly differently. I think he did a good job explaining this. It is still very abstract but overall a good take on how this relates to the greater universe.
I found Chapter 12 to be a bit esoteric. He seems to be discussing the nature of quantum field theory, and, while interesting, I didn’t understand the point as it relates to the Many-Worlds interpretation. I think he was trying to highlight the fundamental difference between how reality works in quantum mechanics and how we perceive it. That is to say, particles aren’t strictly what we perceive them to be. Perhaps this suggests the same may be true for Many Worlds? It may be that it has nothing to do with that, and Carroll is branching off into another tangential area of research.
On that note, Chapter 13, the last chapter, is all about quantum gravity. He makes sure to be very clear: this is purely hypothetical. Quantum gravity may be an intriguing idea, but it is not yet on the same level as even the Many-Worlds interpretation, which at least is based on an understood scientific idea. I think it did a really good job bringing this section to a close. While it is very theoretical and ongoing research, I can better appreciate how Chapter 12 was building up to this idea, which is essentially that space, and maybe time, is emergent. That is, the entanglement of particles brings space into existence as we perceive it. As such, that might explain why we perceive our world as distinct from another world where the quantum state is a bit different.
Epilogue and Appendix
This was a pretty straightforward close to the book. I like that he read the appendix (or selected parts of it) on the audiobook.
My apartment is finally finished being renovated. My new flooring looks really good and brightens the room; the old flooring was a much darker brown. It was done when I got home September 2nd, but it took me a while to work up the energy to organize everything. I particularly like my idea to swap the table and the couches. The living room has always been on the other side of the room, where the window is. This has allowed me to put most of my books in that room. The lighting is great for YouTube videos, and I also just like things being different. I spend most of my time with my books at my table rather than on my couches, so this works out.
9/8/20 – Labor Day Weekend
On another personal note, Labor Day weekend was Dragon Con. Of course, Covid-19 turned it virtual. It was not the same, but I did enjoy playing werewolf with friends and others.
Okay, I have made good progress on my HCN models. Unfortunately, the results show a level of instability at concentrations above 10%. I met with Jacob (Dr. Buffo) to discuss it, and we have a clear plan to move forward. I’m going to work with what I have and extrapolate; the model can only do so much. If you would like to see my results, I am happy to share, but I won’t be posting them publicly. This includes good results up to 5% and okay results at 10%, as well as tests of the effect of varying spatial increments on the 1% and 5% cases (to apply to higher concentrations of 10% and up).
Right now I am running a few more models at 2.5% and 7.5% so we can create a detailed 3D curve of values to better extrapolate forward. These lower concentrations should not take too long. The goal is still up to 25% concentration for a pond ~25 m deep. I am also going to start working on figures for my paper. Initial figures are set for September 14th, and hopefully by September 16th I can produce an extrapolated figure along with a paper draft to be sent to Catherine. Then I’d give myself another week to run the 2D model (maybe a little ambitious), followed by a final (1st?) draft on September 28th.
Simultaneously, I am working on mapping Pluto. There was no progress on this in August, and for the first bit of September I’ve been focused on producing results. That said, I’ve begun mapping. My goal here is to have the entire surface mapped by September 18th and the crater depths processed by the end of the month (September 30th). With this timeline, it might be a good idea to set up a meeting with Dr. Bray in mid-October.
9/22/20 – Progress Update
The last couple weeks were fairly productive, but I did not achieve my goals. The goal was to have my HCN concentration in the ice vs. depth (thermal gradient) figures completed by the 16th. I have not. They are in working condition, but I need more data for 5% and 7.5%. This is going to take time because I have to start at a shallower depth due to stability issues. On a positive note, I have verified that I can increase the dt with negligible effects on the results. This should allow the models to run to a deep enough depth by the time the paper is finished and needs the finalized figures (~mid-October?). I am working on extrapolating the data I have, but it is proving more difficult than I realized. Interpolating and extrapolating in 3D in MATLAB is a new task for me; it is just taking time. I have made progress on the paper. It is virtually good to go minus the results and discussion.
I haven’t worked on the Pluto project because it doesn’t feel as urgent. I honestly don’t know if I ought to give it any time until I get the paper done. The goal now is to finish mapping by October 9th, with the data processed by ~mid-October (the 14th). I am prioritizing this way because I need to get the HCN results done for DPS in October.
I start TAing Thursday, but luckily all my labs are on the same day. The goal is to make lab days the days I work on labs; the other days will stay reserved for research.