I have spent the past month or so developing an app I call Earth Simulator. This software is designed to simplify the downloading and processing of geostationary remote sensing data from GOES-15/16, GOES-17/18, Himawari-8/9, Meteosat-9, and Meteosat-10. It also lets the user display the generated images on a 3D globe, with support for timelapses, geolocated image blending, and ultra-high-resolution images.
I wrote the app exclusively in Python, using PyOpenGL for rendering, wxPython for the GUI, Satpy for image processing, and various other packages for computing secondary products such as blending masks.
The app is still in its very early stages. At the moment it only supports a few composites for each satellite, but in the future I hope to implement support for every composite supported by Satpy, including individual channels. (Update: as of August 29, 2023, Earth Simulator supports most Satpy composites.) I also hope to add support for more satellites, including polar-orbiting ones.
One of the aspects of this project that I'm most proud of is my implementation of geolocated image blending. This is a technique I developed to blend images based on the geographic coordinates of each pixel. It has the added benefit of working on images of any size, and it is quite efficient because the blending masks are precomputed.
The first step for generating the blending masks is to calculate the longitude and latitude coordinates of each point on a 3D model of a sphere (representing the Earth). Once complete, we can determine which vertices are present in each image, as well as which images share vertices in common.
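To make that geometry concrete, here is a minimal sketch of the idea, assuming a simple UV-sphere grid; the grid resolution, function names, and sub-satellite longitudes below are illustrative rather than the app's actual code:

```python
import numpy as np

# Illustrative, approximate sub-satellite longitudes (degrees east).
SATELLITES = {"GOES-18": -137.0, "GOES-16": -75.2, "Meteosat-10": 0.0,
              "Meteosat-9": 45.5, "Himawari-9": 140.7}

# Geostationary limb: a point is visible when the Earth-centred angle between it
# and the sub-satellite point satisfies cos(angle) > R_earth / r_geo.
R_EARTH, R_GEO = 6_371.0, 42_164.0
LIMB_COS = R_EARTH / R_GEO

def sphere_lonlat(n_lat=180, n_lon=360):
    """Longitude/latitude (degrees) of every vertex on a UV-sphere grid."""
    lat = np.linspace(-90.0, 90.0, n_lat)
    lon = np.linspace(-180.0, 180.0, n_lon)
    return np.meshgrid(lon, lat)          # each array is (n_lat, n_lon)

def visible_mask(lon, lat, sat_lon):
    """Boolean mask of vertices visible from a geostationary satellite."""
    cos_c = np.cos(np.radians(lat)) * np.cos(np.radians(lon - sat_lon))
    return cos_c > LIMB_COS

lon, lat = sphere_lonlat()
masks = {name: visible_mask(lon, lat, s) for name, s in SATELLITES.items()}
# Vertices two satellites have in common are simply the intersection of their masks.
overlap = masks["GOES-18"] & masks["Himawari-9"]
```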
Then, for whichever image we want to blend, we must compute the pixel coordinates of each vertex in the image as well as the pixel coordinates of that vertex in the neighboring satellite's image. Additionally, we must calculate a 'weight' associated with that vertex, which is a normalized ratio of the longitudinal distances between the vertex and the satellites.
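Finding a vertex's pixel coordinates in each image is a standard lon/lat-to-projection lookup (the kind of thing the area definitions attached to Satpy scenes handle), so I'll only sketch the weight here. This is one plausible reading of the "normalized ratio of longitudinal distances", with illustrative names:

```python
import numpy as np

def wrap_lon(dlon):
    """Wrap a longitude difference into [-180, 180) degrees."""
    return (dlon + 180.0) % 360.0 - 180.0

def blend_weights(vertex_lon, sat_lon_a, sat_lon_b):
    """Per-vertex blend weight toward satellite B: the normalized ratio of the
    longitudinal distances from the vertex to the two satellites."""
    d_a = np.abs(wrap_lon(vertex_lon - sat_lon_a))
    d_b = np.abs(wrap_lon(vertex_lon - sat_lon_b))
    return d_a / (d_a + d_b)          # 0 -> all image A, 1 -> all image B

# Vertices roughly halfway between GOES-18 (~137.0 W) and Himawari-9 (~140.7 E)
# get a weight near 0.5; vertices near either sub-satellite point approach 0 or 1.
print(blend_weights(np.array([-170.0, -178.15, 160.0]), -137.0, 140.7))
```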
At this point, we must triangulate the vertex pixels. First, we generate a list of triangles connecting the vertices in each image. The pixel coordinates of each triangle's vertices give an initial list of candidate pixels: those within the triangle's bounding box. Then, using barycentric coordinates, we discard all pixels outside the triangle. For each pixel inside the triangle, we find the pixel coordinates of the same geographic location in the neighboring image by converting its barycentric coordinates back into that image's pixel coordinates; this gives the pixel with which we need to blend. To get the weight, or how much to blend by, we simply use the barycentric coordinates to interpolate between the weights of the triangle's vertices.
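Here is a minimal per-triangle sketch of that barycentric step; the real implementation is vectorized across all triangles at once, and the function name and argument layout are just for illustration:

```python
import numpy as np

def rasterize_triangle(tri_a, tri_b, vert_w):
    """For one triangle, return the pixels it covers in image A, the matching
    pixel coordinates in image B, and the interpolated blend weight per pixel.

    tri_a, tri_b : (3, 2) float arrays of (x, y) vertex pixel coordinates
    vert_w       : (3,)   per-vertex blend weights
    """
    # Candidate pixels: everything in the triangle's bounding box in image A.
    x0, y0 = np.floor(tri_a.min(axis=0)).astype(int)
    x1, y1 = np.ceil(tri_a.max(axis=0)).astype(int)
    xs, ys = np.meshgrid(np.arange(x0, x1 + 1), np.arange(y0, y1 + 1))
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

    # Barycentric coordinates of every candidate pixel (non-degenerate triangle assumed).
    a, b, c = tri_a
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l23 = np.linalg.solve(T, (pts - a).T).T
    bary = np.column_stack([1.0 - l23.sum(axis=1), l23])

    # Keep only pixels that actually lie inside the triangle.
    inside = np.all(bary >= 0.0, axis=1)
    bary, pts = bary[inside], pts[inside]

    # Same geographic location in image B, plus the interpolated blend weight.
    pts_b = bary @ np.asarray(tri_b, dtype=float)
    weights = bary @ np.asarray(vert_w, dtype=float)
    return pts.astype(int), pts_b, weights
```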
This method is very efficient because we can select the granularity of our blending mask by changing the number of vertices on the 3D model. Furthermore, vectorized functions make the triangulation quite fast, although it does require a lot of memory. Nonetheless, I am able to generate a high-quality blending mask for a ~5000x5000-pixel image in a matter of seconds with 8 GB of RAM.
Because we render at most five satellite images at a time, the simplest way to draw them on the sphere is to render only the vertices present in each image and keep every texture in memory simultaneously. This becomes an issue when many images are loaded at once due to the large memory overhead, so for the recently added (and still barebones) timelapse functionality, images are instead loaded sequentially. Finally, the application supports extremely large images (~20,000x20,000 pixels) by splitting each image into smaller textures for rendering.
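The tiling itself is straightforward; here is a minimal sketch, where the 4096-pixel limit and function name are illustrative (the real maximum texture size depends on the GPU):

```python
import numpy as np

def tile_image(img, max_size=4096):
    """Split a large image array into row-major tiles no larger than the GPU's
    maximum texture size, yielding each tile with its pixel offset."""
    h, w = img.shape[:2]
    for y in range(0, h, max_size):
        for x in range(0, w, max_size):
            yield (x, y), img[y:y + max_size, x:x + max_size]

# Each tile is uploaded as its own OpenGL texture and drawn on the part of the
# sphere whose projected pixel coordinates fall inside that tile's extent.
big = np.zeros((20_000, 20_000, 3), dtype=np.uint8)   # stand-in for a composite
tiles = list(tile_image(big))                          # 25 tiles of <= 4096x4096
```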
The results are not only beautiful, but potentially useful for scientific applications. This blending algorithm can dramatically improve the quality of the images at large zenith angles and can correct for perspective distortion and small errors in the images. Furthermore, because the application uses an orthographic camera, the images are not distorted by the curvature of the Earth, allowing the user to generate orthorectified images and maps. Here's a single use-case example:
On January 15, 2022, the Hunga Tonga-Hunga Ha'apai volcano erupted in the South Pacific Ocean. The resulting ash cloud and shock wave were captured by the geostationary satellites GOES-17 and Himawari-8.
A comparison of blended vs. non-blended images.
Shock wave traveling around the Earth, shown using time-differenced 6.9 µm IR images corresponding to mid-level tropospheric water vapor.
If you feel so inclined, open these images up in your browser to see the differences.
I would argue that the blended images are superior for both the zoomed-in and zoomed-out shots. The color may be better in the Himawari data for most portions of the zoomed-in image, but there is noticeable perspective correction occurring in the blended images. The user, of course, will have the power to decide whether or not to blend. Since perspective correction is noticeable in these images, blending should be useful in some cases even when it doesn't make an image strictly more beautiful.
Overall, I'm glad I've finally reached a point where I'm getting good results. I think that the potential uses for this app are exciting. Even if it's not useful for scientific purposes, I think being able to take some really cool images of the Earth from virtually any perspective is something many people may be interested in.