Outputs

Software

  • GeoTessera: a Python library for accessing and working with embeddings from the Tessera geospatial foundation model, which processes Sentinel-1 and Sentinel-2 satellite imagery to generate 128-channel representation maps at 10 m resolution. These embeddings compress a full year of temporal-spectral features into dense representations optimized for downstream geospatial analysis tasks (a usage sketch follows this list).
  • Interactive Tessera Embedding Classifier: a Jupyter-notebook-based tool for interactive, human-in-the-loop classification of geospatial data using Tessera foundation model embeddings. A user defines an area of interest, visualizes the high-dimensional embedding data with PCA, and iteratively trains a machine learning model by clicking on the map to label points (a workflow sketch follows this list).
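
The minimal Python sketch below illustrates the data layout GeoTessera serves, assuming each tile arrives as a NumPy array of shape (height, width, 128). The geotessera import and fetch call shown in comments are hypothetical placeholders rather than the library's confirmed API, and random data stands in for a real tile so the sketch runs end to end.

 import numpy as np
 
 # Hypothetical entry point, shown only for orientation; the real
 # geotessera API may differ:
 # from geotessera import GeoTessera
 # tile = GeoTessera().fetch_embedding(lat=52.2, lon=0.1, year=2024)
 
 # Stand-in tile: H x W pixels at 10 m resolution, 128 channels each.
 H, W, C = 256, 256, 128
 tile = np.random.rand(H, W, C).astype(np.float32)
 
 # Flatten to (n_pixels, 128), the natural input shape for downstream
 # models (clustering, classification, similarity search).
 pixels = tile.reshape(-1, C)
 
 # Example downstream use: cosine similarity of every pixel to one
 # reference pixel, e.g. to find land cover resembling a known field.
 ref = pixels[0]
 sims = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-9)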

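The sketch below mirrors the classifier's described workflow rather than its actual code: PCA compresses the 128 channels to three for a false-colour view, and a model (a scikit-learn RandomForest here, an assumed stand-in for whatever the tool uses) is retrained as clicked labels accumulate; the labelled coordinates are invented values simulating map clicks.

 import numpy as np
 from sklearn.decomposition import PCA
 from sklearn.ensemble import RandomForestClassifier
 
 # Stand-in embedding tile (see the previous sketch).
 H, W, C = 256, 256, 128
 tile = np.random.rand(H, W, C).astype(np.float32)
 pixels = tile.reshape(-1, C)
 
 # 1. Visualize: project the 128 channels onto 3 principal components
 #    and rescale to [0, 1] for display as a false-colour image.
 rgb = PCA(n_components=3).fit_transform(pixels).reshape(H, W, 3)
 rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())
 
 # 2. Label: each map click yields (row, col, class); these values are
 #    invented to stand in for user clicks.
 labelled = [(10, 12, 0), (200, 40, 0), (50, 220, 1), (180, 180, 1)]
 X = np.stack([tile[r, c] for r, c, _ in labelled])
 y = np.array([cls for _, _, cls in labelled])
 
 # 3. Train and predict over the whole tile; steps 2-3 repeat as the
 #    user adds labels until the classification looks right.
 clf = RandomForestClassifier(n_estimators=100).fit(X, y)
 pred_map = clf.predict(pixels).reshape(H, W)
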
Papers

Z. Feng et al. TESSERA: Temporal Embeddings of Surface Spectra for Earth Representation and Analysis, July 2025.

Please follow this link for papers in progress and under submission.

Presentations

  • TESSERA overview presentation, James Cook University, S. Keshav, September 29, 2025: https://svr-sk818-web.cl.cam.ac.uk/tessera/index.php/File:JCU-tesserav2.pptx
  • Self-supervised learning for earth observation, short version (PPTX), S. Keshav, May 2025: https://svr-sk818-web.cl.cam.ac.uk/tessera/images/0/0d/PROTEA-short_version.pptx
  • Self-supervised learning for earth observation (PPTX), S. Keshav, Exeter, April 2025: https://svr-sk818-web.cl.cam.ac.uk/tessera/images/b/b3/BTFM_talk_Exeter_v2.pptx