SpaceNet 3: Road Network Detection

The Problem

The commercialization of the geospatial industry has led to an explosive amount of data being collected to characterize our changing planet. One area for innovation is the application of computer vision and deep learning to extract information from satellite imagery at scale. CosmiQ Works, Radiant Solutions and NVIDIA have partnered to release the SpaceNet data set to the public to enable developers and data scientists to work with this data.

Today, map features such as roads, building footprints, and points of interest are primarily created through manual techniques. We believe that advancing automated feature extraction techniques will serve important downstream uses of map data including humanitarian and disaster response, as observed by the need to map road networks during the response to recent flooding in Bangladesh and Hurricane Maria in Puerto Rico. Furthermore, we think that solving this challenge is an important stepping stone to unleashing the power of advanced computer vision algorithms applied to a variety of remote sensing data applications in both the public and private sector.

The Data – Over 8,000 km of roads across the four SpaceNet Areas of Interest.

See the labeling guide and schema for details about the creation of the dataset.

AOI Area of Raster (Sq. Km) Road Centerlines (LineString)
AOI_2_Vegas 216 3685 km
AOI_3_Paris 1,030 425 km
AOI_4_Shanghai 1,000 3537 km
AOI_5_Khartoum 765 1030 km

Road Type Breakdown (km of Road)

Road Type Vegas Paris Shanghai Khartoum Total
Motorway 115 9 102 13 240
Primary 365 14 192 98 669
Secondary 417 58 501 66 1042
Tertiary 3 11 34 68 115
Residential 1646 232 939 485 3301
Unclassified 1138 95 1751 165 3149
Cart track 2 6 19 135 162
Total 3685 425 3537 1030 8677

Catalog

The data is hosted on AWS as a Public Dataset. It is free to download, but an AWS account is required.
aws s3 ls s3://spacenet-dataset/spacenet/SN3_roads/

Sample Data

10 Samples from each AOI – Road Network Extraction Sample

To download processed 400mx400m tiles of AOI 2 (728.8 MB) with associated road centerlines for training do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_sample.tar.gz .
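Once the tarball has downloaded, it can be unpacked with Python's standard library. The helper below is a minimal sketch; the archive and destination names are just the defaults implied by the command above, not part of any official tooling:

```python
import tarfile
from pathlib import Path

def extract_sample(archive="SN3_roads_sample.tar.gz", dest="SN3_roads_sample"):
    """Unpack a SpaceNet tarball fetched with `aws s3 cp` and list its files."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    # Return the extracted file names so the imagery/label layout is visible.
    return sorted(p.name for p in Path(dest).rglob("*") if p.is_file())
```

Calling `extract_sample()` after the `aws s3 cp` above leaves the imagery tiles and GeoJSON road-centerline labels under the destination directory.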

Training Data

AOI 2 – Vegas – Road Network Extraction Training

To download processed 400mx400m tiles of AOI 2 (25 GB) with associated road centerlines for training do the following:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_2_Vegas.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_2_Vegas_geojson_roads_speed.tar.gz .

AOI 3 – Paris – Road Network Extraction Training

To download processed 400mx400m tiles of AOI 3 (5.6 GB) with associated road centerlines for training do the following:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_3_Paris.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_3_Paris_geojson_roads_speed.tar.gz .

AOI 4 – Shanghai – Road Network Extraction Training

To download processed 400mx400m tiles of AOI 4 (25 GB) with associated road centerlines for training do the following:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_4_Shanghai.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_4_Shanghai_geojson_roads_speed.tar.gz .

AOI 5 – Khartoum – Road Network Extraction Training

To download processed 400mx400m tiles of AOI 5 (25 GB) with associated road centerlines for training do the following:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_5_Khartoum.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_train_AOI_5_Khartoum_geojson_roads_speed.tar.gz .

Testing Data

AOI 2 – Vegas – Road Network Extraction Testing

To download processed 400mx400m tiles of AOI 2 (8.1 GB) for testing do:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_test_public_AOI_2_Vegas.tar.gz .

AOI 3 – Paris – Road Network Extraction Testing

To download processed 400mx400m tiles of AOI 3 (1.9 GB) for testing do:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_test_public_AOI_3_Paris.tar.gz .

AOI 4 – Shanghai – Road Network Extraction Testing

To download processed 400mx400m tiles of AOI 4 (8.1 GB) for testing do:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_test_public_AOI_4_Shanghai.tar.gz .

AOI 5 – Khartoum – Road Network Extraction Testing

To download processed 400mx400m tiles of AOI 5 (8.1 GB) for testing do:


aws s3 cp s3://spacenet-dataset/spacenet/SN3_roads/tarballs/SN3_roads_test_public_AOI_5_Khartoum.tar.gz .

Citation Instructions

If you are using data from SpaceNet in a paper, please use the following citation:

Van Etten, A., Lindenbaum, D., & Bacastow, T.M. (2018). SpaceNet: A Remote Sensing Dataset and Challenge Series. ArXiv, abs/1807.01232.

Metric

In the SpaceNet Roads Challenge, the metric for ranking entries is the Average Path Length Similarity (APLS) metric. This metric is based on graph theory and emphasizes the creation of a valid road network.
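The full metric handles control-node injection and snapping of the proposal graph onto the ground truth, but its core idea — compare shortest-path lengths between corresponding node pairs in the ground-truth and proposal road graphs — can be sketched in plain Python. The graph encoding and function names below are illustrative, not the reference implementation:

```python
import heapq

def shortest_path_length(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph: graph[u][v] = edge length (km)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # no route between src and dst

def apls_score(gt, prop, node_pairs):
    """Toy APLS: 1 - mean(min(1, |L_gt - L_prop| / L_gt)) over node pairs.

    Assumes every ground-truth route exists. The real metric also injects
    midpoint control nodes and penalizes nodes missing from the proposal
    graph; this sketch skips that.
    """
    diffs = []
    for a, b in node_pairs:
        lg = shortest_path_length(gt, a, b)
        lp = shortest_path_length(prop, a, b)
        if lp == float("inf"):  # a missing route takes the maximum penalty
            diffs.append(1.0)
        else:
            diffs.append(min(1.0, abs(lg - lp) / lg))
    return 1.0 - sum(diffs) / len(diffs)
```

A proposal graph identical to the ground truth scores 1.0, while missing or badly distorted routes pull the score toward 0 — which is why the metric rewards topologically valid networks rather than pixel overlap.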

The current version of the metric is open sourced on GitHub: Average Path Length Similarity (APLS) metric. For more information, read the SpaceNet Road Detection and Routing Challenge Series, Part 1 and Part 2, written by Adam Van Etten at The DownlinQ.

License

SpaceNet 2: Building Detection v2

The Problem

The commercialization of the geospatial industry has led to an explosive amount of data being collected to characterize our changing planet. One area for innovation is the application of computer vision and deep learning to extract information from satellite imagery at scale. CosmiQ Works, Radiant Solutions and NVIDIA have partnered to release the SpaceNet data set to the public to enable developers and data scientists to work with this data.

Today, map features such as roads, building footprints, and points of interest are primarily created through manual techniques. We believe that advancing automated feature extraction techniques will serve important downstream uses of map data including humanitarian and disaster response, as observed by the need to map road networks during the response to recent flooding in Bangladesh and Hurricane Maria in Puerto Rico. Furthermore, we think that solving this challenge is an important stepping stone to unleashing the power of advanced computer vision algorithms applied to a variety of remote sensing data applications in both the public and private sector.

The Data – Over 685,000 footprints across the five SpaceNet Areas of Interest.

AOI Area of Raster (Sq. Km) Building Labels (Polygons)
AOI_2_Vegas 216 151,367
AOI_3_Paris 1,030 23,816
AOI_4_Shanghai 1,000 92,015
AOI_5_Khartoum 765 35,503

Catalog

The data is hosted on AWS as a Public Dataset. It is free to download, but an AWS account is required.
aws s3 ls s3://spacenet-dataset/spacenet/SN2_buildings/

The Metric

In the SpaceNet Challenge, the metric for ranking entries is based on the Jaccard Index, also called the Intersection-over-Union (IoU). For more information read the full article on The DownlinQ.
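The official scoring works on building footprint polygons (a proposal typically counts as a true positive when its IoU with a ground-truth footprint exceeds 0.5), but the formula itself is easy to state. The axis-aligned-box version below is a simplified illustration of the Jaccard Index, not the challenge's scoring code:

```python
def iou(box_a, box_b):
    """Jaccard index of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Identical boxes score 1.0, disjoint boxes score 0.0, and partial overlaps fall in between; real footprint scoring applies the same ratio to polygon intersection and union areas.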

Labeling Guidelines

For more information about the labeling guidelines, please view the SpaceNet Buildings Dataset Labeling Guide

Sample Data

10 Samples from each AOI – Building Extraction Samples

To download a processed sample of AOI 2 (728.8 MB) with associated building footprints for training do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/train/tarballs/SN2_buildings_train_sample.tar.gz .

Training Data

AOI 2 – Vegas – Building Extraction Training

To download processed 200mx200m tiles of AOI 2 (23 GB) with associated building footprints for training do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_2_Vegas.tar.gz .

AOI 3 – Paris – Building Extraction Training

To download processed 200mx200m tiles of AOI 3 (5.3 GB) with associated building footprints do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_3_Paris.tar.gz .

AOI 4 – Shanghai – Building Extraction Training

To download processed 200mx200m tiles of AOI 4 (23.4 GB) with associated building footprints do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_4_Shanghai.tar.gz .

AOI 5 – Khartoum – Building Extraction Training

To download processed 200mx200m tiles of AOI 5 (4.7 GB) with associated building footprints do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_5_Khartoum.tar.gz .

Testing Data

AOI 2 – Vegas – Building Extraction Testing

To download processed 200mx200m tiles of AOI 2 (7.9 GB) for testing do:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_2_Vegas_Test_public.tar.gz .

AOI 3 – Paris – Building Extraction Testing

To download processed 200mx200m tiles of AOI 3 (1.9 GB) for testing do:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_3_Paris_Test_public.tar.gz .

AOI 4 – Shanghai – Building Extraction Testing

To download processed 200mx200m tiles of AOI 4 (7.7 GB) for testing do:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_4_Shanghai_Test_public.tar.gz .

AOI 5 – Khartoum – Building Extraction Testing

To download processed 200mx200m tiles of AOI 5 (1.6 GB) for testing do:

aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_5_Khartoum_Test_public.tar.gz .

Citation Instructions

If you are using data from SpaceNet in a paper, please use the following citation:

Van Etten, A., Lindenbaum, D., & Bacastow, T.M. (2018). SpaceNet: A Remote Sensing Dataset and Challenge Series. ArXiv, abs/1807.01232.

License

SpaceNet 1: Building Detection v1

The Problem

The commercialization of the geospatial industry has led to an explosive amount of data being collected to characterize our changing planet. One area for innovation is the application of computer vision and deep learning to extract information from satellite imagery at scale. CosmiQ Works, Radiant Solutions and NVIDIA have partnered to release the SpaceNet data set to the public to enable developers and data scientists to work with this data.

Today, map features such as roads, building footprints, and points of interest are primarily created through manual techniques. We believe that advancing automated feature extraction techniques will serve important downstream uses of map data including humanitarian and disaster response, as observed by the need to map road networks during the response to recent flooding in Bangladesh and Hurricane Maria in Puerto Rico. Furthermore, we think that solving this challenge is an important stepping stone to unleashing the power of advanced computer vision algorithms applied to a variety of remote sensing data applications in both the public and private sector.

The Data – Over 382,000 footprints across the Rio de Janeiro Area of Interest.

AOI Area of Raster (Sq. Km) Building Labels (Polygons)
AOI_1_Rio 2,544 382,534

Catalog

The data is hosted on AWS as a Public Dataset. It is free to download, but an AWS account is required.
aws s3 ls s3://spacenet-dataset/spacenet/SN1_buildings/

Training Data

AOI 1 – Rio – Building Extraction Training

To download processed 200mx200m tiles of AOI 1 (23 GB) with associated building footprints for training do the following:

aws s3 cp s3://spacenet-dataset/spacenet/SN1_buildings/tarballs/SN1_buildings_train_AOI_1_Rio_3band.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN1_buildings/tarballs/SN1_buildings_train_AOI_1_Rio_8band.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN1_buildings/tarballs/SN1_buildings_train_AOI_1_Rio_geojson_buildings.tar.gz .

Testing Data

AOI 1 – Rio – Building Extraction Testing

To download processed 200mx200m tiles of AOI 1 (7.9 GB) for testing do:

aws s3 cp s3://spacenet-dataset/spacenet/SN1_buildings/tarballs/SN1_buildings_test_AOI_1_Rio_3band.tar.gz .

aws s3 cp s3://spacenet-dataset/spacenet/SN1_buildings/tarballs/SN1_buildings_test_AOI_1_Rio_8band.tar.gz .

Citation Instructions

If you are using data from SpaceNet in a paper, please use the following citation:

Van Etten, A., Lindenbaum, D., & Bacastow, T.M. (2018). SpaceNet: A Remote Sensing Dataset and Challenge Series. ArXiv, abs/1807.01232.

License

The USSOCOM Urban 3D Challenge

High-resolution satellite imagery is changing our understanding of the world around us, as well as the way we as humans interact with our planet. However, raw images do little more than pique our interest unless we can superimpose a layer that actually identifies real objects. Reliable labeling of building footprints based on satellite imagery is one of the first and most challenging steps in producing accurate 3D models. While automated algorithms continue to improve, significant manual effort is still required to ensure geospatial accuracy and acceptable quality. Improved automation is required to enable more rapid response to major world events such as humanitarian and disaster response. 3D height data can help improve automated building footprint detection performance, and capabilities for providing this data on a global scale are now emerging. In this challenge, contestants used 2D and 3D imagery generated from commercial satellite imagery along with state of the art machine learning techniques to provide high quality, automated building footprint detection performance over large areas.

This challenge published a large-scale dataset containing 2D orthorectified RGB and 3D Digital Surface Models and Digital Terrain Models generated from commercial satellite imagery covering over 360 km² of terrain and containing roughly 157,000 annotated building footprints. All imagery products are provided at 50 cm ground sample distance (GSD). This unique 2D/3D large-scale dataset provides researchers an opportunity to utilize machine learning techniques to further improve state-of-the-art performance.

SpaceNet is hosting the Urban 3D Challenge dataset in the spacenet repository to ensure easy access to the data.

Related Websites

For more information about the Urban 3D Challenge competition, please visit the Urban 3D Challenge contest website.

Additional information about obtaining the open source algorithms or visualization tools can be found on the Urban 3D Challenge GitHub website.

More information can also be found at the JHU Applied Physics Laboratory Public Geospatial Data and Software website.

Reference Requirement

Please reference the following when publishing results using this data:

  1. H. Goldberg, M. Brown, and S. Wang, A Benchmark for Building Footprint Classification Using Orthorectified RGB Imagery and Digital Surface Models from Commercial Satellites, 46th Annual IEEE Applied Imagery Pattern Recognition Workshop, Washington, D.C., 2017.

BibTex:

@inproceedings{Urban3D2017,
  title={A Benchmark for Building Footprint Classification Using Orthorectified RGB Imagery and Digital Surface Models from Commercial Satellites},
  author={Goldberg, Hirsh and Brown, Myron and Wang, Sean},
  booktitle={Proceedings of IEEE Applied Imagery Pattern Recognition Workshop 2017},
  year={2017}
}

Alternatively, this dataset may be referenced as, “USSOCOM Urban 3D Challenge Benchmark Dataset”.

Please also cite SpaceNet as follows:

SpaceNet on Amazon Web Services (AWS). “Datasets.” The SpaceNet Catalog. Last modified January 4, 2018.
Accessed on [Insert Date]. https://spacenetchallenge.github.io/datasets/datasetHomePage.html.

Catalog

aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/00-Orig_Source_Data/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/01-Provisional_Train/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/02-Provisional_Test/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/03-Sequestered_Test/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/04-Unused_Data/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/AOI_polygons/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/Pretrained_Models/
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/LICENSE.txt

Software code associated with this dataset:

Available open source solutions from contest winners:

For more information on how to install and run these solutions, see the Urban 3D Challenge Github README.

The pre-trained models associated with each of the winning solutions are provided alongside the data as described in the ‘Catalog’ section of this README.

Additional open source software for formatting new datasets to be used by these algorithms can be found here.

Published Papers

  1. H. Goldberg, S. Wang, M. Brown, and G. Christie. Urban 3D Challenge: Building Footprint Detection Using Orthorectified Imagery and Digital Surface Models from Commercial Satellites. In Proceedings SPIE Defense and Commercial Sensing: Geospatial Informatics and Motion Imagery Analytics VIII, Orlando, Florida, USA, 2018.

Dependencies

The AWS Command Line Interface (CLI) must be installed with an active AWS account. Configure the AWS CLI by running ‘aws configure’.

Questions and Comments

For questions and comments about the dataset or the open source software, please contact pubgeo(at)jhuapl(dot)edu.

License

Urban 3D Challenge Dataset License

The IARPA Functional Map of the World (fMoW) Challenge

Intelligence analysts, policy makers, and first responders around the world rely on geospatial land use data to inform crucial decisions about global defense and humanitarian activities. Historically, analysts have manually identified and classified geospatial information by comparing and analyzing satellite images, but that process is time consuming and insufficient to support disaster response. The fMoW Challenge sought to foster breakthroughs in the automated analysis of overhead imagery by harnessing the collective power of the global data science and machine learning communities, empowering stakeholders to bolster their capabilities through computer vision automation. The challenge published one of the largest publicly available satellite-image datasets to date, with more than one million points of interest from around the world. The dataset also contains other elements such as temporal views, multispectral imagery, and satellite-specific metadata that researchers can exploit to build novel algorithms capable of classifying facility, building, and land use types.

SpaceNet is hosting the fMoW dataset in the spacenet repository to ensure easy access to the data.

Related Websites

For more information about the IARPA competition, please visit the fMoW Challenge website.

Additional information about obtaining the data, baseline algorithm, or visualization tools can be found on the fMoW GitHub website.

Reference Requirement

Please reference the following when publishing results using this data:

  1. G. Christie, N. Fendley, J. Wilson, and R. Mukherjee. Functional Map of the World. In CVPR, 2018.

BibTeX:

@inproceedings{fmow2018,
  title={Functional Map of the World},
  author={Christie, Gordon and Fendley, Neil and Wilson, James and Mukherjee, Ryan},
  booktitle={CVPR},
  year={2018}
}

Alternatively, this dataset may be referenced as, “IARPA’s Functional Map of the World Dataset.”

Please also cite SpaceNet as follows:

SpaceNet on Amazon Web Services (AWS). “Datasets.” The SpaceNet Catalog.  Last modified January 4, 2018.
Accessed on [Insert Date]. https://spacenetchallenge.github.io/datasets/datasetHomePage.html.

Catalog

aws s3 ls s3://spacenet-dataset/Hosted-Datasets/fmow/fmow-full
aws s3 ls s3://spacenet-dataset/Hosted-Datasets/fmow/fmow-rgb

Software code associated with this dataset:

Available solutions from contest winners:

Published Papers

Published results:

  1. R. Minetto, M.P. Segundo, and S. Sarkar. Hydra: An Ensemble of Convolutional Neural Networks for Geospatial Land Classification. arXiv preprint arXiv:1802.03518, 2018.
  2. G. Christie, N. Fendley, J. Wilson, and R. Mukherjee. Functional Map of the World. In CVPR, 2018. arXiv preprint arXiv:1711.07846, 2017.
  3. M. Pritt and G. Chern. Satellite Image Classification with Deep Learning. 46th Annual IEEE Applied Imagery Pattern Recognition Workshop, Washington, D.C., 2017.

Dependencies

The AWS Command Line Interface (CLI) must be installed with an active AWS account. Configure the AWS CLI by running ‘aws configure’.

License

fMoW License

IARPA Multi-View Stereo 3D Mapping Challenge

The availability of public multiple view stereo (MVS) benchmark datasets has been instrumental in enabling research to advance the state of the art in the field and to apply and customize methods to real-world problems. In this work, we provide a public benchmark data set for multiple view stereo applied to 3D outdoor scene mapping using commercial satellite imagery.

This data set includes DigitalGlobe WorldView-3 panchromatic and multispectral images of a 100 square kilometer area near San Fernando, Argentina. We also provide 20cm airborne lidar ground truth data for a 20 square kilometer subset of this area and performance analysis software to assess accuracy and completeness metrics. Commercial satellite imagery is provided courtesy of DigitalGlobe, and ground truth lidar is provided courtesy of IARPA.

This data supported the IARPA Multi-View Stereo 3D Mapping Challenge and is now made publicly available with no restrictions to support continued research. JHU/APL does not plan to maintain an online benchmark leaderboard, but we welcome your feedback and would love to hear about what you’re doing with the data and include your published results on this page.

SpaceNet is hosting the Multi-View Stereo 3D Mapping dataset in the spacenet repository to ensure easy access to the data.

Competition Websites

For more information about the IARPA competition, please visit the Multi-View Stereo 3D Mapping Challenge website.

For more information about the MVS benchmark, please visit the JHU/APL competition webpage.

Catalog

aws s3 ls s3://spacenet-dataset/Hosted-Datasets/mvs_dataset 

The catalog contains the following packages:

  • Updated metric analysis software with examples from contest winners
  • Challenge data package with instructions, cropped TIFF images, ground truth, image cropping software, and metric scoring software (1.2 GB)
  • JHU/APL example MVS solution (451 MB)
  • NITF panchromatic, multispectral, and short-wave infrared DigitalGlobe WorldView-3 satellite images (72.1 GB)
  • LAZ lidar point clouds with SBET (2.2 GB)
  • Spectral image calibration software (84 MB)

Software code associated with this dataset:

Available solutions from contest winners:

Published Papers

Published results:

  1. G. Facciolo, C. de Franchis, E. Meinhardt-Llopis, “Automatic 3D Reconstruction from Multi-Date Satellite Images,” IEEE International Conference on Computer Vision and Pattern Recognition, EARTHVISION Workshop, 2017.
  2. R. Qin, “Automated 3D recovery from very high resolution multi-view images,” ASPRS 2017 Annual Conference, 2017.

Dependencies

The AWS Command Line Interface (CLI) must be installed with an active AWS account. Configure the AWS CLI by running ‘aws configure’.