TGS Salt Identification Challenge
This is an open solution to the TGS Salt Identification Challenge.
Note
Unfortunately, we can no longer provide support for this repo. It should still work, but if it doesn't, we are unable to help.
More competitions :sparkler:
Check our collection of public projects :gift:, where you can find multiple Kaggle competitions with code, experiments and outputs.
Our goals
We are building an entirely open solution to this competition. Specifically:
- Learning from the process - sharing updates about new ideas, code and experiments is the best way to learn data science. Our activity is especially useful for people who want to enter the competition but lack the appropriate experience.
- Encourage more Kagglers to start working on this competition.
- Deliver an open source solution with no strings attached. The code is available in our GitHub repository :computer:. This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code :smiley:
- We are opening our experiments as well: everybody can have a live preview of our experiments, parameters, code, etc. Check: TGS Salt Identification Challenge :chart_with_upwards_trend: or the screenshot below.
Train and validation monitor :bar_chart:
Disclaimer
In this open source solution you will find references to neptune.ai. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ai is not necessary to proceed with this solution. You may run it as a plain Python script :snake:.
How to start?
Learn about our solutions
- Check the Kaggle forum and participate in the discussions.
- See the solutions below:
Link to Experiments | CV | LB | Open |
---|---|---|---|
solution 1 | 0.413 | 0.745 | True |
solution 2 | 0.794 | 0.798 | True |
solution 3 | 0.807 | 0.801 | True |
solution 4 | 0.802 | 0.809 | True |
solution 5 | 0.804 | 0.813 | True |
solution 6 | 0.819 | 0.824 | True |
solution 7 | 0.829 | 0.837 | True |
solution 8 | 0.830 | 0.845 | True |
solution 9 | 0.853 | 0.849 | True |
Start experimenting with ready-to-use code
You can jump-start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.
Installation
Clone repository
git clone https://github.com/minerva-ml/open-solution-salt-identification.git
Set-up environment
You can set up the project with the default environment variables and the open NEPTUNE_API_TOKEN by running:
source Makefile
We suggest at least reading the step-by-step instructions below to know what is happening.
Install the conda environment salt:
conda env create -f environment.yml
After it is installed you can activate/deactivate it by running:
conda activate salt
conda deactivate
Register at neptune.ai (if you wish to use it). Even if you don't register, you can still see your experiment in Neptune: just go to the shared/showroom project and find it there.
Set the environment variables NEPTUNE_API_TOKEN and CONFIG_PATH.
If you are using the default neptune.yaml config, run:
export CONFIG_PATH=neptune.yaml
otherwise point it to your own config file.
Registered in Neptune:
Set the NEPTUNE_API_TOKEN variable to your personal token:
export NEPTUNE_API_TOKEN=your_account_token
Create a new project in Neptune, then go to your config file (neptune.yaml) and change the project name:
project: USER_NAME/PROJECT_NAME
Not registered in Neptune:
Use the open token:
export NEPTUNE_API_TOKEN=eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vdWkubmVwdHVuZS5tbCIsImFwaV9rZXkiOiJiNzA2YmM4Zi03NmY5LTRjMmUtOTM5ZC00YmEwMzZmOTMyZTQifQ==
Create the data folder structure and set the data paths in your config file (neptune.yaml).
Suggested directory structure:

    project
    |-- README.md
    |-- ...
    |-- data
        |-- raw
            |-- train
                |-- images
                |-- masks
            |-- test
                |-- images
            |-- train.csv
            |-- sample_submission.csv
        |-- meta
            |-- depths.csv
            |-- metadata.csv              # this is generated
            |-- auxiliary_metadata.csv    # this is generated
        |-- stacking_data
            |-- out_of_folds_predictions  # put oof predictions for multiple models/pipelines here
        |-- experiments
            |-- baseline                  # this is where your experiment files will be dumped
                |-- checkpoints           # neural network checkpoints
                |-- transformers          # serialized transformers after fitting
                |-- outputs               # outputs of transformers if you specified save_output=True anywhere
                |-- out_of_fold_train_predictions.pkl  # oof predictions on train
                |-- out_of_fold_test_predictions.pkl   # oof predictions on test
                |-- submission.csv
            |-- empty_non_empty
            |-- new_idea_exp
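If you want to bootstrap this layout, here is a minimal sketch that creates the directories from the tree above (the paths are the suggested defaults; adjust them to your own structure):

```python
from pathlib import Path

# Suggested default layout from the tree above; adjust to your own structure.
DIRS = [
    "data/raw/train/images",
    "data/raw/train/masks",
    "data/raw/test/images",
    "data/meta",
    "data/stacking_data/out_of_folds_predictions",
    "data/experiments/baseline/checkpoints",
    "data/experiments/baseline/transformers",
    "data/experiments/baseline/outputs",
]

for d in DIRS:
    Path(d).mkdir(parents=True, exist_ok=True)
```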
In the neptune.yaml config file, change the data paths if you decide on a different structure:
# Data Paths
train_images_dir: data/raw/train
test_images_dir: data/raw/test
metadata_filepath: data/meta/metadata.csv
depths_filepath: data/meta/depths.csv
auxiliary_metadata_filepath: data/meta/auxiliary_metadata.csv
stacking_data_dir: data/stacking_data
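To catch path mistakes early, you can sanity-check the config before training. A minimal sketch, assuming PyYAML is installed and the data path keys are the ones listed above (depending on your neptune.yaml layout they may sit under a parameters section):

```python
import os
from pathlib import Path

import yaml  # PyYAML

# CONFIG_PATH is the variable exported earlier; defaults to neptune.yaml here.
config_path = os.environ.get("CONFIG_PATH", "neptune.yaml")
with open(config_path) as f:
    cfg = yaml.safe_load(f)

# Data path keys may be nested under a 'parameters' section, depending on the config layout.
params = cfg.get("parameters", cfg)

for key in ("train_images_dir", "test_images_dir", "metadata_filepath",
            "depths_filepath", "auxiliary_metadata_filepath", "stacking_data_dir"):
    path = Path(str(params[key]))
    print(f"{key}: {path} [{'OK' if path.exists() else 'MISSING'}]")
```

Note that the generated files (metadata.csv, auxiliary_metadata.csv) will show as MISSING until the metadata preparation step below has been run.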
Run the experiment based on U-Net:
Prepare metadata:
python prepare_metadata.py
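Conceptually, this step builds a CSV that maps each image id to its image file, mask file and depth. Below is a simplified, illustrative sketch, not the exact logic of prepare_metadata.py (the column names are assumptions; depths.csv with columns id and z comes from the competition data):

```python
from pathlib import Path

import pandas as pd

# Paths follow the suggested directory structure; adjust if yours differs.
train_images = sorted(Path("data/raw/train/images").glob("*.png"))
depths = pd.read_csv("data/meta/depths.csv")  # competition file with columns: id, z

rows = [{
    "id": img.stem,
    "file_path_image": str(img),
    "file_path_mask": f"data/raw/train/masks/{img.stem}.png",
    "is_train": 1,
} for img in train_images]

metadata = pd.DataFrame(rows).merge(depths, on="id", how="left")
metadata.to_csv("data/meta/metadata.csv", index=False)
```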
Training and inference. Everything happens in main.py.
Whenever you try a new idea, make sure to change the name of the experiment:
EXPERIMENT_NAME = 'baseline'
to a new name.
python main.py
You can always change the pipeline you want to run in main.py. For example, if you want to run just training and evaluation, change main.py to:
if __name__ == '__main__':
train_evaluate_cv()
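If you would rather not edit main.py for every run, a thin command-line wrapper can select the pipeline instead. A minimal sketch, assuming train_evaluate_cv (shown above) is importable from main.py; add further pipeline functions to the dictionary as you create them:

```python
import argparse

from main import train_evaluate_cv  # pipeline from the snippet above

# Map command-line names to pipeline functions; extend as you add pipelines.
PIPELINES = {
    "train_evaluate_cv": train_evaluate_cv,
}

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run a selected pipeline")
    parser.add_argument("pipeline", choices=sorted(PIPELINES))
    args = parser.parse_args()
    PIPELINES[args.pipeline]()
```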
References
1. Lovász Loss
@InProceedings{Berman_2018_CVPR,
author = {Berman, Maxim and Rannen Triki, Amal and Blaschko, Matthew B.},
title = {The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}
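For intuition, below is a condensed PyTorch sketch of the binary Lovász hinge described in the paper, adapted from the authors' public reference implementation; the loss code actually used in this repository may differ.

```python
import torch


def lovasz_grad(gt_sorted):
    # Gradient of the Jaccard loss extension w.r.t. sorted errors (Alg. 1 in the paper).
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    if len(gt_sorted) > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard


def lovasz_hinge(logits, labels):
    # Binary Lovász hinge: logits and labels are flattened 1D tensors, labels in {0, 1}.
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels[perm].float())
    return torch.dot(torch.relu(errors_sorted), grad)
```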
Get involved
You are welcome to contribute your code and ideas to this open solution. To get started:
- Check the competition project on GitHub to see what we are working on right now.
- Express your interest in a particular task by writing a comment on it, or by creating a new one with your fresh idea.
- We will get back to you quickly in order to start working together.
- Check CONTRIBUTING for some more information.
User support
There are several ways to seek help:
- The Kaggle discussion forum is our primary means of communication.
- Submit an issue directly in this repo.