Timeline
Competition starts: 2021/07/03 00:01:00
Competition closes: 2021/09/03 23:59:00
Final submission limit: 2021/09/10 23:59:00
Description
For this challenge, we have teamed up with Battelle Memorial Institute, one of the most respected names in the global scientific and research community, to launch a data science competition that can help dramatically accelerate the pace of global innovation. The goal of this project is to break down several barriers that currently stand in the way of advanced research publications getting noticed and receiving prompt recognition from the world's brightest minds. This competition will also offer cash prizes to the authors of the top two ML models, as determined by our platform's evaluation algorithm. Please read on for more details, and good luck!
About Battelle (battelle.org)
We are part of a community working to encourage the discovery of new and interesting research in Artificial Intelligence and Machine Learning, especially in languages other than English. Much of the research being done in these fields is easily available on the web through sites like Arxiv.org, but many interesting discoveries are happening every day in different corners of the internet that may take time to identify and bring to the attention of the rest of the community.
This is especially true of research that is in a language other than English, which may easily be missed by much of the community. We are passionate about finding the best current research and identifying trends so that the cutting edge can continue to be pushed. To push in that direction, we devised a problem that attempts to measure when new ideas are being discussed, in any language. Based on a metric for the recency of keywords, how can we identify when a research paper is bringing forth new ideas so that we can better isolate them?
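As a purely illustrative sketch of what such a recency signal could look like (the keyword list, dates, and scoring below are invented for this example and are not part of the competition definition):

from datetime import date

# Hypothetical index of when each keyword was first observed in a reference
# corpus (invented dates, for illustration only).
first_seen = {
    "transformer": date(2017, 6, 12),
    "convolutional network": date(1989, 1, 1),
    "diffusion model": date(2020, 6, 19),
}

def recency_score(keywords, as_of=date(2021, 7, 3)):
    # Average age in days of the paper's known keywords; a smaller value
    # means the paper leans on more recently introduced ideas.
    ages = [(as_of - first_seen[k]).days for k in keywords if k in first_seen]
    return sum(ages) / len(ages) if ages else float("inf")

print(recency_score(["transformer", "diffusion model"]))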
Evaluation
Submissions are evaluated with the mean squared logarithmic error (MSLE). If you want to know more about the MSLE metric that scikit-learn calculates, you can find it here: https://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-log-error

MSLE = (1/N) * sum_{i=1}^{N} (log(1 + y_i) - log(1 + ŷ_i))^2

Where:

N = number of rows of the dataset Test.csv
y_i = true value
ŷ_i = predicted value
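As a quick sanity check, the same metric can be reproduced locally with scikit-learn (the values below are made up; they only show the call):

from sklearn.metrics import mean_squared_log_error

# Example true and predicted values (made up); both must be non-negative.
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]

# mean_squared_log_error averages (log(1 + y_true) - log(1 + y_pred))^2.
print(mean_squared_log_error(y_true, y_pred))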
Rules
- The code should not be shared privately. Any code that is shared must be available to all participants of the competition through the platform.
- The solution should use only publicly available open-source libraries.
- If two solutions get identical scores in the ranking table, the tie-breaker will be the date and time of the submission (the first solution submitted will win).
- We reserve the right to request any user's code at any time during a challenge. You will have 72 hours to submit your code following the code review rules.
- We reserve the right to update these rules at any time.
- Your solution must not infringe the rights of any third party and you must be legally authorized to assign ownership of all copyrights in and to the winning solution code to the competition host/sponsor.
- Competitors may register and submit solutions as individuals (not as teams, at least for now).
- Maximum 50 solutions submitted per day.
Thanks for reaching out to us. We were inspecting the file named "submission_df (5).csv" and we found a number in scientific notation on line 6,579 (we sent you an email with the evidence). Please avoid this kind of notation: values such as 1e-04 contain letters or dashes, so they are not parsed as numeric values and the evaluation metric cannot compute a result.
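One way to avoid this, sketched below with pandas, is to force fixed-point notation when writing the submission file (the column and file names here are only illustrative):

import pandas as pd

# Hypothetical submission frame: an id column plus the predicted value.
submission = pd.DataFrame({
    "id": [1, 2, 3],
    "prediction": [0.0001, 12.5, 3.0],
})

# float_format forces plain decimal output, so 0.0001 is written as
# 0.00010000 instead of 1e-04.
submission.to_csv("submission.csv", index=False, float_format="%.8f")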
If you have any other questions, please let us know.
Regards!