• Submit your paper

Publishing with Elsevier: step-by-step

Learn about the publication process and how to submit your manuscript. This tutorial will help you find the right journal and maximize your chances of being published.

1. Find a journal

Find the journals best suited to publishing your research. Match your manuscript using the JournalFinder tool, then learn more about each journal.

JournalFinder

Powered by the Elsevier Fingerprint Engine™, Elsevier JournalFinder uses smart search technology and field-of-research-specific vocabularies to match your article to Elsevier journals.

Find out more about a journal

Learn about each journal's topics, impact and submission policies.

Find a journal by name

  • Read the journal's aims and scope to make sure it is a match
  • Check whether you can submit – some journals are invitation only
  • Use journal metrics to understand the impact of a journal
  • If available, check the journal at Journal Insights for additional info about impact, speed and reach
  • If you're a postdoc, check out our postdoc free access program

2. Prepare your paper for submission

Download our get published quick guide, which outlines the essential steps in preparing a paper (also available in Chinese). It is very important that you follow the specific "guide for authors" of the journal to which you are submitting. This can be found on the journal's home page.

You can find information about the publishing process in the understanding the publishing process guide. It covers topics such as authors' rights, ethics and plagiarism, and journal and article metrics.

If you have research data to share, make sure you read the guide for authors to find out which options the journal offers to share research data with your article.

Read more on preparing your paper

Read about publishing in a special issue

  • Use an external editing service, such as Elsevier’s Author Services, if you need assistance with language
  • Free e-learning modules on preparing your manuscript can be found on Researcher Academy
  • Mendeley makes your life easier by helping you organize your papers, citations and references, and access them in the cloud on any device, wherever you are

3. Submit and revise

You can submit to most Elsevier journals using our online systems. The system you use will depend on the journal to which you submit. You can access the relevant submission system via the "submit your paper" link on the Elsevier.com journal homepage of your chosen journal.

Alternatively, if you have been invited to submit to a journal, follow the instructions provided to you.

Once submitted, your paper will be considered by the editor and, if it passes initial screening, it will be sent for peer review by experts in your field. If deemed unsuitable for publication in your chosen journal, the editor may suggest you transfer your submission to a more suitable journal via an article transfer service.

Read more on how to submit and revise

  • Check the open access options on the journal's home page
  • Consider the options for sharing your research data
  • Be accurate and clear when checking your proofs
  • Inform yourself about copyright and licensing

4. Track your paper

Track your submitted paper.

You can track the status of your submitted paper online. The tracking system is the same system to which you submitted; use the reference number you received after submission.

Unsure about what the submission status means? Check out this video.

In case of any problems, contact the Support Center.

Track your accepted paper

Once your paper is accepted for publication, you will receive a reference number and a direct link that lets you follow its publication status via Elsevier’s "Track Your Accepted Article" service.

However, even without a notification you can track the status of your article by entering your article reference number and the corresponding author's surname in Track Your Accepted Article.

Read more about the article tracking service

5. Share and promote

Now that your article is published, you can promote it to achieve a bigger impact for your research. Sharing research, accomplishments and ambitions with a wider audience makes you more visible in your field. This helps you get cited more, enabling you to cultivate a stronger reputation, promote your research and move forward in your career.

Read more on sharing your research

After publication, celebrate and get noticed!

How to Submit A Data Article

Emma Bertran

About this video

Data articles provide scientists with the opportunity to describe and share their raw data, and hence participate in Open Science and satisfy funder requirements. In this video, Emma Bertran, a scientific editor from Data in Brief, provides detailed guidance to authors on checking whether their data is within the scope of the journal and how to submit data articles to Data in Brief.

To find out more about the journal, please visit the journal homepage.

About the presenter

Emma Bertran

Scientific Editor, Data in Brief 

How can I track the status of my submitted article?

If you have submitted a manuscript, you'll be able to log in to Editorial Manager (EM) as the corresponding author to view the status of your submission.

You'll need to know which EM journal your paper was submitted to; most journals will send confirmation emails for each submission that include a link to the login page.

  • Log in to the same account from which you submitted the article. For co-authors, log into the same account from which you confirmed your relationship to the submission.
  • In your 'Author Main Menu', manuscripts appear in different folders as they pass through phases of the editorial process: original submissions are listed first, followed by revisions.
  • "Current status" terms can be customized by each journal; see What does the status of my submission mean? for explanations of the default status terms.
  • The 'Authorship' column will show if you are the corresponding author, or else a verified co-author. Co-authors have only limited actions available.

If your paper has been accepted for publication, you can track it using Elsevier's Article Tracking service.

Related Articles:

  • What does the status of my submission mean in Editorial Manager?
  • How can I track my accepted article?
  • What is Editorial Manager Co-Author verification?
  • When can I expect a decision from the Editor?
  • Video Guide: Submission status explainer

Computer Science > Machine Learning

Title: A Retrospective of the Tutorial on Opportunities and Challenges of Online Deep Learning

Abstract: Machine learning algorithms have become indispensable in today's world. They support and accelerate the way we make decisions based on the data at hand. This acceleration means that data structures that were valid at one moment could no longer be valid in the future. With these changing data structures, it is necessary to adapt machine learning (ML) systems incrementally to the new data. This is done with the use of online learning or continuous ML technologies. While deep learning technologies have shown exceptional performance on predefined datasets, they have not been widely applied to online, streaming, and continuous learning. In this retrospective of our tutorial titled Opportunities and Challenges of Online Deep Learning held at ECML PKDD 2023, we provide a brief overview of the opportunities but also the potential pitfalls for the application of neural networks in online learning environments using the frameworks River and Deep-River.
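For the BibTeX citation offered alongside this abstract, an entry along the following lines would be typical. The citation key and field formatting here are illustrative; the arXiv identifier (2405.17222) and author list are taken from the paper's listing:

```bibtex
@misc{kulbach2024retrospective,
  title         = {A Retrospective of the Tutorial on Opportunities and
                   Challenges of Online Deep Learning},
  author        = {Kulbach, Cedric and Cazzonelli, Lucas and Ngo, Hoang-Anh
                   and Le-Nguyen, Minh-Huong and Bifet, Albert},
  year          = {2024},
  eprint        = {2405.17222},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```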

NeurIPS 2024 Datasets and Benchmarks Track

If you'd like to become a reviewer for the track, or recommend someone, please use this form.

The Datasets and Benchmarks track serves as a venue for high-quality publications, talks, and posters on highly valuable machine learning datasets and benchmarks, as well as a forum for discussions on how to improve dataset development. Datasets and benchmarks are crucial for the development of machine learning methods, but also require their own publishing and reviewing guidelines. For instance, datasets can often not be reviewed in a double-blind fashion, and hence full anonymization will not be required. On the other hand, they do require additional specific checks, such as a proper description of how the data was collected, whether they show intrinsic bias, and whether they will remain accessible. The Datasets and Benchmarks track is proud to support the open source movement by encouraging submissions of open-source libraries and tools that enable or accelerate ML research.

The previous editions of the Datasets and Benchmarks track were highly successful; you can view the accepted papers from 2021, 2022, and 2023, and the winners of the best paper awards for 2021, 2022, and 2023.

CRITERIA. We are aiming for an equally stringent review as the main conference, yet better suited to datasets and benchmarks. Submissions to this track will be reviewed according to a set of criteria and best practices specifically designed for datasets and benchmarks, as described below. A key criterion is accessibility: datasets should be available and accessible, i.e. the data can be found and obtained without a personal request to the PI, and any required code should be open source. We encourage authors to use the Croissant format (https://mlcommons.org/working-groups/data/croissant/) to document their datasets in a machine-readable way. Next to a scientific paper, authors should also submit supplementary materials such as details on how the data was collected and organised, what kind of information it contains, how it should be used ethically and responsibly, and how it will be made available and maintained.
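Documenting a dataset in machine-readable form can be as simple as emitting a small JSON-LD record. The sketch below is illustrative only: the field names follow the schema.org Dataset vocabulary that Croissant builds on, and every name and URL is a placeholder. Consult the Croissant specification linked above for the authoritative structure.

```python
import json

# A minimal, illustrative JSON-LD skeleton in the spirit of the Croissant
# format (which builds on schema.org's Dataset vocabulary). All names and
# URLs below are hypothetical placeholders, not a real dataset.
metadata = {
    "@context": {"@vocab": "https://schema.org/"},
    "@type": "Dataset",
    "name": "example-benchmark",
    "description": "What the dataset contains and how it was collected.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "url": "https://example.org/example-benchmark",
    "distribution": [
        {
            "@type": "FileObject",
            "name": "data.csv",
            "contentUrl": "https://example.org/example-benchmark/data.csv",
            "encodingFormat": "text/csv",
        }
    ],
}

# Serialize the record so it can accompany the submission.
record = json.dumps(metadata, indent=2)
```

A record like this is what the Croissant tooling validates and what repositories can index automatically.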

RELATIONSHIP TO NeurIPS. Submissions to the track will be part of the main NeurIPS conference, presented alongside the main conference papers. Accepted papers will be officially published in the NeurIPS proceedings.

SUBMISSIONS. There will be one deadline this year. It is also still possible to submit datasets and benchmarks to the main conference (under the usual review process), but dual submission to both is not allowed (unless you have retracted your paper from the main conference). We also cannot transfer papers from the main track to the D&B track. Authors can choose to submit either single-blind or double-blind. If it is possible to properly review the submission double-blind, i.e., reviewers do not need access to non-anonymous repositories to review the work, then authors can also choose to submit the work anonymously. Papers will not be publicly visible during the review process. Only accepted papers will become visible afterward. The reviews themselves are not visible during the review phase but will be published after decisions have been made. The datasets themselves should be accessible to reviewers but can be publicly released at a later date (see below). New authors cannot be added after the abstract deadline, and all authors should have an OpenReview profile by the paper deadline. NeurIPS does not tolerate any collusion whereby authors secretly cooperate with reviewers, ACs or SACs to obtain favourable reviews.

SCOPE. This track welcomes all work on data-centric machine learning research (DMLR) and open-source libraries and tools that enable or accelerate ML research, covering ML datasets and benchmarks as well as algorithms, tools, methods, and analyses for working with ML data. This includes but is not limited to:

  • New datasets, or carefully and thoughtfully designed (collections of) datasets based on previously available data.
  • Data generators and reinforcement learning environments.
  • Data-centric AI methods and tools, e.g. to measure and improve data quality or utility, or studies in data-centric AI that bring important new insight.
  • Advanced practices in data collection and curation that are of general interest even if the data itself cannot be shared.
  • Frameworks for responsible dataset development, audits of existing datasets, and identification of significant problems with existing datasets and their use.
  • Benchmarks on new or existing datasets, as well as benchmarking tools.
  • In-depth analyses of machine learning challenges and competitions (by organisers and/or participants) that yield important new insight.
  • Systematic analyses of existing systems on novel datasets yielding important new insight.

Read our original blog post for more about why we started this track.

Important dates

  • Abstract submission deadline: May 29, 2024
  • Full paper submission and co-author registration deadline: Jun 5, 2024
  • Supplementary materials submission deadline: Jun 12, 2024
  • Review deadline - Jul 24, 2024
  • Release of reviews and start of Author discussions on OpenReview: Aug 07, 2024
  • End of author/reviewer discussions on OpenReview: Aug 31, 2024
  • Author notification: Sep 26, 2024
  • Camera-ready deadline: Oct 30, 2024 AOE

Note: The site will start accepting submissions on April 15, 2024.

FREQUENTLY ASKED QUESTIONS

Q: My work is in scope for this track but possibly also for the main conference. Where should I submit it?

A: This is ultimately your choice. Consider the main contribution of the submission and how it should be reviewed. If the main contribution is a new dataset, benchmark, or other work that falls into the scope of the track (see above), then it is ideally reviewed accordingly. As discussed in our blog post, the reviewing procedures of the main conference are focused on algorithmic advances, analysis, and applications, while the reviewing in this track is equally stringent but designed to properly assess datasets and benchmarks. Other, more practical considerations are that this track allows single-blind reviewing (since anonymization is often impossible for hosted datasets) and that it reaches the intended audience, i.e., it makes your work more visible to people looking for datasets and benchmarks.

Q: How will papers accepted to this track be cited?

A: Accepted papers will appear as part of the official NeurIPS proceedings.

Q: Do I need to submit an abstract beforehand?

A: Yes, please check the important dates section for more information.

Q: My dataset requires open credentialized access. Can I submit to this track?

A: This will be possible on the condition that credentialization is necessary for the public good (e.g. because of ethically sensitive medical data), and that an established credentialization procedure is in place that is 1) open to a large section of the public, 2) provides rapid response and access to the data, and 3) is guaranteed to be maintained for many years. A good example here is PhysioNet Credentialing, which requires users to first understand how to handle data with human subjects, yet is open to anyone who has learned and agrees to the rules. This should be seen as an exceptional measure, and NOT as a way to limit access to data for other reasons (e.g. to shield data behind a Data Transfer Agreement). Misuse would be grounds for desk rejection. During submission, you can indicate that your dataset involves open credentialized access, in which case the necessity, openness, and efficiency of the credentialization process itself will also be checked.

SUBMISSION INSTRUCTIONS

A submission consists of:

  • Please carefully follow the LaTeX template for this track when preparing proposals. We follow the NeurIPS format, but with the appropriate headings, and without hiding the names of the authors. Download the template as a bundle here.
  • Papers should be submitted via OpenReview
  • Reviewing is in principle single-blind, hence the paper should not be anonymized. In cases where the work can be reviewed equally well anonymously, anonymous submission is also allowed.
  • During submission, you can add a public link to the dataset or benchmark data. If the dataset can only be released later, you must include instructions for reviewers on how to access the dataset. This can only be done after the first submission by sending an official note to the reviewers in OpenReview. We highly recommend making the dataset publicly available immediately or before the start of the NeurIPS conference. In select cases requiring solid motivation, the release date can be stretched up to a year after the submission deadline.
  • Dataset documentation and intended uses. Recommended documentation frameworks include datasheets for datasets , dataset nutrition labels , data statements for NLP , data cards , and accountability frameworks .
  • URL to website/platform where the dataset/benchmark can be viewed and downloaded by the reviewers. 
  • URL to Croissant metadata record documenting the dataset/benchmark available for viewing and downloading by the reviewers. You can create your Croissant metadata using e.g. the Python library available here: https://github.com/mlcommons/croissant
  • Author statement that they bear all responsibility in case of violation of rights, etc., and confirmation of the data license.
  • Hosting, licensing, and maintenance plan. The choice of hosting platform is yours, as long as you ensure access to the data (possibly through a curated interface) and will provide the necessary maintenance.
  • Links to access the dataset and its metadata. This can be hidden upon submission if the dataset is not yet publicly available but must be added in the camera-ready version. In select cases, e.g. when the data can only be released at a later date, this can be added afterward (up to a year after the submission deadline). Simulation environments should link to open-source code repositories.
  • The dataset itself should ideally use an open and widely used data format. Provide a detailed explanation on how the dataset can be read. For simulation environments, use existing frameworks or explain how they can be used.
  • Long-term preservation: It must be clear that the dataset will be available for a long time, either by uploading it to a data repository or by explaining how the authors themselves will ensure this.
  • Explicit license: Authors must choose a license, ideally a CC license for datasets, or an open source license for code (e.g. RL environments). An overview of licenses can be found here: https://paperswithcode.com/datasets/license
  • Add structured metadata to a dataset's meta-data page using Web standards (like schema.org and DCAT ): This allows it to be discovered and organized by anyone. A guide can be found here: https://developers.google.com/search/docs/data-types/dataset . If you use an existing data repository, this is often done automatically.
  • Highly recommended: a persistent dereferenceable identifier (e.g. a DOI  minted by a data repository or a prefix on identifiers.org ) for datasets, or a code repository (e.g. GitHub, GitLab,...) for code. If this is not possible or useful, please explain why.
  • For benchmarks, the supplementary materials must ensure that all results are easily reproducible. Where possible, use a reproducibility framework such as the ML reproducibility checklist , or otherwise guarantee that all results can be easily reproduced, i.e. all necessary datasets, code, and evaluation procedures must be accessible and documented.
  • For papers introducing best practices in creating or curating datasets and benchmarks, the above supplementary materials are not required.
  • For papers resubmitted after being retracted from another venue: a brief discussion on the main concerns raised by previous reviewers and how you addressed them. You do not need to share the original reviews.
  • For the dual submission and archiving, the policy follows the NeurIPS main track paper guideline .

Use of Large Language Models (LLMs): We welcome authors to use any tool that is suitable for preparing high-quality papers and research. However, we ask authors to keep in mind two important criteria. First, we expect papers to fully describe their methodology, and any tool that is important to that methodology, including the use of LLMs, should be described also. For example, authors should mention tools (including LLMs) that were used for data processing or filtering, visualization, facilitating or running experiments, and proving theorems. It may also be advisable to describe the use of LLMs in implementing the method (if this corresponds to an important, original, or non-standard component of the approach). Second, authors are responsible for the entire content of the paper, including all text and figures, so while authors are welcome to use any tool they wish for writing the paper, they must ensure that all text is correct and original.

REVIEWING AND SELECTION PROCESS

Reviewing will be single-blind, although authors can also submit anonymously if the submission allows that. A datasets and benchmarks program committee will be formed, consisting of experts on machine learning, dataset curation, and ethics. We will ensure diversity in the program committee, both in terms of background as well as technical expertise (e.g., data, ML, data ethics, social science expertise). Each paper will be reviewed by the members of the committee. In select cases where ethical concerns are flagged by reviewers, an ethics review may be performed as well.

Papers will not be publicly visible during the review process. Only accepted papers will become visible afterward. The reviews themselves are also not visible during the review phase but will be published after decisions have been made. Authors can choose to keep the datasets themselves hidden until a later release date, as long as reviewers have access.

The factors that will be considered when evaluating papers include:

  • Utility and quality of the submission: Impact, originality, novelty, and relevance to the NeurIPS community will all be considered.
  • Reproducibility: All submissions should be accompanied by sufficient information to reproduce the results described, i.e., all necessary datasets, code, and evaluation procedures must be accessible and documented. We encourage the use of a reproducibility framework such as the ML reproducibility checklist to guarantee that all results can be easily reproduced. Benchmark submissions in particular should take care to ensure sufficient details are provided to ensure reproducibility. If submissions include code, please refer to the NeurIPS code submission guidelines.
  • Was code provided (e.g. in the supplementary material)? If provided, did you look at the code? Did you consider it useful in guiding your review? If not provided, did you wish code had been available?
  • Ethics: Any ethical implications of the work should be addressed. Authors should rely on NeurIPS ethics guidelines as guidance for understanding ethical concerns.  
  • Completeness of the relevant documentation: Per NeurIPS ethics guidelines, datasets must be accompanied by documentation communicating the details of the dataset as part of their submissions via structured templates (e.g. TODO). Sufficient detail must be provided on how the data was collected and organized, what kind of information it contains, how it should be used ethically and responsibly, and how it will be made available and maintained.
  • Licensing and access: Per NeurIPS ethics guidelines , authors should provide licenses for any datasets released. These should consider the intended use and limitations of the dataset, and develop licenses and terms of use to prevent misuse or inappropriate use.  
  • Consent and privacy: Per  NeurIPS ethics guidelines , datasets should minimize the exposure of any personally identifiable information, unless informed consent from those individuals is provided to do so. Any paper that chooses to create a dataset with real data of real people should ask for the explicit consent of participants, or explain why they were unable to do so.
  • Ethics and responsible use: Any ethical implications of new datasets should be addressed and guidelines for responsible use should be provided where appropriate. Note that, if your submission includes publicly available datasets (e.g. as part of a larger benchmark), you should also check these datasets for ethical issues. You remain responsible for the ethical implications of including existing datasets or other data sources in your work.
  • Legal compliance: For datasets, authors should ensure awareness and compliance with regional legal requirements.
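On the reproducibility point above, a common baseline practice is to fix and record every source of randomness so that a reviewer can replay a run exactly. The sketch below assumes a pure-Python experiment; the helper name is made up for illustration, and real submissions would also need to seed numpy, torch, or whichever frameworks they use:

```python
import os
import random

def set_seed(seed: int = 42) -> int:
    """Fix Python's random sources and return the seed so it can be
    recorded in the paper or appendix. Illustrative helper only;
    extend to numpy/torch/etc. as your code requires."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    return seed

used_seed = set_seed(42)
sample = [random.randint(0, 9) for _ in range(3)]

# Replaying with the recorded seed reproduces the exact same draws,
# which is the property reviewers need to verify reported results.
set_seed(used_seed)
replayed = [random.randint(0, 9) for _ in range(3)]
assert sample == replayed
```

Reporting the seed (and the library versions) alongside the code is what turns "we ran it once" into a result others can check.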

ADVISORY COMMITTEE

The following committee will provide advice on the organization of the track over the coming years: Sergio Escalera, Isabelle Guyon, Neil Lawrence, Dina Machuve, Olga Russakovsky, Joaquin Vanschoren, Serena Yeung.

DATASETS AND BENCHMARKS CHAIRS

Lora Aroyo, Google
Francesco Locatello, Institute of Science and Technology Austria
Lingjuan Lyu, Sony AI

Contact: [email protected]
