Josh Silverberg


Professional experience

Bloomberg L.P. - New York, NY

Software Engineer (full time) • August 2023 - Present

  • Currently on the Market Data Distribution team working on reliable and scalable exchange data streaming to clients around the world.

Bloomberg L.P. - New York, NY

Software Engineering Intern • May 2022 - July 2022

  • Interned on the Financial Analytics team, working on scalable bond ETF pricing technology.

OATS Technology Inc. - Elmsford, NY

Software Engineer • November 2021 - December 2021

  • Designed and implemented backend system to accurately extract named entities of various forms from text streams.

Amazon - Seattle, WA

Software Development Engineering Intern • May 2021 - July 2021

  • Worked on the Alexa Brain Metrics and Learning team's utterance rewrite generation pipeline.
  • Created secure logging system for evaluation of model functionality in a distributed container workflow.
  • Designed, planned, and built system to securely handle up to 1800 TPS of 5KB confidential logs with negligible latency overhead on the critical path.

Zazraak - Bratislava, Slovakia

Development Intern • July 2020 - March 2021

  • Implemented an extractive text summarization algorithm based on key-term analysis, using the spaCy and NLTK Python NLP libraries.
  • Developed an automated end-to-end question generation and text summarization system for formative assessments in online education.
  • Led full-stack deployment of the pipeline as a Flask web app on GCP, with the frontend delivered as a Google Docs plugin.

Machine Learning for Social Media Lab - Ann Arbor, MI

Undergraduate Researcher • October 2019 - May 2020

  • Initiated, planned, and executed project comparing two RNN models' adherence to subreddit comment styles in generated responses to prompts.
  • Compiled and synthesized almost 100 academic papers into executive summaries during the meta-research phase.
  • Modified code from an ACL paper on LSTM seq2seq comment generation and built a custom scraper to selectively aggregate a dataset of almost 9000 comments.