
Minorities to Majorities: How Jasmine Chigbu is tackling the diversity gap and inspiring students to pursue their dreams

Jasmine Chigbu has always been interested in medicine and driven by her passion to help people. Chigbu, a first-year medical student with an undergraduate degree in clinical research and a master’s degree in biomedical sciences, noticed a distinct lack of diversity throughout her undergraduate, graduate, and professional experiences. She was often one of only a few women and ethnic minorities in the room, and that realization led her into entrepreneurship. While researching scholarships for her graduate degree, Chigbu realized she could share her extensive research to help others, and it became a personal passion project. She wanted to increase the representation of diverse groups by providing them with information about educational and professional opportunities, tools, and inspiration. Chigbu found a software development company and, through trial and error, built Minorities to Majorities, a mobile app that provides ethnic minority, female, LGBTQ, and international students with information about scholarship, internship, and fellowship opportunities.

“Be creative in the ways you want to reduce disparities. What’s creative in your approach? Minorities to Majorities is using tech. Use your niche—find a creative way to attack the same issue and you’ll have greater results. If you’re really passionate, that will keep you going.” 

While being underrepresented is challenging and at times isolating, Chigbu encourages people to use their voice, even if they feel suppressed, and to let go of imposter syndrome.  

“You might be under-represented, but you’re not under-qualified.”

Jasmine Chigbu, founder of Minorities to Majorities.

“Find your community—social media, friends, a community of people to support you and champion you.” 

Chigbu found support and mentorship from her boss while working at a biotech startup, and she was introduced to Nadiyah Johnson, founder of Jet Constellations, while pursuing her mission-driven project. Both are driven by their passion for promoting diversity and empowering underrepresented people—an instant partnership was formed. Johnson has helped develop MTM, and has joined their growing team as Operations Manager.

The Minorities to Majorities team.

MTM is driven by their mission to transform the lives of students one opportunity at a time. They’ve started a crowdfunding campaign to raise funds to build their second-generation app in order to better serve students, and they need your help. The funds will help expedite the scholarship, internship, and fellowship search and application process for students through AI, customized software, and an improved algorithm. The goal is to scale their platform and mobile app into the leading educational and professional platform connecting students to opportunities across the globe.

MTM plans to optimize their mobile app and platform through the following methods:

  • Develop an advanced algorithm with improved search and filtering functionality to provide users with curated experiences
  • Build a web-compatible platform to accompany the mobile app, improving user accessibility
  • Leverage artificial intelligence to continuously populate the opportunities database
  • Hire an enterprise development team to design, build, and configure the app

In order to close the diversity gap, Chigbu says we need to speak up and speak out. 

“Don’t be afraid to call out disparities. Call them out and provide examples, whether it’s at work or at school. Start small, one step at a time, making actionable steps.”

If you want to help, consider contributing to the campaign, or share it with your friends, family, and social networks!

Check out the campaign here!


Grace Hopper 2018 – Training generative adversarial networks: A challenge?

Our text-conditional convolutional GAN architecture; text encoding φ(t) is used by both the generator and the discriminator.

Today I had the pleasure of attending a very interesting workshop on generative adversarial networks. The goal of the workshop was to teach attendees about deep learning and Generative Adversarial Networks (GANs). In the lab we used PyTorch, an open source deep learning framework, to demonstrate and explore this type of neural network architecture. The lab comprised two major parts: an introduction to both PyTorch and GANs, followed by text-to-image generation.

The first part of the lab began by importing torch modules, creating a simple linear transformation model, and defining a loss function to quantify the difference between our model’s predictions and the ground truth.
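
As a rough sketch of that first exercise (the toy data, shapes, and model here are my own stand-ins, not the lab’s exact code), it looked something like this:

    import torch
    import torch.nn as nn

    # Hypothetical toy data: 100 samples drawn from a known linear rule.
    x = torch.randn(100, 1)
    y = 3 * x + 0.5                  # ground truth

    model = nn.Linear(1, 1)          # simple linear transformation: y = Wx + b
    loss_fn = nn.MSELoss()           # mean squared error against the ground truth

    pred = model(x)
    loss = loss_fn(pred, y)
    print(loss.item())               # how far the untrained model is from the truth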

Next we ran our model on a GPU! Earlier in the session we learned that GPUs work well for deep learning because they are inherently parallel; with a GPU, training a neural network can finish in minutes.
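
In PyTorch this amounts to placing the model and tensors on the right device. A minimal sketch, again with a placeholder model:

    import torch
    import torch.nn as nn

    # Use a GPU if one is available; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(1, 1).to(device)        # parameters now live on the device
    x = torch.randn(100, 1, device=device)    # create the input directly on the device
    pred = model(x)                           # the forward pass runs on the GPU when present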

We then began to focus more on GANs. The facilitators of the workshop shared that GANs are getting widespread attention in the deep learning community for their image generation and style transfer capabilities.
This deep learning technique uses two neural networks in an adversarial way to complete its objective.

One network is called the generator and the other the discriminator. The discriminator network is trained on a dataset comprised of real data and output from the generator network, and its objective is to discriminate between the two. The generator network’s objective is to fool the discriminator into classifying its output as real data. During training, the generator is updated to produce data that mimics the real data and fools the discriminator.

In this part of the lab attendees were tasked to:

  • Feed data into PyTorch using NumPy
  • Create a multi-layer network
  • Configure the generator and discriminator networks
  • Learn how to update the generator network
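
Putting those steps together, here is a minimal sketch of one adversarial training step; the architectures, dimensions, and optimizer settings below are simplified stand-ins rather than the lab’s exact networks:

    import torch
    import torch.nn as nn

    # Hypothetical sizes; the lab's networks were larger.
    noise_dim, data_dim, batch = 64, 784, 32

    generator = nn.Sequential(
        nn.Linear(noise_dim, 256), nn.ReLU(),
        nn.Linear(256, data_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    real_batch = torch.randn(batch, data_dim)   # stand-in for a batch of real data
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: push real data toward "real", generated data toward "fake".
    noise = torch.randn(batch, noise_dim)
    fake_batch = generator(noise)
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch.detach()), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: update the generator so the discriminator calls its output "real".
    g_loss = loss_fn(discriminator(fake_batch), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()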

In the second part of the lab we built upon the popular Deep Convolutional Generative Adversarial Network (DCGAN) to enable text-to-image generation. This part of the lab was based on the paper Generative Adversarial Text to Image Synthesis by Reed et al. Captions of images were encoded and concatenated with the input noise vector before being propagated through the generator. The encoded caption was then concatenated again with a feature map in the discriminator network after the fourth leaky Rectified Linear Unit (ReLU) layer. The goal of the second half of the lab was to create a text-to-image model using the GAN-CLS technique.
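
The following is a rough, scaled-down sketch of that conditioning idea rather than the lab’s exact DCGAN: the dimensions are hypothetical, and a single convolution stands in for the discriminator’s full stack (in the paper, the concatenation happens on a 4×4 feature map after the fourth leaky ReLU):

    import torch
    import torch.nn as nn

    # Hypothetical sizes; the real architecture uses deeper convolutional stacks.
    noise_dim, text_dim, feat_ch = 100, 128, 512

    class TextConditionalGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            # Project the concatenated (noise + caption encoding) vector to a
            # 4x4 feature map, then upsample to an image with transposed convs.
            self.net = nn.Sequential(
                nn.ConvTranspose2d(noise_dim + text_dim, 256, 4, 1, 0), nn.ReLU(),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(128, 3, 4, 2, 1), nn.Tanh(),   # 16x16 RGB image
            )

        def forward(self, z, text_emb):
            # Concatenate the caption encoding with the input noise vector.
            x = torch.cat([z, text_emb], dim=1).unsqueeze(-1).unsqueeze(-1)
            return self.net(x)

    class TextConditionalDiscriminator(nn.Module):
        def __init__(self):
            super().__init__()
            # Single conv layer standing in for the full convolutional stack.
            self.conv = nn.Sequential(
                nn.Conv2d(3, feat_ch, 4, 2, 1), nn.LeakyReLU(0.2),
            )
            self.head = nn.Sequential(
                nn.Conv2d(feat_ch + text_dim, 1, 8), nn.Sigmoid(),
            )

        def forward(self, img, text_emb):
            feat = self.conv(img)                     # (B, feat_ch, 8, 8)
            # Replicate the caption encoding spatially and concatenate it
            # with the feature map before the final classification.
            t = text_emb[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
            return self.head(torch.cat([feat, t], dim=1)).view(-1, 1)

    # Smoke test with random tensors standing in for encoded captions.
    G, D = TextConditionalGenerator(), TextConditionalDiscriminator()
    z = torch.randn(4, noise_dim)
    phi_t = torch.randn(4, text_dim)
    img = G(z, phi_t)              # (4, 3, 16, 16)
    score = D(img, phi_t)          # (4, 1) real/fake probability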

We demonstrated the capability of our model to generate plausible images of pizzas and broccoli from detailed text descriptions. While this was just an exercise for learning purposes, it’s clear that there are many powerful applications of this deep learning technique.