Rishi Shiramshetti

About Me

Hello! My name is Rishi. I am a second-year Computer Science student at Santa Clara University. I started programming a few years ago, and I have enjoyed it ever since.

I am especially interested in agentic applications and data-driven solutions. I like how computers can make smart decisions when they have good data, and it is exciting to see how data helps us build better systems and understand problems more deeply.

I am still learning, but I like to build small projects and test new ideas. In the future, I want to build intelligent systems that help people in real life, whether by automating work or by making information easier to find.

Projects

Gradient

JUN 2025

A networking platform for university students built with modern web technologies.

React Firebase AI

Cineflow

APR 2025

A full-stack application that uses machine learning to generate smooth, professional transitions between video clips.

React ML Python

Reddit Comment Analysis

AUG 2025

A modular system for scraping and analyzing Reddit comments using clustering algorithms and semantic search.

Python ML NLP

Metis

ONGOING

Working on something big.

Work Experience

CanyonView Technology — Technology Intern

Jun 2023 – Aug 2023 | Remote

  • Developed and implemented a Lua script that applies the haversine formula to GPS coordinates to improve rover navigation precision.
  • Conducted comprehensive testing and created technical documentation detailing methodologies, implementation challenges, and solutions.

Mathnasium — Mathematics Tutor

Feb 2023 – Jun 2023 | Santa Clara, CA

  • Tutored 15+ students in mathematics, developing individualized learning plans that improved grades by an average of one letter grade.
  • Provided regular progress reports to parents and fostered positive learning relationships to ensure student success.

Writing

Coming soon.

Project: Gradient

As a student at Santa Clara University, I often wanted to meet other students who shared my interests. Sometimes I wanted to find someone who was into machine learning, or a senior who had interned at Google. But finding people like that was not easy. Everyone was busy, and there was no single place to connect with others who shared similar goals.

That is how the idea for Gradient came to me.

Gradient is a simple website that helps students, alumni, and professionals from Santa Clara University find each other. It is built using modern tools like React and Firebase, but more than that, it is about people and connections. The goal is to make it easier for students to discover others who are like them or who can help them grow.

How It Works

When you open Gradient, you first see the homepage. At the top, there is a clean navigation bar. Below it, a big headline types itself out like a typewriter, saying things such as "Find students like you" or "Connect with professionals." Under that, colorful cards describe what Gradient does.

If you scroll down, you reach the search page, which is the heart of the app. You can type something like:

  • "Data scientists in California"
  • "Software engineers who like startups"
  • "Alumni working at Tesla"

After you search, Gradient uses AI to find people whose profiles match what you wrote. It does not just look at the words; it tries to understand the meaning behind them. For example, if you write "AI developer," it can also show results for people who say "machine learning engineer."

This is called semantic search. It means the AI looks for meaning, not only exact matches.
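
To give a feel for what happens under the hood, here is a tiny Python sketch of embedding-based matching. It assumes the OpenAI embeddings API, and the model name is my own choice for illustration; it is not Gradient's actual code.

  # Tiny sketch of embedding-based matching (illustrative, not Gradient's code).
  import numpy as np
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def embed(text: str) -> np.ndarray:
      # Turn text into a vector that captures its meaning.
      resp = client.embeddings.create(model="text-embedding-3-small", input=text)
      return np.array(resp.data[0].embedding)

  query = embed("AI developer")
  profile = embed("machine learning engineer")

  # Cosine similarity is high when two texts mean similar things,
  # even if they share no words.
  score = float(np.dot(query, profile) / (np.linalg.norm(query) * np.linalg.norm(profile)))
  print(f"{score:.0%} match")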

What You See

After you search, you get a list of people shown as small cards. Each card shows their name, job title, and a percentage that tells how close they are to your search. For example, it might say "87% match."

When you click on a card, it opens their full profile. There you can see more information, such as where they studied, what jobs they have done, and what they wrote about themselves. Some profiles also have a link to their LinkedIn page.

If the AI system ever stops working, Gradient does not just break. It has a backup system that uses Firebase Firestore, so you can still see some profiles.
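
The fallback idea is simple: try the AI search first, and if it fails, read profiles straight from Firestore. Here is a rough sketch of that logic in Python, purely for illustration; Gradient itself uses the Firebase JavaScript SDK in the browser, and these names are hypothetical.

  # Illustrative fallback logic; Gradient's real code runs in the browser.
  from google.cloud import firestore

  def semantic_search(query: str) -> list[dict]:
      # Stand-in for the AI-backed search; imagine it calls an embeddings API.
      raise RuntimeError("AI service unavailable")

  def search_profiles(query: str) -> list[dict]:
      try:
          return semantic_search(query)
      except Exception:
          # The AI is down: fall back to plain Firestore reads so users
          # still see some profiles instead of a broken page.
          db = firestore.Client()
          docs = db.collection("profiles").limit(20).stream()
          return [d.to_dict() for d in docs]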

How It Is Built

Gradient is made using React, a popular tool for building web apps. The design is done with Tailwind CSS, which helps make it look simple and modern. The app uses soft gradient colors, mostly dark gray and blue with a touch of cyan. It feels calm, like a quiet night in the library while you code with music in the background.

The information about users is stored in Firebase Firestore, which is like a notebook in the cloud.

Why It Is Called "Gradient"

The name "Gradient" has two meanings. In design, a gradient means a smooth change of colors, one turning into another. The app does the same thing by connecting people smoothly, from one person to another.

In machine learning, a gradient tells a model which direction to adjust its parameters so it can learn and improve. The name felt right because the app is also about learning, improving, and connecting.

Challenges

When I started building Gradient, I faced many problems. Sometimes the AI API stopped working, and the search gave strange results. I had to make a backup system using Firebase so the app could still work.

There were also times when the design looked fine on my laptop but terrible on a phone. I had to learn how to make it responsive so it would look good everywhere.

One time, the gradient background was not showing on some Android phones. I spent two evenings trying to fix it. Finally, I changed one small line of CSS, and it worked. I remember feeling very happy after that small victory.

The Future

Right now, Gradient is a small working demo. You can try it at https://www.gradientscu.xyz.

In the future, I want to add a login system so users can create their own profiles. I also want to make a chat feature so students can message each other. Another idea is to make a mobile app version that people can use more easily.

Later, I hope the AI will become even smarter. Maybe it can suggest connections by itself. For example, it could say, "You and Sarah both study data science. Would you like to connect?"

What I Learned

Building Gradient taught me many things. I learned about programming, design, and also patience. I understood that good software is not only about writing code. It is also about making something that people can actually use and enjoy.

Sometimes I look at Gradient and think about when it was just an empty white screen. Now it is a real web app that connects people. It makes me proud to see something I imagined become real.

Project: Cineflow

When I first started learning about video editing, I often noticed that one part took a lot of time. Making smooth transitions between two clips was always tricky. If the timing was not right or the lighting looked different, the final video would not feel natural. Good transitions make a video flow nicely, but they can take hours to perfect.

That problem gave me the idea for cineflow.

cineflow is an AI-powered video editing tool that helps create smooth and professional transitions between two clips automatically. It uses machine learning and computer vision to study your videos and then blend them together in a natural way. You just upload your clips, describe what kind of transition you want, and cineflow does the rest.

This project became something very special for me and my team. We built cineflow during the SCU x AIC Hackathon in April 2025, and it won first place.

How cineflow Works

Imagine you have two short videos. In one, you are walking through a park, and in the next, you are sitting in a coffee shop. If you put them together without editing, the cut between them feels too sudden. But with cineflow, you can upload both videos and ask the AI to create a smooth transition. You can even write a small description, like "soft cinematic fade" or "quick flash cut."

cineflow looks at both clips carefully. It uses computer vision to understand what is happening in each scene, such as lighting, angle, and motion. Then it uses machine learning to create the best possible transition based on your request.

After a few seconds, it gives you a preview, and you can download the finished video.

The idea is to make professional-quality editing easy for everyone, even for people who have never used complicated tools like Adobe Premiere or After Effects.

The Mission Behind cineflow

Our main goal was to make high-quality video editing simple and open to everyone. Many people, like students, small creators, or casual editors, want to make good videos but do not have the tools or time to do it.

With cineflow, all they need is a browser. They can drag, drop, and describe what they want, and AI handles the hard part.

For example, if a student wants to make a short film for class, cineflow can automatically add transitions between scenes to make it look professional.

The Architecture in Simple Words

cineflow has two main parts: the frontend and the backend.

The frontend is what the user interacts with. It is built with React and Vite to keep it fast and responsive, with Tailwind CSS for clean design and Radix UI for accessibility. It uses Axios to talk to the backend server.

The backend is where the AI lives. It uses Node.js and Express for the web server and Python for the machine learning tasks. The AI model we used is called a Vision Transformer, or ViT, which looks at the videos frame by frame and understands the scenes.

Here's what happens when a user creates a transition:

  1. The two video clips are uploaded to Cloudinary, a cloud service for videos and images.
  2. The AI model checks each frame and studies its motion, lighting, and camera angle.
  3. The model classifies the type of scene, such as "close-up," "wide shot," or "slow motion."
  4. Based on what it finds, it chooses a matching transition style.
  5. It blends the clips and uploads the finished video back to Cloudinary.
  6. The user gets a link to preview or download it.

The entire process usually takes less than a minute.
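
Step 4 is essentially a decision rule that maps the two scene labels (plus the user's text prompt) to a transition style. A toy Python version of that rule might look like this; the thresholds and style names are my guesses, not cineflow's actual logic:

  # Toy transition-selection rule (step 4). cineflow's real choices come from
  # the ML model; these rules and names are illustrative guesses.
  def pick_transition(scene_a: dict, scene_b: dict, prompt: str) -> str:
      if "fade" in prompt.lower():
          return "soft cinematic fade"   # honor an explicit user request
      if scene_a["motion"] == "fast" and scene_b["motion"] == "slow":
          return "smooth fade"           # ease the change of pace
      if abs(scene_a["brightness"] - scene_b["brightness"]) > 0.4:
          return "balanced dissolve"     # soften a big lighting jump
      return "quick cut"                 # similar scenes can cut directly

  print(pick_transition({"motion": "fast", "brightness": 0.8},
                        {"motion": "slow", "brightness": 0.3},
                        prompt="dreamy and calm"))  # prints: smooth fade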

The AI and Machine Learning Inside cineflow

The Vision Transformer is the brain of cineflow. It breaks each frame into small image patches and learns what they mean together. It can tell the difference between a slow-motion clip and a fast-moving one, or between a bright outdoor shot and a dark indoor scene.

It classifies scenes into ten categories, such as slow motion, wide shot, close-up, or cinematic lighting. Using this information, cineflow knows which kind of transition will look most natural.

For example, if the first clip is fast and the second is slow, cineflow will make a smooth fade instead of a hard cut. If the lighting changes a lot, it adjusts the blend to make it feel balanced.

The AI was trained using cinematic footage and optimized to work quickly, even on regular computers.
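
To show the mechanics of classifying a frame with a ViT, here is a short sketch using Hugging Face's transformers library and OpenCV. The generic ImageNet checkpoint is only a stand-in; cineflow's model was fine-tuned on its ten scene categories.

  # Frame classification with a Vision Transformer (mechanics only; the
  # ImageNet checkpoint stands in for cineflow's fine-tuned model).
  import cv2
  import torch
  from transformers import ViTForImageClassification, ViTImageProcessor

  processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
  model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

  def classify_frame(video_path: str, frame_index: int = 0) -> str:
      cap = cv2.VideoCapture(video_path)
      cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
      ok, frame = cap.read()
      cap.release()
      if not ok:
          raise ValueError(f"could not read frame {frame_index}")
      rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads frames as BGR
      inputs = processor(images=rgb, return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits
      return model.config.id2label[int(logits.argmax(-1))]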

Features That Make cineflow Special

  • Automatic transitions: The AI studies your videos and creates seamless transitions automatically.
  • Custom options: You can control transition duration, aspect ratio, and resolution.
  • Text-based style prompts: You can type things like "dreamy blur fade" or "bright flash cut," and the AI will understand.
  • Real-time preview: You can watch your transition before downloading.
  • Multi-platform support: Works for YouTube, TikTok, Instagram, and more.
  • Batch processing: You can generate several transitions at once if needed.

How to Use cineflow

Using cineflow is easy.

  1. Open the app in your browser.
  2. Upload two video clips.
  3. Set how long you want the transition to be.
  4. Choose the aspect ratio and resolution.
  5. Type a short description of your desired style.
  6. Click "Generate Transition."
  7. Watch your preview and download the final video.

Everything is handled online, and there is no need for complex setup.

The Journey of Building cineflow

During the hackathon, our small team worked very hard. One of us focused on training the AI model, another built the web interface, and another handled the backend. We had to connect everything together in just a few days.

There were many challenges. Sometimes the AI created strange transitions, especially when the two clips were very different. Other times, our server crashed because of heavy video files. We spent long nights debugging and testing.

But when we finally saw the first perfect transition play smoothly from one clip to another, everyone cheered. It was such a proud moment.

During the demo, the judges were impressed by how cineflow made complex editing so simple. When we were announced as the first-place winners, it felt like all the effort was worth it.

The Future of cineflow

We want to make cineflow even better. One big plan is to connect it with Adobe Creative Cloud, so users can use cineflow directly inside their Adobe apps. We also want to build a mobile version for creators who edit on their phones.

In the future, cineflow might also learn your editing style and suggest transitions that match your preferences. It could even add small creative effects like particles or color grading automatically.

Our dream is to turn cineflow into a creative assistant that helps storytellers bring their ideas to life easily.

What I Learned

Building cineflow taught me more than just programming. It showed me how teamwork, creativity, and persistence can turn a small idea into something real. I learned that AI is not only for research or science. It can also be used for art and storytelling.

When I look at cineflow now, I feel proud. It started as a simple idea to make editing faster, but it became something that helps people express themselves through video. Every time I see a smooth transition made by cineflow, it reminds me of that hackathon night and the excitement of watching our idea come alive.

That feeling is what makes building projects like this truly special.

Project: Reddit Comment Analysis

When I first started exploring Reddit, I was amazed by how many people shared their opinions, stories, and questions in the comments. But there was one big problem: there were too many comments. Thousands of them. It was impossible to read everything or understand what people were really talking about.

That made me curious. What if I could build a system that could read, clean, and organize Reddit comments automatically? What if it could find patterns, group similar opinions, and even help me search semantically for ideas inside all that text?

That is how the Reddit Comment Analysis system was born.

It is a modular tool that can scrape, process, and analyze Reddit comments using clustering and semantic search. In simple terms, it helps turn messy conversations into understandable insights.

How the System Works

The system has three main parts:

  1. Data Collection
  2. Text Processing
  3. Analysis and Search

Each part works like a small worker in a team, passing cleaned and organized data to the next one.

Step 1: Collecting Comments

The first part is called reddit_scraper.py. It talks directly to Reddit's API to collect comments from a post.

When you give it a Reddit link, it turns that link into a JSON endpoint and starts downloading all the comments. It doesn't just take the top-level ones; it also goes deep into the nested replies so nothing is missed.

Each comment is saved with its author name, score, timestamp, and content. Everything is stored in a file called reddit_comments.jsonl so that the next layer can use it easily.

For example, if I give the scraper a link to a Reddit thread about "AI taking jobs," it collects all comments and their replies — hundreds or even thousands — and saves them neatly.
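
Here is a minimal sketch of that scraping step. The real reddit_scraper.py differs in its details, but the trick is real: Reddit serves any thread as JSON if you append ".json" to its URL, and the comments arrive as a nested tree you can walk recursively.

  # Minimal scraper sketch; the thread URL at the bottom is a placeholder.
  import json
  import requests

  def fetch_comments(thread_url: str) -> list[dict]:
      resp = requests.get(thread_url.rstrip("/") + ".json",
                          headers={"User-Agent": "comment-analysis-demo/0.1"})
      resp.raise_for_status()
      comments = []

      def walk(children):
          for child in children:
              data = child.get("data", {})
              if child.get("kind") == "t1":  # "t1" marks a comment
                  comments.append({"author": data.get("author"),
                                   "score": data.get("score"),
                                   "created_utc": data.get("created_utc"),
                                   "body": data.get("body")})
              replies = data.get("replies")
              if isinstance(replies, dict):  # recurse into nested replies
                  walk(replies["data"]["children"])

      walk(resp.json()[1]["data"]["children"])  # element 1 holds the comment tree
      return comments

  with open("reddit_comments.jsonl", "w") as f:
      for c in fetch_comments("https://www.reddit.com/r/test/comments/abc123/placeholder"):
          f.write(json.dumps(c) + "\n")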

Step 2: Cleaning the Text

Raw Reddit comments are messy. They often include usernames, URLs, emojis, and formatting symbols. So the next step, handled by text_cleaner.py, is to clean and normalize them.

The cleaner strips out noise, makes everything lowercase, deletes stopwords, and removes duplicates. It even compares similar comments and filters out near-duplicates using a similarity threshold of 0.8, meaning two comments that are at least 80 percent alike are treated as one.

After this process, the cleaned text is saved in another file called cleaned_comments.jsonl. This is the version we use for the actual analysis.
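
A condensed sketch of the cleaning pass is below. The real text_cleaner.py is more thorough (it also removes stopwords and emojis), and its similarity measure may differ, but a 0.8 near-duplicate threshold behaves like this:

  # Condensed cleaning sketch; the real cleaner also handles stopwords, emojis, etc.
  import difflib
  import re

  def clean(text: str) -> str:
      text = text.lower()
      text = re.sub(r"https?://\S+", "", text)   # strip URLs
      text = re.sub(r"/?u/[\w-]+", "", text)     # strip usernames like u/someone
      return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

  def dedupe(comments: list[str], threshold: float = 0.8) -> list[str]:
      kept: list[str] = []
      for c in comments:
          # Keep a comment only if it is less than 80% similar to every kept one.
          if all(difflib.SequenceMatcher(None, c, k).ratio() < threshold for k in kept):
              kept.append(c)
      return kept

  print(dedupe([clean("AI will take our JOBS! https://example.com"),
                clean("ai will take our jobs!")]))  # near-duplicates collapse to one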

Step 3: Clustering and Semantic Search

Now comes the main part, handled by semantic_search_engine.py. This is where the AI starts to understand what people are saying.

It works through a class called CommentClusterer, which runs several steps:

  • It loads the cleaned comments from the file.
  • It creates embeddings for each comment using the OpenAI Embeddings API, which turns text into numerical vectors that capture meaning.
  • It finds patterns and groups similar comments together using K-Means clustering.
  • It then visualizes those clusters on a graph using PCA, showing which comments are close in meaning.

For example, if people are talking about "AI jobs," one cluster might include comments about automation, another about job fear, and another about AI benefits.

This helps you see how people feel or what topics dominate the discussion.
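
Here is a simplified version of what CommentClusterer does, assuming the OpenAI embeddings API and scikit-learn (the model name and cluster count are chosen for illustration):

  # Simplified CommentClusterer sketch: embed, cluster, project to 2D.
  import matplotlib.pyplot as plt
  import numpy as np
  from openai import OpenAI
  from sklearn.cluster import KMeans
  from sklearn.decomposition import PCA

  client = OpenAI()

  def embed_all(comments: list[str]) -> np.ndarray:
      resp = client.embeddings.create(model="text-embedding-3-small", input=comments)
      return np.array([d.embedding for d in resp.data])

  comments = ["automation will replace factory work",
              "i am scared of losing my job to ai",
              "honestly ai makes my job easier"]
  vectors = embed_all(comments)

  labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)  # group by meaning
  points = PCA(n_components=2).fit_transform(vectors)            # flatten to 2D

  plt.scatter(points[:, 0], points[:, 1], c=labels)
  plt.savefig("clusters_visualization.png")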

Understanding the Clusters

The clustering system uses Silhouette Analysis and the Elbow Method to find the best number of clusters. Usually, it tests between three and five.
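
The scan itself is short. Here is a sketch of the silhouette version with scikit-learn; the Elbow Method works the same way but tracks inertia instead of silhouette score.

  # Try k = 3..5 and keep the k with the best silhouette score
  # (higher means tighter, better-separated clusters).
  from sklearn.cluster import KMeans
  from sklearn.metrics import silhouette_score

  def best_k(vectors, candidates=(3, 4, 5)) -> int:
      scores = {}
      for k in candidates:
          labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
          scores[k] = silhouette_score(vectors, labels)
      return max(scores, key=scores.get)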

Once the clusters are ready, you can generate a report that describes what each group is about. For example, it might say:

  • Cluster 1: Comments about fear of losing jobs to AI.
  • Cluster 2: Comments discussing government regulations.
  • Cluster 3: Comments that are optimistic about new technology.

Each cluster gives a snapshot of public opinion.

Searching with Meaning

Sometimes, you do not want to read all the clusters. Maybe you just want to find comments about something specific, like "AI job displacement fears."

That is where semantic search comes in. You type your query, and the system turns it into an embedding (a vector). Then it compares that vector to all the comment embeddings using cosine similarity.

It does not just look for matching words. It looks for meaning. For example, if you search for "fear of automation," it might also find comments about "robots replacing humans."

The system returns the top matching comments along with their similarity scores, and you can save the results as a report.
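
The ranking step is just cosine similarity applied to every comment. Reusing the embed_all helper from the clustering sketch above, it could look like this:

  # Rank comments by cosine similarity to the query embedding.
  import numpy as np

  def cosine(a: np.ndarray, b: np.ndarray) -> float:
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  def search(query_vec: np.ndarray, comment_vecs: np.ndarray,
             comments: list[str], top_k: int = 5) -> list[tuple[float, str]]:
      scored = [(cosine(query_vec, v), c) for v, c in zip(comment_vecs, comments)]
      return sorted(scored, reverse=True)[:top_k]  # best matches first, with scores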

What You Get

After running the full pipeline, you get several useful files:

  • reddit_comments.jsonl: Raw Reddit comments with metadata.
  • cleaned_comments.jsonl: Comments after cleaning and deduplication.
  • clusters_visualization.png: A 2D image showing how comment groups relate to each other.
  • cluster_analysis_report.txt: A text summary of what each cluster represents.
  • unified_analysis_report.txt: A combined view of clustering and search results.

All this helps turn thousands of comments into something readable and meaningful.

How It Performs

The system can handle about 1,000 comments using around 6 MB of memory. It costs around $0.02 for embedding those comments using the smaller OpenAI model. It can process batches of 100 comments at a time.
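
Batching is what keeps the embedding step fast and cheap: one API call per 100 comments instead of one call per comment. A sketch (text-embedding-3-small is my assumption for "the smaller OpenAI model"):

  # Embed comments 100 at a time; one request per batch keeps costs low.
  from openai import OpenAI

  client = OpenAI()

  def embed_in_batches(comments: list[str], batch_size: int = 100) -> list[list[float]]:
      vectors = []
      for i in range(0, len(comments), batch_size):
          batch = comments[i:i + batch_size]
          resp = client.embeddings.create(model="text-embedding-3-small", input=batch)
          vectors.extend(d.embedding for d in resp.data)
      return vectors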

It works best for English comments, since that is what the embeddings understand best.

Example Use

Let's say you want to analyze a Reddit thread about "climate change."

  1. You give the Reddit link to the scraper.
  2. It collects all comments.
  3. Then you run the cleaner, which removes links, usernames, and emojis.
  4. Next, you run the analyzer. It groups the comments into clusters, like:
    • People worried about rising temperatures.
    • People debating government policy.
    • Users sharing hopeful technology solutions.
  5. Finally, you can search within those comments for something like "solar energy" or "climate denial" and instantly get the most relevant results.

This turns an unreadable wall of text into something that feels like a summarized discussion.

What I Learned

Building this project taught me how to mix data science with language understanding. It also helped me see how online conversations can be studied without reading every single message.

Sometimes, data can reveal feelings and ideas that are not visible at first glance. When I first saw the clusters on the graph and noticed that one group of comments was full of fear while another was full of hope, it felt like the system had learned to understand people in a small but real way.

The Reddit Comment Analysis project started as a simple idea to organize text, but it became something more meaningful. It showed me how we can use AI not just to process data but to better understand human voices in the digital world.