Profile Info

Prashanth Sandela


Summary

Prashanth Sandela is pursuing a Master's degree at San Jose State University. He has worked with various tools and technologies and is a coding enthusiast with an interest in code optimization. He is currently learning Big Data.


Experience

Web Application Developer  at   San Jose State University
March 2014  -  Present (5 months)

Development and maintenance of the website of the Lucas College and Graduate School of Business, San Jose State University

Associate Software Engineer  at   Trianz
July 2012  -  January 2014  (1 year 7 months)
Experience with different technologies and tools:
=> Databases: Teradata, MySQL, Oracle
=> Tools: Teradata, Informatica, Pentaho




Volunteer Experience

Organizer for Loop Holes  at   Indian Institute of Technology, Kanpur
September 2011  -  September 2011  (1 month)
Organized and managed the Loop Holes hacking sessions held at Mahatma Gandhi Institute of Engineering Technology. I was responsible for managing all the students.



Certifications

Big Data Training Course Certification
Big Data University       February 2013

Hadoop Fundamentals Training Certificate
Bigdatauniversity.com       February 2013

Up and Running with CakePHP
lynda.com   License 63D45C    June 2014

C/C++ Essential Training
lynda.com   License A4D64C    June 2014


Courses

Master's degree, Computer Science
San Jose State University

Information Retrieval
Big Data
Design and Analysis of Algorithms




Projects

ETL Testing
July 2012 to December 2012
Members: Prashanth Sandela, Kalpana Patlavath

1) Developed mappings using Informatica PowerCenter and developed ETL test scripts for Teradata
2) Managed a team of two
3) Communicated with the client

Data Migration
December 2013 to Present
Members: Prashanth Sandela, Kalpana Patlavath, Vijayalakshmi Gangu, Raghuveer Metla, Sumitha P, Anurag Gupta


Social Network analysis using Hadoop
February 2014 to Present
Members: Prashanth Sandela, Siddartha Reddy

Our goal is to identify which people know each other (e.g., friends, acquaintances) by looking at when and where they've been.


We are using a data set of six million check-ins. The idea is that if two people have been to the same place at about the same time, they probably know each other; the more often we see them together like this, the stronger the inferred friendship.


Once we have this basic information, we can construct a node-and-edge graph and run various algorithms on it. For example, finding the connected components within a certain hop distance of a node would identify social networks (e.g., your friends, and friends of friends).
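The co-occurrence idea above can be sketched as follows. The tuple layout and the one-hour time bucket are illustrative assumptions, not details of the project's actual implementation.

```python
from collections import defaultdict
from itertools import combinations

def infer_friendships(checkins, bucket_seconds=3600):
    """Count how often each pair of users checks in at the same place
    within the same time bucket; higher counts imply stronger ties.
    Each check-in is a (user, place, unix_timestamp) tuple."""
    # Group users by (place, time bucket)
    groups = defaultdict(set)
    for user, place, ts in checkins:
        groups[(place, ts // bucket_seconds)].add(user)
    # Count co-occurrences for each unordered pair of users
    pair_counts = defaultdict(int)
    for users in groups.values():
        for a, b in combinations(sorted(users), 2):
            pair_counts[(a, b)] += 1
    return dict(pair_counts)

checkins = [
    ("alice", "cafe", 100), ("bob", "cafe", 200),  # same place, same hour
    ("alice", "gym", 9000), ("bob", "gym", 9100),  # together again
    ("carol", "cafe", 50000),                      # alone, no pair
]
print(infer_friendships(checkins))  # {('alice', 'bob'): 2}
```

The pair counts become weighted edges; thresholding them and taking connected components then approximates the social circles described above.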

Designing a Search Engine on Wikipedia
February 2014 to Present
Members: Prashanth Sandela, Sai Kiran Reddy Padooru, Saravana Gajendran

We obtained a Wikipedia data set of around 50 GB; the dump is purely XML. The overall idea was to design a search engine on top of this data set.


Responsibilities:
1) Analyzed the data set
2) Managed and assigned tasks for a team of three members
3) Worked as a Java developer
4) Kept track of the project status
5) Maintained healthy and effective communication with team members


Developer Responsibilities:
1) Analyzing the data set
2) Converting the entire data set into tokens and their respective IDs
3) Creating a posting list
4) Sorting the posting list using external sort
5) Compressing the dictionary using block and front coding
6) Compressing IDs using ID gaps and variable byte encoding
7) Generating indexes
8) Identifying nodes and edges, and creating a web graph of inlinks
9) Ranking all documents by their connectivity
10) Creating a search box for returning documents
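The ID compression step above (gaps plus variable byte encoding) can be illustrated with a short sketch; this is an illustrative reimplementation, not the project's actual code.

```python
def vbyte_encode(numbers):
    """Variable-byte encode non-negative integers: 7 data bits per byte,
    with the high bit set only on a number's final byte."""
    out = bytearray()
    for n in numbers:
        chunk = [n & 0x7F]
        n >>= 7
        while n:
            chunk.append(n & 0x7F)
            n >>= 7
        chunk.reverse()
        chunk[-1] |= 0x80  # terminator flag on the last byte
        out.extend(chunk)
    return bytes(out)

def vbyte_decode(data):
    numbers, n = [], 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if b & 0x80:        # last byte of this number
            numbers.append(n)
            n = 0
    return numbers

def compress_postings(doc_ids):
    """Store a sorted posting list as its first ID followed by gaps;
    gaps are small, so they encode into few bytes."""
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    return vbyte_encode(gaps)

def decompress_postings(data):
    ids, total = [], 0
    for gap in vbyte_decode(data):
        total += gap
        ids.append(total)
    return ids

postings = [824, 829, 215406]
packed = compress_postings(postings)  # gaps 824, 5, 214577 -> 6 bytes
assert decompress_postings(packed) == postings
```

Three 4-byte document IDs shrink to six bytes here; over millions of postings, this kind of gap encoding is what makes the compressed token files dramatically smaller than the raw ones.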


Report of Data Set and Outputs:
1) Size of input data set: ~43 GB
2) No. of nodes: 14,349,277
3) No. of edges: 58,635
4) Size of tokens (before sorting): ~13 GB
5) Size of the ID-to-title map: ~600 MB
6) Size of tokens after compression (after sorting): ~4.3 GB
7) Size of the web graph: ~24 MB

Big Data Analytics Using Splunk
March 2014 to May 2014
Members: Prashanth Sandela, Siddartha Reddy, Chris Rehfeld

# We chose to work on a data set provided by bayareabikeshare.com (a bike rental company in the Bay Area). We had six months of data to play with, including station, trip, inventory, and weather data.
# We correlated the given data sets and went on to answer various interesting questions.

Automated Feedback Management System
January 2011 to Present
Members: Prashanth Sandela, Lavanya Pandramish

Automated Feedback Management System is a web application that manages feedback online. It was designed to replace the traditional paper-based feedback process. The system also involves business analytics.


Roles and Responsibilities:
1) Served as an active team member
2) Worked as a data modeler and web designer
3) Played an active role in documenting the project


Education

San Jose State University
Master's degree, Computer Science, 2014 - 2016
Grade:  A
Activities and Societies:  Working as a Web Designer

Mahatma Gandhi Institute of Technology
Bachelor of Technology (B.Tech.), Computer Science, 2008 - 2012
Grade:  A
Activities and Societies:  Programmer, Web Designer, PHP, Ethical Hacker, Database, C, C++, JAVA



Interests

Automation

