Access Code 2.1 Final Project Proposal

App: MomenTune
Team: SoundShakes
Team Members: Tasha Smith, George Syrimis, John Gomez, Jorge Reina

The Problem

Listening to the same music becomes boring and repetitive after a while. Your options are limited: hunt for new music you like, hope you can somehow teach yourself to play, or simply resign yourself to what you're given.

Apps that try to create a theme or playlist for you are either riddled with ads unless you pay, or loop the same playlist without adding new content until you change your preferences, which gets old fast. Making your own music would solve the problem, but most people don't know how and don't want to learn, and even those who do face a steep learning curve. The music-making apps currently on the market are either geared towards professionals, with steep learning curves of their own, or too simple and gimmicky to be useful. Our user base is audio artists and anyone interested in exploring soundscapes who wants to create their own music but has neither the time to learn how nor an application that does it well.

The Solution

People sometimes want to create their own music, but the apps out there are either too simple to be useful or built for professionals. MomenTune attempts to solve this by providing the user with a motion-based digital sequencer and sampler that creates dynamically pleasing sounds from device movement and user interaction. Our application will bridge the usefulness gap by giving the user an intuitive way to interact with their music. Using the phone's sensor framework, we'll create a distinct, natural connection between user and instrument, without requiring the user to know music theory and without it feeling like a toy. A visualizer will add a degree of depth to the sound, and a pleasing, easy-to-use UI will make sure the user enjoys their experience with the app.

Baseline Features (By Demo 1) "It's a moonshot."

  • Life module: Detects breakpoints in the accelerometer data to modulate the output of digital frequencies and create music, useful while walking or running. The idea is that the generated sound harmonizes in tempo and pitch with the movement the sensors detect over time and distance.
  • Record and playback your creations.
  • Save tracks and overlay them.
  • The ability to elegantly randomize the music it makes so that it sounds consistent but never the same.
  • The ability to save performances onto the device.
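As a rough illustration of how the Life module's sensor-to-sound mapping might work, the sketch below snaps accelerometer magnitude onto a pentatonic scale so the output stays consonant however the device moves. This is a minimal sketch under our own assumptions: the `MotionPitchMapper` class, the resting-magnitude threshold, and the scale choice are all hypothetical, not a final design.

```java
// Hypothetical sketch: map one accelerometer magnitude sample to a note
// frequency on a pentatonic scale, so output stays consonant as motion varies.
public class MotionPitchMapper {
    // C-style major pentatonic intervals, in semitones above the root.
    private static final int[] PENTATONIC = {0, 2, 4, 7, 9};
    private final double rootHz;

    public MotionPitchMapper(double rootHz) {
        this.rootHz = rootHz;
    }

    // magnitude: sqrt(x^2 + y^2 + z^2) from the accelerometer, in m/s^2.
    // Near rest (~9.8, gravity only) maps to the root note; stronger
    // motion climbs the scale, capped at the top of the pentatonic array.
    public double frequencyFor(double magnitude) {
        double rest = 9.8;                         // gravity-only baseline
        double excess = Math.max(0.0, magnitude - rest);
        int step = (int) Math.min(PENTATONIC.length - 1, excess / 2.0);
        int semitones = PENTATONIC[step];
        return rootHz * Math.pow(2.0, semitones / 12.0);
    }
}
```

Because every reachable pitch sits on the same scale, "random" motion still sounds consistent, which is the same property the randomization feature above aims for.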

Bonus features

  • sAmp module: Integrate a sampler (with prepackaged libraries)
  • SeeQ module: Integrate a sequencer.
  • Integrate a custom visualizer
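To make the SeeQ bonus feature concrete, here is a minimal sketch of what the sequencer's core data structure could look like. The `StepSequencer` class and its API are our own illustration, not a committed design; a real implementation would drive `tick()` from an audio clock.

```java
// Hypothetical sketch of the SeeQ module's core: a fixed-length step
// sequencer. Each step holds a sample index to trigger, or -1 for a rest.
public class StepSequencer {
    private final int[] steps;   // sample index per step, -1 = rest
    private int position = 0;    // current playback step

    public StepSequencer(int length) {
        steps = new int[length];
        java.util.Arrays.fill(steps, -1);  // start with an empty pattern
    }

    public void set(int step, int sampleIndex) { steps[step] = sampleIndex; }

    public void clear(int step) { steps[step] = -1; }

    // Called once per clock tick; returns the sample to trigger (or -1)
    // and advances, wrapping back to step 0 so the pattern loops.
    public int tick() {
        int out = steps[position];
        position = (position + 1) % steps.length;
        return out;
    }
}
```

Saved tracks from the baseline features could then be overlaid simply by running several of these patterns against the same clock.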

Wireframe

Execution

Sprint Cycle

| Week | Date | Sprint |
| --- | --- | --- |
| Week 1 | 8/3-8/9 | Ideation and feasibility |
| Week 2 | 8/10-8/16 | Sensor catalogue and research: baseline technical specs for rudimentary signal output; plan signal mapping and how it ties in with the visualizer, sampler/sequencer work, and UI |
| Week 3 | 8/17-8/23 | Finish baseline implementation (working data input and analog output system); begin work on other features |
| Week 4 | 8/24-8/30 | Finalize: clean UI, streamline UX |
| Week 5 | 8/31-9/6 | First release demo; gauge user reaction, user testing, implement new features |
| Week 6 | 9/7-9/13 | Second release: new features implemented, feedback incorporated |
| Week 7 | Tue Sep 15, 7pm | Final Demo Day |

Team Responsibilities

Tasha:

  • UI: will build out a Material Design implementation that creates a pleasant experience for the user, and will work with the other team members to maintain consistency throughout the app. Will also work on the visualizer.

Jorge:

  • Will work with John to build the sequencer and the user interaction around it. Will help build UI elements and work with Tasha on the visualizer.

John:

  • Will work with Jorge and Tasha to build the sequencer and implement the visualizer. Will work with George to build the system that translates sensor data into analog sound.

George:

  • Will work with John to build the digital-to-analog system, help with debugging, and keep the group on track.

Every member of the team will have input on each member's part, to ensure communication and collaboration across the whole team.