
Meeting Minutes

Randy Chen edited this page Jun 5, 2017 · 14 revisions

These are the meeting minutes from the second project of CIS 422, recorded by Randy Chen.

5-1-17

Outline of application:

Back-end:
- Image-processing API
- Tagging
- Sort by food
- Call Yummly / recipe API
- Recipes: text; list; picture of food?

Front-end:
- Pictures, interface, input
- Check/confirm valid food
- Display results; other user choices

For next meeting (Thurs 5/4, 11:30 AM):
- Group: slides for the presentation
- TJ: look up recipe APIs and how they work (what's the I/O of the data?)
- Randy: understand the capabilities of image-tagging APIs; set up Slack
- Je Min: figure out the flow of the app; what will the user want in such an app?

5-2-17
New member:
Daniel Su

Welcome to the team!

5-4-17
Je couldn't make the meeting at the last minute. We decided to use something similar to the SnapIt! app for the image tagging.
Still testing out different recipe APIs. Daniel will work out a Balsamiq mock-up over the coming weekend.

TODO for next meeting:
- Demo image tagging (R)
- Demo recipe generation (T)
- Need: form input/output
- Start the front end
- Given toy data, decide what will be displayed at each step of the app
- Mock-up of pages

5-9-17
Met to update each other on progress.
We have yet to decide which API to use for processing the image.
Je and Daniel are working on the mock-ups in Balsamiq.

5-11-17
Everyone studied for midterms and had other homework to do.
We were able to set a regular schedule for meetings.
From the drawing board:

Front end:
- Start screen
- Need a server → using PythonAnywhere
  - TODO: stub image upload, stub ingredients, stub recipe JSON

Images:
- Format of output?
- Array of ingredients
- How to delineate ingredients via words?

Recipes:
- MVP: list of ingredients
- List of potential recipes, with lists of instructions
- 350k+ recipe pool

TODO for next meeting:
Integrate back-end of image-processing with front end of application.

5-14-17
Back-end of application works as standalone image-processing program. Images can be tagged. Output to JSON file implemented.
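The tag-to-JSON output step might look like this minimal sketch; the `ingredients`/`confidence` field names are placeholders, not the tagging API's real schema:

```python
import json

def tags_to_json(tags, path):
    """Write (tag, confidence) pairs from the image tagger to a JSON file.

    The output shape here is a placeholder, not the project's real format.
    """
    data = {"ingredients": [{"name": t, "confidence": c} for t, c in tags]}
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
    return data

# Toy tags, standing in for real tagger output:
result = tags_to_json([("tomato", 0.98), ("basil", 0.91)], "tags.json")
```

A file on disk keeps the standalone tagger decoupled from the front end until the two are integrated.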

TODO for next meeting:
Integrate back-end of image-processing with front end of application.

5-16-17
Recipe API working as intended and as a standalone program. Output still needs to be configured so that the front end can use the data easily.

TODO for next meeting:
Clean up the code that generates the recipes. Integrate recipe generation with the front end.

5-18-17
Found a couple of voice-recognition APIs which can be used and combined with the Clarifai API to help users produce a more accurate list of ingredients. Still working on parsing the response from the recipe API.
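Parsing would reduce the raw response to just what the front end needs. A sketch, assuming a Yummly-like JSON shape; the `matches`/`recipeName`/`ingredients` field names are guesses, not a confirmed schema:

```python
import json

def parse_recipes(raw_json):
    """Pull recipe names and ingredient lists out of a search response.

    The field names ("matches", "recipeName", "ingredients") are
    assumptions about the recipe API's response, not a documented schema.
    """
    data = json.loads(raw_json)
    return [{"name": m.get("recipeName", "?"),
             "ingredients": m.get("ingredients", [])}
            for m in data.get("matches", [])]

# Toy response, not real API output:
sample = ('{"matches": [{"recipeName": "Pesto Pasta",'
          ' "ingredients": ["basil", "pasta"]}]}')
recipes = parse_recipes(sample)
```

Using `.get()` with defaults keeps the parser from crashing if the API omits a field for some recipes.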

TODO for next meeting:
Confirm which speech recognition API will be used to capture voice. Begin documentation of tutorials and requirements for app usage.

5-23-17
Testing of the speech recognition API has begun. Fixed a few front-end bugs with displaying the table of food recommendations. Image-processing API integrated with the front end.

TODO for next meeting:
Implement a way to record audio once the user is done uploading images. Look into integrating the recipe API with the front end.

5-25-17
Worked out the use of loading screens for the application. We may drop the loading screens depending on how clearly the visual indicators show that the app is running.

TODO for next meeting: Will hold group hackathons to ensure full functionality is reached.

5-30-17
Group hackathon day. Front end is nearly complete. Buttons working as intended.

5-31-17
Group hackathon day. Recipe API integrated with the front end of the application. Working to get the speech recording working as intended.

6-1-17
Testing app
Finishing up documentation

6-4-17
Meeting to go over our presentation for Tuesday, June 6th.
