This repository has been archived by the owner on May 21, 2019. It is now read-only.

fix: update README with support for IAM (#286)
Fixes: #285 Update the instructions to work with IAM
germanattanasio committed Aug 17, 2018
1 parent ffb86af commit b2eed33
Showing 4 changed files with 635 additions and 504 deletions.
129 changes: 60 additions & 69 deletions README.md
@@ -1,92 +1,91 @@
# Visual Recognition Demo
[![Build Status](https://travis-ci.org/watson-developer-cloud/visual-recognition-nodejs.svg?branch=master)](https://travis-ci.org/watson-developer-cloud/visual-recognition-nodejs?branch=master)
[![codecov.io](https://codecov.io/github/watson-developer-cloud/visual-recognition-nodejs/coverage.svg?branch=master)](https://codecov.io/github/watson-developer-cloud/visual-recognition-nodejs?branch=master)
<h1 align="center" style="border-bottom: none;">🚀 Visual Recognition Sample Application</h1>
<h3 align="center">This Node.js app demonstrates some of the Visual Recognition service features.</h3>
<p align="center">
<a href="http://travis-ci.org/watson-developer-cloud/visual-recognition-nodejs">
<img alt="Travis" src="https://travis-ci.org/watson-developer-cloud/visual-recognition-nodejs.svg?branch=master">
</a>
<a href="#badge">
<img alt="semantic-release" src="https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg">
</a>
</p>

The [Visual Recognition][visual_recognition_service] Service uses deep learning algorithms to analyze images for scenes, objects, faces, text, and other subjects that can give you insights into your visual content. You can organize image libraries, understand an individual image, and create custom classifiers for specific results that are tailored to your needs.

Give it a try! Click the button below to fork into IBM DevOps Services and deploy your own copy of this application on the IBM Cloud.
## Prerequisites

## Getting Started
1. Sign up for an [IBM Cloud account](https://console.bluemix.net/registration/).
1. Download the [IBM Cloud CLI](https://console.bluemix.net/docs/cli/index.html#overview).
1. Create an instance of the Visual Recognition service and get your credentials:
- Go to the [Visual Recognition](https://console.bluemix.net/catalog/services/visual-recognition) page in the IBM Cloud Catalog.
- Log in to your IBM Cloud account.
- Click **Create**.
- Click **Show** to view the service credentials.
- Copy the `apikey` value.
- Copy the `url` value.
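The credentials shown in the console are a small JSON document. As a sketch of its shape (the values below are placeholders, not real credentials):

```json
{
  "apikey": "your-iam-api-key",
  "url": "https://gateway.watsonplatform.net/visual-recognition/api"
}
```

The `apikey` and `url` fields are the two values you carry into the *.env* file in the next section.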

1. You need an IBM Cloud account. If you don't have one, [sign up][sign_up]. Experimental Watson services are free to use.
## Configuring the application

2. Download and install the [Cloud Foundry CLI][cloud_foundry] tool if you haven't already.
1. In the application folder, copy the *.env.example* file and create a file called *.env*:

3. Edit the `manifest.yml` file and change `<application-name>` to something unique. The name you use determines the URL of your application. For example, `<application-name>.mybluemix.net`.
```yaml
---
declared-services:
visual-recognition-service:
label: watson_vision_combined
plan: free
applications:
- name: <application-name>
path: .
command: npm start
memory: 512M
services:
- visual-recognition-service
env:
NODE_ENV: production
```
```
cp .env.example .env
```

4. Connect to the IBM Cloud with the command line tool.
2. Open the *.env* file and add the service credentials that you obtained in the previous step.

```sh
cf api https://api.ng.bluemix.net
cf login
```
Example *.env* file that configures the `apikey` and `url` for a Visual Recognition service instance hosted in the US East region:

5. Create the Visual Recognition service in the IBM Cloud.
```sh
cf create-service watson_vision_combined free visual-recognition-service
cf create-service-key visual-recognition-service myKey
cf service-key visual-recognition-service myKey
```
```
VISUAL_RECOGNITION_IAM_APIKEY=X4rbi8vwZmKpXfowaS3GAsA7vdy17Qh7km5D6EzKLHL2
VISUAL_RECOGNITION_URL=https://gateway.watsonplatform.net/visual-recognition/api
```
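At startup the app reads these variables with a dotenv-style loader. As a rough sketch of what that loader does (this `parseEnv` helper is a simplified stand-in written for illustration, not the actual `dotenv` implementation):

```javascript
// Hypothetical, simplified .env parser: split KEY=VALUE lines into an object,
// skipping blanks and comments. Real loaders also handle quoting and export
// prefixes, which are omitted here.
function parseEnv(contents) {
  const vars = {};
  for (const line of contents.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // not a KEY=VALUE line
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1); // split on first '='
  }
  return vars;
}

const sample = [
  'VISUAL_RECOGNITION_IAM_APIKEY=X4rbi8vwZmKpXfowaS3GAsA7vdy17Qh7km5D6EzKLHL2',
  'VISUAL_RECOGNITION_URL=https://gateway.watsonplatform.net/visual-recognition/api',
].join('\n');

const env = parseEnv(sample);
console.log(env.VISUAL_RECOGNITION_URL);
```

Once loaded, the values are available to the server as `process.env.VISUAL_RECOGNITION_IAM_APIKEY` and `process.env.VISUAL_RECOGNITION_URL`.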

6. Create a `.env` file in the root directory by copying the sample `.env.example` file using the following command:
## Running locally

```none
cp .env.example .env
```
You will update the `.env` file with the information you retrieved in steps 5 and 6.
1. Install the dependencies

The `.env` file will look something like the following:
```
npm install
```

```none
VISUAL_RECOGNITION_API_KEY=
```
1. Run the application

7. Install the dependencies your application needs:
```
npm start
```

```none
npm install
```
1. View the application in a browser at `localhost:3000`

8. Start the application locally:
## Deploying to IBM Cloud as a Cloud Foundry Application

```none
npm start
```
1. Log in to IBM Cloud with the [IBM Cloud CLI](https://console.bluemix.net/docs/cli/index.html#overview).

9. Point your browser to [http://localhost:3000](http://localhost:3000).
```
ibmcloud login
```

10. **Optional:** Push the application to the IBM Cloud:
1. Target a Cloud Foundry organization and space.

```none
cf push
```
```
ibmcloud target --cf
```

After completing the steps above, you are ready to test your application. Start a browser and enter the URL of your application.
1. Edit the *manifest.yml* file. Change the **name** field to something unique.
For example, `- name: my-app-name`.
1. Deploy the application:

<your application name>.mybluemix.net
```
ibmcloud app push
```

1. View the application online at the app URL.
For example: https://my-app-name.mybluemix.net
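After the edit in step 3, the *manifest.yml* might look something like the following. This is a sketch, not the repository's exact file: the name is a placeholder you must replace, and `command`, `memory`, and `env` reflect the values used elsewhere in this commit.

```yaml
---
applications:
- name: my-app-name      # placeholder; choose something unique
  path: .
  command: npm start
  memory: 768M
  env:
    NODE_ENV: production
```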

For more details about developing applications that use Watson Developer Cloud services in the IBM Cloud, see [Getting started with Watson Developer Cloud and the IBM Cloud][getting_started].

## Environment Variables

- `VISUAL_RECOGNITION_API_KEY` : The legacy API key for the Visual Recognition service. Use this if your service instance predates IAM authentication.
- `VISUAL_RECOGNITION_IAM_API_KEY` : The IAM API key for the Visual Recognition service. Use this if your instance authenticates with IAM.
- `PRESERVE_CLASSIFIERS` : Set if you don't want classifiers to be deleted after one hour. *(optional)*
- `PORT` : The port the server should run on. *(optional, defaults to 3000)*
- `OVERRIDE_CLASSIFIER_ID` : Set to a classifier ID if you want to always use a custom classifier. This classifier will be used instead of training a new one. *(optional)*
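As an illustrative sketch of how a Node.js server typically consumes variables like these (`readConfig` is a hypothetical helper written for this example, not code from this repo):

```javascript
// Hypothetical helper: gather the environment variables described above into
// one config object, applying the documented defaults.
function readConfig(env) {
  return {
    // optional; defaults to 3000
    port: parseInt(env.PORT || '3000', 10),
    // any non-empty value means "keep classifiers past the one-hour cleanup"
    preserveClassifiers: Boolean(env.PRESERVE_CLASSIFIERS),
    // when set, use this custom classifier instead of training a new one
    overrideClassifierId: env.OVERRIDE_CLASSIFIER_ID || null,
  };
}

// In the app this would be called with process.env.
console.log(readConfig({ PORT: '8080' }));
```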
@@ -144,12 +143,4 @@ training form with your existing classifier.


[service_url]: https://www.ibm.com/watson/services/visual-recognition/
[cloud_foundry]: https://github.com/cloudfoundry/cli
[visual_recognition_service]: https://www.ibm.com/watson/services/visual-recognition/
[sign_up]: https://console.bluemix.net/registration/
[getting_started]: https://console.bluemix.net/docs/services/watson/index.html#about
[node_js]: http://nodejs.org/
[npm]: https://www.npmjs.com



11 changes: 1 addition & 10 deletions manifest.yml
@@ -1,15 +1,6 @@
---
declared-services:
visual-recognition-service:
label: watson_vision_combined
plan: free
applications:
- name: visual-recognition-demo
path: .
command: npm start
memory: 3000M
instances: 5
services:
- visual-recognition-service
env:
NODE_ENV: production
memory: 768M
