
🍑 ③ Set up API #2

Open
tatianamac opened this issue Aug 31, 2019 · 72 comments
@tatianamac
Collaborator

No description provided.

@tatianamac tatianamac changed the title 🍑 Build backend API infrastructure 🍑 Build API Aug 31, 2019
@good-idea

👋Let me know when you're getting started here!

@tatianamac
Collaborator Author

As I'm not as familiar with APIs, I'm not sure when it's best to start thinking about the structure of this, but the general idea is that I would like the entire dictionary to have an API, so that different companies and products can tap into the database of words.

Part of the build functionality will include a bot function that can help to auto-correct exclusive terminology.

What other context can I provide that would be helpful?

@connor-baer

I think the features and functionality that you have in mind are quite clear, but I'm unsure about the technical implementation.

The questions below will help us figure out how to store the data and how to query it.

  • What kind of data do you have in mind? Is it just word definitions or will there be other types of data?
  • How is the data structured? Is it a simple one-dimensional collection or is there a hierarchy? On the website it looks like some words are grouped into categories. Could there be multiple layers of groups?

@tatianamac
Collaborator Author

tatianamac commented Oct 20, 2019

What kind of data do you have in mind? Is it just word definitions or will there be other types of data?

Only definitions for now, but there will be layers of information (eventually things like parts of speech, connecting like-terms, etc).

How is the data structured? Is it a simple one dimensional collection or is there a hierarchy? On the website it looks like some words are grouped into categories. Could there be multiple layers of groups?

It's definitely layered in that some definitions will require alternate definitions or sub-definitions (as you noticed).

It's probably also important to note the future URL feature I'd like to be able to integrate, which could affect how we structure the data.

@coilysiren

As hinted at by the above comments - I think this may be easier to parse if you split this task into 3 parts:

  1. set up the data structure
  2. set up the build system for compiling the data structure into HTML
  3. set up the API for exposing the data structure programmatically

This is primarily relevant here because the API is actually the last step.

@good-idea

Hi all! My thoughts are pretty in line with what @lynncyrin mentioned above.

  1. set up the data structure

Definitely. In my experience, it has been really helpful to figure out how all of the content fits together as a system, and explore it as much as possible to figure out the edge cases. For instance, I imagine that the structure could be described as (for a start):

  1. Having Words. This could include both inclusive and non-inclusive words, each having an inclusive boolean.
  2. Words have one or more definitions or contexts. For instance, the word crazy on thesaurus.com has three different definitions:
  • mentally strange, "That person is crazy"
  • unrealistic, fantastic, "The new star wars trailer is crazy"
  • infatuated, in love, "Right now I'm crazy about sci-fi"
  3. Each of these definitions could have any number of suggested alternative Words (synonyms, basically).

  2. set up the build system for compiling the data structure into HTML

An alternative here would be limiting the API to providing the data only, and allowing the frontend (website, app, Slackbot) to take care of rendering the HTML on its own. This could be kind of nice because the API wouldn't have the responsibility of supporting different platforms or environments.

  3. set up the API for exposing the data structure programmatically

Have either of you worked with or have thoughts about GraphQL? I've been using it in my projects for a year or so and have grown to really like it. It comes with some of its own overhead, but a bonus here is that the API (again) doesn't need to grow more endpoints to satisfy the needs of particular use-cases. I've also found that defining a schema (whether you are going to use GraphQL or not) can be really helpful in the planning phase, just to figure out how all of the data fits together.
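
To make the schema idea concrete, here's a rough sketch of how one word entry might fit together as plain data, whether or not GraphQL ends up being used. All field names (and the alternative words) are hypothetical, not anything decided in this thread:

```javascript
// Hypothetical shape for a dictionary entry; field names are illustrative only.
const crazy = {
  word: "crazy",
  inclusive: false,
  definitions: [
    {
      meaning: "mentally strange",
      example: "That person is crazy",
      // each definition carries its own suggested alternatives (synonyms, basically)
      alternatives: ["unwell", "erratic"],
    },
    {
      meaning: "unrealistic, fantastic",
      example: "The new star wars trailer is crazy",
      alternatives: ["wild", "surreal"],
    },
    {
      meaning: "infatuated, in love",
      example: "Right now I'm crazy about sci-fi",
      alternatives: ["enthusiastic", "passionate"],
    },
  ],
};

// Alternatives hang off each definition rather than off the word itself,
// so suggestions stay context-specific.
console.log(crazy.definitions.length); // 3
```
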

Question: other than marking suggested alternative words as inclusive/non-inclusive, any thoughts on how the overall structure might be different from a thesaurus?

Other question: @lynncyrin --- you spoke at !!Con West earlier this year, didn't you?

@coilysiren

  1. set up the data structure

You're headed down the right path @good-idea! I would encourage splitting the data structure task in half, though:

a. figure out how to encode the data structure into this repo, e.g. YAML / JSON / etc. file? A database with a CMS? etc.
b. determine the structure of the data itself. I get the impression that there are a few dictionary features still being specced, so this sub-task is probably best left to @tatianamac for the moment.

  3. set up the API for exposing the data structure programmatically

I would recommend against picking technologies at this point; the encoding of the data structure (i.e. task 1a) is a bit more important, and that choice can potentially narrow your API options significantly.

spoke at !!Con West

yep => https://www.youtube.com/watch?v=-fpnf2nOigQ

@tatianamac tatianamac changed the title 🍑 Build API 🍑 3️⃣ Set up API Oct 21, 2019
@tatianamac tatianamac changed the title 🍑 3️⃣ Set up API 🍑 ③ Set up API Oct 21, 2019
@tatianamac
Collaborator Author

Thank you, everyone! I broke this task up into several issues so we can have more targeted discussions under those relevant issues.

Also, this maybe obvious to everyone but if you haven't already, what I've built so far is here: https://www.selfdefined.app/

@good-idea

spoke at !!Con West
yep => https://www.youtube.com/watch?v=-fpnf2nOigQ

@lynncyrin yay, I thought I recognized you! We didn't meet but I was in the audience.

@tatianamac https://www.selfdefined.app/ is looking great!

@denistsoi

Just throwing this out there:

Noticed that, since the site is hosted on Netlify, the project could leverage Netlify Functions to query the JSON/markdown files for a basic lookup.

I know the aim is to leverage Algolia in the future as well, so this could be a step towards that.

An alternative is that with now.sh you could use serverless functions to query the API (which would return the same result as above).

Wondering if @good-idea/ @ovlb / @tatianamac have thoughts before I start a PR.

@ovlb
Collaborator

ovlb commented Jan 17, 2020

@denistsoi I have some thoughts, but currently no time to write them out. I will try to do so over the weekend. Also, my thoughts don’t matter too much :)

@denistsoi

Thanks @ovlb
I think one thing to make this work, also, is that I may include the raw markdown files within the final build. That way, if we need to query the raw definition, it’s already in the deployed site, rather than querying the codebase.

I think this would be helpful if, say, the project is moved off GitHub.

@ovlb
Collaborator

ovlb commented Feb 6, 2020

@denistsoi

Sorry for the delay in answering!

I think we have to answer additional questions before we hop into implementation mode.

Let me expand my reasoning a bit:

  1. Are Netlify functions the right tool for the job? I am not sure but also have limited experience. Can you say something about working with them? From my understanding, they are a tool that works with APIs, not to be an API itself. Is this correct? Or am I on the wrong path here?
  2. We don’t have a database. Using the file system as a DB works in theory but is expensive and slow. Using e.g. Algolia would allow us to use our existing data and hand it over to them. However, I do agree, it might be overkill for a quick proof of concept. Proof of which concept, though?
  3. Not only is it the tools, but also the underlying design: What’s the purpose of the API? What are the use cases and possible implementations? What queries should return which data? How do we protect it against – given the nature of the project quite likely – fraudulent use? Before we start working on a solution, we have to answer these (and probably more) questions. Otherwise, we risk building something too complicated or ill-suited.

I guess the first task where an API is needed is the Twitter bot. As soon as we have this task specced out (which questions does the bot answer, what happens if there is no definition/no alternative words, …) we can build something that solves these problems.

As we all have limited resources, I think it is vital to use our energy wisely and try to focus on implementations that – hopefully – stand the test of time. An API that uses the file system will most likely not do that.

Re: Including raw definitions in the build: Having the definitions in the deployed markup and/or the client-side JS bundle (not entirely sure what you would like to target) seems very detrimental to the performance of the site. I would advise against this.

@denistsoi

Sorry for not sending this sooner @ovlb; I had drafted an earlier response on my phone and never got round to sending it.

  1. Are Netlify functions the right tool for the job? I am not sure but also have limited experience. Can you say something about working with them? From my understanding, they are a tool that works with APIs, not to be an API itself. Is this correct? Or am I on the wrong path here?

Netlify Functions, or any serverless function, can act as an endpoint.
(Given an API structure, you can pass in a query param to retrieve the data, e.g. /api/definition?)

According to Netlify, you can add one to the working directory as /functions/<whatever-name>.js:

exports.handler = async (event, context) => {
    // add logic here
    return {
        statusCode: 200,
        body: "ok"
    }
}

The only issue is billing (the free tier is 125k API calls per month, or 100 hours of runtime).

  2. We don’t have a database. Using the file system as a DB works in theory but is expensive and slow. Using e.g. Algolia would allow us to use our existing data and hand it over to them. However, I do agree, it might be overkill for a quick proof of concept. Proof of which concept, though?

The existing app is a statically generated app. (All definitions are defined in markdown and compiled into HTML.)

By hooking into the build pipeline, we could define the serverless functions.

(Re Algolia, I believe we need to provide an index of all the items that should be searchable - we could feed it all definitions as JSON.)

I don’t think a database is necessary unless there’s some data processing later down the line, or we want to host the API via Heroku or something. (You could theoretically store all definitions in a headless CMS and run the build on update.)

  3. Not only is it the tools, but also the underlying design: What’s the purpose of the API? What are the use cases and possible implementations? What queries should return which data? How do we protect it against – given the nature of the project quite likely – fraudulent use? Before we start working on a solution, we have to answer these (and probably more) questions. Otherwise, we risk building something too complicated or ill-suited.

The API from previous posts was for users to call the service and get the definition as a response (e.g. /define “word”) rather than have the user copy/paste.

The consumers of the API would be users or bots; ultimately, by sharing a link, the API would return the definitions of words.

e.g. user flow:

  • a group is talking in Slack
  • person A mentions gaslighting
  • person B types /define gaslighting
  • the Slackbot returns gaslighting noun: ...
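
That flow boils down to a small formatter on top of a definition lookup. A minimal sketch (the function name, data, and message shape are all made up for illustration, not an agreed interface):

```javascript
// Turn a /define lookup into the plain-text reply a Slackbot might post.
// The inline dictionary is a stand-in for the real data source.
const dictionary = {
  gaslighting: { speech: "noun", definition: "..." },
};

function define(word) {
  const entry = dictionary[word];
  if (!entry) return `No definition found for "${word}".`;
  return `${word} ${entry.speech}: ${entry.definition}`;
}

console.log(define("gaslighting")); // "gaslighting noun: ..."
```
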

I mentioned including the markup because, technically, you could serve the raw text files as a route, and that'd be the most rudimentary proof of concept for an API.

If the app were self-hosted, considerations about performance would be a higher priority, whereas using a statically generated site with 11ty removes that consideration until the app reaches the free-tier limit.

Re: API design/structure and security - I haven't considered them yet; I just wanted to start a discussion.

As we all have limited resources I think it is vital to use our energy wisely and try to focus on implementations that – hopefully – stand the test of time. An API that uses the file system will most likely not do that.

I’m not working at the moment, so I thought I’d offer my time to the project rather than doing coding interviews (and facing the emotional rejection that comes with that). Also, since I'm in Hong Kong right now, recruitment is slow due to nCoV-2019.

@tatianamac
Collaborator Author

tatianamac commented Feb 13, 2020

(I wanted to jump in and say thank you for this discussion and your time!!! Finding this particular sort of help has been difficult, so your expertise is really valued. I want these discussions to happen concurrently with the build of the webapp, so we can be mindful of the dictionary's infrastructure for its future plans.)

@ovlb
Collaborator

ovlb commented Feb 13, 2020

Hey @denistsoi,

thanks so much for your answer.

I think I misunderstood one point with including the raw markdown content in the first place, which led me down the wrong path. Would you aim to compile the definitions into JSON objects during build time and store them somewhere in dist/whatever and have some functions that basically require() these definitions to search them?

It feels much more feasible to start building it with this background knowledge. I would say: go for it!

@denistsoi

denistsoi commented Feb 14, 2020

Hey @ovlb

Would you aim to compile the definitions into JSON objects during build time and store them somewhere in dist/whatever and have some functions that basically require() these definitions to search them?

I think requiring the definitions as JSON would be the easiest - however, I think there'd be some duplication, as we currently have the workflow of:

  1. create an md file in 11ty/definitions/
  2. Eleventy generates the collection and converts the markdown file to HTML
  3. the HTML is saved to dist/definitions/<defined-word>/index.html

Having JSON gives us the benefit of searching via key (or regex) and returning the result.

Actually, I'll add this, since I like the added benefit of generating the file as well.
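
Searching compiled JSON by key or regex really can be that simple. A sketch, with a cut-down stand-in for the compiled definitions object:

```javascript
// Given a definitions object keyed by slug, return slugs matching a pattern.
// The data here is a stand-in for the compiled JSON described above.
const definitions = {
  "women-and-people-of-colour": { title: "women and people of colour" },
  "gaslighting": { title: "gaslighting" },
};

function search(pattern) {
  const re = new RegExp(pattern, "i"); // case-insensitive key match
  return Object.keys(definitions).filter((slug) => re.test(slug));
}

console.log(search("colour")); // ["women-and-people-of-colour"]
```

A linear scan like this is fine at the current dictionary size; a search index (Algolia, Elasticsearch) only becomes interesting once the word list grows.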

@denistsoi

denistsoi commented Feb 14, 2020

Update -

I created this structured JSON below

{
  ...
  "women-and-people-of-colour": {
    "metadata": {
      "title": "Self-Defined",
      "url": "https://www.selfdefined.app/",
      "description": "A modern dictionary about us. We define our words, but they don't define us.",
      "author": {
        "name": "Tatiana & the Crew",
        "email": "info@selfdefined.app"
      }
    },
    "title": "women and people of colour",
    "slug": "women-and-people-of-colour",
    "flag": {
      "level": "avoid"
    },
    "defined": true,
    "speech": "noun",
    "alt_words": [
      "people of colour and white women",
      "people of colour",
      "white non-binary people, and white women",
      "find ways to reframe why this dynamic exists",
      "or omit"
    ],
    "page": {
      "date": "2020-02-13T09:56:58.228Z",
      "inputPath": "./11ty/definitions/women-and-people-of-colour.md",
      "fileSlug": "women-and-people-of-colour",
      "filePathStem": "/definitions/women-and-people-of-colour",
      "url": "/definitions/women-and-people-of-colour/",
      "outputPath": "dist/definitions/women-and-people-of-colour/index.html"
    },
    "html": "<hr>\n<p>title: women and people of colour\nslug: women-and-people-of-colour\nflag:\nlevel: avoid\ndefined: true\nspeech: noun\nalt_words:</p>\n<ul>\n<li>people of colour and white women</li>\n<li>people of colour</li>\n<li>white non-binary people, and white women</li>\n<li>find ways to reframe why this dynamic exists</li>\n<li>or omit</li>\n</ul>\n<hr>\n<p>often used as a phrase to encompass “non-white, non-men,” seeking to provide solidarity for these two groups</p>\n<h4>Issues</h4>\n<p>What happens to women of colour? As a woman of colour, I am split between both women and people of colour.</p>\n<h4>Impact</h4>\n<p>As such, it elicits feelings of erasure for women of colour. It also neglects <a href=\"/#non-binary\">non-binary</a> individuals.</p>\n"
  }
}

I got this info via the collection Template object in Eleventy and markdown-it. Just configuring the Netlify function now - gonna grab some lunch (brb).

@hibaymj

hibaymj commented Feb 14, 2020

In selfdefined/web-app#72 I created an OpenAPI spec for the app, based on the high-level stuff I saw in the app and in the linked issues from this thread. I don't know much of anything about Netlify, so if the data elements need to match that format for some reason, the models in the spec could be changed to accommodate.

I think this also addresses selfdefined/web-app#6 and should handle how you can maintain the dictionary when the word list gets large, and manage linking of new words, synonyms, and alternatives. I also leaned towards adding internationalization support via language of origin and translations. Not really sure if that would be helpful at all, but I was just thinking about the context of words.

If you haven't worked with OpenAPI before, you can paste that text into http://editor.swagger.io/ to get a good visual UI showing how things fit together. Once code is running and matches the spec, you can actually use that interface to make calls to the service or have it output cURL commands for you.
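
For anyone unfamiliar with the format: an OpenAPI 3 document is just structured data (YAML or JSON). This fragment is purely illustrative - the path and parameter names are examples, not the actual spec from #72:

```javascript
// A minimal, illustrative OpenAPI 3 document expressed as a plain object.
// Paths and parameter names are examples, not the spec from #72.
const spec = {
  openapi: "3.0.0",
  info: { title: "Self-Defined API (sketch)", version: "0.1.0" },
  paths: {
    "/definitions/{word}": {
      get: {
        parameters: [
          { name: "word", in: "path", required: true, schema: { type: "string" } },
        ],
        responses: { 200: { description: "A definition" } },
      },
    },
  },
};

console.log(Object.keys(spec.paths)); // ["/definitions/{word}"]
```

Because the contract is data, it can be linted, diffed in PRs, and rendered by tools like the Swagger editor without any running server.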

@denistsoi

denistsoi commented Feb 14, 2020

@hibaymj - actually that's a good start; it didn't occur to me to go down that route - 👍

I suppose we wouldn't need Netlify functions if we go down this route, since the spec could just codegen the server - I'm a bit rusty on how that works, so I'll have to look at that part again.

The other thing I forgot was letting someone pass in multiple terms and render a page:

https://github.com/tatianamac/selfdefined/issues/6#issue-509601267

hmmm... need to think about this some more -


The project right now is a statically generated site hosted on Netlify. My thoughts: I don't know whether to use swagger-codegen to generate the server stubs into the codebase, or to abstract that away into a separate thing.

@denistsoi

denistsoi commented Feb 14, 2020

Another thing to mention:

@good-idea mentions in https://github.com/tatianamac/selfdefined/issues/13 the idea of type definitions for GraphQL. Using those, you could also codegen the OpenAPI spec via Sofa - this publication has a good example: https://medium.com/the-guild/sofa-the-best-way-to-rest-is-graphql-d9da6e8e7693

Alternatively, we could reverse this process if we wanted a gql endpoint https://github.com/IBM/openapi-to-graphql.

I suppose my bias is to use something like hasura hosted on heroku and store definitions as a headless CMS. (I need to think about this more before going down this path)

@denistsoi

denistsoi commented Feb 14, 2020

@tatianamac / @ovlb

I forked the project and am hosting it on Netlify with Netlify Functions.

https://elated-lovelace-edac00.netlify.com/.netlify/functions/api?name=women-and-people-of-colour

The definitions are in json located https://github.com/denistsoi/selfdefined/blob/master/functions/data.json

I wanted to get some thoughts before I submit a PR. (Wanna get some rest before I do any more improvements.)

Thinking about how this might look if someone were to query from, say, Slack or Twitter (e.g. get raw text instead of HTML).

@hibaymj

hibaymj commented Feb 14, 2020

The Open API 3 tools are really robust and getting more capable over time.

Regarding GQL, it's a lot more trouble than you're thinking if you have low API capabilities and your goal is to be cross-linked and work with other systems.

More importantly, however, if you look closely at the GET operations, you'll see I added support for multiple words to come back. Just tailoring the query parameters would be necessary to support returning more than one word in a request.
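
On the consumer side, handling multiple words in one request can be as simple as splitting a comma-separated parameter. A sketch - the parameter shape and data are assumptions for illustration:

```javascript
// Parse a ?words=a,b,c style parameter into a batch of lookups.
// The inline dictionary is a stand-in for the real data source.
const dictionary = {
  gaslighting: "noun: ...",
  crazy: "adjective: ...",
};

function lookupMany(wordsParam) {
  return wordsParam
    .split(",")
    .map((w) => w.trim())
    .filter(Boolean)
    .map((word) => ({ word, definition: dictionary[word] || null }));
}

console.log(lookupMany("gaslighting, crazy").length); // 2
```

Returning `null` for unknown words (rather than dropping them) lets the client tell "no definition" apart from "not requested".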

Writing the API contract first is extremely valuable for the project, but generating it isn't really all that beneficial. The contract also has you define schemas as well. Hope this helps!

@denistsoi

denistsoi commented Feb 14, 2020 via email

@ovlb
Collaborator

ovlb commented Feb 16, 2020

FYI: I haven’t forgotten this discussion and providing feedback is on my to-do list. Sorry for the delay.

@BrentonPoke

BrentonPoke commented Mar 13, 2020

It seems as though the API functionality is being worked into the app itself instead of using a database to store definitions - is that the prevailing design? By decoupling the data from the app, APIs are much easier to build. They're also easier to scale, should you decide to offer this as a service to companies down the road. I would like to help in the area of APIs, since I have experience there.

@leovolving

Hi everyone! My name is Leo and I'd love to help with this process if I can.

Having read the entire thread, I feel compelled to echo the concerns of @BrentonPoke. It sounds like the web app is only a small portion of the overall vision for this project. Maintaining a separate API that can be consumed by both our web app as well as 3rd parties would be much better in the long run.

I also have concerns about storing HTML in the data. I think that that's allowing the API to be too opinionated. The frontend should be making decisions on how the data appears visually. If there is a major design change on the frontend, we would have to touch every single entry in the database as opposed to fixing the structure in a single JS file. That's expensive over time.
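
To illustrate the separation being argued for: the API stores only structure, and each frontend owns its rendering. A tiny sketch with made-up field names:

```javascript
// Structured entry: no HTML stored in the data itself.
const entry = { title: "gaslighting", speech: "noun", body: "..." };

// Rendering is the frontend's job; a redesign means changing this one
// function, not touching every entry in the database.
function renderHtml({ title, speech, body }) {
  return `<h2>${title}</h2><p><em>${speech}</em>: ${body}</p>`;
}

console.log(renderHtml(entry));
```

A Slackbot or Twitter bot would consume the same `entry` object but format plain text instead, which is exactly why HTML doesn't belong in the data.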

It seems like a relational database would be the best way to go, given that everything seems to link back to a single source of truth: the root word. AFAIK that should still give us the freedom to use graphQL on the frontend to query the data.

As others have mentioned, once the data structure has been defined, we'll have a better idea of what options we have regarding tech stack. @tatianamac have you finalized, from the user's perspective, what you'd like the API to be able to do? Once that's finalized, we can setup a call to discuss the data structure.

I realize that I'm arriving late to a months-long conversation, so I apologize if I'm stepping on any toes. Is there a point person with whom I should touch base?

@leovolving

Thanks for sharing this prototype, but we need read/write capabilities for our API, and we'd already decided on a non-relational database, as the needs of our schema are likely to evolve over time.

I should be able to get the API repo started this weekend.

@tatianamac
Collaborator Author

@simonw Thank you for putting this together! We'll definitely keep this in mind for the future approaches; for now I think we'll likely proceed as we have set out.

@Ljyockey Thank you! Please let me know if you need anything from me.

@leovolving

My apologies if anyone was waiting on me to proceed. I'm struggling to keep up with my commitments in the wake of the pandemic. Hoping to get started in the next week or two!

@tatianamac
Collaborator Author

hi @Ljyockey ! I recognise things are very weird right now—are you still interested in setting this up? No worries either way, just going through the tickets. ✨🙏🏽

@leovolving

leovolving commented May 3, 2020 via email

@denistsoi

denistsoi commented May 9, 2020

given the PR for netlify open authoring - https://github.com/greatislander/selfdefined/pull/1/checks

I think doubling down on Netlify serverless functions is more feasible...

Edit: having looked at Netlify Functions again - the starter tier is only 125k calls per month.
That averages out to about 173 per hour, which is quite restrictive...
For $25/month it's 2M, or about 2,777 per hour -
https://www.netlify.com/pricing/#features

For Glitch, we can get 4,000 per hour on the free tier - I might start with Glitch first:
https://glitch.com/pricing
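
The per-hour figures above follow from spreading the monthly quota over roughly 720 hours in a 30-day month:

```javascript
// Back-of-envelope: monthly quota spread evenly across ~720 hours.
const HOURS_PER_MONTH = 30 * 24; // 720

function callsPerHour(monthlyQuota) {
  return Math.floor(monthlyQuota / HOURS_PER_MONTH);
}

console.log(callsPerHour(125000)); // 173  (Netlify starter tier)
console.log(callsPerHour(2000000)); // 2777 (Netlify $25/month tier)
```

Of course real traffic is bursty rather than evenly spread, so the hourly average understates how quickly a viral tweet could exhaust the quota.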

@aguywithcode

Happy birthday @tatianamac I came here from your birthday tweet.

I have a keen interest in ontology, taxonomy, and natural language processing. This project intrigues me and I would love to contribute and help in any way I can.

I see there is a discussion about how best to design the API and storage for the dictionary, as well as the schema. In the ontology space there is an XML-based protocol for defining concepts and their relationships called RDF. There is also a W3C recommendation called JSON-LD that fills the same space. We are using it in the IoT space for metadata around devices and physical locations. It could prove useful in defining dictionary terms and how they relate.

WRT storage, I would also recommend using a NoSQL storage platform, not necessarily for flexibility but to provide better retrieval performance.

I'd like to offer my experience with cloud architecture, serverless development, and NoSQL database design. I can help us get an initial environment set up on Azure for development and production. The free tier for Azure Functions includes 1 million executions per month. CosmosDB (the Microsoft NoSQL database that supports the MongoDB API as well as a Graph API) has a similarly generous free tier. For reference, Troy Hunt of HaveIBeenPwned.com discusses the costs of running his site on Cloudflare and Azure Functions: "It's costing me 2.6c per day to support 141M monthly queries of 517M records." A lot of this is due to a great caching model and usage of serverless functions to save on the backend.

Azure functions supports JavaScript, Python, C#, and Java so it will accommodate whatever language the team is comfortable with. In addition the Azure API Management service will make it easier to provide managed APIs for partners who would consume the service at free as well as paid tiers.

What I'd advise is that we identify an MVP of the service (a prime candidate looks like the Twitter bot @tatianamac was talking about), design the API that would support the bot, and build the first revision. Perhaps we can set up a regular cadence of weekly calls (come as you can) to get some traction on the solution.

@tatianamac
Collaborator Author

Thank you, @aguywithcode ! What a helpful and thorough digest.

I've restarted the Slack channel, which I'd love for you to join!

To me, there are two potential MVPs:

  1. The Twitter Bot: @SelfDefinedAppBot is the current Twitter handle I'm squatting on for this. That's definitely a possibility. I'm happy to write specs for that.
  2. The Dictionary: Ideally, I'd like for the dictionary to also use the API.

I'm going to switch over and address your comments on the OpenAPI pull request now.

Anyone else who has been on this thread is welcome to join the Slack channel as well if you'd like to continue to work on this! ✨

@mjoynes-wombat-web

Oscar, Amy, Kurt, Tatiana and myself have been discussing the API in the dev channel of Slack.

I mentioned using AWS.
Amy mentioned using GCP.

However, Oscar and Tatiana would prefer to avoid Amazon, Google and Facebook due to issues with their ethics.

We continued discussing additional solutions that avoid these.

Oscar mentioned Algolia as a solution for searching. We've looked at using a managed database on DigitalOcean. Kurt has been using Netlify functions for search and this would fit as a potential solution for the API code.

Amy shared this article as a solution to keep the Serverless function within the DigitalOcean ecosystem.

@kkemple

kkemple commented Aug 24, 2020

I'm hesitant to use Algolia as the free plan doesn't allow for much customization and I wonder what the results of search terms would be. I'd love to run a few different options locally (like elastic search) and compare against what Algolia provides.

@mjoynes-wombat-web

@kkemple I can say that Elasticsearch provides a lot of options for customization. I created an e-commerce site using it over the last year, and they had a bunch of custom data access coming from their internal warehouse-management software. I agree that it would be good to test different solutions out.

Maybe the first step would be to create a local database with the terms and test it out with different platforms?

@ovlb
Collaborator

ovlb commented Aug 24, 2020

I'm hesitant to use Algolia as the free plan doesn't allow for much customization and I wonder what the results of search terms would be.

We could apply for the OSS tier immediately?

I'd love to run a few different options locally (like elastic search) and compare against what Algolia provides.

Like that idea.

@mjoynes-wombat-web

I'd love to run a few different options locally (like elastic search) and compare against what Algolia provides.

Do we want to have a standardized local environment for this? I think we won't really be using it much in development if we're going with a managed database, service, and serverless functions. Or do we want to divvy up the different types of services and then present them?

@mjoynes-wombat-web

For the future, if we're using auth for clients, would Netlify's auth work for this? Or would we use API keys?

@BrentonPoke

Did anyone ever look into Linode to see if they would be able to provide anything for this project?

@mjoynes-wombat-web

@BrentonPoke I don't believe so, at least among the people who talked about it yesterday. Linode wasn't brought up. Do you have some more context outside of what's in this issue? I'm new here so I'm not fully aware of all that's going on.

@tatianamac
Collaborator Author

(As an aside, when we're ready to start building stuff for this, I'd ask that we open it up as a new repo within the selfdefined organisation so we keep the web app separate from the API. Eventually we will want the web app to be served by the API, but this way it keeps concerns separate.)

@mjoynes-wombat-web

@tatianamac I was going to start on some local testing of this tomorrow night. If you create a repo I'm happy to push what I'm working on up so other people can review. Also going to try and live stream a bit, though it'll be my first time so we'll see how that goes.

@ovlb
Collaborator

ovlb commented Aug 26, 2020

There you go: https://github.com/selfdefined/api

@mjoynes-wombat-web

Should I create a new issue to continue this discussion over there?

@ovlb
Collaborator

ovlb commented Aug 26, 2020

Let’s keep the discussion here for context, but the implementation in the new repo. I’ve created a pinned issue in the new repo sending folks who are interested to this thread.

@tatianamac
Collaborator Author

@ovlb What do you think about transferring this issue over there?


(I don't have strong feelings or experience either way.)

@ovlb
Collaborator

ovlb commented Aug 26, 2020

@tatianamac Ah, didn’t know this was possible. Sounds like a plan :)

@ovlb ovlb transferred this issue from selfdefined/web-app Aug 26, 2020
@mjoynes-wombat-web

Been thinking about this more, and Craft CMS would handle the majority of the needs of the dictionary. It would only need to be extended to allow for submissions of new words and revisions of existing ones, with an approval process for both. It handles all the user requirements I could think of and allows for localization/translation "sites". It provides an excellent interface for content management, integrates with Elasticsearch through a plugin, and has a headless GraphQL/REST API mode that allows for working with Eleventy.

The CMS is very extendable and I know we could create our own module to handle the submissions and revisions from community members.

I use it almost daily at my full time job and really like it. Thoughts?

@mjoynes-wombat-web

So, because Craft CMS handles a lot of the features we'd need, I'm going to spin up a test environment in it. I can do that a lot quicker than a test MySQL instance. If this isn't something that will work, I'm happy to stop, but until I get some direction this is how I will proceed.

@ovlb
Collaborator

ovlb commented Oct 23, 2020

@ssmith-wombatweb Sorry for not getting back to you for so long. I’m glad you started thinking about a CMS, since it is quite a logical conclusion from our needs. I’ve only limited experience with Craft (and years-old at that), so I trust your judgement fully. In short: go for it :)

@mjoynes-wombat-web

@ovlb Great! I think I can set up the admin portion of the site fairly easily to show off. I have a couple of additional explanations and questions around the backend that I've outlined/asked in these three issues, if you get a chance to look at them:

#7
#8
#9
