docs: fix spelling and grammar (#26381)
fenilgmehta committed Jan 2, 2024
1 parent 6e443ad commit 24e6ec3
Showing 3 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion RELEASING/release-notes-2-0/README.md
@@ -34,7 +34,7 @@ Superset 2.0 is a big step forward. This release cleans up many legacy code path

- New GitHub workflow to test Storybook Netlify instance nightly ([#19852](https://github.com/apache/superset/pull/19852))

- - Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017)
+ - Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017))

## Features

4 changes: 2 additions & 2 deletions docs/docs/frequently-asked-questions.mdx
@@ -154,7 +154,7 @@ Table schemas evolve, and Superset needs to reflect that. It’s pretty common i
dashboard to want to add a new dimension or metric. To get Superset to discover your new columns,
all you have to do is to go to **Data -> Datasets**, click the edit icon next to the dataset
whose schema has changed, and hit **Sync columns from source** from the **Columns** tab.
- Behind the scene, the new columns will get merged it. Following this, you may want to re-edit the
+ Behind the scene, the new columns will get merged. Following this, you may want to re-edit the
table afterwards to configure the Columns tab, check the appropriate boxes and save again.

### What database engine can I use as a backend for Superset?
@@ -220,7 +220,7 @@ and write your own connector. The only example of this at the moment is the Drui
is getting superseded by Druid’s growing SQL support and the recent availability of a DBAPI and
SQLAlchemy driver. If the database you are considering integrating has any kind of SQL support,
it’s probably preferable to go the SQLAlchemy route. Note that for a native connector to be possible
- the database needs to have support for running OLAP-type queries and should be able to things that
+ the database needs to have support for running OLAP-type queries and should be able to do things that
are typical in basic SQL:

- aggregate data
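For the "SQLAlchemy route" mentioned in the FAQ hunk above, here is a minimal sketch (not part of this commit) of the usual first step: confirm that the database already works through a SQLAlchemy dialect before pointing Superset at it. It assumes SQLAlchemy 1.4+; the in-memory SQLite URI is used only so the snippet runs anywhere, and the PostgreSQL URI in the comment is a placeholder.

```python
# Minimal sketch of the "SQLAlchemy route": if the database has a DBAPI driver
# and a SQLAlchemy dialect, Superset can reach it through a plain connection
# URI, with no native connector needed. SQLAlchemy 1.4+ is assumed.
from sqlalchemy import create_engine, text

# SQLite in memory is used only so this snippet runs anywhere; for a real
# database you would substitute its dialect URI, for example
# "postgresql+psycopg2://user:password@host:5432/analytics" (placeholder values).
engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    # Superset issues ordinary SQL (aggregates, filters, GROUP BY) over this
    # kind of connection, so a quick smoke test is a plain query:
    row = conn.execute(text("SELECT 1")).fetchone()
    print(row[0])  # -> 1
```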
2 changes: 1 addition & 1 deletion docs/docs/installation/event-logging.mdx
@@ -56,5 +56,5 @@ from superset.stats_logger import StatsdStatsLogger
STATS_LOGGER = StatsdStatsLogger(host='localhost', port=8125, prefix='superset')
```

- Note that it’s also possible to implement you own logger by deriving
+ Note that it’s also possible to implement your own logger by deriving
`superset.stats_logger.BaseStatsLogger`.
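The note above stops at naming the base class. As a rough, hypothetical sketch of what deriving `superset.stats_logger.BaseStatsLogger` can look like, the example below (not part of this commit) sends every metric to the standard Python logger instead of StatsD. It assumes the base class exposes `incr`, `decr`, `timing`, and `gauge` hooks, as in recent Superset releases; check the class in your Superset version before relying on the exact signatures.

```python
# Hypothetical custom stats logger: emits metrics to the application log
# instead of StatsD. Method names (incr/decr/timing/gauge) are assumed from
# recent Superset releases -- verify against superset/stats_logger.py.
import logging

from superset.stats_logger import BaseStatsLogger

metrics_log = logging.getLogger("superset.metrics")


class LoggingStatsLogger(BaseStatsLogger):
    def incr(self, key: str) -> None:
        metrics_log.info("incr %s", key)

    def decr(self, key: str) -> None:
        metrics_log.info("decr %s", key)

    def timing(self, key: str, value: float) -> None:
        metrics_log.info("timing %s %s", key, value)

    def gauge(self, key: str, value: float) -> None:
        metrics_log.info("gauge %s %s", key, value)


# In superset_config.py, point STATS_LOGGER at the subclass:
# STATS_LOGGER = LoggingStatsLogger(prefix='superset')
```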
