Merge pull request #2947 from centerofci/0.1.2
Release 0.1.2
silentninja committed Jun 13, 2023
2 parents d259658 + 9f2f2d2 commit c9197b9
Showing 241 changed files with 7,544 additions and 3,319 deletions.
1 change: 0 additions & 1 deletion .env.example
@@ -1,6 +1,5 @@
ALLOWED_HOSTS='.localhost, 127.0.0.1, [::1]'
SECRET_KEY=2gr6ud88x=(p855_5nbj_+7^bw-iz&n7ldqv%94mjaecl+b9=4
DJANGO_DATABASE_KEY=default
DJANGO_DATABASE_URL=postgres://mathesar:mathesar@mathesar_db:5432/mathesar_django
MATHESAR_DATABASES=(mathesar_tables|postgresql://mathesar:mathesar@mathesar_db:5432/mathesar)
## Uncomment the setting below to put Mathesar in 'demo mode'
2 changes: 1 addition & 1 deletion .github/workflows/deploy-docs.yml
@@ -13,6 +13,6 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: 3.x
- run: pip install mkdocs-material mkdocs-redirects
- run: pip install -r ./docs/requirements.txt
- working-directory: ./docs
run: mkdocs gh-deploy --strict --force
20 changes: 20 additions & 0 deletions .github/workflows/github-repo-stats.yml
@@ -0,0 +1,20 @@
# Stores GitHub repo stats daily, to overcome the 14-day limitation of GitHub's built-in traffic statistics.
name: github-repo-stats

on:
schedule:
# Run this once per day, towards the end of the day, so that the most
# recent data point stays meaningful (hours are interpreted in UTC).
- cron: "0 23 * * *"
workflow_dispatch: # Allow for running this manually.

jobs:
j1:
name: github-repo-stats
runs-on: ubuntu-latest
steps:
- name: run-ghrs
# Use latest release.
uses: jgehrcke/github-repo-stats@RELEASE
with:
ghtoken: ${{secrets.MATHESAR_ORG_GITHUB_TOKEN}}
2 changes: 1 addition & 1 deletion .github/workflows/run-pytest.yml
@@ -26,7 +26,7 @@ jobs:
run: sudo chown -R 1000:1000 .

- name: Build the stack
run: docker-compose --profile test up --build -d
run: docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build -d test-service

- name: Create coverage directory
run: docker exec mathesar_service_test mkdir coverage_report
2 changes: 1 addition & 1 deletion .github/workflows/test-docs.yml
@@ -16,6 +16,6 @@ jobs:
- uses: actions/setup-python@v4
with:
python-version: 3.x
- run: pip install mkdocs-material mkdocs-redirects
- run: pip install -r ./docs/requirements.txt
- working-directory: ./docs
run: mkdocs build --strict
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -43,7 +43,7 @@ We highly recommend joining our [Matrix community](https://wiki.mathesar.org/en/
- Refer to our **[Developer Guide](./DEVELOPER_GUIDE.md)** for questions about the code.
- Make sure to follow our [front end code standards](./mathesar_ui/STANDARDS.md) and [API standards](./mathesar/api/STANDARDS.md) where applicable.
- If you are not familiar with GitHub or pull requests, please follow [GitHub's "Hello World" guide](https://guides.github.com/activities/hello-world/) first. Make sure to commit your changes on a new git branch named after the ticket you claimed. Base that new branch on our `develop` branch.
- Commit early, commit often. Write good commit messages. Try to keep pull requests small if possible, since it makes review easier.
- Commit early, commit often. Write [good commit messages](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53). Try to keep pull requests small if possible, since it makes review easier.
- If you expect your work to last longer than 1 week, open a draft pull request for your in-progress work.

1. **Open a PR.**
1 change: 1 addition & 0 deletions Dockerfile
@@ -24,3 +24,4 @@ COPY . .
RUN sudo npm install -g npm-force-resolutions
RUN cd mathesar_ui && npm install --unsafe-perm && npm run build
EXPOSE 8000 3000 6006
ENTRYPOINT ["./run.sh"]
19 changes: 19 additions & 0 deletions README.md
@@ -24,6 +24,7 @@ You can use Mathesar to build **data models**, **enter data**, and even **build
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**

- [Sponsors](#sponsors)
- [Status](#status)
- [Join our community!](#join-our-community)
- [Screenshots](#screenshots)
@@ -36,6 +37,24 @@ You can use Mathesar to build **data models**, **enter data**, and even **build

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Sponsors
Our top sponsors! Become a sponsor on [GitHub](https://github.com/sponsors/centerofci) or [Open Collective](https://opencollective.com/mathesar).

<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%">
<a href="https://www.thingylabs.io/">
<img src="https://user-images.githubusercontent.com/287034/226116547-cd28e16a-4c89-4a01-bc98-5a19b02ab1b2.png" width="100px;" alt="Thingylabs GmbH"/>
<br />
<sub><b>Thingylabs GmbH</b></sub>
</a>
<br />
</td>
</tr>
</tbody>
</table>

## Status
- [x] **Public Alpha**: You can install and deploy Mathesar on your server. Go easy on us!
- [ ] **Public Beta**: Stable and feature-rich enough to implement in production
2 changes: 1 addition & 1 deletion config/settings/common_settings.py
@@ -93,7 +93,7 @@ def pipe_delim(pipe_string):
db_key: db_url(url_string)
for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))
}
DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
DATABASES[decouple_config('DJANGO_DATABASE_KEY', default="default")] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)

for db_key, db_dict in DATABASES.items():
# Engine can be '.postgresql' or '.postgresql_psycopg2'
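The `MATHESAR_DATABASES` entry shown in `.env.example` above is a comma-separated list of `(key|url)` pairs, which `pipe_delim` splits into a key-to-URL mapping for Django's `DATABASES` dict. Below is a minimal standalone sketch of that parsing; the function name and the parenthesis handling are illustrative assumptions, since the real settings module relies on python-decouple's `Csv` cast with the `pipe_delim` callable.

```python
def parse_mathesar_databases(value):
    """Parse a '(key|url),(key|url)' string into a dict of key -> url.

    Hypothetical re-implementation for illustration only.
    """
    entries = [item.strip().strip('()') for item in value.split(',')]
    return {
        key: url
        for key, url in (entry.split('|', 1) for entry in entries)
    }


example = "(mathesar_tables|postgresql://mathesar:mathesar@mathesar_db:5432/mathesar)"
print(parse_mathesar_databases(example))
# {'mathesar_tables': 'postgresql://mathesar:mathesar@mathesar_db:5432/mathesar'}
```

The companion change on the `DATABASES[...]` line gives `DJANGO_DATABASE_KEY` a `default` fallback, which is why the `.env.example` diff above can drop that variable.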
4 changes: 3 additions & 1 deletion conftest.py
@@ -12,6 +12,7 @@

from db.engine import add_custom_types_to_ischema_names, create_engine as sa_create_engine
from db.types import install
from db.sql import install as sql_install
from db.schemas.operations.drop import drop_schema as drop_sa_schema
from db.schemas.operations.create import create_schema as create_sa_schema
from db.schemas.utils import get_schema_oid_from_name, get_schema_name_from_oid
@@ -73,6 +74,7 @@ def __create_db(db_name):
create_database(engine.url)
created_dbs.add(db_name)
# Our default testing database has our types and functions preinstalled.
sql_install.install(engine)
install.install_mathesar_on_database(engine)
engine.dispose()
return db_name
@@ -208,7 +210,7 @@ def _create_schema(schema_name, engine, schema_mustnt_exist=True):
if schema_mustnt_exist:
assert schema_name not in created_schemas
logger.debug(f'creating {schema_name}')
create_sa_schema(schema_name, engine)
create_sa_schema(schema_name, engine, if_not_exists=True)
schema_oid = get_schema_oid_from_name(schema_name, engine)
db_name = engine.url.database
created_schemas_in_this_engine = created_schemas.setdefault(db_name, {})
41 changes: 26 additions & 15 deletions db/columns/operations/alter.py
@@ -4,6 +4,7 @@
from sqlalchemy.exc import DataError, InternalError, ProgrammingError
from psycopg2.errors import InvalidTextRepresentation, InvalidParameterValue, StringDataRightTruncation, RaiseException, SyntaxError

from db import connection as db_conn
from db.columns.defaults import NAME, NULLABLE
from db.columns.exceptions import InvalidDefaultError, InvalidTypeError, InvalidTypeOptionError
from db.columns.operations.select import (
@@ -283,21 +284,31 @@ def _batch_alter_table_rename_columns(table_oid, column_data_list, connection, e


def batch_alter_table_drop_columns(table_oid, column_data_list, connection, engine):
table = reflect_table_from_oid(
table_oid,
engine,
connection_to_use=connection,
# TODO reuse metadata
metadata=get_empty_metadata(),
)
ctx = MigrationContext.configure(connection)
op = Operations(ctx)
with op.batch_alter_table(table.name, schema=table.schema) as batch_op:
for column_data in column_data_list:
column_attnum = column_data.get('attnum')
if column_attnum is not None and column_data.get('delete') is not None:
name = get_column_name_from_attnum(table_oid, column_attnum, engine=engine, metadata=get_empty_metadata(), connection_to_use=connection)
batch_op.drop_column(name)
"""
Drop the given columns from the given table.
Args:
table_oid: OID of the table whose columns we'll drop.
column_data_list: List of dictionaries describing columns to alter.
connection: the connection (if any) to use with the database.
engine: the SQLAlchemy engine to use with the database.
Returns:
A string of the command that was executed.
"""
columns_to_drop = [
int(col['attnum']) for col in column_data_list
if col.get('attnum') is not None and col.get('delete') is not None
]

if connection is not None and columns_to_drop:
return db_conn.execute_msar_func_with_psycopg2_conn(
connection, 'drop_columns', int(table_oid), *columns_to_drop
)
elif columns_to_drop:
return db_conn.execute_msar_func_with_engine(
engine, 'drop_columns', int(table_oid), *columns_to_drop
)


def batch_update_columns(table_oid, engine, column_data_list):
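The rewritten `batch_alter_table_drop_columns` above first filters `column_data_list` down to the attnums actually marked for deletion before handing them to the `drop_columns` msar function. That filtering step can be exercised on its own; the helper name below is hypothetical, but the list comprehension mirrors the one in the diff.

```python
def collect_attnums_to_drop(column_data_list):
    """Keep only entries that carry both an 'attnum' and a 'delete' flag,
    matching the filter in batch_alter_table_drop_columns."""
    return [
        int(col['attnum']) for col in column_data_list
        if col.get('attnum') is not None and col.get('delete') is not None
    ]


cols = [
    {'attnum': 2, 'delete': True},
    {'attnum': 3},           # no 'delete' key: column is kept
    {'delete': True},        # no attnum: entry is ignored
]
print(collect_attnums_to_drop(cols))  # [2]
```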
31 changes: 16 additions & 15 deletions db/columns/operations/drop.py
@@ -1,18 +1,19 @@
from alembic.migration import MigrationContext
from alembic.operations import Operations

from db.columns.operations.select import get_column_name_from_attnum
from db.tables.operations.select import reflect_table_from_oid
from db.metadata import get_empty_metadata
"""The function in this module wraps SQL functions that drop columns."""
from db import connection as db_conn


def drop_column(table_oid, column_attnum, engine):
# TODO reuse metadata
metadata = get_empty_metadata()
table = reflect_table_from_oid(table_oid, engine, metadata=metadata)
column_name = get_column_name_from_attnum(table_oid, column_attnum, engine, metadata=metadata)
column = table.columns[column_name]
with engine.begin() as conn:
ctx = MigrationContext.configure(conn)
op = Operations(ctx)
op.drop_column(table.name, column.name, schema=table.schema)
"""
Drop the given column from the given table.
Args:
table_oid: OID of the table whose column we'll drop.
column_attnum: The attnum of the column to drop.
engine: SQLAlchemy engine object for connecting.
Returns:
Returns a string giving the command that was run.
"""
return db_conn.execute_msar_func_with_engine(
engine, 'drop_columns', table_oid, column_attnum
).fetchone()[0]
47 changes: 47 additions & 0 deletions db/connection.py
@@ -0,0 +1,47 @@
from sqlalchemy import text
import psycopg


def execute_msar_func_with_engine(engine, func_name, *args):
"""
Execute an msar function using an SQLAlchemy engine.
This is temporary scaffolding.
Args:
engine: an SQLAlchemy engine for connecting to a DB
func_name: The unqualified msar function name (danger; not sanitized)
*args: The list of parameters to pass
"""
conn_str = str(engine.url)
with psycopg.connect(conn_str) as conn:
# Returns a cursor
return conn.execute(
f"SELECT msar.{func_name}({','.join(['%s']*len(args))})",
args
)


def execute_msar_func_with_psycopg2_conn(conn, func_name, *args):
"""
Execute an msar function using a psycopg2 connection.
This is *extremely* temporary scaffolding.
Args:
conn: a psycopg2 connection (from an SQLAlchemy engine)
func_name: The unqualified msar function name (danger; not sanitized)
*args: The list of parameters to pass
"""
args_str = ", ".join([str(arg) for arg in args])
args_str = f"{args_str}"
stmt = text(f"SELECT msar.{func_name}({args_str})")
# Returns a cursor
return conn.execute(stmt)


def load_file_with_engine(engine, file_handle):
"""Run an SQL script from a file, using psycopg."""
conn_str = str(engine.url)
with psycopg.connect(conn_str) as conn:
conn.execute(file_handle.read())
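In `execute_msar_func_with_engine` above, the SQL string is assembled with one `%s` placeholder per argument so that psycopg binds the values, rather than interpolating them into the statement text (unlike the psycopg2 variant, which formats the arguments directly and is flagged in its own docstring as temporary scaffolding). A small sketch of that placeholder construction, with an illustrative helper name:

```python
def build_msar_call(func_name, args):
    """Build the parameterized SELECT used by execute_msar_func_with_engine:
    one %s placeholder per argument, bound later by psycopg."""
    placeholders = ','.join(['%s'] * len(args))
    return f"SELECT msar.{func_name}({placeholders})"


print(build_msar_call('drop_columns', (12345, 2, 5)))
# SELECT msar.drop_columns(%s,%s,%s)
```

Note that `func_name` is still interpolated directly in both the sketch and the original, which is why the docstrings warn that the function name is not sanitized.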
3 changes: 3 additions & 0 deletions db/constants.py
Expand Up @@ -3,3 +3,6 @@
ID_ORIGINAL = "id_original"
INFERENCE_SCHEMA = f"{MATHESAR_PREFIX}inference_schema"
COLUMN_NAME_TEMPLATE = 'Column ' # auto generated column name 'Column 1' (no underscore)
MSAR_PUBLIC = 'msar'
MSAR_PRIVAT = f"__{MSAR_PUBLIC}"
MSAR_VIEWS = f"{MSAR_PUBLIC}_views"
24 changes: 17 additions & 7 deletions db/constraints/operations/drop.py
@@ -1,9 +1,19 @@
from alembic.migration import MigrationContext
from alembic.operations import Operations
from db.connection import execute_msar_func_with_engine


def drop_constraint(table_name, schema, engine, constraint_name):
with engine.begin() as conn:
ctx = MigrationContext.configure(conn)
op = Operations(ctx)
op.drop_constraint(constraint_name, table_name, schema=schema)
def drop_constraint(table_name, schema_name, engine, constraint_name):
"""
Drop a constraint.
Args:
table_name: The name of the table that has the constraint to be dropped.
schema_name: The name of the schema where the table with constraint to be dropped resides.
engine: SQLAlchemy engine object for connecting.
constraint_name: The name of constraint to be dropped.
Returns:
Returns a string giving the command that was run.
"""
return execute_msar_func_with_engine(
engine, 'drop_constraint', schema_name, table_name, constraint_name
).fetchone()[0]
11 changes: 9 additions & 2 deletions db/identifiers.py
@@ -1,6 +1,9 @@
import hashlib


POSTGRES_IDENTIFIER_SIZE_LIMIT = 63


def truncate_if_necessary(identifier):
"""
Takes an identifier and returns it, truncating it, if it is too long. The truncated version
@@ -30,9 +33,13 @@ def truncate_if_necessary(identifier):


def is_identifier_too_long(identifier):
postgres_identifier_size_limit = 63
# TODO we should support POSTGRES_IDENTIFIER_SIZE_LIMIT here;
# Our current limit due to an unknown bug that manifests at least
# when importing CSVs seems to be 57 bytes. Here we're setting it even
# lower just in case.
our_temporary_identifier_size_limit = 48
size = _get_size_of_identifier_in_bytes(identifier)
return size > postgres_identifier_size_limit
return size > our_temporary_identifier_size_limit


def _get_truncation_hash(identifier):
9 changes: 6 additions & 3 deletions db/install.py
@@ -2,7 +2,8 @@
from sqlalchemy.exc import OperationalError

from db import engine
from db.types import install
from db.sql import install as sql_install
from db.types import install as types_install


def install_mathesar(
@@ -16,7 +17,8 @@ def install_mathesar(
try:
user_db_engine.connect()
print(f"Installing Mathesar on preexisting PostgreSQL database {database_name} at host {hostname}...")
install.install_mathesar_on_database(user_db_engine)
sql_install.install(user_db_engine)
types_install.install_mathesar_on_database(user_db_engine)
user_db_engine.dispose()
except OperationalError:
database_created = _create_database(
@@ -29,7 +31,8 @@
)
if database_created:
print(f"Installing Mathesar on PostgreSQL database {database_name} at host {hostname}...")
install.install_mathesar_on_database(user_db_engine)
sql_install.install(user_db_engine)
types_install.install_mathesar_on_database(user_db_engine)
user_db_engine.dispose()
else:
print(f"Skipping installing on DB with key {database_name}.")