Explicit keytype 2 #38

Merged
merged 7 commits into from

2 participants

@kevinmontuori

Evan --

Here's a second attempt at adding an explicit Id::uuid type. There
are no changes required to ChicagoBoss.git. In general, there were
only a few changes required to update the mock and pgsql adapters:

  • boss_record_compiler required a new process_tokens clause to
    capture the ::uuid(). (I have a question about this: it appears
    that the clause on line 36 will never match:

    process_tokens([{'-',N}=T1,{atom,N,module}=T2, ...]);

    Unless there's something I'm missing, N should be different in
    tuple one and tuple two.)

  • I opted to provide the UUID and not require the DB to supply it
    (something Postgres will do with the appropriate plugin compiled
    and loaded). As a result save_record/2 now calls
    maybe_populate_id_value/1 which, seeing it's a uuid type, does
    what you'd expect.

  • build_insert_query/1 was updated to handle the case where the ID
    supplied is a list. (As an aside, I couldn't figure out when the
    stanza:

    ({id, V}, {Attrs, Vals}) when is_integer(V) -> ...

    would ever be called. It appears that build_update_query/1 is
    called from save_record/2 when an ID is supplied.)

  • Calls to integer_to_list/1 on ID values have been replaced with
    calls to id_value_to_string/1. I named it that to make the
    intention clear; obviously id_value_to_list would be fine too.

  • Where it made sense, I updated re:split/3 calls to include the
    option {parts, 2}. Likewise, some string:tokens/2 calls have been
    replaced with re:split/3. I don't have great profiling
    information to see how this changes performance.

  • validate_record_types/1 now treats the 'id' attribute as always
    valid; its type is never checked.

  • keytype/1 was added to boss_record_lib and exported. Either a
    record or the type can be passed; I needed something that worked
    with bare types so that infer_type_from_id/1 (called from find/2)
    would work correctly.
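
One detail behind the re:split change above is easy to miss: a v4
UUID itself contains hyphens, so splitting an id on every "-" would
shred the key. A quick sketch (not code from the patch; the module
name and the "greeting" id are made up for illustration):

```erlang
%% Sketch only: why the id-splitting calls moved from
%% string:tokens/2 to re:split/3 with {parts, 2}.
-module(split_demo).
-export([run/0]).

run() ->
    Id = "greeting-550e8400-e29b-41d4-a716-446655440000",
    %% string:tokens/2 splits on every hyphen, mangling the UUID:
    ["greeting", "550e8400", "e29b", "41d4", "a716", "446655440000"] =
        string:tokens(Id, "-"),
    %% {parts, 2} stops after the first hyphen, so the UUID survives:
    ["greeting", "550e8400-e29b-41d4-a716-446655440000"] =
        re:split(Id, "-", [{return, list}, {parts, 2}]),
    ok.
```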

I hacked up a little integration test to be sure that things worked
the way I expected:

https://github.com/kevinmontuori/cb-keytest

All the tests succeed with the modified pgsql adapter; however, the
relational tests fail with the mock adapter. I'm not really sure what
the expected behavior there should be.

Thanks again for your time. I like CB quite a bit and it's now
solving a real business problem that we have. Some of the record
compiler code was a real eye-opener ... really nice work. Please let
me know what questions you might have.

@evanmiller
Owner

Looking good to me. Will you please add a note to README.md about how to use UUIDs?

Thanks for the detailed explanations. To answer your first question, N is the same in both places because it is a line number, which should be the same for the '-' token and 'module' token in -module(...). I will poke into your other concerns when I get a breather.
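
(For the curious, this is easy to check in the shell: with erl_scan's
default options the second element of each token tuple is just the
line number, so both tokens from a one-line attribute carry the same
N. A sketch:)

```erlang
%% With default options, erl_scan annotates tokens with the line
%% number only, so '-' and 'module' from a one-line -module
%% attribute share the same N:
{ok, [Dash, Module | _], _EndLine} = erl_scan:string("-module(foo)."),
{'-', 1} = Dash,
{atom, 1, module} = Module.
```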

@kevinmontuori

Hi Evan --

I've added a short blurb regarding Primary Keys and using the UUID type.

Regarding the process_tokens pattern, I'm seeing that N is the tuple {line_number, column_number}, so the clause never matches. It might be that I'm doing something wrong there or perhaps something changed during development. It seems not to matter, but I wanted to call it out.
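
(The mismatch is reproducible: if the scanner is started with a
{Line, Column} location, '-' and 'module' each get their own column,
so a clause binding both locations to the same N can never match. A
sketch, assuming erl_scan:string/2 with a {1, 1} start location:)

```erlang
%% With a {Line, Column} start location, each token carries its own
%% column, so the '-' and 'module' locations differ:
{ok, [Dash, Module | _], _End} = erl_scan:string("-module(foo).", {1, 1}),
{'-', {1, 1}} = Dash,
{atom, {1, 2}, module} = Module.
```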

@evanmiller
Owner

Thanks. I don't think everything in the blurb applies to all DB adapters, but I'll get this merged in and fix it up later.

@evanmiller evanmiller merged commit 7987305 into from
Commits on Sep 17, 2012
  1. @kevinmontuori
Commits on Sep 19, 2012
  1. @kevinmontuori
  2. @kevinmontuori

    removed uuid library: without a random seed v4 UUIDs were coming up identical; switching to avtobiff's erlang-uuid lib

    kevinmontuori authored
  3. @kevinmontuori
Commits on Sep 20, 2012
  1. @kevinmontuori
  2. @kevinmontuori
Commits on Sep 28, 2012
  1. @kevinmontuori

    added PK blurb to readme

    kevinmontuori authored
18 README.md
@@ -222,3 +222,21 @@ Pooling
BossDB uses Poolboy to create a connection pool to the database. Connection pooling
is supported with all databases.
+
+
+Primary Keys
+------------
+
+The Id field of each model is assumed to be an integer supplied by the
+database (e.g., a SERIAL type in Postgres or AUTOINCREMENT in MySQL).
+Specifying an Id value other than the atom 'id' for a new record will
+result in an error.
+
+When using the mock or pgsql adapters, the Id may have a type of
+::uuid(). This will coerce boss_db into generating a v4 UUID for the
+Id field before saving the record (in other words, the UUID is
+provided by boss_db and not by the application nor by the DB). UUIDs
+are useful PKs when data are being aggregated from multiple sources.
+
+The default Id type ::serial() may be explicitly supplied. Note that
+all Id types, valid or otherwise, pass type validation.
3  rebar.config
@@ -9,5 +9,6 @@
{mysql, ".*", {git, "git://github.com/dizzyd/erlang-mysql-driver.git", {tag, "16cae84b5e"}}},
{poolboy, ".*", {git, "git://github.com/devinus/poolboy.git", {tag, "855802e0cc"}}},
{riakc, ".*", {git, "git://github.com/basho/riak-erlang-client", {tag, "1.3.0"}}},
- {tiny_pq, ".*", {git, "git://github.com/evanmiller/tiny_pq", {tag, "HEAD"}}}
+ {tiny_pq, ".*", {git, "git://github.com/evanmiller/tiny_pq", {tag, "HEAD"}}},
+ {uuid, ".*", {git, "git://gitorious.org/avtobiff/erlang-uuid.git", "master"}}
]}.
64 src/boss_db.erl
@@ -266,36 +266,40 @@ validate_record(Record) ->
validate_record_types(Record) ->
Errors = lists:foldl(fun
({Attr, Type}, Acc) ->
- Data = Record:Attr(),
- GreatSuccess = case {Data, Type} of
- {undefined, _} ->
- true;
- {Data, string} when is_list(Data) ->
- true;
- {Data, binary} when is_binary(Data) ->
- true;
- {{{D1, D2, D3}, {T1, T2, T3}}, datetime} when is_integer(D1), is_integer(D2), is_integer(D3),
- is_integer(T1), is_integer(T2), is_integer(T3) ->
- true;
- {Data, integer} when is_integer(Data) ->
- true;
- {Data, float} when is_float(Data) ->
- true;
- {Data, boolean} when is_boolean(Data) ->
- true;
- {{N1, N2, N3}, timestamp} when is_integer(N1), is_integer(N2), is_integer(N3) ->
- true;
- {Data, atom} when is_atom(Data) ->
- true;
- {_Data, Type} ->
- false
- end,
- if
- GreatSuccess ->
- Acc;
- true ->
- [lists:concat(["Invalid data type for ", Attr])|Acc]
- end
+ case Attr of
+ id -> Acc;
+ _ ->
+ Data = Record:Attr(),
+ GreatSuccess = case {Data, Type} of
+ {undefined, _} ->
+ true;
+ {Data, string} when is_list(Data) ->
+ true;
+ {Data, binary} when is_binary(Data) ->
+ true;
+ {{{D1, D2, D3}, {T1, T2, T3}}, datetime} when is_integer(D1), is_integer(D2), is_integer(D3),
+ is_integer(T1), is_integer(T2), is_integer(T3) ->
+ true;
+ {Data, integer} when is_integer(Data) ->
+ true;
+ {Data, float} when is_float(Data) ->
+ true;
+ {Data, boolean} when is_boolean(Data) ->
+ true;
+ {{N1, N2, N3}, timestamp} when is_integer(N1), is_integer(N2), is_integer(N3) ->
+ true;
+ {Data, atom} when is_atom(Data) ->
+ true;
+ {_Data, Type} ->
+ false
+ end,
+ if
+ GreatSuccess ->
+ Acc;
+ true ->
+ [lists:concat(["Invalid data type for ", Attr])|Acc]
+ end
+ end
end, [], Record:attribute_types()),
case Errors of
[] -> ok;
22 src/boss_db_mock_controller.erl
@@ -48,14 +48,21 @@ handle_call({save_record, Record}, _From, [{Dict, IdCounter}|OldState]) ->
Type = element(1, Record),
TypeString = atom_to_list(Type),
{Id, IdCounter1} = case Record:id() of
- id -> {lists:concat([Type, "-", IdCounter]), IdCounter + 1};
+ id -> case boss_record_lib:keytype(Record) of
+ uuid -> {lists:concat([Type, "-", uuid:to_string(uuid:uuid4())]), IdCounter};
+ _ -> {lists:concat([Type, "-", IdCounter]), IdCounter + 1}
+ end;
ExistingId ->
- [TypeString, IdNum] = string:tokens(ExistingId, "-"),
- Max = case list_to_integer(IdNum) of
- N when N > IdCounter -> N;
- _ -> IdCounter
- end,
- {lists:concat([Type, "-", IdNum]), Max + 1}
+ case boss_record_lib:keytype(Record) of
+ uuid -> {ExistingId, IdCounter};
+ _ ->
+ [TypeString, IdNum] = string:tokens(ExistingId, "-"),
+ Max = case list_to_integer(IdNum) of
+ N when N > IdCounter -> N;
+ _ -> IdCounter
+ end,
+ {lists:concat([Type, "-", IdNum]), Max + 1}
+ end
end,
NewAttributes = lists:map(fun
({id, _}) ->
@@ -88,6 +95,7 @@ code_change(_OldVsn, State, _Extra) ->
handle_info(_Info, State) ->
{noreply, State}.
+
do_find(Dict, Type, Conditions, Max, Skip, SortBy, SortOrder) ->
Tail = lists:nthtail(Skip,
lists:sort(fun(RecordA, RecordB) ->
14 src/boss_news_controller.erl
@@ -54,7 +54,7 @@ handle_call({set_watch, WatchId, TopicString, CallBack, UserInfo, TTL}, From, St
(SingleTopic, {ok, StateAcc, WatchListAcc}) ->
case re:split(SingleTopic, "\\.", [{return, list}]) of
[Id, Attr] ->
- [Module, IdNum] = re:split(Id, "-", [{return, list}]),
+ [Module, IdNum] = re:split(Id, "-", [{return, list}, {parts, 2}]),
{NewState1, WatchInfo} = case IdNum of
"*" ->
SetAttrWatchers = case dict:find(Module, StateAcc#state.set_attr_watchers) of
@@ -75,7 +75,7 @@ handle_call({set_watch, WatchId, TopicString, CallBack, UserInfo, TTL}, From, St
end,
{ok, NewState1, [WatchInfo|WatchListAcc]};
_ ->
- case re:split(SingleTopic, "-", [{return, list}]) of
+ case re:split(SingleTopic, "-", [{return, list}, {parts, 2}]) of
[_Module, _IdNum] ->
IdWatchers = case dict:find(SingleTopic, State#state.id_watchers) of
{ok, Val} -> Val;
@@ -96,7 +96,7 @@ handle_call({set_watch, WatchId, TopicString, CallBack, UserInfo, TTL}, From, St
end;
(_, Error) ->
Error
- end, {ok, State, []}, re:split(TopicString, ", +", [{return, list}])),
+ end, {ok, State, []}, re:split(TopicString, ", +", [{return, list}, {parts, 2}])),
case RetVal of
ok -> {reply, RetVal, NewState#state{
watch_dict = dict:store(WatchId,
@@ -133,7 +133,7 @@ handle_call({extend_watch, WatchId}, _From, State0) ->
{reply, RetVal, NewState};
handle_call({created, Id, Attrs}, _From, State0) ->
State = prune_expired_entries(State0),
- [Module | _IdNum] = re:split(Id, "-", [{return, list}]),
+ [Module | _IdNum] = re:split(Id, "-", [{return, list}, {parts, 2}]),
PluralModel = inflector:pluralize(Module),
{RetVal, State1} = case dict:find(PluralModel, State#state.set_watchers) of
{ok, SetWatchers} ->
@@ -156,7 +156,7 @@ handle_call({created, Id, Attrs}, _From, State0) ->
{reply, RetVal, State1};
handle_call({deleted, Id, OldAttrs}, _From, State0) ->
State = prune_expired_entries(State0),
- [Module | _IdNum] = re:split(Id, "-", [{return, list}]),
+ [Module | _IdNum] = re:split(Id, "-", [{return, list}, {parts, 2}]),
PluralModel = inflector:pluralize(Module),
{RetVal, State1} = case dict:find(PluralModel, State#state.set_watchers) of
{ok, SetWatchers} ->
@@ -182,7 +182,7 @@ handle_call({deleted, Id, OldAttrs}, _From, State0) ->
{reply, RetVal, State1};
handle_call({updated, Id, OldAttrs, NewAttrs}, _From, State0) ->
State = prune_expired_entries(State0),
- [Module | _IdNum] = re:split(Id, "-", [{return, list}]),
+ [Module | _IdNum] = re:split(Id, "-", [{return, list}, {parts, 2}]),
IdWatchers = case dict:find(Id, State#state.id_attr_watchers) of
{ok, Val} -> Val;
_ -> []
@@ -242,7 +242,7 @@ future_time(TTL) ->
MegaSecs * 1000 * 1000 + Secs + TTL.
activate_record(Id, Attrs) ->
- [Module | _IdNum] = re:split(Id, "-", [{return, list}]),
+ [Module | _IdNum] = re:split(Id, "-", [{return, list}, {parts, 2}]),
Type = list_to_atom(Module),
DummyRecord = boss_record_lib:dummy_record(Type),
apply(Type, new, lists:map(fun
3  src/boss_record_compiler.erl
@@ -36,6 +36,9 @@ process_tokens([{']',_},{')',_},{dot,_}|_]=Tokens, TokenAcc, Acc) ->
process_tokens([{'-',N}=T1,{atom,N,module}=T2,{'(',_}=T3,{atom,_,_ModuleName}=T4,{',',_}=T5,
{'[',_}=T6,{var,_,'Id'}=T7|Rest], TokenAcc, []) ->
process_tokens(Rest, lists:reverse([T1, T2, T3, T4, T5, T6, T7], TokenAcc), []);
+process_tokens([{'-',_N}=T1,{atom,_,module}=T2,{'(',_}=T3,{atom,_,_ModuleName}=T4,{',',_}=T5,
+ {'[',_}=T6,{var,_,'Id'}=T7,{'::',_},{atom,_,VarType},{'(',_},{')',_}|Rest], TokenAcc, []) ->
+ process_tokens(Rest, lists:reverse([T1, T2, T3, T4, T5, T6, T7], TokenAcc), [{'Id', VarType}]);
process_tokens([{',',_}=T1,{var,_,VarName}=T2,{'::',_},{atom,_,VarType},{'(',_},{')',_}|Rest], TokenAcc, Acc) ->
process_tokens(Rest, lists:reverse([T1, T2], TokenAcc), [{VarName, VarType}|Acc]);
process_tokens([H|T], TokenAcc, Acc) ->
9 src/boss_record_lib.erl
@@ -6,10 +6,12 @@
dummy_record/1,
attribute_names/1,
attribute_types/1,
+ keytype/1,
convert_value_to_type/2,
ensure_loaded/1]).
-define(MILLION, 1000000).
+-define(DEFAULT_KEYTYPE, serial).
run_before_hooks(Record, true) ->
run_hooks(Record, element(1, Record), before_create);
@@ -52,6 +54,13 @@ attribute_types(Module) ->
DummyRecord = dummy_record(Module),
DummyRecord:attribute_types().
+keytype(Module) when is_atom(Module) ->
+ proplists:get_value(id, attribute_types(Module), ?DEFAULT_KEYTYPE);
+keytype(Module) when is_list(Module) ->
+ proplists:get_value(id, attribute_types(list_to_atom(Module)), ?DEFAULT_KEYTYPE);
+keytype(Record) when is_tuple(Record) andalso is_atom(element(1, Record)) ->
+ proplists:get_value(id, Record:attribute_types(), ?DEFAULT_KEYTYPE).
+
ensure_loaded(Module) ->
case code:ensure_loaded(Module) of
{module, Module} ->
32 src/db_adapters/boss_db_adapter_pgsql.erl
@@ -102,12 +102,13 @@ delete(Conn, Id) when is_list(Id) ->
save_record(Conn, Record) when is_tuple(Record) ->
case Record:id() of
id ->
- Type = element(1, Record),
- Query = build_insert_query(Record),
+ Record1 = maybe_populate_id_value(Record),
+ Type = element(1, Record1),
+ Query = build_insert_query(Record1),
Res = pgsql:equery(Conn, Query, []),
case Res of
{ok, _, _, [{Id}]} ->
- {ok, Record:set(id, lists:concat([Type, "-", integer_to_list(Id)]))};
+ {ok, Record1:set(id, lists:concat([Type, "-", id_value_to_string(Id)]))};
{error, Reason} -> {error, Reason}
end;
Defined when is_list(Defined) ->
@@ -119,6 +120,7 @@ save_record(Conn, Record) when is_tuple(Record) ->
end
end.
+
push(Conn, Depth) ->
case Depth of 0 -> pgsql:squery(Conn, "BEGIN"); _ -> ok end,
pgsql:squery(Conn, "SAVEPOINT savepoint"++integer_to_list(Depth)).
@@ -142,9 +144,24 @@ transaction(Conn, TransactionFun) ->
% internal
+id_value_to_string(Id) when is_atom(Id) -> atom_to_list(Id);
+id_value_to_string(Id) when is_integer(Id) -> integer_to_list(Id);
+id_value_to_string(Id) when is_binary(Id) -> binary_to_list(Id);
+id_value_to_string(Id) -> Id.
+
infer_type_from_id(Id) when is_list(Id) ->
- [Type, TableId] = string:tokens(Id, "-"),
- {list_to_atom(Type), type_to_table_name(Type), list_to_integer(TableId)}.
+ [Type, TableId] = re:split(Id, "-", [{return, list}, {parts, 2}]),
+ IdValue = case boss_record_lib:keytype(Type) of
+ uuid -> TableId;
+ serial -> list_to_integer(TableId)
+ end,
+ {list_to_atom(Type), type_to_table_name(Type), IdValue}.
+
+maybe_populate_id_value(Record) ->
+ case boss_record_lib:keytype(Record) of
+ uuid -> Record:set(id, uuid:to_string(uuid:uuid4()));
+ _ -> Record
+end.
type_to_table_name(Type) when is_atom(Type) ->
type_to_table_name(atom_to_list(Type));
@@ -153,14 +170,14 @@ type_to_table_name(Type) when is_list(Type) ->
integer_to_id(Val, KeyString) ->
ModelName = string:substr(KeyString, 1, string:len(KeyString) - string:len("_id")),
- ModelName ++ "-" ++ integer_to_list(Val).
+ ModelName ++ "-" ++ id_value_to_string(Val).
activate_record(Record, Metadata, Type) ->
AttributeTypes = boss_record_lib:attribute_types(Type),
apply(Type, new, lists:map(fun
(id) ->
Index = keyindex(<<"id">>, 2, Metadata),
- atom_to_list(Type) ++ "-" ++ integer_to_list(element(Index, Record));
+ atom_to_list(Type) ++ "-" ++ id_value_to_string(element(Index, Record));
(Key) ->
KeyString = atom_to_list(Key),
Index = keyindex(list_to_binary(KeyString), 2, Metadata),
@@ -197,6 +214,7 @@ build_insert_query(Record) ->
TableName = type_to_table_name(Type),
{Attributes, Values} = lists:foldl(fun
({id, V}, {Attrs, Vals}) when is_integer(V) -> {[atom_to_list(id)|Attrs], [pack_value(V)|Vals]};
+ ({id, V}, {Attrs, Vals}) when is_list(V) -> {[atom_to_list(id)|Attrs], [pack_value(V)|Vals]};
({id, _}, Acc) -> Acc;
({_, undefined}, Acc) -> Acc;
({A, V}, {Attrs, Vals}) ->