Hello, I want to report a bug. I have a sentence like this:
As a administrator, I want to refund sponsorship money that was processed via stripe, so that people get their monies back.
When I try to convert it to CoNLL, the span is not converted correctly. I debugged the library and found that the offsets are wrong. Here is the output of the offsets:
As 0
a 3
administrator, 3
I 20
want 22
to 25
refund 30
sponsorship 37
money 49
that 55
was 60
processed 64
via 74
stripe, 78
so 86
that 89
people 94
get 101
their 103
monies 111
back. 118
As you can see, in the second and third lines the offset is the same (3 and 3, while it should be 3 and 5). This behavior makes the span undetected in the conversion process.
It seems that the `get_offsets` function in `utils.py` compares the characters one by one to decide the offsets.
def get_offsets(
        text: str,
        tokens: List[str],
        start: Optional[int] = 0) -> List[int]:
    """Calculate char offsets of each token.

    Args:
        text (str): The string before tokenization.
        tokens (List[str]): The list of tokens.
        start (Optional[int]): The start position.

    Returns:
        (List[int]): The list of offsets.
    """
    offsets = []
    i = 0
    for token in tokens:
        for j, char in enumerate(token):
            while char != text[i]:
                i += 1
            if j == 0:
                offsets.append(i + start)
    return offsets
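To make the failure concrete, here is the function above run on a prefix of the sample sentence (a self-contained sketch; the tokenization is assumed from the offset dump above):

```python
from typing import List, Optional


def get_offsets(text: str,
                tokens: List[str],
                start: Optional[int] = 0) -> List[int]:
    # Buggy version: i is never advanced past a matched character, so a
    # token starting with the previous token's last character matches at
    # the stale cursor position instead of its real location.
    offsets = []
    i = 0
    for token in tokens:
        for j, char in enumerate(token):
            while char != text[i]:
                i += 1
            if j == 0:
                offsets.append(i + start)
    return offsets


text = "As a administrator, I want to refund"
tokens = ["As", "a", "administrator,", "I", "want", "to", "refund"]
print(get_offsets(text, tokens))  # → [0, 3, 3, 20, 22, 25, 30]
```

Note that both "administrator," (3 instead of 5) and "to" (25 instead of 27) inherit a stale offset, exactly as in the dump above: "a"/"administrator," share an "a", and "want"/"to" share a "t".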
This is a problem whenever the last character of the previous token is the same as the first character of the next token. I'm still looking for a fix.
Cheers
I managed to solve the problem with this modification in `utils.py`. It basically checks what I mentioned in the issue above: whether the last character of the previous token is the same as the first character of the next token.
def get_offsets(
        text: str,
        tokens: List[str],
        start: Optional[int] = 0) -> List[int]:
    """Calculate char offsets of each token.

    Args:
        text (str): The string before tokenization.
        tokens (List[str]): The list of tokens.
        start (Optional[int]): The start position.

    Returns:
        (List[int]): The list of offsets.
    """
    offsets = []
    i = 0
    same_char = False
    for k, token in enumerate(tokens):
        if token[0] == tokens[k - 1][-1]:
            same_char = True
        else:
            same_char = False
        for j, char in enumerate(token):
            while char != text[i] or same_char:
                i += 1
                same_char = False
            if j == 0:
                offsets.append(i + start)
    return offsets
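Running the patched version on the same sample (the function is repeated here so the snippet is self-contained) now yields the expected offsets:

```python
from typing import List, Optional


def get_offsets(text: str,
                tokens: List[str],
                start: Optional[int] = 0) -> List[int]:
    # Patched version: when a token starts with the previous token's last
    # character, force the cursor forward one step before matching, so the
    # match cannot land on the stale position.
    offsets = []
    i = 0
    same_char = False
    for k, token in enumerate(tokens):
        if token[0] == tokens[k - 1][-1]:
            same_char = True
        else:
            same_char = False
        for j, char in enumerate(token):
            while char != text[i] or same_char:
                i += 1
                same_char = False
            if j == 0:
                offsets.append(i + start)
    return offsets


text = "As a administrator, I want to refund"
tokens = ["As", "a", "administrator,", "I", "want", "to", "refund"]
print(get_offsets(text, tokens))  # → [0, 3, 5, 20, 22, 27, 30]
```

"administrator," now resolves to 5 and "to" to 27, which are the true character positions in the text.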
I don't know if it's the best solution, but it works for me, and luckily my NER model improved :)
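For comparison, a simpler approach (a sketch, not the library's code) sidesteps the character-by-character scan entirely by locating each whole token with `str.find` and then advancing the cursor past it. It assumes the tokens appear in the text in order, which holds for tokenizer output:

```python
from typing import List, Optional


def get_offsets(text: str,
                tokens: List[str],
                start: Optional[int] = 0) -> List[int]:
    # Alternative sketch: find each whole token starting from the current
    # cursor, then move the cursor past it, so a shared character between
    # adjacent tokens can never cause a match at a stale position.
    # Assumes every token occurs in the text in order (str.find would
    # return -1 otherwise).
    offsets = []
    i = 0
    for token in tokens:
        i = text.find(token, i)
        offsets.append(i + start)
        i += len(token)
    return offsets


text = "As a administrator, I want to refund"
tokens = ["As", "a", "administrator,", "I", "want", "to", "refund"]
print(get_offsets(text, tokens))  # → [0, 3, 5, 20, 22, 27, 30]
```

Matching whole tokens rather than single characters also removes the need for the `same_char` bookkeeping.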