v0.3.0+build.1884
Improvements:

- 1.1, 1.7, 4.9, 6.30: revised the explanations of "君子", "贤", "士", and "圣".
- 1.5: added an explanation of "用".
- 1.8, 7.38: expanded the explanations of "威" and "猛".
- 1.13: slightly expanded the discussion of "因不失其亲".
- 2.3: added example sentences for "齐".
- 3.1: added an explanation of "庭".
- 5.9: added cross-references to related chapters.
- 5.24: added a reading of "孰谓微生高直" with reference quotations; expanded the explanation and examples for "或乞醯焉".
- 6.2: added reference quotations for "居敬" and related terms.
- 6.11: added explanations of "在陋巷" and "陋".
- 6.18: added an explanation of "胜"; revised the explanation of "史".
- 6.26: added a detailed explanation of "何为其然也".
- 7.14: added background on "子在齐闻《韶》".
- 8.10: added a comparison of 疾, 病, and 患.
- 10.27: expanded the explanations of several concepts relating to ancient writing systems.
- 11.14: added an explanation of "长府", plus a reference quotation from the biography of 郎顗.
- 11.22: added character sketches of 冉求 and 子路.
- 12.9: added an explanation of "年饥", plus a reference quotation from the 《吕氏春秋》.
- 12.20: expanded the explanations, examples, and references for "达" and "闻".
- 12.21: added a comparison of "忿" and "愤".
- 13.11, 13.12: expanded the explanations and reference quotations.
- 14.13: expanded the explanation and example sentences for "岂其然乎".
- 14.21: added an explanation of "讨".
- 14.26: expanded the explanation of "思不出其位"; added the example of 陈平.
- 14.44: added an explanation of "速成".
- 15.34: added a reference to 俞大猷 for "小知、大受".
- 16.14: added explanations of 夫人, 小童, and 寡小君.
- 17.5: improved the explanation of "东周".
- 17.9: slightly revised the explanation.
- 18.7: revised the explanation of "蓧"; added an explanation of "食之".
- Source text: in 15.27, changed the comma after "巧言乱德" to a full stop.
- Implementation: \lyextended now uses Source Han Serif (思源宋体).
- Implementation: improved the logic of validate_lyrefs() and simplified SPECIAL_LYQS.
- Implementation: formatted the Python code with yapf (see the sketch after this list).
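
A minimal sketch of the yapf step, using yapf's library API; the sample source string and the 'pep8' style name are illustrative assumptions, not this project's actual configuration:

```python
# Minimal sketch: reformat a source string with yapf's library API.
# The sample source and the 'pep8' style are illustrative assumptions,
# not this project's actual configuration.
from yapf.yapflib.yapf_api import FormatCode

source = "def f( a,b ):\n  return a+ b\n"

# In the yapf versions current around this commit (mid-2018), FormatCode
# returns a (formatted_source, changed) tuple.
formatted, changed = FormatCode(source, style_config='pep8')
if changed:
    print(formatted, end='')
```

The equivalent command-line invocation would be along the lines of `yapf --in-place --recursive .`, which is presumably how the reformatted hunks below were produced.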

Fixes:

- 4.1: "Jim John" should be "Jim Rohn". Our sincere apologies!
- 5.5: corrected the explanation of "御".
- Moved 晏平仲 from the characters table forward into 5.17.
abolybook committed Aug 4, 2018
1 parent 571fad1 commit c29c0a7
Showing 18 changed files with 705 additions and 444 deletions.
4 changes: 2 additions & 2 deletions LICENSE.md
@@ -1,5 +1,5 @@
Unless otherwise specified, everything in this repository is covered by the Creative Commons BY-NC-ND license:

[![Creative Commons License](https://licensebuttons.net/l/by-nc-nd/3.0/88x31.png)](http://creativecommons.org/licenses/by-nc-nd/3.0/)
[![Creative Commons License](https://licensebuttons.net/l/by-nc-nd/3.0/88x31.png)](https://creativecommons.org/licenses/by-nc-nd/3.0/)

中国大陆版:http://creativecommons.org/licenses/by-nc-nd/3.0/cn/
中国大陆版:https://creativecommons.org/licenses/by-nc-nd/3.0/cn/
8 changes: 4 additions & 4 deletions README.md
@@ -11,9 +11,9 @@ If you want to:

don't hesitate to dive in!

Any feedback is welcome, in Chinese, English, or one that [Google Translate](http://translate.google.com) handles nicely.
Any feedback is welcome, in Chinese, English, or one that [Google Translate](https://translate.google.com) handles nicely.

Web site (Chinese): http://www.abolybook.org
Web site (Chinese): https://www.abolybook.org

------

@@ -25,11 +25,11 @@ Web site (Chinese): http://www.abolybook.org
下载:

- GitHub:https://github.com/abolybook/aboly/releases
- 百度网盘备份:http://pan.baidu.com/s/1eRp6vMq
- 百度网盘备份:https://pan.baidu.com/s/1eRp6vMq

编译用到了:

- xelatex,调用的包和字体,见 style.tex
- Python,2.7和3皆可,调用了 [pinyin](https://pypi.python.org/pypi/pinyin/)

在线版(更新较频繁):http://www.abolybook.org
在线版(更新较快):https://www.abolybook.org
2 changes: 1 addition & 1 deletion aboly.tex
@@ -20,7 +20,7 @@

\hypersetup{
pdftitle=一瓶论语,
pdfauthor=孔子 原著,abolybook@gmail.com 译评及制作,
pdfauthor=孔子 原著,abolybook@gmail.com 译评制作,
pdfsubject=,
pdfkeywords={\versioninfoaboly},
pdfcreator=LaTeX with Python,
13 changes: 8 additions & 5 deletions autocharacters.py
@@ -31,14 +31,15 @@ def get_charname_blobs(content):

lines = content.splitlines(True)
c2bs = {}
for i, line in enumerate(lines):
for line in lines:
if line.lstrip().startswith(CHAPTER_PREFIX):
chapter_count += 1
blob_count = 0
elif line.lstrip().startswith(BLOB_PREFIX):
blob_count += 1
left = content.index(line)
assert content[left + len(line):].find(line) == -1 # no duplicate \lyblob line
assert content[left + len(line):].find(
line) == -1 # no duplicate \lyblob line
title = extract_blob_title(content[left:])
mats = CHARNAME_PAT.finditer(title)
if mats:
@@ -66,14 +67,15 @@ def append_annotations(content, charname_blobs):
removecomment_pat = re.compile(r'(?<!\\)%.+', re.M)
content = removecomment_pat.sub('', content)

charlabel_pat = re.compile(r'(?:^\\lypdfbookmark)|(?:^\\lylabel\{(\w+)\})', re.M)
charlabel_pat = re.compile(r'(?:^\\lypdfbookmark)|(?:^\\lylabel\{(\w+)\})',
re.M)
skip_labels = set(('zisi', 'shaogong', 'boyi', 'lijiliyun'))
segs = []
pos = 0
label, copy = '', True
for mat in charlabel_pat.finditer(content):
start = mat.start()
seg = content[pos: start]
seg = content[pos:start]
pos = start
if copy:
segs.append(seg)
@@ -112,7 +114,8 @@ def main():
characters_content = fin.read()

charname_blobs = get_charname_blobs(body)
auto_characters_content = append_annotations(characters_content, charname_blobs)
auto_characters_content = append_annotations(characters_content,
charname_blobs)

with io.open(CHARACTERS_OUT, 'w', encoding=ENCODING) as fout:
print(auto_characters_content, file=fout)
3 changes: 2 additions & 1 deletion autolybody.py
@@ -17,7 +17,8 @@ def main():
body = re.sub(r'\\lychar\{(.+?)\}', r'\1', body)
body = re.sub(r'\\lycharlink\{.+\}\{(.+?)\}', r'\1', body)
body = re.sub(r'\\lylink\{.+\}\{(.+?)\}', r'\1', body)
body = '\n\n\n'.join(re.findall(r'\\(?:chapter|lyblob)a?\{(?:.+?)\}', body, re.S))
body = '\n\n\n'.join(
re.findall(r'\\(?:chapter|lyblob)a?\{(?:.+?)\}', body, re.S))
body = body.replace(r'\lybloba', r'\lyblob')
body = body.replace(r'\lyblob', r'\lyblobraw')

32 changes: 17 additions & 15 deletions autotopics.py
@@ -7,7 +7,6 @@
import re
import sys


BODY = 'body.tex'
TOPICS = 'topics.tex'
AUTOTOPICS = 'autotopics.tex'
@@ -31,7 +30,8 @@ def hack_pinyin():
with io.open(dat) as f:
for line in f:
k, v = line.strip().split('\t')
pinyin.pinyin.pinyin_dict[k] = v.lower().split(" ")[0] # don't strip tones
pinyin.pinyin.pinyin_dict[k] = v.lower().split(" ")[
0] # don't strip tones


def extract_topics():
@@ -63,7 +63,10 @@ def extract_topics():
match_count += 1
blob_label = BLOB_TEMPLATE % (chapter_count, blob_count)
if not mat.group(1):
print('Line %d is an empty keyword list' % (lineno+1), line, file=sys.stderr)
print(
'Line %d is an empty keyword list' % (lineno + 1),
line,
file=sys.stderr)
continue
for c in mat.group(1).split(sep):
if c not in topics:
@@ -72,7 +75,8 @@
topics[c].append(blob_label)
totalblobs = content.count(BLOB_PREFIX)
if match_count != totalblobs:
print('Found %d keyword lists for %d blobs' % (match_count, totalblobs))
print(
'Found %d keyword lists for %d blobs' % (match_count, totalblobs))
return topics


@@ -83,30 +87,27 @@ def extract_topics():
'礼': 'topicli3',
'学': 'topicxue2',
'政': 'topiczheng4',

'孝': 'topicxiao4',
'义': 'topicyi4',
'信': 'topicxin4',
'友': 'topicyou3',
'恕': 'topicshu4',

'敬': 'topicjing4',
'谦': 'topicqian1',
'温': 'topicwen1',
'耻': 'topicchi3',

'文': 'topicwen2',
'音乐': 'topicyinyue', # 乐
'智': 'topiczhi4a',
'志': 'topiczhi4',

'德': 'topicde2',
'忠': 'topiczhong1',
'用人': 'topicyongren', # 贤
'惠': 'topichui4',
'廉': 'topiclian2',

# Others.
'直': 'topiczhi2',
'未见': 'topicweijian',
'快乐': 'topickuaile',
'人我': 'topicrenwo',
@@ -128,23 +129,22 @@ def dump_topics(topics):
segment = sorted_counts[start:stop]
segment.sort(key=lambda x: pinyin.get(x[0]))
sorted_counts[start:stop] = segment
start, stop = stop, stop+1
start, stop = stop, stop + 1

# Dump topics.
with io.open(TOPICS, encoding=ENCODING) as fin:
template = fin.read()
with io.open(AUTOTOPICS, 'w', encoding=ENCODING) as fout:
insersion = template.index(INSERSION_POINT)
prolog, epilog = template[:insersion], template[insersion+len(INSERSION_POINT):]
prolog, epilog = template[:insersion], template[insersion +
len(INSERSION_POINT):]
fout.write(prolog)
for i, x in enumerate(sorted_counts):
k, v = x
if v <= SHORT_TOPIC_THRESHOLD:
break
topic = TOPIC_TEMPLATE % (
k, v,
' '.join(
BLOB_REF % v for v in topics[k]))
topic = TOPIC_TEMPLATE % (k, v, ' '.join(BLOB_REF % v
for v in topics[k]))
if k in topic_labels:
topic = '\\lylabel{%s}\n%s' % (topic_labels[k], topic)
fout.write(topic)
@@ -155,12 +155,14 @@ def dump_topics(topics):
sts = []
while i < sclen and sorted_counts[i][1] == count:
k = sorted_counts[i][0]
sts.append(SHORT_TOPIC_T % (k, ' '.join(BLOB_REF % v for v in topics[k])))
sts.append(SHORT_TOPIC_T % (k, ' '.join(BLOB_REF % v
for v in topics[k])))
i += 1
line = SHORT_TOPICS_TEMPLATE % (count, SHORT_TOPICS_SEP.join(sts))
fout.write(line)
fout.write(epilog)


# # Dump summary.
# summary = '\n'.join('%s\t%d' % (v[0], v[1]) for v in sorted_counts)
# with io.open(SUMMARY, 'w', encoding=ENCODING) as f:
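
A note on the dump_topics hunk above: topics are kept in descending order of blob count, and the segment loop then re-sorts topics with equal counts alphabetically by pinyin. The same ordering can be sketched with a single composite sort key, using the same pinyin package the script imports; the sample counts here are hypothetical, not taken from the book:

```python
# Sketch of the tie-breaking order used in dump_topics: sort topics by
# descending blob count, breaking ties alphabetically by pinyin.
# The sample counts are hypothetical.
import pinyin  # the package listed in README.md

counts = {'仁': 58, '礼': 41, '学': 41, '孝': 14}

sorted_counts = sorted(counts.items(),
                       key=lambda kv: (-kv[1], pinyin.get(kv[0])))

print(sorted_counts)  # [('仁', 58), ('礼', 41), ('学', 41), ('孝', 14)]
```

A composite key like this collapses the hunk's two-stage segment sort into a single pass.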
