title:     UNI DKE U08/6: NormalizeEncodings
date:      2016/06/20 00:05:19
author:    Linux-Fan, (
keywords:  NormalizeEncodings
copyright: Copyright (c) 2016 For further info send an e-mail to

This program demonstrates some of the issues that arise when working with heterogeneous text files which have different encodings and line endings and supply a set of fixed fields, sometimes in slightly different spellings.

This was designed as a solution to an exercise presented in the DKE course at TU Darmstadt in summer 2016.


If a make tool is available, you can compile this via

$ make

(If you want a JAR file, use $ make jar.)

Otherwise invoke the Java compiler directly:

$ javac *.java

This program requires Java 7 or higher (it has only been tested with Java 7).


To make use of the program, you need to create a set of input text files and make a ZIP archive from them.

For instance, you can use the files supplied with the program by invoking (provided 7-Zip is in your $PATH)

$ 7z a -tzip encoding_tests

Then invoke the program as follows:

$ java NormalizeEncodings result.txt

See how_to_start_java_programs(37) if you need more help with getting this to work; NormalizeEncodings is the main class here.

The result of the program execution will then be written to a new file called result.txt (existing files are overwritten without notice).
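As an illustration of the input side of this workflow, the following sketch shows how a program like this one could read every entry of the input ZIP archive into memory using the standard java.util.zip API. The class and method names here are assumptions for illustration, not the repository's actual code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipReaderSketch {
	/** Reads every file entry of the given ZIP archive fully into memory,
	 *  mapping entry name to raw bytes (decoding happens later, once the
	 *  encoding of each file has been determined). */
	public static Map<String, byte[]> readAll(String zipPath) throws IOException {
		Map<String, byte[]> contents = new LinkedHashMap<>();
		try (ZipFile zip = new ZipFile(zipPath)) {
			Enumeration<? extends ZipEntry> entries = zip.entries();
			while (entries.hasMoreElements()) {
				ZipEntry entry = entries.nextElement();
				if (entry.isDirectory())
					continue;
				try (InputStream in = zip.getInputStream(entry)) {
					ByteArrayOutputStream buf = new ByteArrayOutputStream();
					byte[] chunk = new byte[8192];
					int read;
					while ((read = in.read(chunk)) != -1)
						buf.write(chunk, 0, read);
					contents.put(entry.getName(), buf.toByteArray());
				}
			}
		}
		return contents;
	}
}
```

Reading entries as raw bytes (rather than through a Reader) is deliberate: the correct charset is not known until the bytes have been inspected.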


What is this program all about?

Basically, the exercise was to write an application which could read text files gathered from multiple operating systems and from many different users, all in the same language (German) but with different encodings, line endings, etc. Successful reading and processing of all these files is demonstrated by writing all the files' contents to a result file which is encoded in UTF-8 and consistently uses Unix line endings.
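The output side of that goal, writing UTF-8 with consistent Unix line endings, can be sketched in a few lines. This is a minimal illustration, not the repository's actual implementation; the class and method names are assumptions:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class NewlineNormalizer {
	/** Converts Windows (\r\n) and old Mac (\r) line endings to Unix (\n).
	 *  \r\n must be replaced first, otherwise it would become \n\n. */
	public static String toUnixNewlines(String text) {
		return text.replace("\r\n", "\n").replace("\r", "\n");
	}

	/** Writes the given text to a file as UTF-8 with Unix line endings.
	 *  Like the real program, this overwrites an existing file. */
	public static void writeNormalized(String path, String text) throws IOException {
		try (Writer out = Files.newBufferedWriter(Paths.get(path), StandardCharsets.UTF_8)) {
			out.write(toUnixNewlines(text));
		}
	}
}
```

Both APIs used here (StandardCharsets, Files.newBufferedWriter) are available from Java 7 onward, matching the program's stated requirement.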

In this concrete exercise, all data has to be in the text format "defined" by encoding_tests/vorlage.txt, which specifies six sections introduced with > and having fixed names. Content for each section is written below its header, one entry per line. As this definition is rather informal and it was initially not even suggested that the data from these files would be processed automatically, the input files supplied are all slight variations of vorlage.txt: some leave out part of the section titles, spell them differently, change punctuation, add or remove newlines, introduce bullet points, use different line endings, etc. Most of these differences are normalized by this solution, resulting in data in a consistent format specified in a later exercise.
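As an illustration of that informal format, a simple parser along the following lines could split such a file into sections. The class name, method name, and chosen data structure are assumptions for this sketch, not the repository's actual code, and it deliberately ignores the spelling variations the real solution has to handle:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SectionParser {
	/** Splits text into sections introduced by lines starting with '>'.
	 *  Each non-empty line below a section header becomes one entry. */
	public static Map<String, List<String>> parse(String text) {
		Map<String, List<String>> sections = new LinkedHashMap<>();
		List<String> current = null;
		for (String line : text.split("\n")) {
			String trimmed = line.trim();
			if (trimmed.startsWith(">")) {
				current = new ArrayList<>();
				sections.put(trimmed.substring(1).trim(), current);
			} else if (current != null && !trimmed.isEmpty()) {
				current.add(trimmed);
			}
		}
		return sections;
	}
}
```

A LinkedHashMap preserves the order in which sections appear in the input, which matters if the output format expects the sections in a fixed sequence.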


Encoding is about the second-worst of these issues (the worst being, of course, handling dates and times correctly). Therefore, this solution is, like just about every solution to such a task, incomplete.

Be aware that the most notable limitation of this concrete implementation is its restriction to processing German (and probably English) texts only. Also, the list of detected encodings is very short: Unicode and Windows encodings are recognized, nothing more.

Also, this program is slow: it reads all files sequentially into memory and then looks for patterns to identify encodings in a very simple and inefficient way. If you intend to introduce something similar to this implementation in your application, be warned that it is likely to hurt performance.
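To make the "short list of detected encodings" concrete, here is a minimal sketch of the general technique: check for a byte-order mark, then validate the bytes as UTF-8, and fall back to a Windows codepage otherwise. The class and method names are assumptions, and the real program's detection logic may differ:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingGuess {
	/** Guesses a charset: BOM first, then a UTF-8 validity check,
	 *  falling back to windows-1252 (a common default for German text). */
	public static Charset guess(byte[] data) {
		if (data.length >= 3 && (data[0] & 0xFF) == 0xEF
				&& (data[1] & 0xFF) == 0xBB && (data[2] & 0xFF) == 0xBF)
			return StandardCharsets.UTF_8;
		if (data.length >= 2 && (data[0] & 0xFF) == 0xFF && (data[1] & 0xFF) == 0xFE)
			return StandardCharsets.UTF_16LE;
		if (data.length >= 2 && (data[0] & 0xFF) == 0xFE && (data[1] & 0xFF) == 0xFF)
			return StandardCharsets.UTF_16BE;
		if (looksLikeUtf8(data))
			return StandardCharsets.UTF_8;
		return Charset.forName("windows-1252");
	}

	/** Rough check: every multi-byte sequence must be well-formed UTF-8. */
	private static boolean looksLikeUtf8(byte[] data) {
		int i = 0;
		while (i < data.length) {
			int b = data[i] & 0xFF;
			int len = b < 0x80 ? 1 : b < 0xC0 ? 0
					: b < 0xE0 ? 2 : b < 0xF0 ? 3 : b < 0xF8 ? 4 : 0;
			if (len == 0 || i + len > data.length)
				return false;
			for (int j = 1; j < len; j++)
				if ((data[i + j] & 0xC0) != 0x80)
					return false;
			i += len;
		}
		return true;
	}
}
```

Note the inherent ambiguity this sketch shares with the real program: pure ASCII input passes the UTF-8 check, and any byte sequence that happens to be valid UTF-8 will be classified as such even if it was meant as windows-1252.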


Reads files in the exercise format and in multiple encodings, and writes the result in another exercise format, encoded in UTF-8 with Unix newlines.






