- Roland Schäfer’s academic CV (German)
- Roland Schäfer’s publications PDF
- List of courses taught by Roland Schäfer (German)
Updated: 1 December 2019
Probabilistic German Morphosyntax is a sequence of papers with a methodological introduction, which together form my cumulative Habilitation (the cumulative form of the second dissertation in German-speaking academic systems). On this basis, I was awarded the venia legendi for German and General Linguistics by the Faculty of Language, Literature and Humanities (Sprach- und literaturwissenschaftliche Fakultät) at Humboldt-Universität zu Berlin on 10 April 2019.
Work on this project at Freie Universität Berlin, German Grammar Group / German and Dutch Philology, is supported by German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) grant SCHA1916/1-1.
Publications as of January 2019:
Software/data releases as of January 2019:
Principal investigator: Roland Schäfer
Funding amount: €286,100
Runtime: January 2015 – June 2018 (interrupted April–September 2016)
Officially collaborating institutions:
ClaraX (funded by the German Research Foundation through grant SCHA1916/1-1, Linguistic web characterization) is the companion to the planned (but delayed) HeidiX crawler system. It performs parametrized random-walk crawls of the web graph and integrates texrex's full web-page cleaning functionality. It is purely experimental in the sense that it is designed for conducting experiments and fundamental research; it is in no way suitable for large-scale production crawling. It is released under the permissive 2-clause BSD license.
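The idea of a random-walk crawl of the web graph can be sketched as follows. This is a minimal illustrative sketch, not the actual ClaraX implementation (which is not described in detail here); the function names, the naive regex-based link extraction, and the restart probability are all assumptions introduced for the example.

```python
import random
import re
from urllib.parse import urljoin
from urllib.request import urlopen

def extract_links(html, base_url):
    """Collect absolute http(s) out-links from a page.

    Naive regex-based extraction for illustration only; a real crawler
    would use a proper HTML parser.
    """
    hrefs = re.findall(r'href="([^"#]+)"', html)
    links = [urljoin(base_url, h) for h in hrefs]
    return [link for link in links if link.startswith("http")]

def random_walk(seed_url, steps, restart_p=0.15):
    """Perform a random walk on the web graph.

    At each step, follow a uniformly chosen out-link of the current
    page; with probability restart_p (or on a dead end / fetch error),
    teleport back to the seed URL. Returns the list of visited URLs.
    """
    visited = []
    url = seed_url
    for _ in range(steps):
        visited.append(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            url = seed_url  # fetch failed: restart from the seed
            continue
        links = extract_links(html, url)
        if not links or random.random() < restart_p:
            url = seed_url  # dead end or teleport step
        else:
            url = random.choice(links)
    return visited
```

The teleport step keeps the walk from getting trapped in small strongly connected regions of the web graph; the walk parameters (restart probability, step count) are exactly the kind of settings one would vary in crawling experiments.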
Because none of the available web interfaces to the IMS Open Corpus Workbench was suitable for hosting the COW web corpora, I started working on a bespoke interface called Colibri². It is strictly a spare-time project, and I do not release the code because I consider it trivialware.
COW (COrpora from the Web) is a collection of linguistically processed gigatoken web corpora covering major European languages (Dutch, English, French, German, Spanish, Swedish). The corpora currently range from 1 billion to over 10 billion tokens: the third-generation COW2014 corpora are all larger than their predecessors, some containing 10 billion tokens or more. We also focus on corpus quality in all areas (data collection as well as post-processing and linguistic annotation), not just on larger corpus sizes. To avoid legal problems with copyright claims, the published corpora are released as sentence shuffles.
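A sentence shuffle amounts to pooling all sentences of a corpus and randomly permuting them, so that documents (and hence copyrighted running text) cannot be reconstructed while sentence-level annotation remains usable. The sketch below illustrates the principle only; it is not the COW processing pipeline, and the function name and data layout are assumptions made for the example.

```python
import random

def shuffle_corpus(documents, seed=None):
    """Flatten a list of documents (each a list of sentences) into one
    sentence list and randomly permute it.

    The permutation destroys document order and context, so the
    original running text cannot be recovered from the published
    corpus, while each sentence stays intact for linguistic queries.
    """
    sentences = [sentence for doc in documents for sentence in doc]
    rng = random.Random(seed)  # fixed seed makes the shuffle reproducible
    rng.shuffle(sentences)
    return sentences

# Toy example: two short "documents".
docs = [["A b c .", "D e f ."], ["G h i ."]]
shuffled = shuffle_corpus(docs, seed=42)
```

Note that the shuffled output contains exactly the same sentences as the input, just in a different order and without document boundaries.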
Please go to the web page of COW (COrpora from the Web).