EC project "Review of Historical Seismicity in Europe" (RHISE) 1989-1993





Massimiliano Stucchi *
* Istituto di Ricerca sul Rischio Sismico, CNR, via Ampère 56, 20131 Milano, Italy.

Recommendations for the compilation
of a European parametric earthquake catalogue,
with special reference to historical records


Foreword
Though the scope of the RHISE project was not to review parametric catalogues, these recommendations partly follow from the results of the project. They apply to earthquake data derived from written accounts (historical records), regardless of whether they were produced 10 or 1000 years ago. However, as 20th century macroseismic and instrumental data are to be dealt with together, earthquakes before 1900 are the main concern here.

Parametric catalogues and historical records
Users often treat parametric catalogues as primary data. It is useful to recall that parametric catalogues are, in their turn, the result of elaborations performed on other data. This elaboration, well known in the instrumental case, where waveforms are the primary data, is similar in the macroseismic case, where the primary data are historical records or macroseismic questionnaires. Their processing can be divided into four steps (Fig. 1):

a historical sources are investigated: earthquake records (historical observations) are found and located in time and space;
b the investigator decides which records belong to the same event and "builds up" an earthquake: To is assessed;
c earthquake records are interpreted in terms of macroseismic intensity, producing intensity data points (macroseismic observations);
d focal parameters (lo, fo, ho, Io, Mo, etc.) are evaluated, according to some rules, and catalogue records are compiled.

For macroseismic questionnaires, step a does not exist in principle and step b introduces few errors.
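The four steps above can be read as a minimal data pipeline. The structures and field names below are illustrative assumptions for clarity, not a standard proposed by this paper:

```python
from dataclasses import dataclass, field

@dataclass
class HistoricalRecord:            # step a: a record located in time and space
    source: str
    date_original: str             # timing as given by the source
    place: str

@dataclass
class IntensityDataPoint:          # step c: a record interpreted as intensity
    locality: str
    lat: float
    lon: float
    intensity: int                 # e.g. an MSK degree

@dataclass
class Earthquake:                  # step b: records grouped into one event
    To: str                        # origin time assessed by the investigator
    records: list = field(default_factory=list)
    observations: list = field(default_factory=list)

def catalogue_record(eq):
    # step d: a deliberately naive parameterisation - Io taken as the
    # highest observed intensity, plus the number of supporting observations
    Io = max(o.intensity for o in eq.observations)
    return {"To": eq.To, "Io": Io, "n_obs": len(eq.observations)}
```

The point of the sketch is that each catalogue record is the end product of steps a-d, and its reliability depends on every one of them.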

The real difference between macroseismic and instrumental data is that waveforms (step a) are more easily available, and that the procedures by which instrumental data are processed, from step b to step d, are known and standard, at least to a certain extent: instrumental catalogue records are therefore comparable and their reliability can be assessed. On the contrary, in the case of macroseismic data these steps are not performed according to standard procedures; moreover, the procedures are often simply unknown. Under such circumstances, catalogue records corresponding to different datasets and procedures cannot be easily compared.

Current parametric catalogues
A comprehensive analysis of parametric catalogues in Europe is beyond the scope of this paper; moreover, it appears difficult, as many of them are not published and are available only as computer printouts, without explanations.
It is worth remembering that parametric catalogues were developed in their current format of files of records around the end of the 1960s. This format was shaped around instrumental data and was designed to be a suitable input for computer routines. At that time, computer facilities did not allow the handling of many data, and syntheses were necessary: catalogue records therefore represent syntheses of earthquake data, referred to the "centre of mass".
The genesis of current historical catalogues is clear. When parametric catalogues developed, catalogue compilers started massive, sometimes careless, parameterisations of written seismological compilations, directly in terms of focal parameters; most of these parameterisations were performed according to personal, unreported procedures.
The weak points and the negative consequences of these operations have been analysed by many authors; however, it is a matter of fact that a great part of current historical catalogues still follow from those operations.

Fig. 1 - Scheme of macroseismic data processing compared to instrumental,
from level 3 to level 1a (from Stucchi and Albini, 1991).

A further problem is represented by 20th century earthquakes, for which both instrumental and macroseismic data are available; for these, compilers usually provide only one catalogue record, either instrumental or macroseismic or, more commonly, a mix of both.
In recent times, "reviewing" parametric catalogues has become a general trend. "Reviewing" means partially revising the content: updating some records with reference to recent studies, re-evaluating some parameters according to new procedures and scales. This trend follows from the need to update the dataset with the limited resources usually available. Nevertheless, partial revisions are likely to increase the inhomogeneity of a catalogue, especially when they are not performed by the same "hand". For instance, a very detailed investigation of a single earthquake may improve the knowledge of that earthquake, but also lower the global homogeneity of the catalogue.
Current parametric catalogues are today the output of several rounds of remaking, the basis of which is often unknown: in many cases, duplications or other mistakes (very frequent, for instance, in the case of magnitude evaluation from macroseismic data, or vice versa) have been propagated through the various versions of the "same" catalogue.

The European catalogue by Van Gils and Leydecker
The only parametric catalogue covering a large part of Europe and a large time-window is the one by Van Gils and Leydecker (1991), now being extended towards the East. It is indeed a useful tool, also because it allows many problems, still to be solved, to be understood. The introduction to the first version (Van Gils, 1988) carried clear views about the problems connected with the compilation of a European catalogue with respect to historical data:

II. 3 - The national catalogues.
The source for the data base in most cases are the national catalogues. Aside their state of more or less completeness, they are different in outlook from one country to another and don't contain the same seismic parameters. Some catalogues express the time of occurrence in local time, others do it in Universal time. In other catalogues two events are reported where only one should be listed, their dates making exactly the difference between the Julian and the Gregorian calendar.
When epicentres are located in the immediate vicinity of the borderline separating two countries, some of these epicentres may reveal in a "double nationality" and therefore can be taken up in two catalogues. Such failures should be rid of on a bilateral basis in order to agree on the proper location of the epicentre.
Another default consists in the fact that the applied intensity-scale is not the same everywhere; there are still countries that make reference to the Modified Mercalli scale while others use the MSK-64 scale.
[ ... ]
Another thing to stress on is the disparity of the data base as it is distributed over several countries, the consequence being its inhomogeneity.
Aside, written documentation may appear sufficient but not necessary reliable. Several catalogues of historical earthquakes have been compiled but all of them are different for what concerns their contents.

II.5 - Improvements to be considered for the data base.
In order to improve the actually available data base, the following points should be considered and realized when possible.
5.1. First of all, macroseismic data should be collected systematically using the same terms of reference; i.e. unique "intensity-scale" and an identical "questionnaire" in conformity with the adopted intensity scale.
5.2. All data relevant to historical events should be gathered and then confronted among each other in order to eliminate misinterpretations, wrong evaluations, duplicates, a.s.o. ...When the event struck more than one country this should be done on a multi-lateral base ...

These recommendations are completely acceptable: they are only waiting to be adopted. The only, minor disagreement concerns the statement
"the compilation of a comprehensive catalogue, i.e. one of the objectives of the present publication, may serve for improving the data base."

It seems more appropriate to change this statement as follows:
"the compilation of a database is a condition for the compilation of a comprehensive catalogue".

It is not clear to what extent these recommendations were followed during the compilation of the catalogue. Apparently very little, as the catalogue looks like a simple merge of national catalogues, with little multilateral effort; most of the problems mentioned above have not been solved.

To merge or to re-compile ?
Each parametric catalogue derives from a dataset, although that dataset may sometimes be unknown. As recalled, national catalogues are compiled according to varied criteria with reference to the main steps described before: type of sources, intensity assessment, and earthquake parameter assessment.
They also vary in time-window, with respect both to the source potential and to the historical investigation. Furthermore, some catalogues do not consider earthquakes with epicentres outside national boundaries, while others do: in these cases, it may occur that such earthquakes are treated in different ways by the two sides.
The most common problem is duplication which, by the way, is also widely found inside national catalogues. Many examples are available: duplications derive mostly from careless interpretation of historical sources, and many of them escape any search by means of automatic 'windows'.
Therefore, though the easiest way to compile a European catalogue starting from national ones would be to merge them carefully (Fig. 2, A), such an attempt would produce unsatisfactory results. The only way to sort out the problems is to go back to the sources and historical observations, carefully reviewing how earthquake parameters were assessed. For that purpose it is recommended that all sets of primary information, including those concerning the same earthquakes, are carefully merged into a single dataset, from which a more reliable parametric catalogue can be derived (Fig. 2, B).


Fig. 2 - Two ways for compiling a European catalogue, starting from national ones :

A) without considering the primary data;

B) considering the primary data.
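The limits of the automatic 'windows' mentioned above can be made concrete with a sketch. The function and thresholds below are illustrative assumptions; note how a duplicate produced by the Julian-Gregorian calendar shift (10 or more days apart) escapes the window, which is one reason why going back to the sources remains necessary:

```python
from datetime import datetime

# Sketch of a time-space "window" scan for candidate duplicates between
# two catalogues; the thresholds are arbitrary examples, not recommended values.
def candidate_duplicates(cat_a, cat_b, max_days=2.0, max_deg=0.5):
    pairs = []
    for ea in cat_a:
        for eb in cat_b:
            dt = abs((ea["To"] - eb["To"]).total_seconds()) / 86400.0
            if (dt <= max_days
                    and abs(ea["lat"] - eb["lat"]) <= max_deg
                    and abs(ea["lon"] - eb["lon"]) <= max_deg):
                pairs.append((ea, eb))
    return pairs
```

A pair of records for the same event dated 22 December (Julian) and 1 January (Gregorian) differs by 10 days and is not flagged, while it is an obvious duplicate to a reader of the sources.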


To compile a historical parametric catalogue, some general standards are to be established. A good starting point is to agree that an earthquake can be included in a parametric catalogue when some observations are available: it can be required that they meet some standard level.
In general, the main requirement for compiling a parametric historical catalogue is that observations be available for each earthquake, no matter whether only a few, or even just one, as happens in many cases.
The recommendations which follow are organised according to steps a, b, c and d described above.

Historical records: investigation and processing
As discussed in the previous paragraph, observations are represented in our case by historical records or questionnaires. The problem of how historical records are retrieved, interpreted, assembled - and how their reliability is evaluated - is of primary importance for the quality and homogeneity of the catalogue. It is well known, for instance, that presumed seismic gaps can be explained by historical source gaps and that poor information can be simply due to non-exhaustive investigation.
It has already been recalled that most historical parametric catalogues currently in use still follow from the massive parameterisation of seismological compilations which are, in their turn, the result of historical investigations performed according to very different criteria: their reliability is also very varied. Many recent papers have pointed out wrong interpretations or fake quakes supplied by the seismological compilations. In general it can be agreed that, as a general trend, their reliability decreases with time, reaching its lowest level in the period between the 1920s and the 1940s.

Today, historical seismology makes use of the historical method, adopted for nearly 15 years now. Some recommendations exist on this subject and, above all, a well established set of case-histories, including the output of the RHISE project; therefore, this topic will not be dealt with here in detail.
As the historical method requires the use of "primary" historical sources, compiling a new catalogue requires in principle starting a new, global round of historical investigation, the goal being to have, for each earthquake, detailed records coming from reliable sources, interpreted by professional historians aware of seismological problems.
Nevertheless, starting such a project would require a large amount of time, funds and expertise, which is unrealistic in many cases: lower standards might then be accepted, considering that the homogeneity of a catalogue requires a uniform level of investigation for all entries.
It is recommended that any re-interpretation pointing out mistakes made by previous compilers be performed in such a way as to provide users with strong evidence before entries in the existing parametric catalogues are removed or heavily modified.

The database of macroseismic observations
To be suitable for seismological elaborations, historical records need to be located in space and time and interpreted in terms of macroseismic intensity; this operation produces the so-called intensity data points, which can be assumed as macroseismic observations. To be included in a database, intensity data points need to be homogeneous with respect to:
- timing criteria, locality denominations and coordinates;

- intensity scale and intensity assessment procedure.

Timing. The timings carried by historical records are often expressed according to varied calendars and time-systems: conversion to a uniform time-system is useful if handled with care. Timings of records related to the same earthquake can be scattered over a large time-span. Assigning them the same To is a decision of the investigator: therefore, it is recommended that intensity data points carry both the original and the assigned time.
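A conversion handled with care keeps both timings side by side. The sketch below is an illustrative assumption: the offset table is simplified (the Julian-Gregorian difference actually changes at the Julian 29 February of century years, ignored here) and is not valid before 1500:

```python
from datetime import date, timedelta

def julian_to_gregorian(y, m, d):
    # standard Julian-Gregorian day differences per period (simplified:
    # century-year February boundaries are ignored)
    if y >= 1900:
        off = 13
    elif y >= 1800:
        off = 12
    elif y >= 1700:
        off = 11
    else:
        off = 10          # valid back to 1500 only
    return date(y, m, d) + timedelta(days=off)

# an intensity data point carrying both the original and the assigned time
point = {
    "time_original": "1690-12-22 (Julian, as reported)",
    "time_assigned": julian_to_gregorian(1690, 12, 22).isoformat(),
}
```

Keeping the original timing allows any conversion mistake to be detected and corrected later, instead of being frozen into the catalogue.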
Locality directory. The same problems hold in principle for locality denominations: the problem can be greatly complicated by changes in names and coordinates over time and by unexplained interpretations. Therefore, it is suggested that locality names and coordinates be expressed according to ad-hoc directories; it is also suggested to keep track of the original input.
Intensity assessment. As for the second point, it is to be considered that intensity assessment is still an important source of inhomogeneity. This topic is well known and will not be discussed in detail here; however, the publication of the new EMS-92 intensity scale (Grünthal, 1993) seems to be a good opportunity for reconsidering this problem with care.
Existing data. As a first step, it is recommended to inventory, analyse and, if necessary, re-compile according to the same standard all the existing compilations which provide intensity data points. Isoseismal maps without data points are not suitable for this purpose, as they are elaborations, not observations. When data related to the same earthquake but coming from different compilations (for instance, partial, national investigations of transfrontier earthquakes) are available, these data are to be carefully considered and merged only if they are homogeneous.
Some criteria are also needed in order to evaluate the available information (for instance: type of sources, number and distribution of observations, reliability of intensity assessment, and so on). Such evaluation will point out whether the data meet the required standard or, on the contrary, further investigation is needed.

The database of primary observations will make it possible:
- to re-assess earthquake parameters (To, lo, fo, Io etc.) according to homogeneous criteria;
- to draw isoseismal maps (if needed) according to homogeneous criteria;
- to evaluate, or to calibrate, seismic hazard estimates at the sites.

Assessing earthquake parameters
Current catalogues are very inhomogeneous with respect to how earthquake parameters have been assessed from macroseismic data. This holds from one catalogue to another and, in some cases, even within the same catalogue. The only way to improve the situation is to re-assess earthquake parameters from primary observations according to standard procedures.
Even when no historical investigation can be performed, the quality of a catalogue can be improved by assessing earthquake parameters from a dataset of macroseismic observations according to standard procedures.

Origin time. To is the leading parameter characterising the earthquake: it is assigned by the historical investigation. In principle, the To proposed by the database does not need further elaboration. It can be observed that a precision of seconds makes little sense for macroseismic data, even in the 20th century.
Epicentral parameters. lo, fo and Io are to be evaluated following rigorous procedures. In many cases, such as offshore earthquakes or events with few observations, no inner closed isoseismal is available. lo, fo and Io can be evaluated according to different procedures, depending on whether they are intended to be used for seismotectonic or hazard assessment: it is important that the same procedure be adopted for all earthquakes or, at least, that different procedures be clearly indicated. It is worth recalling that instrumentally derived parameters usually come from routines which allow data to be dropped when they do not fit the required standard. Furthermore, earthquakes which do not meet the adopted standard can sometimes simply be discarded; on the contrary, applying similar criteria to macroseismic data would require discarding most of the historical content.
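One possible rigorous procedure, given here only as an illustrative assumption and not as a rule endorsed by this paper, is to take the epicentre as the barycentre of the highest-intensity data points:

```python
def epicentral_parameters(points):
    # points: list of dicts with "lat", "lon" and "I" (intensity data points)
    Io = max(p["I"] for p in points)
    top = [p for p in points if p["I"] == Io]
    # epicentre as the barycentre of the data points reaching Io
    lat = sum(p["lat"] for p in top) / len(top)
    lon = sum(p["lon"] for p in top) / len(top)
    return {"lat": lat, "lon": lon, "Io": Io, "n_top": len(top)}
```

Whatever procedure is actually chosen, what matters is that it be applied uniformly and recorded in the catalogue, so that records remain comparable.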
Depth. Individual depth determinations are very unstable and show acceptable reliability only in a statistical sense.
Magnitude. Regressions from Io have often been discussed in the literature: they are strongly influenced by the instability of Io and other factors. On the other hand, only a few regressions from isoseismals or intensity data points are available, and they are all determined from limited samples and very unstable. Investigators willing to test regressions on a regional basis are recommended to use calibration magnitudes together with their uncertainties. Users must be ready to accept large uncertainties associated with pre-instrumental magnitudes.
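A minimal sketch of such a calibration, assuming a simple linear M-Io regression fitted on instrumental-era events, with the residual scatter retained as the uncertainty to attach to pre-instrumental magnitudes:

```python
def fit_M_vs_Io(Io, M):
    # least-squares fit M = a + b*Io; sigma is the residual standard
    # deviation, to be quoted together with any magnitude derived from Io
    n = len(Io)
    mx, my = sum(Io) / n, sum(M) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(Io, M))
         / sum((x - mx) ** 2 for x in Io))
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(Io, M)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return a, b, sigma
```

A historical magnitude would then be reported as a + b*Io together with sigma, making the large uncertainty explicit rather than hiding it in a single number.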
Aftershocks. It is to be considered that, in many cases of aftershock sequences, macroseismic observations may cumulate effects which are by no means separable. Aftershock parameters other than To can be assessed using far-field observations in a few cases only: on the contrary, epicentral intensities of 8 MSK for aftershocks occurring a few hours, or even a few days, after a main shock of, say, Io = 10 MSK are of little reliability if assessed on the basis of damage reports coming from the epicentral area only.
Attenuation. It is strongly recommended that the parameters of intensity attenuation relations be determined from the same set of data used to determine earthquake parameters.
Other parameters. As epicentral intensity may differ from the maximum observed intensity, both parameters can be useful. A catalogue record should also indicate which set of intensity data and which parameterisation procedures were used. Quality factors are useful, provided that they are coherent with the nature of the data: for instance, the number of available observations (intensity data points) is a good, self-explanatory indicator.
For the 20th century, catalogues usually provide only one epicentral location associated with both macroseismic (Io) and instrumental sizes (Ml, Mb, Ms, and so on). It is a common feeling that instrumental and macroseismic data should not be mixed; it seems more suitable to provide both sets of parameters, macroseismic and instrumental, leaving the choice to users.

Catalogue completeness
The completeness of a historical catalogue is mainly determined by:
- historical factors, influencing existence and distribution of 'recorders', scattering and preservation of the records;
- investigation factors, such as the strategy and the skill of the investigator.
Therefore, the current methods for assessing completeness by looking inside a catalogue should be replaced, for historical catalogues, by historiographic analysis of the source potential and by evaluation of the sources used for the catalogue compilation. It should also be considered that the completeness of a catalogue for a given threshold is influenced by the assessment of earthquake parameters, the uncertainty of which can be very high in many cases.
It is recommended to plot historical catalogues in different time-windows and to compare these plots with the most recent instrumental data; the question of whether time-space variations of seismicity are real or should be ascribed to historical source gaps must be tackled with care.
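Such a comparison can be sketched as a simple count of events above an intensity threshold in successive time-windows; the function and thresholds are illustrative assumptions. An apparent deficit in older windows is evidence to investigate, not necessarily real quiescence:

```python
from collections import Counter

def counts_per_window(catalogue, Io_min=7, window=50):
    # catalogue: list of dicts with "year" and "Io";
    # returns {window start year: number of events with Io >= Io_min}
    c = Counter()
    for eq in catalogue:
        if eq["Io"] >= Io_min:
            c[(eq["year"] // window) * window] += 1
    return dict(sorted(c.items()))
```

Comparing these counts with the rate observed in the instrumental period gives a first, rough indication of the windows where the historical record is likely to be incomplete.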

Conclusions
The knowledge of long-term seismicity of Europe is still rather uneven, because it has not been investigated in a systematic way at a European level.
The compilation of a European catalogue is a major need for assessing seismicity and seismic hazard, but it will really help if the following points are taken into account:
- national catalogues are compiled according to different criteria with respect to supporting datasets and procedures, with special reference to: historical investigation (types of sources, time-windows, range of investigation), intensity assessment, and earthquake parameter assessment. With reference to boundary problems, transfrontier earthquakes may be treated in very different ways by the two sides. Therefore, merging national catalogues will not provide a reliable tool;
- though a systematic historical investigation is a primary need, a good improvement with respect to the present situation is possible; in fact, many data are available, but they are scattered in many places and compiled in many ways. The only requirement is to re-compile and review these data according to uniform, rigorous procedures;

- the preparation of a database of macroseismic observations is today a reasonable goal. In order to prepare the database it is desirable to inventory, homogenise and evaluate the existing intensity data, which are today numerous and of good quality;
- the compilation of a European parametric catalogue requires the assessment of earthquake parameters according to standard procedures. It should follow the preparation of the dataset or, at least, go along with it: existing parametric catalogues, including the one by Van Gils and Leydecker (1991), can serve as a side tool;
- the synthesis represented by catalogues has proved unsatisfactory; many users prefer to make their own way from primary data. Moreover, computer facilities today allow large amounts of data to be handled without problems. Therefore, the current trend is to build up primary data banks, from which users may extract data and syntheses - such as parametric catalogues - according to their choice.

As a final point, it is recommended to keep in mind the qualitative nature of historical records, which can be spoiled by forced parameterisation. In many cases historical records are hardly interpretable in terms of intensity: such information may therefore not fit in a catalogue or an intensity database. However, this is not a good reason to ignore it: alternative elaborations can be investigated.

Milano, May 1993

References
Grünthal, G. (Editor), 1993. European Macroseismic Scale 1992 (updated MSK scale). Cahiers du Centre Européen de Géodynamique et de Séismologie, 7, Luxembourg.
Stucchi, M. and Albini, P., 1991. New developments in macroseismic investigation. Proc. Mexico-EC Workshop "Seismology and Earthquake Engineering", Mexico City, 22-26 April 1991, pp. 47-70.
Van Gils, J.M. and Leydecker, G., 1991. Catalogue of European Earthquakes with intensities higher than 4. CEC, Nuclear Science and Technology, Report EUR 13406 EN.
Van Gils, J.M., 1988. Catalogue of European earthquakes and an atlas of European seismic maps. CEC, Report EUR 11344 EN.

