Data & Knowledge Engineering special issue on
Intelligent Data Mining
Editors: Juan-Carlos Cubero & Fernando Berzal, University of Granada, Spain


Aims and scope of this special issue

Researchers in the Data Mining field have traditionally focused their efforts on obtaining fast and scalable algorithms in order to deal with huge amounts of data. Nevertheless, more often than we might desire, the results we obtain using these efficient algorithms are of limited use in practice. The sheer volume of these results causes what is known as a second-order data mining problem. As a consequence, the quality of the resulting knowledge discovery process is poor, which limits the widespread use and acceptance of data mining in many real-world situations.

Let us consider, for instance, the case of association rules. There are literally hundreds of papers in the literature devoted to the efficient discovery of association rules. However, without the proper post-processing steps (or additional constraints imposed on the rule discovery process), the overwhelming number of rules, usually on the order of thousands or even tens of thousands, makes knowledge discovery an oxymoron. In other words, no human expert can directly benefit from the results of such data mining techniques.
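The combinatorial blow-up described above can be seen even on a toy example. The following sketch (a hypothetical illustration, not taken from the call itself) brute-forces every association rule over five shopping-basket transactions with four items; the transaction data, thresholds, and function names are all made up for the example:

```python
from itertools import combinations

# Toy transaction database (hypothetical example).
transactions = [
    {"bread", "milk", "butter"},
    {"bread", "milk"},
    {"milk", "butter", "eggs"},
    {"bread", "butter", "eggs"},
    {"bread", "milk", "butter", "eggs"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def association_rules(min_support=0.2, min_confidence=0.5):
    """Brute-force enumeration of all rules A -> B over frequent itemsets."""
    items = set().union(*transactions)
    rules = []
    for size in range(2, len(items) + 1):
        for itemset in map(frozenset, combinations(items, size)):
            if support(itemset) < min_support:
                continue
            for k in range(1, size):
                for antecedent in map(frozenset, combinations(itemset, k)):
                    confidence = support(itemset) / support(antecedent)
                    if confidence >= min_confidence:
                        rules.append((antecedent, itemset - antecedent, confidence))
    return rules

print(len(association_rules()))  # dozens of rules from just 5 transactions
```

Even this tiny database yields dozens of rules; a realistic database with thousands of items produces the thousands or tens of thousands of rules mentioned above.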

As a result, we can observe an increasing interest in devising non-traditional methods that can summarize these high-volume results into more manageable chunks of knowledge. Such non-traditional methods include

  • Efficient and scalable algorithms for extracting “new” kinds of knowledge. By definition, such algorithms should not produce a huge number of outputs. The resulting “new” kinds of knowledge will therefore be more focused on the problem at hand, and will probably lead to families of domain-specific data mining tools.

  • Novel techniques for solving the second-order data mining problem that standard data mining techniques create. In this case, additional computing resources are used to sift through the vast amount of output generated by the data mining algorithms themselves. This extra cost is usually a low price to pay given the benefits obtained in practice.
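The second bullet, sifting through mined rules after the fact, can be sketched with a simple pruning pass. The redundancy criterion below (drop a rule if a simpler rule, with a subset of its antecedent and the same consequent, is at least as confident) is one common heuristic chosen for illustration; the rule tuples and function name are hypothetical:

```python
def prune_redundant(rules):
    """Post-process mined rules (antecedent, consequent, confidence):
    discard any rule covered by a simpler, at-least-as-confident one."""
    kept = []
    # Process simpler rules first so they can subsume longer ones.
    for ant, cons, conf in sorted(rules, key=lambda r: len(r[0])):
        if any(a <= ant and c == cons and cf >= conf for a, c, cf in kept):
            continue  # a simpler rule already conveys this knowledge
        kept.append((ant, cons, conf))
    return kept

rules = [
    (frozenset({"bread"}), frozenset({"milk"}), 0.75),
    (frozenset({"bread", "butter"}), frozenset({"milk"}), 0.67),  # redundant
    (frozenset({"butter"}), frozenset({"eggs"}), 0.75),
    (frozenset({"bread", "eggs"}), frozenset({"butter"}), 1.00),  # kept
]
print(len(prune_redundant(rules)))  # the redundant rule is dropped
```

Applied to the thousands of rules a standard miner emits, passes like this one are what turn the raw output back into human-readable knowledge.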

In summary, better and more understandable models and exploratory techniques, and perhaps wholly new ones, are a must for getting the most out of data mining if we want to foster its adoption in real-world problems.

Latest news

The final results of the refereeing process have been sent to authors by e-mail.

We would like to thank all the referees for their hard work and collaboration in the thorough evaluation of the outstanding number of submissions we received.

Call for papers in PDF format


Important dates

Paper submission: May 20th, 2005
Notification of acceptance: November 3rd, 2005
Final version: December 12th, 2005

Additional information

Topics of interest

This special issue seeks papers dealing with models, methods, techniques, and algorithms focused on the quality and interpretability of the obtained results. Both improvements on traditional data mining techniques and applications of novel techniques will be taken into consideration. With respect to the latter, authors are encouraged to submit papers dealing with the following topics:

  • The use of fuzzy and rough sets to improve the interpretability of data mining results.
  • The applicability of genetic algorithms and evolutionary computation in data mining tasks.
  • Ontologies and their role in discovering complex patterns.
  • The discovery of rarities, anomalies, exceptions, and other kinds of knowledge.
  • Alternative techniques for the representation and exploration of data mining results.
  • Novel models and techniques for summarizing data mining results.
  • Methods for dealing with imprecision and uncertainty in the data mining results.
  • The impact of quality on text and web mining.

Contributions on other areas related to the scope of this special issue will also be welcome.

About D&KE journal

D&KE is an Elsevier Science journal, published since 1987. The major aim of the journal is to identify, investigate, and analyze the underlying principles in the design and effective use of database systems and knowledge-base systems. D&KE achieves this aim by publishing original research results, technical advances, and news items concerning data engineering, knowledge engineering, and the interface between these two fields.
D&KE impact factor (ISI JCR): 0.962 in 2003, 1.039 in 2002, 0.697 in 2001.

