Publications by authors named "Mikhail Moshkov"

In this paper, we consider classes of decision tables with many-valued decisions closed under the operations of removing columns, changing decisions, permuting columns, and duplicating columns. We study the relationships among three parameters of these tables: the complexity of a decision table (if we consider the depth of decision trees, then the complexity of a decision table is the number of columns in it), the minimum complexity of a deterministic decision tree, and the minimum complexity of a nondeterministic decision tree. We consider a rough classification of the functions characterizing these relationships and enumerate all seven possible types of relationships.
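
As an illustration of this setting (the representation and names below are ours, not the paper's), a decision table with many-valued decisions can be stored as a list of rows, each carrying a tuple of attribute values and a nonempty set of decisions; the four closure operations then become simple row-wise manipulations:

    from typing import List, Set, Tuple

    # A many-valued decision table: each row is a tuple of attribute
    # values together with a nonempty set of admissible decisions.
    Row = Tuple[Tuple[int, ...], Set[int]]
    Table = List[Row]

    def remove_column(table: Table, j: int) -> Table:
        # Removal of a column: attribute j is deleted from every row.
        return [(vals[:j] + vals[j + 1:], set(decs)) for vals, decs in table]

    def change_decisions(table: Table, new_decs: List[Set[int]]) -> Table:
        # Changing of decisions: row i receives the decision set new_decs[i].
        return [(vals, set(d)) for (vals, _), d in zip(table, new_decs)]

    def permute_columns(table: Table, perm: List[int]) -> Table:
        # Permutation of columns according to perm.
        return [(tuple(vals[p] for p in perm), set(decs)) for vals, decs in table]

    def duplicate_column(table: Table, j: int) -> Table:
        # Duplication of a column: attribute j is appended once more.
        return [(vals + (vals[j],), set(decs)) for vals, decs in table]

    # Example: a table with two columns and many-valued decisions.
    t: Table = [((0, 1), {1}), ((1, 0), {1, 2})]
    print(remove_column(t, 0))  # [((1,), {1}), ((0,), {1, 2})]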


In this paper, we consider classes of conventional decision tables closed relative to the removal of attributes (columns) and changing decisions assigned to rows. For tables from an arbitrary closed class, we study the dependence of the minimum complexity of deterministic and nondeterministic decision trees on the complexity of the set of attributes attached to columns. We also study the dependence of the minimum complexity of deterministic decision trees on the minimum complexity of nondeterministic decision trees.
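
One plausible way to formalize such dependences (our reading of the abstract; the paper's exact definitions may differ) is via worst-case functions over the closed class C. Writing ψ for the complexity measure, Attr(T) for the set of attributes attached to the columns of T, and h_det(T), h_nd(T) for the minimum complexities of deterministic and nondeterministic decision trees for T:

    F_det(n) = max{ h_det(T) : T ∈ C, ψ(Attr(T)) ≤ n }
    F_nd(n)  = max{ h_nd(T)  : T ∈ C, ψ(Attr(T)) ≤ n }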


In this paper, we deal with distributed data represented either as a finite set T of decision tables with equal sets of attributes or as a finite set I of information systems with equal sets of attributes. In the former case, we discuss a way to study the decision trees common to all tables from the set T: building a decision table in which the set of decision trees coincides with the set of decision trees common to all tables from T. We show when such a decision table can be built and how to build it in polynomial time.
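
To make "decision trees common to all tables from T" concrete, here is a hedged sketch (representation and names are ours): a tree solves a table if, for every row, the path selected by the row's attribute values ends in that row's decision, and a tree is common to T if it solves every table in T:

    from typing import Dict, List, Tuple, Union

    # A leaf is a decision; an internal node is a pair
    # (attribute index, children keyed by attribute value).
    Node = Union[int, Tuple[int, Dict[int, "Node"]]]
    Row = Tuple[Tuple[int, ...], int]  # attribute values, decision
    Table = List[Row]

    def solves(tree: Node, table: Table) -> bool:
        # The tree solves a table if, for each row, following the row's
        # attribute values from the root leads to that row's decision.
        for vals, dec in table:
            node = tree
            while not isinstance(node, int):
                attr, children = node
                node = children[vals[attr]]
            if node != dec:
                return False
        return True

    def common_to_all(tree: Node, tables: List[Table]) -> bool:
        return all(solves(tree, t) for t in tables)

    # Example: one attribute separates the rows in both tables.
    tree: Node = (0, {0: 0, 1: 1})
    t1: Table = [((0, 1), 0), ((1, 1), 1)]
    t2: Table = [((0, 0), 0), ((1, 0), 1)]
    print(common_to_all(tree, [t1, t2]))  # True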


In this research, we consider decision trees that incorporate standard queries, each based on a single feature, as well as hypotheses proposing the values of all features. These decision trees are used to represent knowledge and are comparable to those investigated in exact learning, where membership and equivalence queries are used. As an application, we look into the construction of decision trees for two cases: the sorting of a sequence that contains equal elements, and multi-valued decision tables modified from the UCI Machine Learning Repository.
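
As a hedged aside on the sorting case (a standard information-theoretic argument, not the paper's results): when equal elements are allowed, each comparison has three outcomes (<, =, >), and the possible answers are the weak orderings of n elements, counted by the ordered Bell numbers; any comparison-based decision tree therefore has depth at least log₃ of that count:

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def ordered_bell(n: int) -> int:
        # Number of weak orderings (rankings with ties) of n elements.
        if n == 0:
            return 1
        return sum(math.comb(n, k) * ordered_bell(n - k) for k in range(1, n + 1))

    for n in range(1, 6):
        a = ordered_bell(n)
        print(n, a, math.ceil(math.log(a, 3)))  # n=3: 13 weak orderings, depth ≥ 3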


In this paper, we study arbitrary subword-closed languages over the alphabet {0,1} (binary subword-closed languages). For the set L(n) of words of length n belonging to a binary subword-closed language L, we investigate the depth of decision trees solving the recognition and membership problems deterministically and nondeterministically. In the case of the recognition problem, for a given word from L(n), we should recognize it using queries, each of which, for some i∈{1,…,n}, returns the i-th letter of the word.
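
For concreteness (here "subword" is read as a scattered subword, i.e., a word obtained by deleting letters, which is the usual convention for subword-closed languages), the subword relation and the closure property can be checked directly for finite sets:

    from itertools import combinations

    def is_subword(u: str, w: str) -> bool:
        # u is a subword of w: u is obtained from w by deleting letters.
        it = iter(w)
        return all(c in it for c in u)

    def is_subword_closed(language: set) -> bool:
        # Every subword of every word must again belong to the language
        # (checkable only for finite sets, as an illustration).
        for w in language:
            for r in range(len(w) + 1):
                for idx in combinations(range(len(w)), r):
                    if "".join(w[i] for i in idx) not in language:
                        return False
        return True

    print(is_subword("01", "0011"))                 # True
    print(is_subword_closed({"", "0", "1", "01"}))  # True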


In this paper, based on results of rough set theory, test theory, and exact learning, we investigate decision trees over infinite sets of binary attributes represented as infinite binary information systems. We define the notion of a problem over an information system and study three functions of Shannon type that characterize, in the worst case, how the minimum depth of a decision tree solving a problem depends on the number of attributes in the problem description. The three functions correspond to (i) decision trees using attributes, (ii) decision trees using hypotheses (an analog of equivalence queries from exact learning), and (iii) decision trees using both attributes and hypotheses.
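
A hedged sketch of the notation (our reading of the abstract, not necessarily the paper's exact definitions): for an infinite binary information system U, each Shannon-type function maps n to the worst-case minimum depth over all problems described by at most n attributes, e.g., for trees using attributes only:

    h_a(n) = max{ h_a(z) : z is a problem over U described by at most n attributes }

with h_h(n) and h_ah(n) defined analogously for trees using hypotheses and trees using both attributes and hypotheses.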


Conventional decision trees use queries, each of which is based on one attribute. In this study, we also examine decision trees that handle additional queries based on hypotheses. This kind of query is similar to the equivalence queries considered in exact learning.
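
To illustrate such a query (our hedged reading, modeled on equivalence queries): a hypothesis proposes values for all attributes at once, and the answer either confirms it or returns a counterexample, i.e., an attribute on which the hypothesis is wrong together with that attribute's true value:

    from typing import Optional, Tuple

    def answer_hypothesis(hypothesis: Tuple[int, ...],
                          actual: Tuple[int, ...]) -> Optional[Tuple[int, int]]:
        # None means the hypothesis is confirmed; otherwise return a
        # counterexample (attribute index, true value). A real oracle may
        # return any wrong attribute; we return the first one.
        for i, (h, a) in enumerate(zip(hypothesis, actual)):
            if h != a:
                return (i, a)
        return None

    print(answer_hypothesis((0, 1, 1), (0, 1, 1)))  # None (confirmed)
    print(answer_hypothesis((0, 1, 1), (0, 0, 1)))  # (1, 0)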


In this paper, we consider decision trees that use both conventional queries, each based on one attribute, and queries based on hypotheses about the values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present a greedy algorithm based on entropy for the construction of such decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions.
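
A minimal sketch of the entropy-driven greedy step (standard ID3-style attribute selection under our assumptions; the paper's algorithm also scores hypothesis-based queries, omitted here):

    import math
    from collections import Counter
    from typing import List, Sequence, Tuple

    Row = Tuple[Sequence[int], int]  # attribute values, decision

    def entropy(rows: List[Row]) -> float:
        # Shannon entropy of the decision distribution in the (sub)table.
        counts = Counter(dec for _, dec in rows)
        return -sum(c / len(rows) * math.log2(c / len(rows))
                    for c in counts.values())

    def best_attribute(rows: List[Row], n_attrs: int) -> int:
        # Greedy step: choose the attribute whose split minimizes the
        # weighted entropy of the resulting subtables.
        def split_entropy(j: int) -> float:
            by_value: dict = {}
            for vals, dec in rows:
                by_value.setdefault(vals[j], []).append((vals, dec))
            return sum(len(g) / len(rows) * entropy(g)
                       for g in by_value.values())
        return min(range(n_attrs), key=split_entropy)

    rows: List[Row] = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
    print(best_attribute(rows, 2))  # 0: splitting on attribute 0 is pure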


Background: Hydrogen bonds (H-bonds) play a key role in both the formation and stabilization of protein structures. They form and break while a protein deforms, for instance during the transition from a non-functional to a functional state. The intrinsic strength of an individual H-bond has been studied from an energetic viewpoint, but energy alone may not be a very good predictor.
