Schema: categories (string), doi (string), id (string), year (float64), venue (string), link (string), updated (string), published (string), title (string), abstract (string), authors (sequence)
id: 0212039 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0212039v1
updated: 2002-12-12T18:51:06Z
published: 2002-12-12T18:51:06Z
title: Low Size-Complexity Inductive Logic Programming: The East-West Challenge Considered as a Problem in Cost-Sensitive Classification
The Inductive Logic Programming community has considered proof-complexity and model-complexity, but, until recently, size-complexity has received little attention. Recently a challenge was issued "to the international computing community" to discover low size-complexity Prolog programs for classifying trains. The challenge was based on a problem first proposed by Ryszard Michalski, 20 years ago. We interpreted the challenge as a problem in cost-sensitive classification and we applied a recently developed cost-sensitive classifier to the competition. Our algorithm was relatively successful (we won a prize). This paper presents our algorithm and analyzes the results of the competition.
[ "['Peter D. Turney']" ]
id: 0212040 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0212040v1
updated: 2002-12-12T19:11:11Z
published: 2002-12-12T19:11:11Z
title: Data Engineering for the Analysis of Semiconductor Manufacturing Data
We have analyzed manufacturing data from several different semiconductor manufacturing plants, using decision tree induction software called Q-YIELD. The software generates rules for predicting when a given product should be rejected. The rules are intended to help the process engineers improve the yield of the product, by helping them to discover the causes of rejection. Experience with Q-YIELD has taught us the importance of data engineering -- preprocessing the data to enable or facilitate decision tree induction. This paper discusses some of the data engineering problems we have encountered with semiconductor manufacturing data. The paper deals with two broad classes of problems: engineering the features in a feature vector representation and engineering the definition of the target concept (the classes). Manufacturing process data present special problems for feature engineering, since the data have multiple levels of granularity (detail, resolution). Engineering the target concept is important, due to our focus on understanding the past, as opposed to the more common focus in machine learning on predicting the future.
[ "['Peter D. Turney']" ]
id: 0212041 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0212041v1
updated: 2002-12-12T19:26:52Z
published: 2002-12-12T19:26:52Z
title: Robust Classification with Context-Sensitive Features
This paper addresses the problem of classifying observations when features are context-sensitive, especially when the testing set involves a context that is different from the training set. The paper begins with a precise definition of the problem, then general strategies are presented for enhancing the performance of classification algorithms on this type of problem. These strategies are tested on three domains. The first domain is the diagnosis of gas turbine engines. The problem is to diagnose a faulty engine in one context, such as warm weather, when the fault has previously been seen only in another context, such as cold weather. The second domain is speech recognition. The context is given by the identity of the speaker. The problem is to recognize words spoken by a new speaker, not represented in the training set. The third domain is medical prognosis. The problem is to predict whether a patient with hepatitis will live or die. The context is the age of the patient. For all three domains, exploiting context results in substantially more accurate classification.
[ "['Peter D. Turney']" ]
id: 0301007 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0301007v1
updated: 2003-01-09T15:08:47Z
published: 2003-01-09T15:08:47Z
title: Kalman filter control in the reinforcement learning framework
There is a growing interest in using Kalman-filter models in brain modelling. In turn, it is of considerable importance to make Kalman-filters amenable for reinforcement learning. In the usual formulation, optimal control is computed off-line by solving a backward recursion. In this technical note we show that a slight modification of the linear-quadratic-Gaussian Kalman-filter model allows the on-line estimation of optimal control and makes the bridge to reinforcement learning. Moreover, the learning rule for value estimation assumes a Hebbian form weighted by the error of the value estimation.
[ "['Istvan Szita' 'Andras Lorincz']" ]
id: 0301014 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/cs/0301014v1
updated: 2003-01-16T16:36:15Z
published: 2003-01-16T16:36:15Z
title: Convergence and Loss Bounds for Bayesian Sequence Prediction
The probability of observing $x_t$ at time $t$, given past observations $x_1 \dots x_{t-1}$, can be computed with Bayes' rule if the true generating distribution $\mu$ of the sequences $x_1 x_2 x_3 \dots$ is known. If $\mu$ is unknown, but known to belong to a class $M$, one can base one's prediction on the Bayes mix $\xi$ defined as a weighted sum of distributions $\nu \in M$. Various convergence results of the mixture posterior $\xi_t$ to the true posterior $\mu_t$ are presented. In particular a new (elementary) derivation of the convergence $\xi_t/\mu_t \to 1$ is provided, which additionally gives the rate of convergence. A general sequence predictor is allowed to choose an action $y_t$ based on $x_1 \dots x_{t-1}$ and receives loss $\ell_{x_t y_t}$ if $x_t$ is the next symbol of the sequence. No assumptions are made on the structure of $\ell$ (apart from being bounded) and $M$. The Bayes-optimal prediction scheme $\Lambda_\xi$ based on mixture $\xi$ and the Bayes-optimal informed prediction scheme $\Lambda_\mu$ are defined and the total loss $L_\xi$ of $\Lambda_\xi$ is bounded in terms of the total loss $L_\mu$ of $\Lambda_\mu$. It is shown that $L_\xi$ is bounded for bounded $L_\mu$ and $L_\xi/L_\mu \to 1$ for $L_\mu \to \infty$. Convergence of the instantaneous losses is also proven.
[ "['Marcus Hutter']" ]
id: 0302012 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0302012v2
updated: 2003-11-27T10:01:42Z
published: 2003-02-10T14:17:33Z
title: The New AI: General & Sound & Relevant for Physics
Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal and practically feasible algorithms for prediction, search, inductive inference based on Occam's razor, problem solving, decision making, and reinforcement learning in environments of a very general type. Since inductive inference is at the heart of all inductive sciences, some of the results are relevant not only for AI and computer science but also for physics, provoking nontraditional predictions based on Zuse's thesis of the computer-generated universe.
[ "['Juergen Schmidhuber']" ]
id: 0302015 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0302015v1
updated: 2003-02-12T09:39:00Z
published: 2003-02-12T09:39:00Z
title: Unsupervised Learning in a Framework of Information Compression by Multiple Alignment, Unification and Search
This paper describes a novel approach to unsupervised learning that has been developed within a framework of "information compression by multiple alignment, unification and search" (ICMAUS), designed to integrate learning with other AI functions such as parsing and production of language, fuzzy pattern recognition, probabilistic and exact forms of reasoning, and others.
[ "['J. G. Wolff']" ]
id: 0303025 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0303025v1
updated: 2003-03-24T16:01:46Z
published: 2003-03-24T16:01:46Z
title: Algorithmic Clustering of Music
We present a fully automatic method for music classification, based only on compression of strings that represent the music pieces. The method uses no background knowledge about music whatsoever: it is completely general and can, without change, be used in different areas like linguistic classification and genomics. It is based on an ideal theory of the information content in individual objects (Kolmogorov complexity), information distance, and a universal similarity metric. Experiments show that the method distinguishes reasonably well between various musical genres and can even cluster pieces by composer.
[ "['Rudi Cilibrasi' 'Paul Vitanyi' 'Ronald de Wolf']" ]
id: 0305052 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0305052v1
updated: 2003-05-29T11:11:01Z
published: 2003-05-29T11:11:01Z
title: On the Existence and Convergence of Computable Universal Priors
Solomonoff unified Occam's razor and Epicurus' principle of multiple explanations to one elegant, formal, universal theory of inductive inference, which initiated the field of algorithmic information theory. His central result is that the posterior of his universal semimeasure M converges rapidly to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal predictor in case of unknown mu. We investigate the existence and convergence of computable universal (semi)measures for a hierarchy of computability classes: finitely computable, estimable, enumerable, and approximable. For instance, M is known to be enumerable, but not finitely computable, and to dominate all enumerable semimeasures. We define seven classes of (semi)measures based on these four computability concepts. Each class may or may not contain a (semi)measure which dominates all elements of another class. The analysis of these 49 cases can be reduced to four basic cases, two of them being new. The results hold for discrete and continuous semimeasures. We also investigate more closely the types of convergence, possibly implied by universality: in difference and in ratio, with probability 1, in mean sum, and for Martin-Loef random sequences. We introduce a generalized concept of randomness for individual sequences and use it to exhibit difficulties regarding these issues.
[ "['Marcus Hutter']" ]
id: 0305121 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/math/0305121v1
updated: 2003-05-08T17:11:45Z
published: 2003-05-08T17:11:45Z
title: Robust Estimators under the Imprecise Dirichlet Model
Walley's Imprecise Dirichlet Model (IDM) for categorical data overcomes several fundamental problems from which other approaches to uncertainty suffer. Yet, to be useful in practice, one needs efficient ways of computing the imprecise (robust) sets or intervals. The main objective of this work is to derive exact, conservative, and approximate, robust and credible interval estimates under the IDM for a large class of statistical estimators, including the entropy and mutual information.
[ "['Marcus Hutter']" ]
id: 0306036 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0306036v1
updated: 2003-06-07T19:21:20Z
published: 2003-06-07T19:21:20Z
title: Sequence Prediction based on Monotone Complexity
This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the important quantities for prediction. We show that for deterministic computable environments, the "posterior" and losses of m converge, but rapid convergence could only be shown on-sequence; the off-sequence behavior is unclear. In probabilistic environments, neither the posterior nor the losses converge, in general.
[ "['Marcus Hutter']" ]
id: 0306055 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/nlin/0306055v2
updated: 2004-09-14T10:31:58Z
published: 2003-06-26T10:12:58Z
title: A Model for Prejudiced Learning in Noisy Environments
Based on the heuristics that maintaining presumptions can be beneficial in uncertain environments, we propose a set of basic axioms for learning systems to incorporate the concept of prejudice. The simplest, memoryless model of a deterministic learning rule obeying the axioms is constructed, and shown to be equivalent to the logistic map. The system's performance is analysed in an environment in which it is subject to external randomness, weighing learning defectiveness against stability gained. The corresponding random dynamical system with inhomogeneous, additive noise is studied, and shown to exhibit the phenomena of noise induced stability and stochastic bifurcations. The overall results allow for the interpretation that prejudice in uncertain environments entails a considerable portion of stubbornness as a secondary phenomenon.
[ "['Andreas U. Schmidt']" ]
id: 0306091 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0306091v2
updated: 2004-09-30T13:56:40Z
published: 2003-06-16T13:15:29Z
title: Universal Sequential Decisions in Unknown Environments
We give a brief introduction to the AIXI model, which unifies and overcomes the limitations of sequential decision theory and universal Solomonoff induction. While the former theory is suited for active agents in known environments, the latter is suited for passive prediction of unknown environments.
[ "['Marcus Hutter']" ]
id: 0306120 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0306120v2
updated: 2007-03-09T15:14:15Z
published: 2003-06-22T08:00:09Z
title: Reinforcement Learning with Linear Function Approximation and LQ control Converges
Reinforcement learning is commonly used with function approximation. However, very few positive results are known about the convergence of function approximation based RL control algorithms. In this paper we show that TD(0) and Sarsa(0) with linear function approximation are convergent for a simple class of problems, where the system is linear and the costs are quadratic (the LQ control problem). Furthermore, we show that for systems with Gaussian noise and non-completely observable states (the LQG problem), the mentioned RL algorithms are still convergent, if they are combined with Kalman filtering.
[ "['Istvan Szita' 'Andras Lorincz']" ]
id: 0306126 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0306126v1
updated: 2003-06-24T09:50:29Z
published: 2003-06-24T09:50:29Z
title: Bayesian Treatment of Incomplete Discrete Data applied to Mutual Information and Feature Selection
Given the joint chances of a pair of random variables one can compute quantities of interest, like the mutual information. The Bayesian treatment of unknown chances involves computing, from a second order prior distribution and the data likelihood, a posterior distribution of the chances. A common treatment of incomplete data is to assume ignorability and determine the chances by the expectation maximization (EM) algorithm. The two different methods above are well established but typically separated. This paper joins the two approaches in the case of Dirichlet priors, and derives efficient approximations for the mean, mode and the (co)variance of the chances and the mutual information. Furthermore, we prove the unimodality of the posterior distribution, whence the important property of convergence of EM to the global maximum in the chosen framework. These results are applied to the problem of selecting features for incremental learning and naive Bayes classification. A fast filter based on the distribution of mutual information is shown to outperform the traditional filter based on empirical mutual information on a number of incomplete real data sets.
[ "['Marcus Hutter' 'Marco Zaffalon']" ]
id: 0307002 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0307002v1
updated: 2003-07-01T23:22:44Z
published: 2003-07-01T23:22:44Z
title: AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents
A satisfactory multiagent learning algorithm should, at a minimum, learn to play optimally against stationary opponents and converge to a Nash equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action repeated games--assuming that the opponent's (mixed) strategy is observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have these two properties in all repeated (finite) games. It requires only that the other players' actual actions (not their strategies) can be observed at each step. It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may help in analyzing other multiagent learning algorithms also.
[ "['Vincent Conitzer' 'Tuomas Sandholm']" ]
id: 0307006 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0307006v1
updated: 2003-07-03T15:44:36Z
published: 2003-07-03T15:44:36Z
title: BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games
We present BL-WoLF, a framework for learnability in repeated zero-sum games where the cost of learning is measured by the losses the learning agent accrues (rather than the number of rounds). The game is adversarially chosen from some family that the learner knows. The opponent knows the game and the learner's learning strategy. The learner tries to either not accrue losses, or to quickly learn about the game so as to avoid future losses (this is consistent with the Win or Learn Fast (WoLF) principle; BL stands for "bounded loss"). Our framework allows for both probabilistic and approximate learning. The resultant notion of BL-WoLF-learnability can be applied to any class of games, and allows us to measure the inherent disadvantage to a player that does not know which game in the class it is in. We present guaranteed BL-WoLF-learnability results for families of games with deterministic payoffs and families of games with stochastic payoffs. We demonstrate that these families are guaranteed approximately BL-WoLF-learnable with lower cost. We then demonstrate families of games (both stochastic and deterministic) that are not guaranteed BL-WoLF-learnable. We show that those families, nevertheless, are BL-WoLF-learnable. To prove these results, we use a key lemma which we derive.
[ "['Vincent Conitzer' 'Tuomas Sandholm']" ]
id: 0307038 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0307038v1
updated: 2003-07-16T23:50:53Z
published: 2003-07-16T23:50:53Z
title: Manifold Learning with Geodesic Minimal Spanning Trees
In the manifold learning problem one seeks to discover a smooth low dimensional surface, i.e., a manifold embedded in a higher dimensional linear vector space, based on a set of measured sample points on the surface. In this paper we consider the closely related problem of estimating the manifold's intrinsic dimension and the intrinsic entropy of the sample points. Specifically, we view the sample points as realizations of an unknown multivariate density supported on an unknown smooth manifold. We present a novel geometrical probability approach, called the geodesic-minimal-spanning-tree (GMST), to obtaining asymptotically consistent estimates of the manifold dimension and the Rényi $\alpha$-entropy of the sample density on the manifold. The GMST approach is striking in its simplicity and does not require reconstructing the manifold or estimating the multivariate density of the samples. The GMST method simply constructs a minimal spanning tree (MST) sequence using a geodesic edge matrix and uses the overall lengths of the MSTs to simultaneously estimate manifold dimension and entropy. We illustrate the GMST approach for dimension and entropy estimation of a human face dataset.
[ "['Jose Costa' 'Alfred Hero']" ]
id: 0307055 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0307055v1
updated: 2003-07-24T21:09:43Z
published: 2003-07-24T21:09:43Z
title: Learning Analogies and Semantic Relations
We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the Scholastic Aptitude Test (SAT). A verbal analogy has the form A:B::C:D, meaning "A is to B as C is to D"; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct). We motivate this research by relating it to work in cognitive science and linguistics, and by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for these challenging problems.
[ "['Peter D. Turney' 'Michael L. Littman']" ]
id: 0308025 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0308025v1
updated: 2003-08-16T07:31:57Z
published: 2003-08-16T07:31:57Z
title: Controlled hierarchical filtering: Model of neocortical sensory processing
A model of sensory information processing is presented. The model assumes that learning of internal (hidden) generative models, which can predict the future and evaluate the precision of that prediction, is of central importance for information extraction. Furthermore, the model makes a bridge to goal-oriented systems and builds upon the structural similarity between the architecture of a robust controller and that of the hippocampal entorhinal loop. This generative control architecture is mapped to the neocortex and to the hippocampal entorhinal loop. Implicit memory phenomena, such as priming and prototype learning, are emerging features of the model. Mathematical theorems ensure stability and attractive learning properties of the architecture. Connections to reinforcement learning are also established: both the control network, and the network with a hidden model, converge to (near) optimal policy under suitable conditions. Falsifiable predictions, including the role of the feedback connections between neocortical areas, are made.
[ "['Andras Lorincz']" ]
id: 0308033 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0308033v1
updated: 2003-08-20T20:42:19Z
published: 2003-08-20T20:42:19Z
title: Coherent Keyphrase Extraction via Web Mining
Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents).
[ "['Peter D. Turney']" ]
id: 0309015 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0309015v1
updated: 2003-09-10T13:56:41Z
published: 2003-09-10T13:56:41Z
title: Reliable and Efficient Inference of Bayesian Networks from Sparse Data by Statistical Learning Theory
To learn (statistical) dependencies among random variables requires an exponentially large sample size in the number of observed random variables if any arbitrary joint probability distribution can occur. We consider the case that sparse data strongly suggest that the probabilities can be described by a simple Bayesian network, i.e., by a graph with small in-degree Delta. Then this simple law will also explain further data with high confidence. This is shown by calculating bounds on the VC dimension of the set of those probability measures that correspond to simple graphs. This allows networks to be selected by structural risk minimization and gives reliability bounds on the error of the estimated joint measure without (in contrast to a previous paper) any prior assumptions on the set of possible joint measures. The complexity of searching the optimal Bayesian network of in-degree Delta increases only polynomially in the number of random variables for constant Delta, and the optimal joint measure associated with a given graph can be found by convex optimization.
[ "['Dominik Janzing' 'Daniel Herrmann']" ]
id: 0309016 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0309016v1
updated: 2003-09-10T15:11:44Z
published: 2003-09-10T15:11:44Z
title: Using Simulated Annealing to Calculate the Trembles of Trembling Hand Perfection
Within the literature on non-cooperative game theory, there have been a number of attempts to propose algorithms which will compute Nash equilibria. Rather than derive a new algorithm, this paper shows that the family of algorithms known as Markov chain Monte Carlo (MCMC) can be used to calculate Nash equilibria. MCMC is a type of Monte Carlo simulation that relies on Markov chains to ensure its regularity conditions. MCMC has been widely used throughout the statistics and optimization literature, where variants of this algorithm are known as simulated annealing. This paper shows that there is an interesting connection between the trembles that underlie the functioning of this algorithm and the type of Nash refinement known as trembling hand perfection.
[ "['Stuart McDonald' 'Liam Wagner']" ]
id: 0309034 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0309034v1
updated: 2003-09-19T16:30:55Z
published: 2003-09-19T16:30:55Z
title: Measuring Praise and Criticism: Inference of Semantic Orientation from Association
The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., "honest", "intrepid") and negative semantic orientation indicates criticism (e.g., "disturbing", "superfluous"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This paper introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8% on the full test set, but the accuracy rises above 95% when the algorithm is allowed to abstain from classifying mild words.
[ "['Peter D. Turney' 'Michael L. Littman']" ]
id: 0309035 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0309035v1
updated: 2003-09-19T20:13:07Z
published: 2003-09-19T20:13:07Z
title: Combining Independent Modules to Solve Multiple-choice Synonym and Analogy Problems
Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of successful, separately developed modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the well known mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems commonly used to assess human mastery of lexical semantics -- synonym questions and analogy questions. All three merging rules result in ensembles that are more accurate than any of their component modules. The differences among the three rules are not statistically significant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems.
[ "['Peter D. Turney' 'Michael L. Littman' 'Jeffrey Bigham' 'Victor Shnayder']" ]
id: 0311014 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0311014v1
updated: 2003-11-13T12:02:04Z
published: 2003-11-13T12:02:04Z
title: Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet
Various optimality properties of universal sequence predictors based on Bayes-mixtures in general, and Solomonoff's prediction scheme in particular, will be studied. The probability of observing $x_t$ at time $t$, given past observations $x_1 \dots x_{t-1}$, can be computed with the chain rule if the true generating distribution $\mu$ of the sequences $x_1 x_2 x_3 \dots$ is known. If $\mu$ is unknown, but known to belong to a countable or continuous class $M$, one can base one's prediction on the Bayes-mixture $\xi$ defined as a $w_\nu$-weighted sum or integral of distributions $\nu \in M$. The cumulative expected loss of the Bayes-optimal universal prediction scheme based on $\xi$ is shown to be close to the loss of the Bayes-optimal, but infeasible, prediction scheme based on $\mu$. We show that the bounds are tight and that no other predictor can lead to significantly smaller bounds. Furthermore, for various performance measures, we show Pareto-optimality of $\xi$ and give an Occam's razor argument that the choice $w_\nu \sim 2^{-K(\nu)}$ for the weights is optimal, where $K(\nu)$ is the length of the shortest program describing $\nu$. The results are applied to games of chance, defined as a sequence of bets, observations, and rewards. The prediction schemes (and bounds) are compared to the popular predictors based on expert advice. Extensions to infinite alphabets, partial, delayed and probabilistic prediction, classification, and more active systems are briefly discussed.
[ "['Marcus Hutter']" ]
id: 0311042 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0311042v1
updated: 2003-11-27T05:34:04Z
published: 2003-11-27T05:34:04Z
title: Toward Attribute Efficient Learning Algorithms
We make progress on two important problems regarding attribute efficient learnability. First, we give an algorithm for learning decision lists of length $k$ over $n$ variables using $2^{\tilde{O}(k^{1/3})} \log n$ examples and time $n^{\tilde{O}(k^{1/3})}$. This is the first algorithm for learning decision lists that has both subexponential sample complexity and subexponential running time in the relevant parameters. Our approach establishes a relationship between attribute efficient learning and polynomial threshold functions and is based on a new construction of low degree, low weight polynomial threshold functions for decision lists. For a wide range of parameters our construction matches a 1994 lower bound due to Beigel for the ODDMAXBIT predicate and gives an essentially optimal tradeoff between polynomial threshold function degree and weight. Second, we give an algorithm for learning an unknown parity function on $k$ out of $n$ variables using $O(n^{1-1/k})$ examples in time polynomial in $n$. For $k = o(\log n)$ this yields a polynomial time algorithm with sample complexity $o(n)$. This is the first polynomial time algorithm for learning parity on a superconstant number of variables with sublinear sample complexity.
[ "['Adam R. Klivans' 'Rocco A. Servedio']" ]
id: 0312003 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0312003v1
updated: 2003-11-30T00:19:19Z
published: 2003-11-30T00:19:19Z
title: Hybrid LQG-Neural Controller for Inverted Pendulum System
The paper presents a hybrid system controller, incorporating a neural and an LQG controller. The neural controller has been optimized by genetic algorithms directly on the inverted pendulum system. The failure-free optimization process stipulated a relatively small region of asymptotic stability for the neural controller, concentrated around the regulation point. The presented hybrid controller combines benefits of a genetically optimized neural controller and an LQG controller in a single system controller. High quality of the regulation process is achieved through utilization of the neural controller, while stability of the system during transient processes and a wide range of operation are assured through application of the LQG controller. The hybrid controller has been validated by applying it to a simulation model of an inherently unstable inverted pendulum system.
[ "['E. S. Sazonov' 'P. Klinkhachorn' 'R. L. Klein']" ]
id: 0312004 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0312004v1
updated: 2003-11-30T20:41:18Z
published: 2003-11-30T20:41:18Z
title: Improving spam filtering by combining Naive Bayes with simple k-nearest neighbor searches
Naive Bayes classifiers have become very popular for email classification within the last few months. They are quite easy to implement and very efficient. In this paper we present empirical results of email classification using a combination of naive Bayes and k-nearest neighbor searches. Using this technique we show that the accuracy of a Bayes filter can be improved slightly for a high number of features and significantly for a small number of features.
[ "['Daniel Etzold']" ]
id: 0312009 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0312009v1
updated: 2003-12-03T22:29:01Z
published: 2003-12-03T22:29:01Z
title: Failure-Free Genetic Algorithm Optimization of a System Controller Using SAFE/LEARNING Controllers in Tandem
The paper presents a method for failure-free genetic algorithm optimization of a system controller. Genetic algorithms present a powerful tool that facilitates producing near-optimal system controllers. Applied to such methods of computational intelligence as neural networks or fuzzy logic, genetic algorithms are capable of combining the non-linear mapping capabilities of the latter with learning the system behavior directly, that is, without a prior model. At the same time, genetic algorithms routinely produce solutions that lead to the failure of the controlled system. Such solutions are generally unacceptable for applications where safe operation must be guaranteed. We present here a design method which allows failure-free application of genetic algorithms through utilization of SAFE and LEARNING controllers in tandem, where the SAFE controller recovers the system from dangerous states while the LEARNING controller learns its behavior. The method has been validated by applying it to an inherently unstable inverted pendulum system.
[ "['E. S. Sazonov' 'D. Del Gobbo' 'P. Klinkhachorn' 'R. L. Klein']" ]
id: 0312018 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/cs/0312018v1
updated: 2003-12-11T20:07:39Z
published: 2003-12-11T20:07:39Z
title: Mapping Subsets of Scholarly Information
We illustrate the use of machine learning techniques to analyze, structure, maintain, and evolve a large online corpus of academic literature. An emerging field of research can be identified as part of an existing corpus, permitting the implementation of a more coherent community structure for its practitioners.
[ "['Paul Ginsparg' 'Paul Houle' 'Thorsten Joachims' 'Jae-Hoon Sul']" ]
id: 0312058 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0312058v1
updated: 2003-12-25T16:45:20Z
published: 2003-12-25T16:45:20Z
title: Acquiring Lexical Paraphrases from a Single Corpus
This paper studies the potential of identifying lexical paraphrases within a single corpus, focusing on the extraction of verb paraphrases. Most previous approaches detect individual paraphrase instances within a pair (or set) of comparable corpora, each of them containing roughly the same information, and rely on the substantial level of correspondence of such corpora. We present a novel method that successfully detects isolated paraphrase instances within a single corpus without relying on any a priori structure and information. A comparison suggests that an instance-based approach may be combined with a vector-based approach in order to better assess the paraphrase likelihood for many verb pairs.
[ "['Oren Glickman' 'Ido Dagan']" ]
id: 0312060 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0312060v1
updated: 2003-12-27T21:21:48Z
published: 2003-12-27T21:21:48Z
title: Part-of-Speech Tagging with Minimal Lexicalization
We use a Dynamic Bayesian Network to represent compactly a variety of sublexical and contextual features relevant to Part-of-Speech (PoS) tagging. The outcome is a flexible tagger (LegoTag) with state-of-the-art performance (3.6% error on a benchmark corpus). We explore the effect of eliminating redundancy and radically reducing the size of feature vocabularies. We find that a small but linguistically motivated set of suffixes results in improved cross-corpora generalization. We also show that a minimal lexicon limited to function words is sufficient to ensure reasonable performance.
[ "['Virginia Savova' 'Leonid Peshkin']" ]
id: 0401005 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0401005v1
updated: 2004-01-08T07:50:51Z
published: 2004-01-08T07:50:51Z
title: About Unitary Rating Score Constructing
We propose to pool test points from different subjects, and from different aspects of the same subject, into a unitary rating score by way of a nonlinear transformation of indicator points in accordance with Zipf's distribution. We propose to use the well-studied distribution of the intelligence quotient (IQ) as the reference distribution for the latent variable "progress in studies".
[ "['Kromer Victor']" ]
id: 0401033 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/q-bio/0401033v1
updated: 2004-01-26T03:50:03Z
published: 2004-01-26T03:50:03Z
title: Parametric Inference for Biological Sequence Analysis
One of the major successes in computational biology has been the unification, using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied towards these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems associated with different statistical models. This paper introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
[ "['Lior Pachter' 'Bernd Sturmfels']" ]
id: 0402021 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0402021v1
updated: 2004-02-11T15:45:14Z
published: 2004-02-11T15:45:14Z
title: A Numerical Example on the Principles of Stochastic Discrimination
Studies on ensemble methods for classification suffer from the difficulty of modeling the complementary strengths of the components. Kleinberg's theory of stochastic discrimination (SD) addresses this rigorously via mathematical notions of enrichment, uniformity, and projectability of an ensemble. We explain these concepts via a very simple numerical example that captures the basic principles of the SD theory and method. We focus on a fundamental symmetry in point set covering that is the key observation leading to the foundation of the theory. We believe a better understanding of the SD method will lead to developments of better tools for analyzing other ensemble methods.
[ "['Tin Kam Ho']" ]
id: 0402029 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/q-bio/0402029v2
updated: 2004-10-26T16:21:17Z
published: 2004-02-12T22:36:01Z
title: Fluctuation-dissipation theorem and models of learning
Advances in statistical learning theory have resulted in a multitude of different designs of learning machines. But which ones are implemented by brains and other biological information processors? We analyze how various abstract Bayesian learners perform on different data and argue that it is difficult to determine which learning-theoretic computation is performed by a particular organism using just its performance in learning a stationary target (learning curve). Building on the fluctuation-dissipation relation in statistical physics, we then discuss a different experimental setup that might be able to solve the problem.
[ "['Ilya Nemenman']" ]
id: 0402032 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0402032v1
updated: 2004-02-15T07:40:45Z
published: 2004-02-15T07:40:45Z
title: Fitness inheritance in the Bayesian optimization algorithm
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.
[ "['Martin Pelikan' 'Kumara Sastry']" ]
id: 0403025 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0403025v1
updated: 2004-03-15T16:33:55Z
published: 2004-03-15T16:33:55Z
title: Distribution of Mutual Information from Complete and Incomplete Data
Mutual information is widely used, in a descriptive way, to measure the stochastic dependence of categorical random variables. In order to address questions such as the reliability of the descriptive value, one must consider sample-to-population inferential approaches. This paper deals with the posterior distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean, and analytical approximations for the variance, skewness and kurtosis are derived. These approximations have a guaranteed accuracy level of the order O(1/n^3), where n is the sample size. Leading order approximations for the mean and the variance are derived in the case of incomplete samples. The derived analytical expressions allow the distribution of mutual information to be approximated reliably and quickly. In fact, the derived expressions can be computed with the same order of complexity needed for descriptive mutual information. This makes the distribution of mutual information become a concrete alternative to descriptive mutual information in many applications which would benefit from moving to the inductive side. Some of these prospective applications are discussed, and one of them, namely feature selection, is shown to perform significantly better when inductive mutual information is used.
[ "['Marcus Hutter' 'Marco Zaffalon']" ]
id: 0403031 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0403031v2
updated: 2004-03-20T07:51:11Z
published: 2004-03-19T17:13:55Z
title: Concept of E-machine: How does a "dynamical" brain learn to process "symbolic" information? Part I
The human brain has many remarkable information processing characteristics that deeply puzzle scientists and engineers. Among the most important and the most intriguing of these characteristics are the brain's broad universality as a learning system and its mysterious ability to dynamically change (reconfigure) its behavior depending on a combinatorial number of different contexts. This paper discusses a class of hypothetically brain-like dynamically reconfigurable associative learning systems that shed light on the possible nature of these properties of the brain. The systems are arranged on the general principle referred to as the concept of E-machine. The paper addresses the following questions: 1. How can "dynamical" neural networks function as universal programmable "symbolic" machines? 2. What kind of a universal programmable symbolic machine can form arbitrarily complex software in the process of programming similar to the process of biological associative learning? 3. How can a universal learning machine dynamically reconfigure its software depending on a combinatorial number of possible contexts?
[ "['Victor Eliashberg']" ]
id: 0403038 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0403038v1
updated: 2004-03-23T15:17:53Z
published: 2004-03-23T15:17:53Z
title: Tournament versus Fitness Uniform Selection
In evolutionary algorithms a critical parameter that must be tuned is that of selection pressure. If it is set too low then the rate of convergence towards the optimum is likely to be slow. Alternatively if the selection pressure is set too high the system is likely to become stuck in a local optimum due to a loss of diversity in the population. The recent Fitness Uniform Selection Scheme (FUSS) is a conceptually simple but somewhat radical approach to addressing this problem - rather than biasing the selection towards higher fitness, FUSS biases selection towards sparsely populated fitness levels. In this paper we compare the relative performance of FUSS with the well known tournament selection scheme on a range of problems.
[ "['Shane Legg' 'Marcus Hutter' 'Akshat Kumar']" ]
id: 0404032 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0404032v1
updated: 2004-04-15T02:59:10Z
published: 2004-04-15T02:59:10Z
title: When Do Differences Matter? On-Line Feature Extraction Through Cognitive Economy
For an intelligent agent to be truly autonomous, it must be able to adapt its representation to the requirements of its task as it interacts with the world. Most current approaches to on-line feature extraction are ad hoc; in contrast, this paper presents an algorithm that bases judgments of state compatibility and state-space abstraction on principled criteria derived from the psychological principle of cognitive economy. The algorithm incorporates an active form of Q-learning, and partitions continuous state-spaces by merging and splitting Voronoi regions. The experiments illustrate a new methodology for testing and comparing representations by means of learning curves. Results from the puck-on-a-hill task demonstrate the algorithm's ability to learn effective representations, superior to those produced by some other, well-known, methods.
[ "['David J. Finton']" ]
id: 0404057 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0404057v1
updated: 2004-04-28T15:58:35Z
published: 2004-04-28T15:58:35Z
title: Convergence of Discrete MDL for Sequential Prediction
We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models. This applies in particular to the important case of universal sequence prediction, where the model class corresponds to all algorithms for some fixed universal Turing machine (this correspondence is by enumerable semimeasures, hence the resulting models are stochastic). We prove convergence theorems similar to Solomonoff's theorem of universal induction, which also holds for general Bayes mixtures. The bound characterizing the convergence speed for MDL predictions is exponentially larger as compared to Bayes mixtures. We observe that there are at least three different ways of using MDL for prediction. One of these has worse prediction properties, for which predictions only converge if the MDL estimator stabilizes. We establish sufficient conditions for this to occur. Finally, some immediate consequences for complexity relations and randomness criteria are proven.
[ "['Jan Poland' 'Marcus Hutter']" ]
id: 0405043 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0405043v2
updated: 2004-05-12T20:37:07Z
published: 2004-05-12T16:41:01Z
title: Prediction with Expert Advice by Following the Perturbed Leader for General Weights
When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity/current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, for arbitrary weights there have been no results proven so far. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from Kalai (2003) (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate and both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are (to our knowledge) new.
[ "['Marcus Hutter' 'Jan Poland']" ]
id: 0405104 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0405104v1
updated: 2004-05-27T11:26:18Z
published: 2004-05-27T11:26:18Z
title: Knowledge Reduction and Discovery based on Demarcation Information
Knowledge reduction, which includes attribute reduction and value reduction, is an important topic in the rough set literature. It is also closely relevant to other fields, such as machine learning and data mining. In this paper, an algorithm called TWI-SQUEEZE is proposed. It can find a reduct, or an irreducible attribute subset, after two scans. Its soundness and computational complexity are given, which show that it is the fastest algorithm at present. A measure of variety is brought forward, of which algorithm TWI-SQUEEZE can be regarded as an application. The author also argues that this measure is a right measure of information, which can make it a unified measure for "differentiation", a concept that appears in the cognitive psychology literature. Value reduction is another important aspect of knowledge reduction. It is interesting that using the same algorithm we can execute a complete value reduction efficiently. The complete knowledge reduction, which results in an irreducible table, can therefore be accomplished after four scans of the table. The byproducts of reduction are two classifiers of different styles. In this paper, various cases and models will be discussed to prove the efficiency and effectiveness of the algorithm. Some topics, such as how to integrate user preference to find a local optimal attribute subset, will also be discussed.
[ "['Yuguo He']" ]
id: 0406011 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0406011v1
updated: 2004-06-06T18:57:05Z
published: 2004-06-06T18:57:05Z
title: Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences
We present a new method for nonlinear prediction of discrete random sequences under minimal structural assumptions. We give a mathematical construction for optimal predictors of such processes, in the form of hidden Markov models. We then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which approximates the ideal predictor from data. We discuss the reliability of CSSR, its data requirements, and its performance in simulations. Finally, we compare our approach to existing methods using variable-length Markov models and cross-validated hidden Markov models, and show theoretically and experimentally that our method delivers results superior to the former and at least comparable to the latter.
[ "['Cosma Rohilla Shalizi' 'Kristina Lisa Shalizi']" ]
id: 0406077 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/math/0406077v1
updated: 2004-06-04T09:11:18Z
published: 2004-06-04T09:11:18Z
title: A tutorial introduction to the minimum description length principle
This tutorial provides an overview of and introduction to Rissanen's Minimum Description Length (MDL) Principle. The first chapter provides a conceptual, entirely non-technical introduction to the subject. It serves as a basis for the technical introduction given in the second chapter, in which all the ideas of the first chapter are made mathematically precise. The main ideas are discussed in great conceptual and technical detail. This tutorial is an extended version of the first two chapters of the collection "Advances in Minimum Description Length: Theory and Application" (edited by P. Grunwald, I. J. Myung and M. Pitt, to be published by the MIT Press, Spring 2005).
[ "['Peter Grunwald']" ]
id: 0406221 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/math/0406221v1
updated: 2004-06-10T16:36:54Z
published: 2004-06-10T16:36:54Z
title: Suboptimal behaviour of Bayes and MDL in classification under misspecification
We show that forms of Bayesian and MDL inference that are often applied to classification problems can be *inconsistent*. This means there exists a learning problem such that for all amounts of data the generalization errors of the MDL classifier and the Bayes classifier relative to the Bayesian posterior both remain bounded away from the smallest achievable generalization error.
[ "['Peter Grunwald' 'John Langford']" ]
id: 0407016 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0407016v1
updated: 2004-07-06T22:18:25Z
published: 2004-07-06T22:18:25Z
title: Learning for Adaptive Real-time Search
Real-time heuristic search is a popular model of acting and learning in intelligent autonomous agents. Learning real-time search agents improve their performance over time by acquiring and refining a value function guiding the application of their actions. As computing the perfect value function is typically intractable, a heuristic approximation is acquired instead. Most studies of learning in real-time search (and reinforcement learning) assume that a simple value-function-greedy policy is used to select actions. This is in contrast to practice, where high performance is usually attained by interleaving planning and acting via a lookahead search of a non-trivial depth. In this paper, we take a step toward bridging this gap and propose a novel algorithm that (i) learns a heuristic function to be used specifically with a lookahead-based policy, (ii) selects the lookahead depth adaptively in each state, (iii) gives the user control over the trade-off between exploration and exploitation. We extensively evaluate the algorithm in the sliding tile puzzle testbed comparing it to the classical LRTA* and the more recent weighted LRTA*, bounded LRTA*, and FALCONS. Improvements of 5- to 30-fold in convergence speed are observed.
[ "['Vadim Bulitko']" ]
id: 0407039 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0407039v1
updated: 2004-07-16T10:36:49Z
published: 2004-07-16T10:36:49Z
title: On the Convergence Speed of MDL Predictions for Bernoulli Sequences
We consider the Minimum Description Length principle for online sequence prediction. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is bounded, implying convergence with probability one, and (b) it additionally specifies a "rate of convergence". Generally, for MDL only exponential loss bounds hold, as opposed to the linear bounds for a Bayes mixture. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. The results apply to many Machine Learning tasks including classification and hypothesis testing. We provide arguments that our theorems generalize to countable classes of i.i.d. models.
[ "['Jan Poland' 'Marcus Hutter']" ]
id: 0407057 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/cs/0407057v1
updated: 2004-07-23T12:43:28Z
published: 2004-07-23T12:43:28Z
title: Universal Convergence of Semimeasures on Individual Random Sequences
Solomonoff's central result on induction is that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal sequence predictor in case of unknown mu. Despite some nearby results and proofs in the literature, the stronger result of convergence for all (Martin-Loef) random sequences remained open. Such a convergence result would be particularly interesting and natural, since randomness can be defined in terms of M itself. We show that there are universal semimeasures M which do not converge for all random sequences, i.e. we give a partial negative answer to the open problem. We also provide a positive answer for some non-universal semimeasures. We define the incomputable measure D as a mixture over all computable measures and the enumerable semimeasure W as a mixture over all enumerable nearly-measures. We show that W converges to D and D to mu on all random sequences. The Hellinger distance measuring closeness of two distributions plays a central role.
[ "['Marcus Hutter' 'Andrej Muchnik']" ]
null
null
0407065
null
null
http://arxiv.org/pdf/cs/0407065v1
2004-07-29T19:46:01Z
2004-07-29T19:46:01Z
Word Sense Disambiguation by Web Mining for Word Co-occurrence Probabilities
This paper describes the National Research Council (NRC) Word Sense Disambiguation (WSD) system, as applied to the English Lexical Sample (ELS) task in Senseval-3. The NRC system approaches WSD as a classical supervised machine learning problem, using familiar tools such as the Weka machine learning software and Brill's rule-based part-of-speech tagger. Head words are represented as feature vectors with several hundred features. Approximately half of the features are syntactic and the other half are semantic. The main novelty in the system is the method for generating the semantic features, based on word co-occurrence probabilities. The probabilities are estimated using the Waterloo MultiText System with a corpus of about one terabyte of unlabeled text, collected by a web crawler.
[ "['Peter D. Turney']" ]
null
null
0408001
null
null
http://arxiv.org/pdf/cs/0408001v1
2004-07-31T14:04:04Z
2004-07-31T14:04:04Z
Semantic Linking - a Context-Based Approach to Interactivity in Hypermedia
The semantic Web initiates new, high-level access schemes to online content and applications. One area in particular need of redefined content exploration is online educational applications and their concepts of interactivity in the framework of open hypermedia systems. In the present paper we discuss aspects and opportunities of deriving interactivity schemes from the semantic notions of components. A transition from standard educational annotation to semantic statements of hyperlinks is discussed. We further introduce the concept of semantic link contexts as an approach to managing a coherent rhetoric of linking. A practical implementation is presented as well. Our semantic hyperlink implementation is based on the more general Multimedia Information Repository MIR, an open hypermedia system supporting the standards XML, Corba and JNDI.
[ "['Michael Engelhardt' 'Thomas C. Schmidt']" ]
null
null
0408004
null
null
http://arxiv.org/pdf/cs/0408004v1
2004-07-31T22:16:37Z
2004-07-31T22:16:37Z
Hypermedia Learning Objects System - On the Way to a Semantic Educational Web
While eLearning systems become more and more popular in daily education, available applications lack opportunities to structure, annotate and manage their contents in a high-level fashion. General efforts to improve on these deficits are taken by initiatives to define rich metadata sets and a semantic Web layer. In the present paper we introduce Hylos, an online learning system. Hylos is based on a cellular eLearning Object (ELO) information model encapsulating metadata conforming to the LOM standard. Content management is provisioned on this semantic metadata level and allows for variable, dynamically adaptable access structures. Context-aware multifunctional links permit systematic navigation depending on the learners' and didactic needs, thereby exploring the capabilities of the semantic Web. Hylos is built upon the more general Multimedia Information Repository (MIR) and the MIR adaptive context linking environment (MIRaCLE), its linking extension. MIR is an open system supporting the standards XML, Corba and JNDI. Hylos benefits from manageable information structures, sophisticated access logic and high-level authoring tools like the ELO editor, responsible for the semi-manual creation of metadata and WYSIWYG-like content editing.
[ "['Michael Engelhardt' 'Andreas Kárpáti' 'Torsten Rack' 'Ivette Schmidt'\n 'Thomas C. Schmidt']" ]
null
null
0408007
null
null
http://arxiv.org/pdf/cs/0408007v1
2004-08-02T21:24:41Z
2004-08-02T21:24:41Z
Online convex optimization in the bandit setting: gradient descent without a gradient
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient-descent online analysis without access to the gradient (being able only to evaluate the function at a single point).
[ "['Abraham D. Flaxman' 'Adam Tauman Kalai' 'H. Brendan McMahan']" ]
null
null
0408039
null
null
http://arxiv.org/abs/nlin/0408039v2
2004-11-22T23:33:13Z
2004-08-20T05:17:14Z
Stability and Diversity in Collective Adaptation
We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several familiar, explicit game interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. The simplicity of the assumptions underlying the macroscopic equations suggests that these behaviors should be expected broadly in collective adaptation. We also analyze the adaptation dynamics from an information-theoretic viewpoint and discuss self-organization induced by information flux between agents, giving a novel view of collective adaptation.
[ "['Yuzuru Sato' 'Eizo Akiyama' 'James P. Crutchfield']" ]
null
null
0408048
null
null
http://arxiv.org/pdf/cs/0408048v1
2004-08-21T16:57:34Z
2004-08-21T16:57:34Z
Journal of New Democratic Methods: An Introduction
This paper describes a new breed of academic journals that use statistical machine learning techniques to make them more democratic. In particular, not only can anyone submit an article, but anyone can also become a reviewer. Machine learning is used to decide which reviewers accurately represent the views of the journal's readers and thus deserve to have their opinions carry more weight. The paper concentrates on describing a specific experimental prototype of a democratic journal called the Journal of New Democratic Methods (JNDM). The paper also mentions the wider implications that machine learning and the techniques used in the JNDM may have for representative democracy in general.
[ "['John David Funge']" ]
null
null
0408058
null
null
http://arxiv.org/pdf/cs/0408058v1
2004-08-25T20:25:43Z
2004-08-25T20:25:43Z
Non-negative matrix factorization with sparseness constraints
Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of `sparseness' improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.
[ "['Patrik O. Hoyer']" ]
null
null
0408146
null
null
http://arxiv.org/pdf/math/0408146v1
2004-08-11T06:38:50Z
2004-08-11T06:38:50Z
Learning a Machine for the Decision in a Partially Observable Markov Universe
In this paper, we are interested in optimal decisions in a partially observable Markov universe. Our viewpoint departs from the dynamic programming viewpoint: we are directly approximating an optimal strategic tree depending on the observation. This approximation is made by means of a parameterized probabilistic law. In this paper, a particular family of hidden Markov models, with input and output, is considered as a learning framework. A method for optimizing the parameters of these HMMs is proposed and applied. This optimization method is based on the cross-entropic principle.
[ "['Frederic Dambreville']" ]
null
null
0410004
null
null
http://arxiv.org/pdf/cs/0410004v1
2004-10-02T07:19:49Z
2004-10-02T07:19:49Z
Applying Policy Iteration for Training Recurrent Neural Networks
Recurrent neural networks are often used for learning time-series data. Based on a few assumptions, we model this learning task as a minimization problem of a nonlinear least-squares cost function. The special structure of the cost function allows us to build a connection to reinforcement learning. We exploit this connection and derive a convergent, policy iteration-based algorithm. Furthermore, we argue that RNN training fits naturally into the reinforcement learning framework.
[ "['I. Szita' 'A. Lorincz']" ]
null
null
0410015
null
null
http://arxiv.org/pdf/cs/0410015v1
2004-10-07T10:57:08Z
2004-10-07T10:57:08Z
L1 regularization is better than L2 for learning and predicting chaotic systems
Emergent behaviors are in the focus of recent research interest. It is then of considerable importance to investigate what optimizations suit the learning and prediction of chaotic systems, the putative candidates for emergence. We have compared L1 and L2 regularizations on predicting chaotic time series using linear recurrent neural networks. The internal representation and the weights of the networks were optimized in a unifying framework. Computational tests on different problems indicate considerable advantages for the L1 regularization: It had considerably better learning time and better interpolating capabilities. We shall argue that optimization viewed as a maximum likelihood estimation justifies our results, because L1 regularization fits heavy-tailed distributions -- an apparently general feature of emergent systems -- better.
[ "['Z. Szabo' 'A. Lorincz']" ]
null
null
0410017
null
null
http://arxiv.org/pdf/cs/0410017v1
2004-10-07T17:20:56Z
2004-10-07T17:20:56Z
Automated Pattern Detection--An Algorithm for Constructing Optimally Synchronizing Multi-Regular Language Filters
In the computational-mechanics structural analysis of one-dimensional cellular automata the following automata-theoretic analogue of the \emph{change-point problem} from time series analysis arises: \emph{Given a string $\sigma$ and a collection $\{\mathcal{D}_i\}$ of finite automata, identify the regions of $\sigma$ that belong to each $\mathcal{D}_i$ and, in particular, the boundaries separating them.} We present two methods for solving this \emph{multi-regular language filtering problem}. The first, although providing the ideal solution, requires a stack, has a worst-case compute time that grows quadratically in $\sigma$'s length and conditions its output at any point on arbitrarily long windows of future input. The second method is to algorithmically construct a transducer that approximates the first algorithm. In contrast to the stack-based algorithm, however, the transducer requires only a finite amount of memory, runs in linear time, and gives immediate output for each letter read; it is, moreover, the best possible finite-state approximation with these three features.
[ "['Carl S. McTague' 'James P. Crutchfield']" ]
null
null
0410036
null
null
http://arxiv.org/pdf/cs/0410036v2
2005-09-09T17:48:53Z
2004-10-15T20:25:24Z
Self-Organised Factorial Encoding of a Toroidal Manifold
It is shown analytically how a neural network can be used optimally to encode input data that is derived from a toroidal manifold. The case of a 2-layer network is considered, where the output is assumed to be a set of discrete neural firing events. The network objective function measures the average Euclidean error that occurs when the network attempts to reconstruct its input from its output. This optimisation problem is solved analytically for a toroidal input manifold, and two types of solution are obtained: a joint encoder in which the network acts as a soft vector quantiser, and a factorial encoder in which the network acts as a pair of soft vector quantisers (one for each of the circular subspaces of the torus). The factorial encoder is favoured for small network sizes when the number of observed firing events is large. Such self-organised factorial encoding may be used to restrict the size of network that is required to perform a given encoding task, and will decompose an input manifold into its constituent submanifolds.
[ "['Stephen Luttrell']" ]
null
null
0410042
null
null
http://arxiv.org/pdf/cs/0410042v1
2004-10-18T10:50:28Z
2004-10-18T10:50:28Z
Neural Architectures for Robot Intelligence
We argue that the direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our lab in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.
[ "['H. Ritter' 'J. J. Steil' 'C. Noelker' 'F. Roethling' 'P. C. McGuire']" ]
null
null
0411099
null
null
http://arxiv.org/pdf/cs/0411099v1
2004-11-30T08:36:59Z
2004-11-30T08:36:59Z
A Note on the PAC Bayesian Theorem
We prove general exponential moment inequalities for averages of [0,1]-valued iid random variables and use them to tighten the PAC Bayesian Theorem. The logarithmic dependence on the sample count in the numerator of the PAC Bayesian bound is halved.
[ "['Andreas Maurer']" ]
null
null
0411140
null
null
http://arxiv.org/abs/quant-ph/0411140v2
2005-07-29T21:05:02Z
2004-11-18T20:14:16Z
Improved Bounds on Quantum Learning Algorithms
In this article we give several new results on the complexity of algorithms that learn Boolean functions from quantum queries and quantum examples. Hunziker et al. conjectured that for any class C of Boolean functions, the number of quantum black-box queries which are required to exactly identify an unknown function from C is $O\left(\frac{\log |C|}{\sqrt{\hat{\gamma}^{C}}}\right)$, where $\hat{\gamma}^{C}$ is a combinatorial parameter of the class C. We essentially resolve this conjecture in the affirmative by giving a quantum algorithm that, for any class C, identifies any unknown function from C using $O\left(\frac{\log |C| \log\log |C|}{\sqrt{\hat{\gamma}^{C}}}\right)$ quantum black-box queries. We consider a range of natural problems intermediate between the exact learning problem (in which the learner must obtain all bits of information about the black-box function) and the usual problem of computing a predicate (in which the learner must obtain only one bit of information about the black-box function). We give positive and negative results on when the quantum and classical query complexities of these intermediate problems are polynomially related to each other. Finally, we improve the known lower bounds on the number of quantum examples (as opposed to quantum black-box queries) required for $(\epsilon,\delta)$-PAC learning any concept class of Vapnik-Chervonenkis dimension d over the domain $\{0,1\}^n$ from $\Omega(\frac{d}{n})$ to $\Omega(\frac{1}{\epsilon}\log\frac{1}{\delta}+d+\frac{\sqrt{d}}{\epsilon})$. This new lower bound comes closer to matching known upper bounds for classical PAC learning.
[ "['Alp Atici' 'Rocco A. Servedio']" ]
null
null
0411515
null
null
http://arxiv.org/pdf/math/0411515v1
2004-11-23T16:39:07Z
2004-11-23T16:39:07Z
Fast Non-Parametric Bayesian Inference on Infinite Trees
Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. A Bayesian would assign a data-independent prior probability to "subdivide", which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive distribution, the effective model dimension, and other quantities.
[ "['Marcus Hutter']" ]
null
null
0412003
null
null
http://arxiv.org/pdf/cs/0412003v1
2004-12-01T16:32:49Z
2004-12-01T16:32:49Z
Mining Heterogeneous Multivariate Time-Series for Learning Meaningful Patterns: Application to Home Health Telecare
In recent years, time-series mining has become a challenging issue for researchers. An important application lies in most monitoring purposes, which require analyzing large sets of time-series for learning usual patterns. Any deviation from this learned profile is then considered an unexpected situation. Moreover, complex applications may involve the temporal study of several heterogeneous parameters. In this paper, we propose a method for mining heterogeneous multivariate time-series for learning meaningful patterns. The proposed approach allows for mixed time-series -- containing both pattern and non-pattern data -- as well as for imprecise matches, outliers, and stretching and global translation of pattern instances in time. We present the early results of our approach in the context of monitoring the health status of a person at home. The purpose is to build a behavioral profile of a person by analyzing the time variations of several quantitative or qualitative parameters recorded by sensors installed in the home.
[ "['Florence Duchene' 'Catherine Garbay' 'Vincent Rialle']" ]
null
null
0412024
null
null
http://arxiv.org/pdf/cs/0412024v1
2004-12-06T21:50:18Z
2004-12-06T21:50:18Z
Human-Level Performance on Word Analogy Questions by Latent Relational Analysis
This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. Recently the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in Latent Semantic Analysis), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM, while using a smaller corpus.
[ "['Peter D. Turney']" ]
null
null
0412098
null
null
http://arxiv.org/pdf/cs/0412098v3
2007-05-30T17:23:04Z
2004-12-21T16:05:36Z
The Google Similarity Distance
Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers the equivalent of `society' is `database,' and the equivalent of `use' is `way to search the database.' We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts we use the world-wide-web as database, and Google as search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the world-wide-web using Google page counts. The world-wide-web is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies, and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87% with the expert crafted WordNet categories.
[ "['Rudi Cilibrasi' 'Paul M. B. Vitanyi']" ]
null
null
0412106
null
null
http://arxiv.org/pdf/cs/0412106v1
2004-12-23T15:21:40Z
2004-12-23T15:21:40Z
Online Learning of Aggregate Knowledge about Non-linear Preferences Applied to Negotiating Prices and Bundles
In this paper, we consider a form of multi-issue negotiation where a shop negotiates both the contents and the price of bundles of goods with its customers. We present some key insights about, as well as a procedure for, locating mutually beneficial alternatives to the bundle currently under negotiation. The essence of our approach lies in combining aggregate (anonymous) knowledge of customer preferences with current data about the ongoing negotiation process. The developed procedure either works with already obtained aggregate knowledge or, in the absence of such knowledge, learns the relevant information online. We conduct computer experiments with simulated customers that have nonlinear preferences. We show how, for various types of customers with distinct negotiation heuristics, our procedure (with and without the necessary aggregate knowledge) increases the speed with which deals are reached, as well as the number and the Pareto efficiency of the deals reached, compared to a benchmark.
[ "['Koye Somefun' 'Tomas Klos' 'Han La Poutré']" ]
null
null
0501018
null
null
http://arxiv.org/pdf/cs/0501018v1
2005-01-10T21:03:14Z
2005-01-10T21:03:14Z
Combining Independent Modules in Lexical Multiple-Choice Problems
Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of multiple modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the familiar mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems used to assess human mastery of lexical semantics -- synonym questions and analogy questions. All three merging rules result in ensembles that are more accurate than any of their component modules. The differences among the three rules are not statistically significant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems.
[ "['Peter D. Turney' 'Michael L. Littman' 'Jeffrey Bigham' 'Victor Shnayder']" ]
null
null
0501028
null
null
http://arxiv.org/pdf/cs/0501028v1
2005-01-14T15:50:28Z
2005-01-14T15:50:28Z
An Empirical Study of MDL Model Selection with Infinite Parametric Complexity
Parametric complexity is a central concept in MDL model selection. In practice it often turns out to be infinite, even for quite simple models such as the Poisson and Geometric families. In such cases, MDL model selection based on NML, and Bayesian inference based on Jeffreys' prior, cannot be used. Several ways to resolve this problem have been proposed. We conduct experiments to compare and evaluate their behaviour on small sample sizes. Interestingly, we find poor behaviour for the plug-in predictive code; a restricted NML model performs quite well, but it is questionable whether the results validate its theoretical motivation. The Bayesian model with the improper Jeffreys' prior is the most dependable.
[ "['Steven de Rooij' 'Peter Grunwald']" ]
null
null
0501063
null
null
http://arxiv.org/abs/cs/0501063v1
2005-01-22T22:07:18Z
2005-01-22T22:07:18Z
Bandit Problems with Side Observations
An extension of the traditional two-armed bandit problem is considered, in which the decision maker has access to some side information before deciding which arm to pull. At each time t, before making a selection, the decision maker is able to observe a random variable X_t that provides some information on the rewards to be obtained. The focus is on finding uniformly good rules (that minimize the growth rate of the inferior sampling time) and on quantifying how much the additional information helps. Various settings are considered and for each setting, lower bounds on the achievable inferior sampling time are developed and asymptotically optimal adaptive schemes achieving these lower bounds are constructed.
[ "['Chih-Chun Wang' 'Sanjeev R. Kulkarni' 'H. Vincent Poor']" ]
null
null
0502004
null
null
http://arxiv.org/pdf/cs/0502004v1
2005-02-01T13:42:49Z
2005-02-01T13:42:49Z
Asymptotic Log-loss of Prequential Maximum Likelihood Codes
We analyze the Dawid-Rissanen prequential maximum likelihood codes relative to one-parameter exponential family models M. If data are i.i.d. according to an (essentially) arbitrary P, then the redundancy grows at rate (c/2) ln n. We show that c = v1/v2, where v1 is the variance of P, and v2 is the variance of the distribution m* in M that is closest to P in KL divergence. This shows that prequential codes behave quite differently from other important universal codes such as the 2-part MDL, Shtarkov and Bayes codes, for which c=1. This behavior is undesirable in an MDL model selection setting.
[ "['Peter Grunwald' 'Steven de Rooij']" ]
null
null
0502016
null
null
http://arxiv.org/pdf/cs/0502016v1
2005-02-03T19:54:02Z
2005-02-03T19:54:02Z
Stability Analysis for Regularized Least Squares Regression
We discuss stability for a class of learning algorithms with respect to noisy labels. The algorithms we consider are for regression, and they involve the minimization of regularized risk functionals, such as L(f) := 1/N sum_i (f(x_i)-y_i)^2+ lambda ||f||_H^2. We shall call the algorithm `stable' if, when y_i is a noisy version of f*(x_i) for some function f* in H, the output of the algorithm converges to f* as the regularization term and noise simultaneously vanish. We consider two flavors of this problem, one where a data set of N points remains fixed, and the other where N -> infinity. For the case where N -> infinity, we give conditions for convergence to f_E (the function which is the expectation of y(x) for each x), as lambda -> 0. For the fixed N case, we describe the limiting 'non-noisy', 'non-regularized' function f*, and give conditions for convergence. In the process, we develop a set of tools for dealing with functionals such as L(f), which are applicable to many other problems in learning theory.
[ "['Cynthia Rudin']" ]
null
null
0502017
null
null
http://arxiv.org/pdf/cs/0502017v1
2005-02-03T21:11:54Z
2005-02-03T21:11:54Z
Estimating mutual information and multi--information in large networks
We address the practical problems of estimating the information relations that characterize large networks. Building on methods developed for analysis of the neural code, we show that reliable estimates of mutual information can be obtained with manageable computational effort. The same methods allow estimation of higher order, multi--information terms. These ideas are illustrated by analyses of gene expression, financial markets, and consumer preferences. In each case, information theoretic measures correlate with independent, intuitive measures of the underlying structures in the system.
[ "['Noam Slonim' 'Gurinder S. Atwal' 'Gasper Tkacik' 'William Bialek']" ]
null
null
0502067
null
null
http://arxiv.org/pdf/cs/0502067v1
2005-02-15T14:59:49Z
2005-02-15T14:59:49Z
Master Algorithms for Active Experts Problems based on Increasing Loss Values
We specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied with countably infinite expert classes, and (c) it copes with losses that may grow in time appropriately slowly. We prove loss bounds against an adaptive adversary. From this, we obtain master algorithms for "active experts problems", which means that the master's actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. This results in a (computationally infeasible) universal master algorithm which performs - in a certain sense - almost as well as any computable strategy, for any online problem.
[ "['Jan Poland' 'Marcus Hutter']" ]
null
null
0502074
null
null
http://arxiv.org/abs/cs/0502074v2
2005-10-17T07:59:18Z
2005-02-17T14:58:28Z
On sample complexity for computational pattern recognition
In the statistical setting of the pattern recognition problem, the number of examples required to approximate an unknown labelling function is linear in the VC dimension of the target learning class. In this work we consider the question of whether such bounds exist if we restrict our attention to computable pattern recognition methods, assuming that the unknown labelling function is also computable. We find that in this case the number of examples required for a computable method to approximate the labelling function not only is not linear, but grows faster (in the VC dimension of the class) than any computable function. No time or space constraints are put on the predictors or target functions; the only resource we consider is the training examples. The task of pattern recognition is considered in conjunction with another learning problem -- data compression. An impossibility result for the task of data compression allows us to estimate the sample complexity for pattern recognition.
[ "['Daniil Ryabko']" ]
null
null
0502076
null
null
http://arxiv.org/abs/cs/0502076v2
2006-07-05T05:29:36Z
2005-02-18T01:31:53Z
Learning nonsingular phylogenies and hidden Markov models
In this paper we study the problem of learning phylogenies and hidden Markov models. We call a Markov model nonsingular if all transition matrices have determinants bounded away from 0 (and 1). We highlight the role of the nonsingularity condition for the learning problem. Learning hidden Markov models without the nonsingularity condition is at least as hard as learning parity with noise, a well-known learning problem conjectured to be computationally hard. On the other hand, we give a polynomial-time algorithm for learning nonsingular phylogenies and hidden Markov models.
[ "['Elchanan Mossel' 'Sébastien Roch']" ]
null
null
0502086
null
null
http://arxiv.org/pdf/cs/0502086v1
2005-02-22T09:51:16Z
2005-02-22T09:51:16Z
The Self-Organization of Speech Sounds
The speech code is a vehicle of language: it defines a set of forms used by a community to carry information. Such a code is necessary to support the linguistic interactions that allow humans to communicate. How then may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is discrete and compositional, shared by all the individuals of a community but different across communities, and phoneme inventories are characterized by statistical regularities. How can a speech code with these properties form? We try to approach these questions in the paper, using the "methodology of the artificial". We build a society of artificial agents, and detail a mechanism that shows the formation of a discrete speech code without pre-supposing the existence of linguistic capacities or of coordinated interactions. The mechanism is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non language-specific neural devices leads to the formation of a speech code that has properties similar to the human speech code. This result relies on the self-organizing properties of a generic coupling between perception and production within agents, and on the interactions between agents. The artificial system helps us to develop better intuitions on how speech might have appeared, by showing how self-organization might have helped natural selection to find speech.
[ "['Pierre-Yves Oudeyer']" ]
null
null
0502315
null
null
http://arxiv.org/pdf/math/0502315v1
2005-02-15T16:26:36Z
2005-02-15T16:26:36Z
Strong Asymptotic Assertions for Discrete MDL in Regression and Classification
We study the properties of the MDL (or maximum penalized complexity) estimator for Regression and Classification, where the underlying model class is countable. We show in particular a finite bound on the Hellinger losses under the only assumption that there is a "true" model contained in the class. This implies almost sure convergence of the predictive distribution to the true one at a fast rate. It corresponds to Solomonoff's central theorem of universal induction, however with a bound that is exponentially larger.
[ "['Jan Poland' 'Marcus Hutter']" ]
null
null
0503026
null
null
http://arxiv.org/pdf/cs/0503026v1
2005-03-11T12:38:30Z
2005-03-11T12:38:30Z
On Generalized Computable Universal Priors and their Convergence
Solomonoff unified Occam's razor and Epicurus' principle of multiple explanations to one elegant, formal, universal theory of inductive inference, which initiated the field of algorithmic information theory. His central result is that the posterior of the universal semimeasure M converges rapidly to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal predictor in case of unknown mu. The first part of the paper investigates the existence and convergence of computable universal (semi)measures for a hierarchy of computability classes: recursive, estimable, enumerable, and approximable. For instance, M is known to be enumerable, but not estimable, and to dominate all enumerable semimeasures. We present proofs for discrete and continuous semimeasures. The second part investigates more closely the types of convergence, possibly implied by universality: in difference and in ratio, with probability 1, in mean sum, and for Martin-Loef random sequences. We introduce a generalized concept of randomness for individual sequences and use it to exhibit difficulties regarding these issues. In particular, we show that convergence fails (holds) on generalized-random sequences in gappy (dense) Bernoulli classes.
[ "['Marcus Hutter']" ]
null
null
0503071
null
null
http://arxiv.org/abs/cs/0503071v2
2005-09-30T02:05:50Z
2005-03-26T05:13:51Z
Consistency in Models for Distributed Learning under Communication Constraints
Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process amongst members of a network of simple learning agents. The agents are limited in their ability to communicate to a central fusion center and thus, the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we question the existence of agent decision rules and fusion rules that result in a universally consistent ensemble. The answers to this question present new issues to consider with regard to universal consistency. Insofar as these models present a useful picture of distributed scenarios, this paper addresses the issue of whether or not the guarantees provided by Stone's Theorem in centralized environments hold in distributed settings.
[ "['Joel B. Predd' 'Sanjeev R. Kulkarni' 'H. Vincent Poor']" ]
null
null
0503072
null
null
http://arxiv.org/abs/cs/0503072v1
2005-03-26T05:42:06Z
2005-03-26T05:42:06Z
Distributed Learning in Wireless Sensor Networks
The problem of distributed or decentralized detection and estimation in applications such as wireless sensor networks has often been considered in the framework of parametric models, in which strong assumptions are made about a statistical description of nature. In certain applications, such assumptions are warranted and systems designed from these models show promise. However, in other scenarios, prior knowledge is at best vague and translating such knowledge into a statistical model is undesirable. Applications such as these pave the way for a nonparametric study of distributed detection and estimation. In this paper, we review recent work of the authors in which some elementary models for distributed learning are considered. These models are in the spirit of classical work in nonparametric statistics and are applicable to wireless sensor networks.
[ "['Joel B. Predd' 'Sanjeev R. Kulkarni' 'H. Vincent Poor']" ]
null
null
0504001
null
null
http://arxiv.org/pdf/cs/0504001v1
2005-03-31T23:04:28Z
2005-03-31T23:04:28Z
Probabilistic and Team PFIN-type Learning: General Properties
We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p_1 and p_2 and answers whether PFIN-type learning with the probability of success p_1 is equivalent to PFIN-type learning with the probability of success p_2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending ordering and order-equivalent to ordinal epsilon_0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning, team learning and probabilistic learning are of the same power.
[ "['Andris Ambainis']" ]
null
null
0504042
null
null
http://arxiv.org/pdf/cs/0504042v1
2005-04-11T17:45:09Z
2005-04-11T17:45:09Z
The Bayesian Decision Tree Technique with a Sweeping Strategy
The uncertainty of classification outcomes is of crucial importance for many safety-critical applications, including, for example, medical diagnostics. In such applications the uncertainty of classification can be reliably estimated within a Bayesian model averaging technique that allows the use of prior information. Decision Tree (DT) classification models used within such a technique give experts additional information by making this classification scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology of stochastic sampling makes the Bayesian DT technique feasible to perform. However, in practice, the MCMC technique may become stuck in a particular DT which is far away from a region with a maximal posterior. Sampling such DTs causes bias in the posterior estimates, and as a result the evaluation of classification uncertainty may be incorrect. In a particular case, the negative effect of such sampling may be reduced by giving additional prior information on the shape of DTs. In this paper we describe a new approach based on sweeping the DTs without additional priors on the favorite shape of DTs. The performances of Bayesian DT techniques with the standard and sweeping strategies are compared on synthetic data as well as on real datasets. Quantitatively evaluating the uncertainty in terms of the entropy of class posterior probabilities, we found that the sweeping strategy is superior to the standard strategy.
[ "['V. Schetinin' 'J. E. Fieldsend' 'D. Partridge' 'W. J. Krzanowski'\n 'R. M. Everson' 'T. C. Bailey' 'A. Hernandez']" ]
null
null
0504043
null
null
http://arxiv.org/pdf/cs/0504043v1
2005-04-11T17:53:35Z
2005-04-11T17:53:35Z
Experimental Comparison of Classification Uncertainty for Randomised and Bayesian Decision Tree Ensembles
In this paper we experimentally compare the classification uncertainty of the randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique with a restarting strategy on a synthetic dataset as well as on some datasets commonly used in the machine learning community. For quantitative evaluation of classification uncertainty, we use an Uncertainty Envelope dealing with the class posterior distribution and a given confidence probability. Counting the classifier outcomes, this technique produces feasible evaluations of the classification uncertainty. Using this technique in our experiments, we found that the Bayesian DT technique is superior to the randomised DT ensemble technique.
[ "['V. Schetinin' 'D. Partridge' 'W. J. Krzanowski' 'R. M. Everson'\n 'J. E. Fieldsend' 'T. C. Bailey' 'A. Hernandez']" ]
null
null
0504052
null
null
http://arxiv.org/pdf/cs/0504052v1
2005-04-13T13:22:49Z
2005-04-13T13:22:49Z
Learning Multi-Class Neural-Network Models from Electroencephalograms
We describe a new algorithm for learning multi-class neural-network models from large-scale clinical electroencephalograms (EEGs). This algorithm trains hidden neurons separately to classify all the pairs of classes. To find the best pairwise classifiers, our algorithm searches for input variables that are relevant to the classification problem. Despite patient variability and heavily overlapping classes, a 16-class model learnt from EEGs of 65 sleeping newborns correctly classified 80.8% of the training and 80.1% of the testing examples. Additionally, the neural-network model provides a probabilistic interpretation of decisions.
[ "['Vitaly Schetinin' 'Joachim Schult' 'Burkhart Scheidt' 'Valery Kuriakin']" ]
null
null
0504054
null
null
http://arxiv.org/pdf/cs/0504054v1
2005-04-13T13:40:38Z
2005-04-13T13:40:38Z
Learning from Web: Review of Approaches
Knowledge discovery is defined as the non-trivial extraction of implicit, previously unknown and potentially useful information from given data. Knowledge extraction from web documents deals with unstructured, free-format documents whose number is enormous and rapidly growing. Artificial neural networks are well suited to the problem of knowledge discovery from web documents, because trained networks can classify learning and testing examples representing the text-mining domain more accurately and easily. However, neural networks that consist of a large number of weighted connections and activation units often generate incomprehensible, hard-to-understand models of text classification. The same problem affects the more powerful recurrent neural networks, which employ feedback links from hidden or output units to their input units. Due to these feedback links, recurrent neural networks are able to take the context within a document into account. To be useful for data mining, self-organizing neural-network techniques of knowledge extraction have been explored and developed. Self-organization principles were used to create an adequate neural-network structure and to reduce the dimensionality of the features used to describe text documents. The use of these principles is attractive because they can reduce neural-network redundancy and considerably facilitate knowledge representation.
[ "['Vitaly Schetinin']" ]
null
null
0504063
null
null
http://arxiv.org/pdf/cs/0504063v1
2005-04-14T07:57:01Z
2005-04-14T07:57:01Z
Selection in Scale-Free Small World
In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with those of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the reinforcements received for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm: it finds the new information faster and has a better ratio of new information to all submitted documents. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small-world property of the Web.
[ "['Zs. Palotai' 'Cs. Farkas' 'A. Lorincz']" ]
null
null
0504069
null
null
http://arxiv.org/pdf/cs/0504069v1
2005-04-14T10:47:38Z
2005-04-14T10:47:38Z
A Neural-Network Technique to Learn Concepts from Electroencephalograms
We present a new technique developed to learn multi-class concepts from clinical electroencephalograms. A desired concept is represented as a neuronal computational model consisting of input, hidden, and output neurons. In this model the hidden neurons learn independently to classify the electroencephalogram segments, which are presented by spectral and statistical features. This technique has been applied to electroencephalogram data recorded from 65 sleeping healthy newborns in order to learn a brain maturation concept for newborns aged between 35 and 51 weeks. The 39399 and 19670 segments from these data have been used for learning and testing the concept, respectively. As a result, the concept correctly classified 80.1% of the testing segments, or 87.7% of the 65 records.
[ "['Vitaly Schetinin' 'Joachim Schult']" ]
null
null
0504070
null
null
http://arxiv.org/pdf/cs/0504070v1
2005-04-14T10:49:55Z
2005-04-14T10:49:55Z
The Combined Technique for Detection of Artifacts in Clinical Electroencephalograms of Sleeping Newborns
In this paper we describe a new method combining the polynomial neural network and decision tree techniques in order to derive comprehensible classification rules from clinical electroencephalograms (EEGs) recorded from sleeping newborns. These EEGs are heavily corrupted by cardiac, eye movement, muscle and noise artifacts, and as a consequence some EEG features are irrelevant to the classification problems. Combining the polynomial network and decision tree techniques, we discover comprehensible classification rules whilst also attempting to keep their classification error down. This technique is shown to outperform a number of commonly used machine learning techniques applied to automatically recognize artifacts in the sleep EEGs.
[ "['Vitaly Schetinin' 'Joachim Schult']" ]
null
null
0504078
null
null
http://arxiv.org/pdf/cs/0504078v1
2005-04-16T16:48:49Z
2005-04-16T16:48:49Z
Adaptive Online Prediction by Following the Perturbed Leader
When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity/current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, no results have been proven so far for arbitrary weights. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from Kalai & Vempala (2003) (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate and both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are new.
[ "['Marcus Hutter' 'Jan Poland']" ]
null
null
0504086
null
null
http://arxiv.org/pdf/cs/0504086v1
2005-04-19T15:01:25Z
2005-04-19T15:01:25Z
Componentwise Least Squares Support Vector Machines
This chapter describes componentwise Least Squares Support Vector Machines (LS-SVMs) for the estimation of additive models consisting of a sum of nonlinear components. The primal-dual derivations characterizing LS-SVMs for the estimation of the additive model result in a single set of linear equations with size growing in the number of data-points. The derivation is elaborated for the classification as well as the regression case. Furthermore, different techniques are proposed to discover structure in the data by looking for sparse components in the model based on dedicated regularization schemes on the one hand and fusion of the componentwise LS-SVMs training with a validation criterion on the other hand. (keywords: LS-SVMs, additive models, regularization, structure detection)
[ "['Kristiaan Pelckmans' 'Ivan Goethals' 'Jos De Brabanter'\n 'Johan A. K. Suykens' 'Bart De Moor']" ]
null
null
0505028
null
null
http://arxiv.org/pdf/cs/0505028v3
2005-08-16T12:43:07Z
2005-05-11T16:45:58Z
A linear memory algorithm for Baum-Welch training
Background: Baum-Welch training is an expectation-maximisation algorithm for training the emission and transition probabilities of hidden Markov models in a fully automated way. Methods and results: We introduce a linear space algorithm for Baum-Welch training. For a hidden Markov model with M states, T free transition and E free emission parameters, and an input sequence of length L, our new algorithm requires O(M) memory and O(L M T_max (T + E)) time for one Baum-Welch iteration, where T_max is the maximum number of states that any state is connected to. The most memory efficient algorithm until now was the checkpointing algorithm with O(log(L) M) memory and O(log(L) L M T_max) time requirement. Our novel algorithm thus renders the memory requirement completely independent of the length of the training sequences. More generally, for an n-hidden Markov model and n input sequences of length L, the memory requirement of O(log(L) L^(n-1) M) is reduced to O(L^(n-1) M) memory while the running time is changed from O(log(L) L^n M T_max + L^n (T + E)) to O(L^n M T_max (T + E)). Conclusions: For the large class of hidden Markov models used for example in gene prediction, whose number of states does not scale with the length of the input sequence, our novel algorithm can thus be both faster and more memory-efficient than any of the existing algorithms.
[ "['Istvan Miklos' 'Irmtraud M. Meyer']" ]
null
null
0505064
null
null
http://arxiv.org/abs/cs/0505064v1
2005-05-24T14:53:49Z
2005-05-24T14:53:49Z
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal clues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS-robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
[ "['P. C. McGuire' 'J. Fritsch' 'J. J. Steil' 'F. Roethling' 'G. A. Fink'\n 'S. Wachsmuth' 'G. Sagerer' 'H. Ritter']" ]
null
null
0505083
null
null
http://arxiv.org/pdf/cs/0505083v1
2005-05-30T21:12:00Z
2005-05-30T21:12:00Z
Defensive forecasting
We consider how to make probability forecasts of binary labels. Our main mathematical result is that for any continuous gambling strategy used for detecting disagreement between the forecasts and the actual labels, there exists a forecasting strategy whose forecasts are ideal as far as this gambling strategy is concerned. A forecasting strategy obtained in this way from a gambling strategy demonstrating a strong law of large numbers is simplified and studied empirically.
[ "['Vladimir Vovk' 'Akimichi Takemura' 'Glenn Shafer']" ]
null
null
0506004
null
null
http://arxiv.org/pdf/cs/0506004v4
2006-07-01T13:46:30Z
2005-06-01T14:03:20Z
Non-asymptotic calibration and resolution
We analyze a new algorithm for probability forecasting of binary observations on the basis of the available data, without making any assumptions about the way the observations are generated. The algorithm is shown to be well calibrated and to have good resolution for long enough sequences of observations and for a suitable choice of its parameter, a kernel on the Cartesian product of the forecast space $[0,1]$ and the data space. Our main results are non-asymptotic: we establish explicit inequalities, shown to be tight, for the performance of the algorithm.
[ "['Vladimir Vovk']" ]
null
null
0506007
null
null
http://arxiv.org/pdf/cs/0506007v2
2005-09-24T16:55:14Z
2005-06-02T13:26:43Z
Defensive forecasting for linear protocols
We consider a general class of forecasting protocols, called "linear protocols", and discuss several important special cases, including multi-class forecasting. Forecasting is formalized as a game between three players: Reality, whose role is to generate observations; Forecaster, whose goal is to predict the observations; and Skeptic, who tries to make money on any lack of agreement between Forecaster's predictions and the actual observations. Our main mathematical result is that for any continuous strategy for Skeptic in a linear protocol there exists a strategy for Forecaster that does not allow Skeptic's capital to grow. This result is a meta-theorem that allows one to transform any continuous law of probability in a linear protocol into a forecasting strategy whose predictions are guaranteed to satisfy this law. We apply this meta-theorem to a weak law of large numbers in Hilbert spaces to obtain a version of the K29 prediction algorithm for linear protocols and show that this version also satisfies the attractive properties of proper calibration and resolution under a suitable choice of its kernel parameter, with no assumptions about the way the data is generated.
[ "['Vladimir Vovk' 'Ilia Nouretdinov' 'Akimichi Takemura' 'Glenn Shafer']" ]