categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
0506022
null
null
http://arxiv.org/abs/cs/0506022v1
2005-06-08T09:07:23Z
2005-06-08T09:07:23Z
Asymptotics of Discrete MDL for Online Prediction
Minimum Description Length (MDL) is an important principle for induction and prediction, with strong relations to optimal Bayesian learning. This paper deals with learning non-i.i.d. processes by means of two-part MDL, where the underlying model class is countable. We consider the online learning framework, i.e. observations come in one by one, and the predictor is allowed to update his state of mind after each time step. We identify two ways of predicting by MDL for this setup, namely a static and a dynamic one. (A third variant, hybrid MDL, will turn out inferior.) We will prove that under the only assumption that the data is generated by a distribution contained in the model class, the MDL predictions converge to the true values almost surely. This is accomplished by proving finite bounds on the quadratic, the Hellinger, and the Kullback-Leibler loss of the MDL learner, which are however exponentially worse than for Bayesian prediction. We demonstrate that these bounds are sharp, even for model classes containing only Bernoulli distributions. We show how these bounds imply regret bounds for arbitrary loss functions. Our results apply to a wide range of setups, namely sequence prediction, pattern classification, regression, and universal induction in the sense of Algorithmic Information Theory among others.
[ "['Jan Poland' 'Marcus Hutter']" ]
null
null
0506041
null
null
http://arxiv.org/pdf/cs/0506041v3
2005-09-02T14:27:18Z
2005-06-11T18:11:22Z
Competitive on-line learning with a convex loss function
We consider the problem of sequential decision making under uncertainty in which the loss caused by a decision depends on the following binary observation. In competitive on-line learning, the goal is to design decision algorithms that are almost as good as the best decision rules in a wide benchmark class, without making any assumptions about the way the observations are generated. However, standard algorithms in this area can only deal with finite-dimensional (often countable) benchmark classes. In this paper we give similar results for decision rules ranging over an arbitrary reproducing kernel Hilbert space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first $N$ observations, does not exceed the average loss of the best decision rule with a bounded norm plus $O(N^{-1/2})$. Our proof technique is very different from the standard ones and is based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have good resolution in the long run, we use the expected loss minimization principle to find a suitable decision.
[ "['Vladimir Vovk']" ]
null
null
0506057
null
null
http://arxiv.org/pdf/cs/0506057v2
2005-07-21T02:43:12Z
2005-06-14T04:00:38Z
About one 3-parameter Model of Testing
This article offers a 3-parameter model of testing, with 1) the difference between the ability level of the examinee and item difficulty; 2) the examinee discrimination and 3) the item discrimination as model parameters.
[ "['Kromer Victor']" ]
null
null
0506075
null
null
http://arxiv.org/pdf/cs/0506075v1
2005-06-17T20:10:43Z
2005-06-17T20:10:43Z
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales
We address the rating-inference problem, wherein rather than simply decide whether a review is "thumbs up" or "thumbs down", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five "stars"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, "three stars" is intuitively closer to "four stars" than to "one star". We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem.
[ "['Bo Pang' 'Lillian Lee']" ]
null
null
0506085
null
null
http://arxiv.org/pdf/cs/0506085v1
2005-06-22T21:21:13Z
2005-06-22T21:21:13Z
On the Job Training
We propose a new framework for building and evaluating machine learning algorithms. We argue that many real-world problems require an agent which must quickly learn to respond to demands, yet can continue to perform and respond to new training throughout its useful life. We give a framework for how such agents can be built, describe several metrics for evaluating them, and show that subtle changes in system construction can significantly affect agent performance.
[ "['Jason E. Holt']" ]
null
null
0506095
null
null
http://arxiv.org/pdf/cs/0506095v1
2005-06-27T04:07:34Z
2005-06-27T04:07:34Z
Deriving a Stationary Dynamic Bayesian Network from a Logic Program with Recursive Loops
Recursive loops in a logic program present a challenging problem to the PLP framework. On the one hand, they loop forever so that the PLP backward-chaining inferences would never stop. On the other hand, they generate cyclic influences, which are disallowed in Bayesian networks. Therefore, in existing PLP approaches logic programs with recursive loops are considered to be problematic and thus are excluded. In this paper, we propose an approach that makes use of recursive loops to build a stationary dynamic Bayesian network. Our work stems from an observation that recursive loops in a logic program imply a time sequence and thus can be used to model a stationary dynamic Bayesian network without using explicit time parameters. We introduce a Bayesian knowledge base with logic clauses of the form $A \leftarrow A_1,...,A_l, true, Context, Types$, which naturally represents the knowledge that the $A_i$s have direct influences on $A$ in the context $Context$ under the type constraints $Types$. We then use the well-founded model of a logic program to define the direct influence relation and apply SLG-resolution to compute the space of random variables together with their parental connections. We introduce a novel notion of influence clauses, based on which a declarative semantics for a Bayesian knowledge base is established and algorithms for building a two-slice dynamic Bayesian network from a logic program are developed.
[ "['Y. D. Shen' 'Q. Yang' 'J. H. You' 'L. Y. Yuan']" ]
null
null
0506101
null
null
http://arxiv.org/pdf/cs/0506101v1
2005-06-29T20:26:33Z
2005-06-29T20:26:33Z
Efficient Multiclass Implementations of L1-Regularized Maximum Entropy
This paper discusses the application of L1-regularized maximum entropy modeling or SL1-Max [9] to multiclass categorization problems. A new modification to the SL1-Max fast sequential learning algorithm is proposed to handle conditional distributions. Furthermore, unlike most previous studies, the present research goes beyond a single type of conditional distribution. It describes and compares a variety of modeling assumptions about the class distribution (independent or exclusive) and various types of joint or conditional distributions. It results in a new methodology for combining binary regularized classifiers to achieve multiclass categorization. In this context, Maximum Entropy can be considered as a generic and efficient regularized classification tool that matches or outperforms the state-of-the-art represented by AdaBoost and SVMs.
[ "['Patrick Haffner' 'Steven Phillips' 'Rob Schapire']" ]
null
null
0507033
null
null
http://arxiv.org/pdf/cs/0507033v2
2005-11-14T08:18:49Z
2005-07-13T05:45:28Z
Multiresolution Kernels
We present in this work a new methodology to design kernels on data which is structured with smaller components, such as text, images or sequences. This methodology is a template procedure which can be applied on most kernels on measures and takes advantage of a more detailed "bag of components" representation of the objects. To obtain such a detailed description, we consider possible decompositions of the original bag into a collection of nested bags, following a prior knowledge on the objects' structure. We then consider these smaller bags to compare two objects both in a detailed perspective, stressing local matches between the smaller bags, and in a global or coarse perspective, by considering the entire bag. This multiresolution approach is likely to be best suited for tasks where the coarse approach is not precise enough, and where a more subtle mixture of both local and global similarities is necessary to compare objects. The approach presented here would not be computationally tractable without a factorization trick that we introduce before presenting promising results on an image retrieval task.
[ "['Marco Cuturi' 'Kenji Fukumizu']" ]
null
null
0507039
null
null
http://arxiv.org/abs/cs/0507039v1
2005-07-18T00:45:12Z
2005-07-18T00:45:12Z
Distributed Regression in Sensor Networks: Training Distributively with Alternating Projections
Wireless sensor networks (WSNs) have attracted considerable attention in recent years and motivate a host of new challenges for distributed signal processing. The problem of distributed or decentralized estimation has often been considered in the context of parametric models. However, the success of parametric methods is limited by the appropriateness of the strong statistical assumptions made by the models. In this paper, a more flexible nonparametric model for distributed regression is considered that is applicable in a variety of WSN applications including field estimation. Here, starting with the standard regularized kernel least-squares estimator, a message-passing algorithm for distributed estimation in WSNs is derived. The algorithm can be viewed as an instantiation of the successive orthogonal projection (SOP) algorithm. Various practical aspects of the algorithm are discussed and several numerical simulations validate the potential of the approach.
[ "['Joel B. Predd' 'Sanjeev R. Kulkarni' 'H. Vincent Poor']" ]
null
null
0507040
null
null
http://arxiv.org/pdf/cs/0507040v1
2005-07-18T08:10:10Z
2005-07-18T08:10:10Z
Pattern Recognition for Conditionally Independent Data
In this work we consider the task of relaxing the i.i.d. assumption in pattern recognition (or classification), aiming to make existing learning algorithms applicable to a wider range of tasks. Pattern recognition is guessing a discrete label of some object based on a set of given examples (pairs of objects and labels). We consider the case of deterministically defined labels. Traditionally, this task is studied under the assumption that examples are independent and identically distributed. However, it turns out that many results of pattern recognition theory carry over to a weaker assumption: namely, conditional independence and identical distribution of objects, while the only assumption on the distribution of labels is that the rate of occurrence of each label should be above some positive threshold. We find a broad class of learning algorithms for which estimations of the probability of a classification error achieved under the classical i.i.d. assumption can be generalised to similar estimates for the case of conditionally i.i.d. examples.
[ "['Daniil Ryabko']" ]
null
null
0507041
null
null
http://arxiv.org/pdf/cs/0507041v1
2005-07-18T12:34:53Z
2005-07-18T12:34:53Z
Monotone Conditional Complexity Bounds on Future Prediction Errors
We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution m by the algorithmic complexity of m. Here we assume we are at a time t>1 and already observed x=x_1...x_t. We bound the future prediction performance on x_{t+1}x_{t+2}... by a new variant of algorithmic complexity of m given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
[ "['Alexey Chernov' 'Marcus Hutter']" ]
null
null
0507044
null
null
http://arxiv.org/pdf/cs/0507044v1
2005-07-18T14:33:56Z
2005-07-18T14:33:56Z
Defensive Universal Learning with Experts
This paper shows how universal learning can be achieved with expert advice. To this aim, we specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied with countably infinite expert classes, and (c) it copes with losses that may grow in time appropriately slowly. We prove loss bounds against an adaptive adversary. From this, we obtain a master algorithm for "reactive" experts problems, which means that the master's actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. The resulting universal learner performs -- in a certain sense -- almost as well as any computable strategy, for any online decision problem. We also specify the (worst-case) convergence speed, which is very slow.
[ "['Jan Poland' 'Marcus Hutter']" ]
null
null
0507062
null
null
http://arxiv.org/pdf/cs/0507062v1
2005-07-26T05:00:27Z
2005-07-26T05:00:27Z
FPL Analysis for Adaptive Bandits
A main problem of "Follow the Perturbed Leader" strategies for online decision problems is that regret bounds are typically proven against an oblivious adversary. In partial observation cases, it was not clear how to obtain performance guarantees against an adaptive adversary without worsening the bounds. We propose a conceptually simple argument to resolve this problem. Using this, a regret bound of O(t^(2/3)) for FPL in the adversarial multi-armed bandit problem is shown. This bound holds for the common FPL variant using only the observations from designated exploration rounds. Using all observations allows for the stronger bound of O(t^(1/2)), matching the best bound known so far (and essentially the known lower bound) for adversarial bandits. Surprisingly, this variant does not even need explicit exploration; it is self-stabilizing. However, the sampling probabilities have to be either externally provided or approximated to sufficient accuracy, using O(t^2 log t) samples in each step.
[ "['Jan Poland']" ]
null
null
0508007
null
null
http://arxiv.org/pdf/cs/0508007v4
2010-12-27T08:29:34Z
2005-08-01T18:55:57Z
Regularity of Position Sequences
A person is given a numbered sequence of positions on a sheet of paper. The person is asked, "Which will be the next (or the next after that) position?" Everyone has an opinion as to how he or she would proceed. There are regular sequences for which there is general agreement on how to continue. However, there are less regular sequences for which this assessment is less certain. There are sequences for which every continuation is perceived to be arbitrary. I would like to present a mathematical model that reflects these opinions and perceptions with the aid of a valuation function. It is necessary to apply a rich set of invariant features of position sequences to ensure the quality of this model. All other properties of the model are arbitrary.
[ "['Manfred Harringer']" ]
null
null
0508027
null
null
http://arxiv.org/abs/cs/0508027v1
2005-08-03T16:09:00Z
2005-08-03T16:09:00Z
Expectation maximization as message passing
Based on prior work by Eckford, it is shown how expectation maximization (EM) may be viewed, and used, as a message passing algorithm in factor graphs.
[ "['J. Dauwels' 'S. Korl' 'H. -A. Loeliger']" ]
null
null
0508043
null
null
http://arxiv.org/pdf/cs/0508043v1
2005-08-05T10:16:16Z
2005-08-05T10:16:16Z
Sequential Predictions based on Algorithmic Complexity
This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's universal prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the important quantities for prediction. We show that for deterministic computable environments, the "posterior" and losses of m converge, but rapid convergence could only be shown on-sequence; the off-sequence convergence can be slow. In probabilistic environments, neither the posterior nor the losses converge, in general.
[ "['Marcus Hutter']" ]
null
null
0508053
null
null
http://arxiv.org/pdf/cs/0508053v1
2005-08-10T19:35:57Z
2005-08-10T19:35:57Z
Measuring Semantic Similarity by Latent Relational Analysis
This paper introduces Latent Relational Analysis (LRA), a method for measuring semantic similarity. LRA measures similarity in the semantic relations between two pairs of words. When two pairs have a high degree of relational similarity, they are analogous. For example, the pair cat:meow is analogous to the pair dog:bark. There is evidence from cognitive science that relational similarity is fundamental to many cognitive and linguistic tasks (e.g., analogical reasoning). In the Vector Space Model (VSM) approach to measuring relational similarity, the similarity between two pairs is calculated by the cosine of the angle between the vectors that represent the two pairs. The elements in the vectors are based on the frequencies of manually constructed patterns in a large corpus. LRA extends the VSM approach in three ways: (1) patterns are derived automatically from the corpus, (2) Singular Value Decomposition is used to smooth the frequency data, and (3) synonyms are used to reformulate word pairs. This paper describes the LRA algorithm and experimentally compares LRA to VSM on two tasks, answering college-level multiple-choice word analogy questions and classifying semantic relations in noun-modifier expressions. LRA achieves state-of-the-art results, reaching human-level performance on the analogy questions and significantly exceeding VSM performance on both tasks.
[ "['Peter D. Turney']" ]
null
null
0508073
null
null
http://arxiv.org/pdf/cs/0508073v1
2005-08-16T16:27:25Z
2005-08-16T16:27:25Z
Universal Learning of Repeated Matrix Games
We study and compare the learning dynamics of two universal learning algorithms, one based on Bayesian learning and the other on prediction with expert advice. Both approaches have strong asymptotic performance guarantees. When confronted with the task of finding good long-term strategies in repeated 2x2 matrix games, they behave quite differently.
[ "['Jan Poland' 'Marcus Hutter']" ]
null
null
0508103
null
null
http://arxiv.org/pdf/cs/0508103v1
2005-08-23T20:21:56Z
2005-08-23T20:21:56Z
Corpus-based Learning of Analogies and Semantic Relations
We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the SAT college entrance exam. A verbal analogy has the form A:B::C:D, meaning "A is to B as C is to D"; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct; the average college-bound senior high school student answers about 57% correctly). We motivate this research by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for both verbal analogies and noun-modifier relations.
[ "['Peter D. Turney' 'Michael L. Littman']" ]
null
null
0508319
null
null
http://arxiv.org/pdf/math/0508319v1
2005-08-17T10:13:04Z
2005-08-17T10:13:04Z
Combinations and Mixtures of Optimal Policies in Unichain Markov Decision Processes are Optimal
We show that combinations of optimal (stationary) policies in unichain Markov decision processes are optimal. That is, let M be a unichain Markov decision process with state space S, action space A and policies $\pi_j^*: S \to A$ ($1 \leq j \leq n$) with optimal average infinite horizon reward. Then any combination $\pi$ of these policies, where for each state $i \in S$ there is a $j$ such that $\pi(i)=\pi_j^*(i)$, is optimal as well. Furthermore, we prove that any mixture of optimal policies, where at each visit in a state $i$ an arbitrary action $\pi_j^*(i)$ of an optimal policy is chosen, yields optimal average reward, too.
[ "['Ronald Ortner']" ]
null
null
0509055
null
null
http://arxiv.org/pdf/cs/0509055v1
2005-09-19T04:57:26Z
2005-09-19T04:57:26Z
Learning Optimal Augmented Bayes Networks
Naive Bayes is a simple Bayesian classifier with strong independence assumptions among the attributes. This classifier, despite its strong independence assumptions, often performs well in practice. It is believed that relaxing the independence assumptions of a naive Bayes classifier may improve the classification accuracy of the resulting structure. While finding an optimal unconstrained Bayesian Network (for most any reasonable scoring measure) is an NP-hard problem, it is possible to learn in polynomial time optimal networks obeying various structural restrictions. Several authors have examined the possibilities of adding augmenting arcs between attributes of a Naive Bayes classifier. Friedman, Geiger and Goldszmidt define the TAN structure in which the augmenting arcs form a tree on the attributes, and present a polynomial time algorithm that learns an optimal TAN with respect to MDL score. Keogh and Pazzani define Augmented Bayes Networks in which the augmenting arcs form a forest on the attributes (a collection of trees, hence a relaxation of the structural restriction of TAN), and present heuristic search methods for learning good, though not optimal, augmenting arc sets. The authors, however, evaluate the learned structure only in terms of observed misclassification error and not against a scoring metric, such as MDL. In this paper, we present a simple, polynomial time greedy algorithm for learning an optimal Augmented Bayes Network with respect to MDL score.
[ "['Vikas Hamine' 'Paul Helman']" ]
null
null
0510038
null
null
http://arxiv.org/abs/cs/0510038v4
2007-06-26T14:00:17Z
2005-10-14T19:26:34Z
Learning Unions of $ω(1)$-Dimensional Rectangles
We consider the problem of learning unions of rectangles over the domain $[b]^n$, in the uniform distribution membership query learning setting, where both b and n are "large". We obtain poly$(n, \log b)$-time algorithms for the following classes: - poly$(n \log b)$-way Majority of $O(\frac{\log(n \log b)}{\log\log(n \log b)})$-dimensional rectangles. - Union of poly$(\log(n \log b))$ many $O(\frac{\log^2 (n \log b)}{(\log\log(n \log b) \log\log\log (n \log b))^2})$-dimensional rectangles. - poly$(n \log b)$-way Majority of poly$(n \log b)$-Or of disjoint $O(\frac{\log(n \log b)}{\log\log(n \log b)})$-dimensional rectangles. Our main algorithmic tool is an extension of Jackson's boosting- and Fourier-based Harmonic Sieve algorithm [Jackson 1997] to the domain $[b]^n$, building on work of [Akavia, Goldwasser, Safra 2003]. Other ingredients used to obtain the results stated above are techniques from exact learning [Beimel, Kushilevitz 1998] and ideas from recent work on learning augmented $AC^{0}$ circuits [Jackson, Klivans, Servedio 2002] and on representing Boolean functions as thresholds of parities [Klivans, Servedio 2001].
[ "['Alp Atici' 'Rocco A. Servedio']" ]
null
null
0510080
null
null
http://arxiv.org/pdf/cs/0510080v1
2005-10-25T22:14:33Z
2005-10-25T22:14:33Z
When Ignorance is Bliss
It is commonly-accepted wisdom that more information is better, and that information should never be ignored. Here we argue, using both a Bayesian and a non-Bayesian analysis, that in some situations you are better off ignoring information if your uncertainty is represented by a set of probability measures. These include situations in which the information is relevant for the prediction task at hand. In the non-Bayesian analysis, we show how ignoring information avoids dilation, the phenomenon that additional pieces of information sometimes lead to an increase in uncertainty. In the Bayesian analysis, we show that for small sample sizes and certain prediction tasks, the Bayesian posterior based on a noninformative prior yields worse predictions than simply ignoring the given information.
[ "['Peter D. Grunwald' 'Joseph Y. Halpern']" ]
null
null
0511011
null
null
http://arxiv.org/pdf/cs/0511011v1
2005-11-02T23:44:34Z
2005-11-02T23:44:34Z
The Impact of Social Networks on Multi-Agent Recommender Systems
Awerbuch et al.'s approach to distributed recommender systems (DRSs) is to have agents sample products at random while randomly querying one another for the best item they have found; we improve upon this by adding a communication network. Agents can only communicate with their immediate neighbors in the network, but neighboring agents may or may not represent users with common interests. We define two network structures: in the "mailing-list model," agents representing similar users form cliques, while in the "word-of-mouth model" the agents are distributed randomly in a scale-free network (SFN). In both models, agents tell their neighbors about satisfactory products as they are found. In the word-of-mouth model, knowledge of items propagates only through interested agents, and the SFN parameters affect the system's performance. We include a summary of our new results on the character and parameters of random subgraphs of SFNs, in particular SFNs with power-law degree distributions down to minimum degree 1. These networks are not as resilient as Cohen et al. originally suggested. In the case of the widely-cited "Internet resilience" result, high failure rates actually lead to the orphaning of half of the surviving nodes after 60% of the network has failed and the complete disintegration of the network at 90%. We show that given an appropriate network, the communication network reduces the number of sampled items, the number of messages sent, and the amount of "spam." We conclude that in many cases DRSs will be useful for sharing information in a multi-agent learning system.
[ "['Hamilton Link' 'Jared Saia' 'Terran Lane' 'Randall A. LaViolette']" ]
null
null
0511015
null
null
http://arxiv.org/pdf/nlin/0511015v1
2005-11-09T14:41:00Z
2005-11-09T14:41:00Z
Combinatorial Approach to Object Analysis
We present a perceptional mathematical model for image and signal analysis. A resemblance measure is defined and submitted to an innovative combinatorial optimization algorithm. Numerical simulations are also presented.
[ "['Rami Kanhouche']" ]
null
null
0511058
null
null
http://arxiv.org/pdf/cs/0511058v2
2006-01-24T23:27:14Z
2005-11-15T17:13:50Z
On-line regression competitive with reproducing kernel Hilbert spaces
We consider the problem of on-line prediction of real-valued labels, assumed bounded in absolute value by a known constant, of new objects from known labeled objects. The prediction algorithm's performance is measured by the squared deviation of the predictions from the actual labels. No stochastic assumptions are made about the way the labels and objects are generated. Instead, we are given a benchmark class of prediction rules some of which are hoped to produce good predictions. We show that for a wide range of infinite-dimensional benchmark classes one can construct a prediction algorithm whose cumulative loss over the first N examples does not exceed the cumulative loss of any prediction rule in the class plus O(sqrt(N)); the main differences from the known results are that we do not impose any upper bound on the norm of the considered prediction rules and that we achieve an optimal leading term in the excess loss of our algorithm. If the benchmark class is "universal" (dense in the class of continuous functions on each compact set), this provides an on-line non-stochastic analogue of universally consistent prediction in non-parametric statistics. We use two proof techniques: one is based on the Aggregating Algorithm and the other on the recently developed method of defensive forecasting.
[ "['Vladimir Vovk']" ]
null
null
0511075
null
null
http://arxiv.org/pdf/cs/0511075v1
2005-11-21T01:47:53Z
2005-11-21T01:47:53Z
Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted Protein and RNA Binding Sites in Rev Proteins of HIV-1 and EIAV Agree with Experimental Data
Protein-protein and protein nucleic acid interactions are vitally important for a wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses. We have developed machine learning approaches for predicting which amino acids of a protein participate in its interactions with other proteins and/or nucleic acids, using only the protein sequence as input. In this paper, we describe an application of classifiers trained on datasets of well-characterized protein-protein and protein-RNA complexes for which experimental structures are available. We apply these classifiers to the problem of predicting protein and RNA binding sites in the sequence of a clinically important protein for which the structure is not known: the regulatory protein Rev, essential for the replication of HIV-1 and other lentiviruses. We compare our predictions with published biochemical, genetic and partial structural information for HIV-1 and EIAV Rev and with our own published experimental mapping of RNA binding sites in EIAV Rev. The predicted and experimentally determined binding sites are in very good agreement. The ability to predict reliably the residues of a protein that directly contribute to specific binding events - without the requirement for structural information regarding either the protein or complexes in which it participates - can potentially generate new disease intervention strategies.
[ "['Michael Terribilini' 'Jae-Hyung Lee' 'Changhui Yan' 'Robert L. Jernigan'\n 'Susan Carpenter' 'Vasant Honavar' 'Drena Dobbs']" ]
null
null
0511087
null
null
http://arxiv.org/abs/cs/0511087v1
2005-11-25T10:59:35Z
2005-11-25T10:59:35Z
Robust Inference of Trees
This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
[ "['Marco Zaffalon' 'Marcus Hutter']" ]
null
null
0511088
null
null
http://arxiv.org/pdf/cs/0511088v1
2005-11-25T15:57:56Z
2005-11-25T15:57:56Z
Bounds on Query Convergence
The problem of finding an optimum using noisy evaluations of a smooth cost function arises in many contexts, including economics, business, medicine, experiment design, and foraging theory. We derive an asymptotic bound E[ (x_t - x*)^2 ] >= O(1/sqrt(t)) on the rate of convergence of a sequence (x_0, x_1, ...) generated by an unbiased feedback process observing noisy evaluations of an unknown quadratic function maximised at x*. The bound is tight, as the proof leads to a simple algorithm which meets it. We further establish a bound on the total regret, E[ sum_{i=1..t} (x_i - x*)^2 ] >= O(sqrt(t)). These bounds may impose practical limitations on an agent's performance, as O(eps^-4) queries are made before the queries converge to x* with eps accuracy.
[ "['Barak A. Pearlmutter']" ]
null
null
0511105
null
null
http://arxiv.org/pdf/cs/0511105v1
2005-11-30T14:15:17Z
2005-11-30T14:15:17Z
The Signed Distance Function: A New Tool for Binary Classification
From a geometric perspective most nonlinear binary classification algorithms, including state-of-the-art versions of Support Vector Machine (SVM) and Radial Basis Function Network (RBFN) classifiers, are based on the idea of reconstructing indicator functions. We propose instead to use reconstruction of the signed distance function (SDF) as a basis for binary classification. We discuss properties of the signed distance function that can be exploited in classification algorithms. We develop simple versions of such classifiers and test them on several linear and nonlinear problems. On linear tests accuracy of the new algorithm exceeds that of standard SVM methods, with an average of 50% fewer misclassifications. Performance of the new methods also matches or exceeds that of standard methods on several nonlinear problems including classification of benchmark diagnostic micro-array data sets.
[ "['Erik M. Boczko' 'Todd R. Young']" ]
null
null
0511108
null
null
http://arxiv.org/pdf/cs/0511108v1
2005-11-30T20:23:19Z
2005-11-30T20:23:19Z
Parameter Estimation of Hidden Diffusion Processes: Particle Filter vs. Modified Baum-Welch Algorithm
We propose a new method for the estimation of parameters of hidden diffusion processes. Based on parametrization of the transition matrix, the Baum-Welch algorithm is improved. The algorithm is compared to the particle filter in application to the noisy periodic systems. It is shown that the modified Baum-Welch algorithm is capable of estimating the system parameters with better accuracy than particle filters.
[ "['A. Benabdallah' 'G. Radons']" ]
null
null
0511159
null
null
http://arxiv.org/abs/cond-mat/0511159v2
2005-12-09T14:17:08Z
2005-11-07T13:48:01Z
Learning by message-passing in networks of discrete synapses
We show that a message-passing process allows one to store in binary "material" synapses a number of random patterns which almost saturates the information theoretic bounds. We apply the learning algorithm to networks characterized by a wide range of different connection topologies and of size comparable with that of biological systems (e.g. $n \simeq 10^{5}-10^{6}$). The algorithm can be turned into an on-line --fault tolerant-- learning protocol of potential interest in modeling aspects of synaptic plasticity and in building neuromorphic devices.
[ "['Alfredo Braunstein' 'Riccardo Zecchina']" ]
null
null
0512015
null
null
http://arxiv.org/pdf/cs/0512015v3
2007-05-17T18:57:12Z
2005-12-03T19:21:33Z
Joint fixed-rate universal lossy coding and identification of continuous-alphabet memoryless sources
The problem of joint universal source coding and identification is considered in the setting of fixed-rate lossy coding of continuous-alphabet memoryless sources. For a wide class of bounded distortion measures, it is shown that any compactly parametrized family of $R^d$-valued i.i.d. sources with absolutely continuous distributions satisfying appropriate smoothness and Vapnik--Chervonenkis learnability conditions, admits a joint scheme for universal lossy block coding and parameter estimation, such that when the block length $n$ tends to infinity, the overhead per-letter rate and the distortion redundancies converge to zero as $O(n^{-1}\log n)$ and $O(\sqrt{n^{-1}\log n})$, respectively. Moreover, the active source can be determined at the decoder up to a ball of radius $O(\sqrt{n^{-1}\log n})$ in variational distance, asymptotically almost surely. The system has finite memory length equal to the block length, and can be thought of as blockwise application of a time-invariant nonlinear filter with initial conditions determined from the previous block. Comparisons are presented with several existing schemes for universal vector quantization, which do not include parameter estimation explicitly, and an extension to unbounded distortion measures is outlined. Finally, finite mixture classes and exponential families are given as explicit examples of parametric sources admitting joint universal compression and modeling schemes of the kind studied here.
[ "['Maxim Raginsky']" ]
null
null
0512018
null
null
http://arxiv.org/pdf/cs/0512018v2
2006-03-21T12:31:02Z
2005-12-05T06:57:39Z
DAMNED: A Distributed and Multithreaded Neural Event-Driven simulation framework
In a Spiking Neural Networks (SNN), spike emissions are sparsely and irregularly distributed both in time and in the network architecture. Since a current feature of SNNs is a low average activity, efficient implementations of SNNs are usually based on an Event-Driven Simulation (EDS). On the other hand, simulations of large scale neural networks can take advantage of distributing the neurons on a set of processors (either workstation cluster or parallel computer). This article presents DAMNED, a large scale SNN simulation framework able to gather the benefits of EDS and parallel computing. Two levels of parallelism are combined: Distributed mapping of the neural topology, at the network level, and local multithreaded allocation of resources for simultaneous processing of events, at the neuron level. Based on the causality of events, a distributed solution is proposed for solving the complex problem of scheduling without synchronization barrier.
[ "['Anthony Mouraud' 'Didier Puzenat' 'Hélène Paugam-Moisy']" ]
null
null
0512050
null
null
http://arxiv.org/pdf/cs/0512050v1
2005-12-13T13:25:57Z
2005-12-13T13:25:57Z
Preference Learning in Terminology Extraction: A ROC-based approach
A key data preparation step in Text Mining, Term Extraction selects the terms, or collocation of words, attached to specific concepts. In this paper, the task of extracting relevant collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as relevant/irrelevant. The candidate terms are described along 13 standard statistical criteria measures. From these examples, an evolutionary learning algorithm termed Roger, based on the optimization of the Area under the ROC curve criterion, extracts an order on the candidate terms. The robustness of the approach is demonstrated on two real-world domain applications, considering different domains (biology and human resources) and different languages (English and French).
[ "['Jérôme Azé' 'Mathieu Roche' 'Yves Kodratoff' 'Michèle Sebag']" ]
null
null
0512053
null
null
http://arxiv.org/pdf/cs/0512053v1
2005-12-13T22:01:09Z
2005-12-13T22:01:09Z
Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets
We establish a relationship between the online mistake-bound model of learning and resource-bounded dimension. This connection is combined with the Winnow algorithm to obtain new results about the density of hard sets under adaptive reductions. This improves previous work of Fu (1995) and Lutz and Zhao (2000), and solves one of Lutz and Mayordomo's "Twelve Problems in Resource-Bounded Measure" (1999).
[ "['John M. Hitchcock']" ]
null
null
0512059
null
null
http://arxiv.org/pdf/cs/0512059v2
2006-01-25T17:36:52Z
2005-12-14T20:03:30Z
Competing with wild prediction rules
We consider the problem of on-line prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule in the class plus a "regret term" of O(N^(-1/2)). The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of O(N^(-1/p)), where p is in [2, infinity) and p-2 reflects the degree to which the benchmark class fails to be a Hilbert space.
[ "['Vladimir Vovk']" ]
null
null
0512063
null
null
http://arxiv.org/abs/cs/0512063v1
2005-12-15T14:51:36Z
2005-12-15T14:51:36Z
Complex Random Vectors and ICA Models: Identifiability, Uniqueness and Separability
In this paper the conditions for identifiability, separability and uniqueness of linear complex valued independent component analysis (ICA) models are established. These results extend the well-known conditions for solving real-valued ICA problems to complex-valued models. Relevant properties of complex random vectors are described in order to extend the Darmois-Skitovich theorem for complex-valued models. This theorem is used to construct a proof of a theorem for each of the above ICA model concepts. Both circular and noncircular complex random vectors are covered. Examples clarifying the above concepts are presented.
[ "['Jan Eriksson' 'Visa Koivunen']" ]
null
null
0601044
null
null
http://arxiv.org/pdf/cs/0601044v1
2006-01-11T15:39:16Z
2006-01-11T15:39:16Z
Genetic Programming, Validation Sets, and Parsimony Pressure
Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be assimilated to a learning task, with the inference of models from a limited number of samples. This paper is an investigation on two methods to improve generalization in GP-based learning: 1) the selection of the best-of-run individuals using a three data sets methodology, and 2) the application of parsimony pressure in order to reduce the complexity of the solutions. Results using GP in a binary classification setup show that while the accuracy on the test sets is preserved, with less variances compared to baseline results, the mean tree size obtained with the tested methods is significantly reduced.
[ "['Christian Gagné' 'Marc Schoenauer' 'Marc Parizeau' 'Marco Tomassini']" ]
null
null
0601074
null
null
http://arxiv.org/abs/cs/0601074v2
2006-05-11T21:07:30Z
2006-01-17T00:08:05Z
Joint universal lossy coding and identification of i.i.d. vector sources
The problem of joint universal source coding and modeling, addressed by Rissanen in the context of lossless codes, is generalized to fixed-rate lossy coding of continuous-alphabet memoryless sources. We show that, for bounded distortion measures, any compactly parametrized family of i.i.d. real vector sources with absolutely continuous marginals (satisfying appropriate smoothness and Vapnik--Chervonenkis learnability conditions) admits a joint scheme for universal lossy block coding and parameter estimation, and give nonasymptotic estimates of convergence rates for distortion redundancies and variational distances between the active source and the estimated source. We also present explicit examples of parametric sources admitting such joint universal compression and modeling schemes.
[ "['Maxim Raginsky']" ]
null
null
0601087
null
null
http://arxiv.org/pdf/cs/0601087v1
2006-01-20T05:40:44Z
2006-01-20T05:40:44Z
Processing of Test Matrices with Guessing Correction
It is suggested to insert into the test matrix 1s for correct responses, 0s for response refusals, and negative corrective elements for incorrect responses. With the classical test theory approach, test scores of examinees and items are calculated traditionally as sums of matrix elements, organized in rows and columns. Correlation coefficients are estimated using correction coefficients. In the item response theory approach, examinee and item logits are estimated using the maximum likelihood method and probabilities of all matrix elements.
[ "['Kromer Victor']" ]
null
null
0601089
null
null
http://arxiv.org/abs/cs/0601089v1
2006-01-20T17:46:45Z
2006-01-20T17:46:45Z
Distributed Kernel Regression: An Algorithm for Training Collaboratively
This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.
[ "['Joel B. Predd' 'Sanjeev R. Kulkarni' 'H. Vincent Poor']" ]
null
null
0601115
null
null
http://arxiv.org/pdf/cs/0601115v2
2006-02-24T17:29:14Z
2006-01-27T16:52:09Z
Decision Making with Side Information and Unbounded Loss Functions
We consider the problem of decision-making with side information and unbounded loss functions. Inspired by the probably approximately correct (PAC) learning model, we use a slightly different model that incorporates the notion of side information in a more generic form to make it applicable to a broader class of applications including parameter estimation and system identification. We address sufficient conditions for consistent decision-making with exponential convergence behavior. In this regard, besides a certain condition on the growth function of the class of loss functions, it suffices that the class of loss functions be dominated by a measurable function whose exponential Orlicz expectation is uniformly bounded over the probabilistic model. Decay exponent, decay constant, and sample complexity are discussed. Example applications to method of moments, maximum likelihood estimation, and system identification are illustrated, as well.
[ "['Majid Fozunbal' 'Ton Kalker']" ]
null
null
0602053
null
null
http://arxiv.org/pdf/cs/0602053v1
2006-02-14T23:57:01Z
2006-02-14T23:57:01Z
How to Beat the Adaptive Multi-Armed Bandit
The multi-armed bandit is a concise model for the problem of iterated decision-making under uncertainty. In each round, a gambler must pull one of $K$ arms of a slot machine, without any foreknowledge of their payouts, except that they are uniformly bounded. A standard objective is to minimize the gambler's regret, defined as the gambler's total payout minus the largest payout which would have been achieved by any fixed arm, in hindsight. Note that the gambler is only told the payout for the arm actually chosen, not for the unchosen arms. Almost all previous work on this problem assumed the payouts to be non-adaptive, in the sense that the distribution of the payout of arm $j$ in round $i$ is completely independent of the choices made by the gambler on rounds $1, \dots, i-1$. In the more general model of adaptive payouts, the payouts in round $i$ may depend arbitrarily on the history of past choices made by the algorithm. We present a new algorithm for this problem, and prove nearly optimal guarantees for the regret against both non-adaptive and adaptive adversaries. After $T$ rounds, our algorithm has regret $O(\sqrt{T})$ with high probability (the tail probability decays exponentially). This dependence on $T$ is best possible, and matches that of the full-information version of the problem, in which the gambler is told the payouts for all $K$ arms after each round. Previously, even for non-adaptive payouts, the best high-probability bounds known were $O(T^{2/3})$, due to Auer, Cesa-Bianchi, Freund and Schapire. The expected regret of their algorithm is $O(T^{1/2})$ for non-adaptive payouts, but as we show, $\Omega(T^{2/3})$ for adaptive payouts.
[ "['Varsha Dani' 'Thomas P. Hayes']" ]
null
null
0602062
null
null
http://arxiv.org/pdf/cs/0602062v1
2006-02-17T08:57:44Z
2006-02-17T08:57:44Z
Learning rational stochastic languages
Given a finite set of words w1,...,wn independently drawn according to a fixed unknown distribution law P called a stochastic language, a usual goal in Grammatical Inference is to infer an estimate of P in some class of probabilistic models, such as Probabilistic Automata (PA). Here, we study the class of rational stochastic languages, which consists of stochastic languages that can be generated by Multiplicity Automata (MA) and which strictly includes the class of stochastic languages generated by PA. Rational stochastic languages have a minimal normal representation which may be very concise, and whose parameters can be efficiently estimated from stochastic samples. We design an efficient inference algorithm DEES which aims at building a minimal normal representation of the target. Despite the fact that no recursively enumerable class of MA computes exactly the set of rational stochastic languages over Q, we show that DEES strongly identifies this set in the limit. We study the intermediary MA output by DEES and show that they compute rational series which converge absolutely to one and which can be used to provide stochastic languages which closely estimate the target.
[ "['François Denis' 'Yann Esposito' 'Amaury Habrard']" ]
null
null
0602092
null
null
http://arxiv.org/pdf/cs/0602092v1
2006-02-27T05:22:15Z
2006-02-27T05:22:15Z
Inconsistent parameter estimation in Markov random fields: Benefits in the computation-limited setting
Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the "wrong" model even in the infinite data limit) can be provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of variational methods. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product.
[ "['Martin J. Wainwright']" ]
null
null
0602093
null
null
http://arxiv.org/pdf/cs/0602093v1
2006-02-27T10:08:26Z
2006-02-27T10:08:26Z
Rational stochastic languages
The goal of the present paper is to provide a systematic and comprehensive study of rational stochastic languages over a semiring K in {Q, Q+, R, R+}. A rational stochastic language is a probability distribution over a free monoid Sigma^* which is rational over K, that is, which can be generated by a multiplicity automaton with parameters in K. We study the relations between the classes of rational stochastic languages S_K^rat(Sigma). We define the notion of residual of a stochastic language and we use it to investigate properties of several subclasses of rational stochastic languages. Lastly, we study the representation of rational stochastic languages by means of multiplicity automata.
[ "['François Denis' 'Yann Esposito']" ]
null
null
0602183
null
null
http://arxiv.org/abs/cond-mat/0602183v1
2006-02-07T18:29:35Z
2006-02-07T18:29:35Z
Nonlinear parametric model for Granger causality of time series
We generalize a previously proposed approach for nonlinear Granger causality of time series, based on radial basis functions. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, still being suitable to evaluate causality. The usefulness of this measure of causality is shown in a physiological example and in the study of the feed-back loop in a model of excitatory and inhibitory neurons.
[ "['Daniele Marinazzo' 'Mario Pellicoro' 'Sebastiano Stramaglia']" ]
null
null
0602505
null
null
http://arxiv.org/abs/math/0602505v1
2006-02-22T16:29:05Z
2006-02-22T16:29:05Z
MDL Convergence Speed for Bernoulli Sequences
The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. We discuss the application to Machine Learning tasks such as classification and hypothesis testing, and generalization to countable classes of i.i.d. models.
[ "['Jan Poland' 'Marcus Hutter']" ]
null
null
0603023
null
null
http://arxiv.org/pdf/cs/0603023v1
2006-03-07T08:44:29Z
2006-03-07T08:44:29Z
Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot
We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods.
[ "['Viktor Zhumatiy' 'Faustino Gomez' 'Marcus Hutter' 'Juergen Schmidhuber']" ]
null
null
0603090
null
null
http://arxiv.org/abs/cs/0603090v2
2006-07-28T13:41:39Z
2006-03-22T22:52:23Z
Topological Grammars for Data Approximation
A method of topological grammars is proposed for multidimensional data approximation. For data with complex topology we define a principal cubic complex of low dimension and given complexity that gives the best approximation for the dataset. This complex is a generalization of linear and non-linear principal manifolds and includes them as particular cases. The problem of optimal principal complex construction is transformed into a series of minimization problems for quadratic functionals. These quadratic functionals have a physically transparent interpretation in terms of elastic energy. For the energy computation, the whole complex is represented as a system of nodes and springs. Topologically, the principal complex is a product of one-dimensional continuums (represented by graphs), and the grammars describe how these continuums transform during the process of optimal complex construction. This factorization of the whole process onto one-dimensional transformations using minimization of quadratic energy functionals allows us to construct efficient algorithms.
[ "['A. N. Gorban' 'N. R. Sumner' 'A. Y. Zinovyev']" ]
null
null
0603110
null
null
http://arxiv.org/pdf/cs/0603110v1
2006-03-28T16:22:42Z
2006-03-28T16:22:42Z
Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence
We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions. The task for an agent is to attain the best possible asymptotic reward where the true generating environment is unknown but belongs to a known countable family of environments. We find some sufficient conditions on the class of environments under which an agent exists which attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.
[ "['Daniil Ryabko' 'Marcus Hutter']" ]
null
null
0604010
null
null
http://arxiv.org/pdf/cs/0604010v2
2018-06-04T18:17:32Z
2006-04-05T10:29:48Z
Nearly optimal exploration-exploitation decision thresholds
While in general trading off exploration and exploitation in reinforcement learning is hard, under some formulations relatively simple solutions exist. In this paper, we first derive upper bounds for the utility of selecting different actions in the multi-armed bandit setting. Unlike the common statistical upper confidence bounds, these explicitly link the planning horizon, uncertainty, and the need for exploration. The resulting algorithm can be seen as a generalisation of the classical Thompson sampling algorithm. We experimentally test these algorithms, as well as $\epsilon$-greedy and the value of perfect information heuristics. Finally, we also introduce the idea of bagging for reinforcement learning. By employing a version of online bootstrapping, we can efficiently sample from an approximate posterior distribution.
[ "['Christos Dimitrakakis']" ]
null
null
0604011
null
null
http://arxiv.org/pdf/cs/0604011v2
2006-04-06T12:23:30Z
2006-04-05T18:07:31Z
Semi-Supervised Learning -- A Statistical Physics Approach
We present a novel approach to semi-supervised learning which is based on statistical physics. Most of the former work in the field of semi-supervised learning classifies the points by minimizing a certain energy function, which corresponds to a minimal k-way cut solution. In contrast to these methods, we estimate the distribution of classifications, instead of the sole minimal k-way cut, which yields more accurate and robust results. Our approach may be applied to all energy functions used for semi-supervised learning. The method is based on sampling using a Multicanonical Markov chain Monte-Carlo algorithm, and has a straightforward probabilistic interpretation, which allows for soft assignments of points to classes, and also to cope with yet unseen class types. The suggested approach is demonstrated on a toy data set and on two real-life data sets of gene expression.
[ "['Gad Getz' 'Noam Shental' 'Eytan Domany']" ]
null
null
0604015
null
null
http://arxiv.org/pdf/cs/0604015v1
2006-04-06T00:08:24Z
2006-04-06T00:08:24Z
Revealing the Autonomous System Taxonomy: The Machine Learning Approach
Although the Internet AS-level topology has been extensively studied over the past few years, little is known about the details of the AS taxonomy. An AS "node" can represent a wide variety of organizations, e.g., a large ISP, a small private business, or a university, with vastly different network characteristics, external connectivity patterns, network growth tendencies, and other properties that we can hardly neglect while working on veracious Internet representations in simulation environments. In this paper, we introduce a radically new approach based on machine learning techniques to map all the ASes in the Internet into a natural AS taxonomy. We successfully classify 95.3% of ASes with expected accuracy of 78.1%. We release to the community the AS-level topology dataset augmented with: 1) the AS taxonomy information and 2) the set of AS attributes we used to classify ASes. We believe that this dataset will serve as an invaluable addition to further understanding of the structure and evolution of the Internet.
[ "['Xenofontas Dimitropoulos' 'Dmitri Krioukov' 'George Riley' 'kc claffy']" ]
null
null
0604046
null
null
http://arxiv.org/pdf/cs/0604046v1
2006-04-11T14:00:22Z
2006-04-11T14:00:22Z
Concerning the differentiability of the energy function in vector quantization algorithms
The adaptation rule for Vector Quantization algorithms, and consequently the convergence of the generated sequence, depends on the existence and properties of a function called the energy function, defined on a topological manifold. Our aim is to investigate the conditions of existence of such a function for a class of algorithms exemplified by the initial ''K-means'' and Kohonen algorithms. The results presented here supplement previous studies and show that the energy function is not always a potential but at least the uniform limit of a series of potential functions, which we call a pseudo-potential. Our work also shows that a large number of existing vector quantization algorithms developed by the Artificial Neural Networks community fall into this category. The framework we define opens the way to study the convergence of all the corresponding adaptation rules at once, and a theorem gives promising insights in that direction. We also demonstrate that the ''K-means'' energy function is a pseudo-potential but not a potential in general. Consequently, the energy function associated with the ''Neural-Gas'' is not a potential in general.
[ "['Dominique Lepetz' 'Max Nemoz-Gaillard' 'Michael Aupetit']" ]
null
null
0604102
null
null
http://arxiv.org/pdf/cs/0604102v1
2006-04-25T19:32:03Z
2006-04-25T19:32:03Z
HCI and Educational Metrics as Tools for VLE Evaluation
The general set of HCI and Educational principles is considered and a classification system constructed. A frequency analysis of principles is used to obtain the most significant set. Metrics are devised to provide objective measures of these principles, and a consistent testing regime is devised. These principles are used to analyse Blackboard and Moodle.
[ "['Vita Hinze-Hoare']" ]
null
null
0604233
null
null
http://arxiv.org/pdf/math/0604233v1
2006-04-11T05:41:15Z
2006-04-11T05:41:15Z
Generalization error bounds in semi-supervised classification under the cluster assumption
We consider semi-supervised classification when part of the available data is unlabeled. These unlabeled data can be useful for the classification problem when we make an assumption relating the behavior of the regression function to that of the marginal distribution. Seeger (2000) proposed the well-known "cluster assumption" as a reasonable one. We propose a mathematical formulation of this assumption and a method based on density level sets estimation that takes advantage of it to achieve fast rates of convergence both in the number of unlabeled examples and the number of labeled examples.
[ "['Philippe Rigollet']" ]
null
null
0605009
null
null
http://arxiv.org/pdf/cs/0605009v1
2006-05-03T07:47:21Z
2006-05-03T07:47:21Z
On the Foundations of Universal Sequence Prediction
Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: Fast convergence and strong bounds, and in contrast to most classical continuous prior densities has no zero p(oste)rior problem, i.e. can confirm universal hypotheses, is reparametrization and regrouping invariant, and avoids the old-evidence and updating problem. It even performs well (actually better) in non-computable environments.
[ "['Marcus Hutter']" ]
null
null
0605024
null
null
http://arxiv.org/pdf/cs/0605024v1
2006-05-06T16:56:43Z
2006-05-06T16:56:43Z
A Formal Measure of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this measure formally captures the concept of machine intelligence in the broadest reasonable sense.
[ "['Shane Legg' 'Marcus Hutter']" ]
null
null
0605035
null
null
http://arxiv.org/pdf/cs/0605035v1
2006-05-08T22:05:24Z
2006-05-08T22:05:24Z
Query Chains: Learning to Rank from Implicit Feedback
This paper presents a novel approach for using clickthrough data to learn ranked retrieval functions for web search results. We observe that users searching the web often perform a sequence, or chain, of queries with a similar information need. Using query chains, we generate new types of preference judgments from search engine logs, thus taking advantage of user intelligence in reformulating queries. To validate our method we perform a controlled user study comparing generated preference judgments to explicit relevance judgments. We also implemented a real-world search engine to test our approach, using a modified ranking SVM to learn an improved ranking function from preference data. Our results demonstrate significant improvements in the ranking given by the search engine. The learned rankings outperform both a static ranking function, as well as one trained without considering query chains.
[ "['Filip Radlinski' 'Thorsten Joachims']" ]
null
null
0605036
null
null
http://arxiv.org/pdf/cs/0605036v1
2006-05-08T23:38:13Z
2006-05-08T23:38:13Z
Evaluating the Robustness of Learning from Implicit Feedback
This paper evaluates the robustness of learning from implicit feedback in web search. In particular, we create a model of user behavior by drawing upon user studies in laboratory and real-world settings. The model is used to understand the effect of user behavior on the performance of a learning algorithm for ranked retrieval. We explore a wide range of possible user behaviors and find that learning from implicit feedback can be surprisingly robust. This complements previous results that demonstrated our algorithm's effectiveness in a real-world search engine application.
[ "['Filip Radlinski' 'Thorsten Joachims']" ]
null
null
0605037
null
null
http://arxiv.org/pdf/cs/0605037v1
2006-05-09T01:53:22Z
2006-05-09T01:53:22Z
Minimally Invasive Randomization for Collecting Unbiased Preferences from Clickthrough Logs
Clickthrough data is a particularly inexpensive and plentiful resource to obtain implicit relevance feedback for improving and personalizing search engines. However, it is well known that the probability of a user clicking on a result is strongly biased toward documents presented higher in the result set irrespective of relevance. We introduce a simple method to modify the presentation of search results that provably gives relevance judgments that are unaffected by presentation bias under reasonable assumptions. We validate this property of the training data in interactive real world experiments. Finally, we show that using these unbiased relevance judgments learning methods can be guaranteed to converge to an ideal ranking given sufficient data.
[ "['Filip Radlinski' 'Thorsten Joachims']" ]
null
null
0605040
null
null
http://arxiv.org/pdf/cs/0605040v1
2006-05-09T10:39:03Z
2006-05-09T10:39:03Z
General Discounting versus Average Reward
Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to infinity (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that asymptotically U for m->infinity and V for k->infinity are equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then existence of the limit of V implies that the limit of U exists.
[ "['Marcus Hutter']" ]
null
null
0605042
null
null
http://arxiv.org/abs/astro-ph/0605042v1
2006-05-01T20:42:03Z
2006-05-01T20:42:03Z
How accurate are the time delay estimates in gravitational lensing?
We present a novel approach to estimate the time delay between light curves of multiple images in a gravitationally lensed system, based on Kernel methods in the context of machine learning. We perform various experiments with artificially generated irregularly-sampled data sets to study the effect of the various levels of noise and the presence of gaps of various size in the monitoring data. We compare the performance of our method with various other popular methods of estimating the time delay and conclude, from experiments with artificial data, that our method is least vulnerable to missing data and irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we use our method to determine the time delays between the two images of quasar Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if only the observations at epochs common to both wavelengths are used, the time delay gives consistent estimates, which can be combined to yield $408 \pm 12$ days. The full 6 cm dataset, which covers a longer monitoring period, yields a value which is 10% larger, but this can be attributed to differences in sampling and missing data.
[ "['Juan C. Cuevas-Tello' 'Peter Tino' 'Somak Raychaudhury']" ]
null
null
0605048
null
null
http://arxiv.org/pdf/cs/0605048v1
2006-05-11T03:27:12Z
2006-05-11T03:27:12Z
On Learning Thresholds of Parities and Unions of Rectangles in Random Walk Models
In a recent breakthrough, [Bshouty et al., 2005] obtained the first passive-learning algorithm for DNFs under the uniform distribution. They showed that DNFs are learnable in the Random Walk and Noise Sensitivity models. We extend their results in several directions. We first show that thresholds of parities, a natural class encompassing DNFs, cannot be learned efficiently in the Noise Sensitivity model using only statistical queries. In contrast, we show that a cyclic version of the Random Walk model allows one to efficiently learn polynomially weighted thresholds of parities. We also extend the algorithm of Bshouty et al. to the case of Unions of Rectangles, a natural generalization of DNFs to $\{0,\ldots,b-1\}^n$.
[ "['S. Roch']" ]
null
null
0605498
null
null
http://arxiv.org/pdf/math/0605498v1
2006-05-18T07:47:58Z
2006-05-18T07:47:58Z
Cross-Entropic Learning of a Machine for the Decision in a Partially Observable Universe
Revision of the paper previously entitled "Learning a Machine for the Decision in a Partially Observable Markov Universe". In this paper, we are interested in optimal decisions in a partially observable universe. Our approach is to directly approximate an optimal strategic tree depending on the observation. This approximation is made by means of a parameterized probabilistic law. A particular family of hidden Markov models, with input \emph{and} output, is considered as a model of policy. A method for optimizing the parameters of these HMMs is proposed and applied. This optimization is based on the cross-entropic principle for rare-event simulation developed by Rubinstein.
[ "['Frederic Dambreville']" ]
null
null
0606077
null
null
http://arxiv.org/pdf/cs/0606077v1
2006-06-16T16:33:23Z
2006-06-16T16:33:23Z
On Sequence Prediction for Arbitrary Measures
Suppose we are given two probability measures on the set of one-way infinite finite-alphabet sequences and consider the question when one of the measures predicts the other, that is, when conditional probabilities converge (in a certain sense) when one of the measures is chosen to generate the sequence. This question may be considered a refinement of the problem of sequence prediction in its most general formulation: for a given class of probability measures, does there exist a measure which predicts all of the measures in the class? To address this problem, we find some conditions on local absolute continuity which are sufficient for prediction and which generalize several different notions which are known to be sufficient for prediction. We also formulate some open questions to outline a direction for finding the conditions on classes of measures for which prediction is possible.
[ "['Daniil Ryabko' 'Marcus Hutter']" ]
null
null
0606093
null
null
http://arxiv.org/pdf/cs/0606093v1
2006-06-22T04:31:51Z
2006-06-22T04:31:51Z
Predictions as statements and decisions
Prediction is a complex notion, and different predictors (such as people, computer programs, and probabilistic theories) can pursue very different goals. In this paper I will review some popular kinds of prediction and argue that the theory of competitive on-line learning can benefit from the kinds of prediction that are now foreign to it.
[ "['Vladimir Vovk']" ]
null
null
0606100
null
null
http://arxiv.org/pdf/cs/0606100v4
2011-10-11T10:21:45Z
2006-06-23T10:19:40Z
The generating function of the polytope of transport matrices $U(r,c)$ as a positive semidefinite kernel of the marginals $r$ and $c$
This paper has been withdrawn by the author due to a crucial error in the proof of Lemma 5.
[ "['Marco Cuturi']" ]
null
null
0606315
null
null
http://arxiv.org/pdf/math/0606315v1
2006-06-13T17:05:02Z
2006-06-13T17:05:02Z
Bayesian Regression of Piecewise Constant Functions
We derive an exact and efficient Bayesian regression algorithm for piecewise constant functions of unknown segment number, boundary location, and levels. It works for any noise and segment level prior, e.g. Cauchy which can handle outliers. We derive simple but good estimates for the in-segment variance. We also propose a Bayesian regression curve as a better way of smoothing data without blurring boundaries. The Bayesian approach also allows straightforward determination of the evidence, break probabilities and error estimates, useful for model selection and significance and robustness studies. We discuss the performance on synthetic and real-world examples. Many possible extensions will be discussed.
[ "['Marcus Hutter']" ]
null
null
0606643
null
null
http://arxiv.org/pdf/math/0606643v3
2006-07-18T05:18:07Z
2006-06-26T13:03:11Z
Entropy And Vision
In vector quantization, choosing the number of vectors used to construct the codebook is an open problem: there is always a compromise between the number of vectors and the quantity of information lost during compression. In this text we present a minimum-entropy principle that gives a solution to this compromise and represents an entropy point of view of signal compression in general. We also present a new adaptive Object Quantization technique that is the same for compression and perception.
[ "['Rami Kanhouche']" ]
null
null
0607047
null
null
http://arxiv.org/pdf/cs/0607047v1
2006-07-11T13:52:39Z
2006-07-11T13:52:39Z
PAC Classification based on PAC Estimates of Label Class Distributions
A standard approach in pattern classification is to estimate the distributions of the label classes, and then to apply the Bayes classifier to the estimates of the distributions in order to classify unlabeled examples. As one might expect, the better our estimates of the label class distributions, the better the resulting classifier will be. In this paper we make this observation precise by identifying risk bounds of a classifier in terms of the quality of the estimates of the label class distributions. We show how PAC learnability relates to estimates of the distributions that have a PAC guarantee on their $L_1$ distance from the true distribution, and we bound the increase in negative log likelihood risk in terms of PAC bounds on the KL-divergence. We give an inefficient but general-purpose smoothing method for converting an estimated distribution that is good under the $L_1$ metric into a distribution that is good under the KL-divergence.
[ "['Nick Palmer' 'Paul W. Goldberg']" ]
null
null
0607067
null
null
http://arxiv.org/pdf/cs/0607067v1
2006-07-13T15:52:04Z
2006-07-13T15:52:04Z
Competing with stationary prediction strategies
In this paper we introduce the class of stationary prediction strategies and construct a prediction algorithm that asymptotically performs as well as the best continuous stationary strategy. We make mild compactness assumptions but no stochastic assumptions about the environment. In particular, no assumption of stationarity is made about the environment, and the stationarity of the considered strategies only means that they do not depend explicitly on time; we argue that it is natural to consider only stationary strategies even for highly non-stationary environments.
[ "['Vladimir Vovk']" ]
null
null
0607085
null
null
http://arxiv.org/pdf/cs/0607085v2
2008-11-07T16:21:18Z
2006-07-18T07:21:51Z
Using Pseudo-Stochastic Rational Languages in Probabilistic Grammatical Inference
In probabilistic grammatical inference, a usual goal is to infer a good approximation of an unknown distribution P called a stochastic language. The estimate of P stands in some class of probabilistic models such as probabilistic automata (PA). In this paper, we focus on probabilistic models based on multiplicity automata (MA). The stochastic languages generated by MA are called rational stochastic languages; they strictly include stochastic languages generated by PA; they also admit a very concise canonical representation. Despite the fact that this class is not recursively enumerable, it is efficiently identifiable in the limit by using the algorithm DEES, introduced by the authors in a previous paper. However, the identification is not proper and before the convergence of the algorithm, DEES can produce MA that do not define stochastic languages. Nevertheless, it is possible to use these MA to define stochastic languages. We show that they belong to a broader class of rational series, that we call pseudo-stochastic rational languages. The aim of this paper is twofold. First we provide a theoretical study of pseudo-stochastic rational languages, the languages output by DEES, showing for example that this class is decidable within polynomial time. Second, we have carried out a lot of experiments in order to compare DEES to classical inference algorithms such as ALERGIA and MDI. They show that DEES outperforms them in most cases.
[ "['Amaury Habrard' 'Francois Denis' 'Yann Esposito']" ]
null
null
0607096
null
null
http://arxiv.org/pdf/cs/0607096v1
2006-07-20T14:52:08Z
2006-07-20T14:52:08Z
Logical settings for concept learning from incomplete examples in First Order Logic
We investigate here concept learning from incomplete examples. Our first purpose is to discuss to what extent logical learning settings have to be modified in order to cope with data incompleteness. More precisely, we are interested in extending the learning from interpretations setting introduced by L. De Raedt that extends to relational representations the classical propositional (or attribute-value) concept learning from examples framework. We are inspired here by ideas presented by H. Hirsh in a work extending the Version space inductive paradigm to incomplete data. H. Hirsh proposes to slightly modify the notion of solution when dealing with incomplete examples: a solution has to be a hypothesis compatible with all pieces of information concerning the examples. We identify two main classes of incompleteness. First, uncertainty deals with our state of knowledge concerning an example. Second, generalization (or abstraction) deals with what part of the description of the example is sufficient for the learning purpose. These two main sources of incompleteness can be mixed up when only part of the useful information is known. We discuss a general learning setting, referred to as "learning from possibilities", that formalizes these ideas; then we present a more specific learning setting, referred to as "assumption-based learning", that copes with examples whose uncertainty can be reduced by considering contextual information outside of the proper description of the examples. Assumption-based learning is illustrated on a recent work concerning the prediction of a consensus secondary structure common to a set of RNA sequences.
[ "['Dominique Bouthinon' 'Henry Soldano' 'Véronique Ventos']" ]
null
null
0607110
null
null
http://arxiv.org/pdf/cs/0607110v1
2006-07-25T15:57:56Z
2006-07-25T15:57:56Z
A Theory of Probabilistic Boosting, Decision Trees and Matryoshki
We present a theory of boosting probabilistic classifiers. We place ourselves in the situation of a user who only provides a stopping parameter and a probabilistic weak learner/classifier and compare three types of boosting algorithms: probabilistic Adaboost, decision tree, and tree of trees of ... of trees, which we call matryoshka. "Nested tree," "embedded tree" and "recursive tree" are also appropriate names for this algorithm, which is one of our contributions. Our other contribution is the theoretical analysis of the algorithms, in which we give training error bounds. This analysis suggests that the matryoshka leverages probabilistic weak classifiers more efficiently than simple decision trees.
[ "['Etienne Grossmann']" ]
null
null
0607120
null
null
http://arxiv.org/pdf/cs/0607120v1
2006-07-27T18:23:45Z
2006-07-27T18:23:45Z
Expressing Implicit Semantic Relations without Supervision
We present an unsupervised learning algorithm that mines large text corpora for patterns that express implicit semantic relations. For a given input word pair X:Y with some unspecified semantic relations, the corresponding output list of patterns <P1,...,Pm> is ranked according to how well each pattern Pi expresses the relations between X and Y. For example, given X=ostrich and Y=bird, the two highest ranking output patterns are "X is the largest Y" and "Y such as the X". The output patterns are intended to be useful for finding further pairs with the same relations, to support the construction of lexicons, ontologies, and semantic networks. The patterns are sorted by pertinence, where the pertinence of a pattern Pi for a word pair X:Y is the expected relational similarity between the given pair and typical pairs for Pi. The algorithm is empirically evaluated on two tasks, solving multiple-choice SAT word analogy questions and classifying semantic relations in noun-modifier pairs. On both tasks, the algorithm achieves state-of-the-art results, performing significantly better than several alternative pattern ranking algorithms, based on tf-idf.
[ "['Peter D. Turney']" ]
null
null
0607134
null
null
http://arxiv.org/pdf/cs/0607134v1
2006-07-27T22:11:07Z
2006-07-27T22:11:07Z
Leading strategies in competitive on-line prediction
We start from a simple asymptotic result for the problem of on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a "leading prediction strategy", which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy, in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to the loss functions given by Bregman divergences and by strictly proper scoring rules.
[ "['Vladimir Vovk']" ]
null
null
0607136
null
null
http://arxiv.org/pdf/cs/0607136v1
2006-07-28T21:45:41Z
2006-07-28T21:45:41Z
Competing with Markov prediction strategies
Assuming that the loss function is convex in the prediction, we construct a prediction strategy universal for the class of Markov prediction strategies, not necessarily continuous. Allowing randomization, we remove the requirement of convexity.
[ "['Vladimir Vovk']" ]
null
null
0607138
null
null
http://arxiv.org/pdf/cs/0607138v1
2006-07-30T10:44:48Z
2006-07-30T10:44:48Z
A Foundation to Perception Computing, Logic and Automata
In this report, a novel approach to intelligence and learning is introduced; this approach is based on what we call 'perception logic'. Based on this logic, a computing mechanism and automata are introduced. A multi-resolution analysis of perceptual information is given, in which learning is accomplished in at most O(log(N)) epochs, where N is the number of samples, and convergence is guaranteed. This approach combines the flavors of computational models, in the sense that they are structured and mathematically well-defined, with the adaptivity of soft computing approaches, in addition to the continuity and real-time response of dynamical systems.
[ "['Mohamed A. Belal']" ]
null
null
0608033
null
null
http://arxiv.org/pdf/cs/0608033v1
2006-08-06T16:10:05Z
2006-08-06T16:10:05Z
A Study on Learnability for Rigid Lambek Grammars
We present basic notions of Gold's "learnability in the limit" paradigm, first presented in 1967, a formalization of the cognitive process by which a native speaker gets to grasp the underlying grammar of his/her own native language by being exposed to well formed sentences generated by that grammar. Then we present Lambek grammars, a formalism issued from categorial grammars which, although not as expressive as needed for a full formalization of natural languages, is particularly suited to easily implement a natural interface between syntax and semantics. In the last part of this work, we present a learnability result for Rigid Lambek grammars from structured examples.
[ "['Roberto Bonato']" ]
null
null
0608100
null
null
http://arxiv.org/abs/cs/0608100v1
2006-08-25T14:35:11Z
2006-08-25T14:35:11Z
Similarity of Semantic Relations
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
[ "['Peter D. Turney']" ]
null
null
0608522
null
null
http://arxiv.org/pdf/math/0608522v2
2007-06-27T08:23:07Z
2006-08-21T18:35:42Z
Graph Laplacians and their convergence on random neighborhood graphs
Given a sample from a probability measure with support on a submanifold in Euclidean space one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods like semi-supervised learning, dimensionality reduction and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However in the case of a non-uniform measure on the submanifold only the so called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator.
[ "['Matthias Hein' 'Jean-Yves Audibert' 'Ulrike von Luxburg']" ]
null
null
0608713
null
null
http://arxiv.org/pdf/math/0608713v1
2006-08-29T12:35:53Z
2006-08-29T12:35:53Z
Occam's hammer: a link between randomized learning and multiple testing FDR control
We establish a generic theoretical tool to construct probabilistic bounds for algorithms where the output is a subset of objects from an initial pool of candidates (or more generally, a probability distribution on said pool). This general device, dubbed "Occam's hammer'', acts as a meta layer when a probabilistic bound is already known on the objects of the pool taken individually, and aims at controlling the proportion of the objects in the output set not satisfying their individual bound. In this regard, it can be seen as a non-trivial generalization of the "union bound with a prior'' ("Occam's razor''), a familiar tool in learning theory. We give applications of this principle to randomized classifiers (providing an interesting alternative approach to PAC-Bayes bounds) and multiple testing (where it allows one to exactly recover and extend the so-called Benjamini-Yekutieli testing procedure).
[ "['Gilles Blanchard' 'François Fleuret']" ]
null
null
0609007
null
null
http://arxiv.org/pdf/cs/0609007v1
2006-09-03T21:30:03Z
2006-09-03T21:30:03Z
A Massive Local Rules Search Approach to the Classification Problem
An approach to the classification problem of machine learning, based on building local classification rules, is developed. The local rules are considered as projections of the global classification rules onto the event we want to classify. A massive global optimization algorithm is used to optimize the quality criterion. The algorithm, which has polynomial complexity in the typical case, is used to find all high-quality local rules. The other distinctive feature of the algorithm is the integration of attribute level selection (for ordered attributes) with rule searching, together with an original strategy for resolving conflicting rules. The algorithm is practical; it was tested on a number of data sets from the UCI repository, and a comparison with other prediction techniques is presented.
[ "['Vladislav Malyshkin' 'Ray Bakhramov' 'Andrey Gorodetsky']" ]
null
null
0609045
null
null
http://arxiv.org/pdf/cs/0609045v1
2006-09-09T11:31:01Z
2006-09-09T11:31:01Z
Metric entropy in competitive on-line prediction
Competitive on-line prediction (also known as universal prediction of individual sequences) is a strand of learning theory avoiding making any stochastic assumptions about the way the observations are generated. The predictor's goal is to compete with a benchmark class of prediction rules, which is often a proper Banach function space. Metric entropy provides a unifying framework for competitive on-line prediction: the numerous known upper bounds on the metric entropy of various compact sets in function spaces readily imply bounds on the performance of on-line prediction strategies. This paper discusses strengths and limitations of the direct approach to competitive on-line prediction via metric entropy, including comparisons to other approaches.
[ "['Vladimir Vovk']" ]
null
null
0609049
null
null
http://arxiv.org/pdf/cs/0609049v2
2007-05-08T07:34:29Z
2006-09-11T09:35:57Z
Scanning and Sequential Decision Making for Multi-Dimensional Data - Part I: the Noiseless Case
We investigate the problem of scanning and prediction ("scandiction", for short) of multidimensional data arrays. This problem arises in several aspects of image and video processing, such as predictive coding, for example, where an image is compressed by coding the error sequence resulting from scandicting it. Thus, it is natural to ask what is the optimal method to scan and predict a given image, what is the resulting minimum prediction loss, and whether there exist specific scandiction schemes which are universal in some sense. Specifically, we investigate the following problems: First, modeling the data array as a random field, we wish to examine whether there exists a scandiction scheme which is independent of the field's distribution, yet asymptotically achieves the same performance as if this distribution was known. This question is answered in the affirmative for the set of all spatially stationary random fields and under mild conditions on the loss function. We then discuss the scenario where a non-optimal scanning order is used, yet accompanied by an optimal predictor, and derive bounds on the excess loss compared to optimal scanning and prediction. This paper is the first part of a two-part paper on sequential decision making for multi-dimensional data. It deals with clean, noiseless data arrays. The second part deals with noisy data arrays, namely, with the case where the decision maker observes only a noisy version of the data, yet it is judged with respect to the original, clean data.
[ "['Asaf Cohen' 'Neri Merhav' 'Tsachy Weissman']" ]
null
null
0609071
null
null
http://arxiv.org/pdf/cs/0609071v2
2007-02-14T06:51:03Z
2006-09-13T03:44:08Z
A kernel method for canonical correlation analysis
Canonical correlation analysis is a technique to extract common features from a pair of multivariate data. In complex situations, however, it does not extract useful features because of its linearity. On the other hand, the kernel method used in support vector machines is an efficient approach to improving such a linear method. In this paper, we investigate the effectiveness of applying the kernel method to canonical correlation analysis.
[ "['Shotaro Akaho']" ]
null
null
0609093
null
null
http://arxiv.org/pdf/cs/0609093v1
2006-09-16T14:43:27Z
2006-09-16T14:43:27Z
PAC Learning Mixtures of Axis-Aligned Gaussians with No Separation Assumption
We propose and analyze a new vantage point for the learning of mixtures of Gaussians: namely, the PAC-style model of learning probability distributions introduced by Kearns et al. Here the task is to construct a hypothesis mixture of Gaussians that is statistically indistinguishable from the actual mixture generating the data; specifically, the KL-divergence should be at most epsilon. In this scenario, we give a poly(n/epsilon)-time algorithm that learns the class of mixtures of any constant number of axis-aligned Gaussians in n-dimensional Euclidean space. Our algorithm makes no assumptions about the separation between the means of the Gaussians, nor does it have any dependence on the minimum mixing weight. This is in contrast to learning results known in the ``clustering'' model, where such assumptions are unavoidable. Our algorithm relies on the method of moments, and a subalgorithm developed in previous work by the authors (FOCS 2005) for a discrete mixture-learning problem.
[ "['Jon Feldman' \"Ryan O'Donnell\" 'Rocco A. Servedio']" ]
null
null
0609140
null
null
http://arxiv.org/pdf/cs/0609140v2
2008-10-15T14:08:17Z
2006-09-25T19:06:59Z
Motion Primitives for Robotic Flight Control
We introduce a simple framework for learning aggressive maneuvers in flight control of UAVs. Drawing inspiration from biology, we analyze and extend dynamic movement primitives using nonlinear contraction theory. Accordingly, primitives of an observed movement are stably combined and concatenated. We demonstrate our results experimentally on the Quanser Helicopter, on which we first imitate aggressive maneuvers and then use them as primitives to achieve new maneuvers that fly over an obstacle.
[ "['Baris E. Perk' 'J. J. E. Slotine']" ]
null
null
0609153
null
null
http://arxiv.org/pdf/cs/0609153v1
2006-09-27T18:42:44Z
2006-09-27T18:42:44Z
Mining Generalized Graph Patterns based on User Examples
There has been a lot of recent interest in mining patterns from graphs. Often, the exact structure of the patterns of interest is not known. This happens, for example, when molecular structures are mined to discover fragments useful as features in chemical compound classification task, or when web sites are mined to discover sets of web pages representing logical documents. Such patterns are often generated from a few small subgraphs (cores), according to certain generalization rules (GRs). We call such patterns "generalized patterns"(GPs). While being structurally different, GPs often perform the same function in the network. Previously proposed approaches to mining GPs either assumed that the cores and the GRs are given, or that all interesting GPs are frequent. These are strong assumptions, which often do not hold in practical applications. In this paper, we propose an approach to mining GPs that is free from the above assumptions. Given a small number of GPs selected by the user, our algorithm discovers all GPs similar to the user examples. First, a machine learning-style approach is used to find the cores. Second, generalizations of the cores in the graph are computed to identify GPs. Evaluation on synthetic data, generated using real cores and GRs from biological and web domains, demonstrates effectiveness of our approach.
[ "['Pavel Dmitriev' 'Carl Lagoze']" ]
null
null
0609461
null
null
http://arxiv.org/pdf/math/0609461v1
2006-09-16T07:00:36Z
2006-09-16T07:00:36Z
Cross-Entropy method: convergence issues for extended implementation
The cross-entropy method (CE) developed by R. Rubinstein is an elegant practical principle for simulating rare events. The method approximates the probability of the rare event by means of a family of probabilistic models. The method has been extended to optimization, by considering an optimal event as a rare event. CE works rather well when dealing with deterministic function optimization. It appears that two conditions are needed for a good convergence of the method. First, it is necessary to have a family of models sufficiently flexible for discriminating the optimal events. Second, and more indirectly, the function to be optimized should be deterministic. The purpose of this paper is to consider the case of a partially discriminating model family, and of stochastic functions. It will be shown on simple examples that CE can fail when these hypotheses are relaxed. Alternative improvements of the CE method are investigated and compared on random examples in order to handle this issue.
[ "['Frederic Dambreville']" ]
null
null
0610033
null
null
http://arxiv.org/abs/cs/0610033v1
2006-10-06T04:45:32Z
2006-10-06T04:45:32Z
A kernel for time series based on global alignments
We propose in this paper a new family of kernels to handle time series, notably speech data, within the framework of kernel methods, which includes popular algorithms such as the Support Vector Machine. These kernels elaborate on the well-known Dynamic Time Warping (DTW) family of distances by considering the same set of elementary operations, namely substitutions and repetitions of tokens, to map one sequence onto another. Associating a given score to each of these operations, DTW algorithms use dynamic programming techniques to compute an optimal sequence of operations with high overall score. In this paper we consider instead the scores spanned by all possible alignments, take a smoothed version of their maximum and derive a kernel out of this formulation. We prove that this kernel is positive definite under favorable conditions and show how it can be tuned effectively for practical applications, as we report encouraging results on a speech recognition task.
[ "['Marco Cuturi' 'Jean-Philippe Vert' 'Oystein Birkenes' 'Tomoko Matsui']" ]
null
null
0610040
null
null
http://arxiv.org/pdf/q-bio/0610040v1
2006-10-21T06:33:24Z
2006-10-21T06:33:24Z
Metric learning pairwise kernel for graph inference
Much recent work in bioinformatics has focused on the inference of various types of biological networks, representing gene regulation, metabolic processes, protein-protein interactions, etc. A common setting involves inferring network edges in a supervised fashion from a set of high-confidence edges, possibly characterized by multiple, heterogeneous data sets (protein sequence, gene expression, etc.). Here, we distinguish between two modes of inference in this setting: direct inference based upon similarities between nodes joined by an edge, and indirect inference based upon similarities between one pair of nodes and another pair of nodes. We propose a supervised approach for the direct case by translating it into a distance metric learning problem. A relaxation of the resulting convex optimization problem leads to the support vector machine (SVM) algorithm with a particular kernel for pairs, which we call the metric learning pairwise kernel (MLPK). We demonstrate, using several real biological networks, that this direct approach often improves upon the state-of-the-art SVM for indirect inference with the tensor product pairwise kernel.
[ "['Jean-Philippe Vert' 'Jian Qiu' 'William Stafford Noble']" ]
null
null
0610051
null
null
http://arxiv.org/abs/physics/0610051v1
2006-10-09T18:41:57Z
2006-10-09T18:41:57Z
Structural Inference of Hierarchies in Networks
One property of networks that has received comparatively little attention is hierarchy, i.e., the property of having vertices that cluster together in groups, which then join to form groups of groups, and so forth, up through all levels of organization in the network. Here, we give a precise definition of hierarchical structure, give a generic model for generating arbitrary hierarchical structure in a random graph, and describe a statistically principled way to learn the set of hierarchical features that most plausibly explain a particular real-world network. By applying this approach to two example networks, we demonstrate its advantages for the interpretation of network data, the annotation of graphs with edge, vertex and community properties, and the generation of generic null models for further hypothesis testing.
[ "['Aaron Clauset' 'Cristopher Moore' 'M. E. J. Newman']" ]
null
null
0610126
null
null
http://arxiv.org/abs/cs/0610126v1
2006-10-20T16:37:11Z
2006-10-20T16:37:11Z
Fitness Uniform Optimization
In evolutionary algorithms, the fitness of a population increases with time by mutating and recombining individuals and by a biased selection of more fit individuals. The right selection pressure is critical in ensuring sufficient optimization progress on the one hand and in preserving genetic diversity to be able to escape from local optima on the other hand. Motivated by a universal similarity relation on the individuals, we propose a new selection scheme, which is uniform in the fitness values. It generates selection pressure toward sparsely populated fitness regions, not necessarily toward higher fitness, as is the case for all other selection schemes. We show analytically on a simple example that the new selection scheme can be much more effective than standard selection schemes. We also propose a new deletion scheme which achieves a similar result via deletion and show how such a scheme preserves genetic diversity more effectively than standard approaches. We compare the performance of the new schemes to tournament selection and random deletion on an artificial deceptive problem and a range of NP-hard problems: traveling salesman, set covering and satisfiability.
[ "['Marcus Hutter' 'Shane Legg']" ]
null
null
0610155
null
null
http://arxiv.org/pdf/cs/0610155v1
2006-10-27T07:08:51Z
2006-10-27T07:08:51Z
Nonlinear Estimators and Tail Bounds for Dimension Reduction in $l_1$ Using Cauchy Random Projections
For dimension reduction in $l_1$, the method of {\em Cauchy random projections} multiplies the original data matrix $\mathbf{A} \in \mathbb{R}^{n\times D}$ with a random matrix $\mathbf{R} \in \mathbb{R}^{D\times k}$ ($k \ll \min(n,D)$) whose entries are i.i.d. samples of the standard Cauchy C(0,1). Because of the impossibility results, one can not hope to recover the pairwise $l_1$ distances in $\mathbf{A}$ from $\mathbf{B} = \mathbf{AR} \in \mathbb{R}^{n\times k}$, using linear estimators without incurring large errors. However, nonlinear estimators are still useful for certain applications in data stream computation, information retrieval, learning, and data mining. We propose three types of nonlinear estimators: the bias-corrected sample median estimator, the bias-corrected geometric mean estimator, and the bias-corrected maximum likelihood estimator. The sample median estimator and the geometric mean estimator are asymptotically (as $k\to \infty$) equivalent but the latter is more accurate at small $k$. We derive explicit tail bounds for the geometric mean estimator and establish an analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in $l_1$, which is weaker than the classical JL lemma for dimension reduction in $l_2$. Asymptotically, both the sample median estimator and the geometric mean estimators are about 80% efficient compared to the maximum likelihood estimator (MLE). We analyze the moments of the MLE and propose approximating the distribution of the MLE by an inverse Gaussian.
[ "['Ping Li' 'Trevor J. Hastie' 'Kenneth W. Church']" ]
null
null
0610158
null
null
http://arxiv.org/pdf/cs/0610158v1
2006-10-27T16:02:34Z
2006-10-27T16:02:34Z
Considering users' behaviours in improving the responses of an information base
In this paper, our aim is to propose a model that helps users of an information system, within the organization represented by the IS, to use it efficiently in order to resolve their decisional problems. In other words, we want to help the user within an organization obtain the information that corresponds to his needs (informational needs that result from his decisional problems). This type of information system is what we refer to as an economic intelligence (EI) system, because of its support for the economic intelligence processes of the organisation. Our assumption is that every EI process begins with the identification of the decisional problem, which is translated into an informational need. This need is then translated into one or many information search problems (ISP). We also assume that an ISP is expressed in terms of the user's expectations and that these expectations determine the activities or behaviors of the user when he/she uses an IS. The model we propose is used for the design of the IS, so that the process of retrieving the solution(s) or responses given by the system to an ISP is based on these behaviours and corresponds to the needs of the user.
[ "['Babajide Afolabi' 'Odile Thiery']" ]
null
null
0610170
null
null
http://arxiv.org/pdf/cs/0610170v1
2006-10-30T16:44:58Z
2006-10-30T16:44:58Z
Low-complexity modular policies: learning to play Pac-Man and a new framework beyond MDPs
In this paper we propose a method that learns to play Pac-Man. We define a set of high-level observation and action modules. Actions are temporally extended, and multiple action modules may be in effect concurrently. A decision of the agent is represented as a rule-based policy. For learning, we apply the cross-entropy method, a recent global optimization algorithm. The learned policies reached a better score than the hand-crafted policy, and neared the score of average human players. We argue that learning is successful mainly because (i) the policy space includes the combination of individual actions and thus it is sufficiently rich, and (ii) the search is biased towards low-complexity policies, and low-complexity solutions can be found quickly if they exist. Based on these principles, we formulate a new theoretical framework, which can be found in the Appendix as supporting material.
[ "['Istvan Szita' 'Andras Lorincz']" ]