ABSTRACT

August 3 (Thu)

Jun Sakuma, University of Tsukuba

Differentially Private Chi-squared Test by Unit Circle Mechanism (ICML'17)

This study develops differentially private mechanisms for the χ² test of independence. While existing work has focused on properly controlling the type-I error, we additionally investigate the type-II error of differentially private mechanisms. Based on this analysis, we present the unit circle mechanism: a novel differentially private mechanism based on the geometrical property of the test statistic. Compared to existing output perturbation mechanisms, our mechanism improves the dominant term of the type-II error from O(1) to O(exp(-√N)), where N is the sample size. Furthermore, we introduce novel procedures for multiple χ² tests by incorporating the unit circle mechanism into the sparse vector technique and the exponential mechanism. These procedures properly control the family-wise error rate (FWER), which has never been attained by existing mechanisms.
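For context, the following is a minimal sketch of the generic output-perturbation baseline that the talk improves on: add Laplace noise, calibrated to an analyst-supplied sensitivity bound, to the χ² statistic. The sensitivity value in the example is a placeholder assumption; this is not the unit circle mechanism itself.

    import numpy as np
    from scipy.stats import chi2

    def dp_chi2_output_perturbation(table, epsilon, sensitivity):
        """Output-perturbation baseline: add Laplace noise to the chi-squared
        statistic of a 2x2 contingency table.  `sensitivity` (a global
        sensitivity bound for the statistic) must be supplied by the analyst;
        it is NOT derived here."""
        table = np.asarray(table, dtype=float)
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
        stat = ((table - expected) ** 2 / expected).sum()
        noisy_stat = stat + np.random.laplace(scale=sensitivity / epsilon)
        p_value = chi2.sf(max(noisy_stat, 0.0), df=1)  # 2x2 table -> 1 dof
        return noisy_stat, p_value

    # Hypothetical example at privacy budget epsilon = 1.0
    print(dp_chi2_output_perturbation([[30, 10], [12, 28]], epsilon=1.0, sensitivity=8.0))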


Tomoya Sakai, The University of Tokyo

Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data (ICML'17)

Collecting a large amount of labeled data, i.e., human-annotated data, is laborious and costly. In contrast, unlabeled data can be collected cheaply. This fact leads us to semi-supervised classification, which leverages unlabeled data to obtain better classification accuracy from a small amount of labeled data. However, most existing methods require strong assumptions on the data distribution, e.g., that the data has a cluster structure. Thus, on real-world data, where such restrictive assumptions are rarely satisfied, the performance of existing methods degrades. In this talk, we present a novel semi-supervised classification method that does not rely on such distributional assumptions.


Weihua Hu, The University of Tokyo

Learning Discrete Representations via Information Maximizing Self-Augmented Training (ICML'17)

Learning discrete representations of data is a central machine learning task because of the compactness of the representations and their ease of interpretation. The task includes clustering and hash learning as special cases. Deep neural networks are promising for this task because they can model the non-linearity of data and scale to large datasets. However, their model complexity is huge, and we therefore need to carefully regularize the networks in order to learn useful representations that exhibit the invariance intended for the applications of interest. To this end, we propose a method called Information Maximizing Self-Augmented Training (IMSAT). In IMSAT, we use data augmentation to impose invariance on the discrete representations. More specifically, we encourage the predicted representations of augmented data points to be close to those of the original data points in an end-to-end fashion. At the same time, we maximize the information-theoretic dependency between the data and their predicted discrete representations. Extensive experiments on benchmark datasets show that IMSAT produces state-of-the-art results for both clustering and unsupervised hash learning.
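A minimal numpy sketch of the two IMSAT loss terms, assuming the network's predicted cluster probabilities are already available as arrays; the weighting `lam`, the use of plain KL divergence, and the choice of augmentation are simplifying assumptions (the paper's full method, including network training, is omitted).

    import numpy as np

    def entropy(p, axis=-1, eps=1e-12):
        """Shannon entropy along `axis` of a probability array."""
        return -(p * np.log(p + eps)).sum(axis=axis)

    def mutual_information(p):
        """I(X;Y) = H(Y) - H(Y|X) for predicted cluster probabilities p, shape (n, k)."""
        return entropy(p.mean(axis=0)) - entropy(p, axis=1).mean()

    def self_augmentation_penalty(p_orig, p_aug, eps=1e-12):
        """KL(p(y|x) || p(y|T(x))): penalize prediction changes under augmentation T."""
        return (p_orig * (np.log(p_orig + eps) - np.log(p_aug + eps))).sum(axis=1).mean()

    def imsat_objective(p_orig, p_aug, lam=0.1):
        # Maximize dependency between data and discrete predictions,
        # while keeping predictions invariant to data augmentation.
        return mutual_information(p_orig) - lam * self_augmentation_penalty(p_orig, p_aug)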


Emtiyaz Khan, RIKEN

Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models (AISTATS'17)

Approximate Bayesian inference in large and complex models (such as deep generative models and spatiotemporal models) is computationally challenging. Modern methods rely on stochastic-gradient (SG) algorithms and can scale to huge datasets, but they do not always lead to computationally efficient and modular updates. In this work, we present the Conjugate-Computation Variational Inference (CVI) algorithm. CVI is derived using proximal-gradient methods. We show that CVI is a generalization of existing SG methods and message-passing algorithms (e.g., VMP and SVI). Overall, CVI enables efficient, modular, scalable, and convergent natural-gradient updates for models that contain probabilistic graphical models and deep generative models.


Yusuke Kobayashi, University of Tsukuba

A Weighted Linear Matroid Parity Algorithm (STOC'17)

The matroid parity problem was introduced in the 1970s as a common generalization of the matching and matroid intersection problems. The problem requires an exponential number of oracle calls for general matroids, but Lovász (1980) gave a polynomial-time algorithm for linearly represented matroids. While matching and matroid intersection algorithms have been successfully extended to their weighted versions, no polynomial-time algorithm had been known for the weighted linear matroid parity problem for more than three decades. In this work, we present the first polynomial-time algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach based on the augmenting-path algorithm of Gabow and Stallmann (1986) for the unweighted problem. This is joint work with Satoru Iwata.


Kaito Fujii, The University of Tokyo

Budgeted stream-based active learning via adaptive submodular maximization (NIPS'16)

Active learning enables us to reduce the annotation cost by adaptively selecting unlabeled instances to be labeled. For pool-based active learning, several effective methods with theoretical guarantees have been developed by maximizing a utility function satisfying adaptive submodularity. In contrast, there have been few methods for stream-based active learning based on adaptive submodularity. In this work, we propose a new class of utility functions, policy-adaptive submodular functions, which includes many existing adaptive submodular functions appearing in real-world problems. We provide a general framework based on policy-adaptive submodularity that makes it possible to convert existing pool-based methods into stream-based methods, and we give theoretical guarantees on their performance.
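As a point of reference, the sketch below shows plain greedy maximization of a monotone submodular utility (set coverage), the non-adaptive core of the pool-based methods mentioned above; genuinely adaptive-submodular active learning would re-evaluate marginal gains after observing each queried label.

    def greedy_max_coverage(candidates, covers, budget):
        """Plain greedy for a monotone submodular utility (set coverage)."""
        covered, selected = set(), []
        for _ in range(budget):
            best = max((c for c in candidates if c not in selected),
                       key=lambda c: len(covers[c] - covered), default=None)
            if best is None or not covers[best] - covered:
                break  # no remaining marginal gain
            selected.append(best)
            covered |= covers[best]
        return selected

    covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
    print(greedy_max_coverage(list(covers), covers, budget=2))  # ['a', 'c']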


Yuichi Yoshida, National Institute of Informatics

Minimizing Quadratic Functions in Constant Time (coauthored with Kohei Hayashi) (NIPS'16)

A sampling-based optimization method for quadratic functions is proposed. Our method approximately solves the following n-dimensional quadratic minimization problem in constant time, that is, in time independent of n: z* = min_{v∈R^n} ⟨v, Av⟩ + n⟨v, diag(d)v⟩ + n⟨b, v⟩, where A ∈ R^{n×n} is a matrix and d, b ∈ R^n are vectors. Our theoretical analysis specifies the number of samples k(δ, ε) such that the approximate solution z satisfies |z - z*| = O(εn²) with probability 1 - δ. The empirical performance (accuracy and runtime) is positively confirmed by numerical experiments.
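The following sketch shows one plausible reading of the sampling scheme implied by the abstract: restrict the problem to k sampled coordinates, solve the small subproblem in closed form, and rescale by (n/k)². It assumes A is symmetric and the sampled quadratic part is positive definite; it is not claimed to be the authors' exact estimator.

    import numpy as np

    def approx_quadratic_min(A, d, b, k, seed=0):
        """Sketch: approximately minimize <v,Av> + n<v,diag(d)v> + n<b,v> by
        solving the problem restricted to k random coordinates and rescaling.
        Illustrative only; assumes A symmetric and the subproblem convex."""
        n = len(b)
        rng = np.random.default_rng(seed)
        S = rng.choice(n, size=k, replace=False)
        Q = A[np.ix_(S, S)] + k * np.diag(d[S])  # quadratic part on S
        c = k * b[S]                             # linear part on S
        v = np.linalg.solve(2 * Q, -c)           # stationary point: 2Qv + c = 0
        return (n / k) ** 2 * (v @ Q @ v + c @ v)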


Man-Kwun Chiu, National Institute of Informatics

High Dimensional Consistent Digital Segments (SoCG'17)

We consider the problem of digitizing Euclidean line segments from R^d to Z^d. Christ et al. (DCG, 2012) showed how to construct a set of consistent digital segments (CDS) for d = 2: a collection of segments connecting any two points in Z^2 that satisfies the natural extension of the Euclidean axioms to Z^d. In this paper we extend their result to higher dimensions. Specifically, we show that any total order can be used to create a set of consistent digital rays (CDR) in Z^d (a set of rays emanating from a fixed point p that satisfies the extension of the Euclidean axioms). We then use the same approach to create a CDS in Z^d, and observe that it works only in some cases. We fully characterize for which total orders the construction is consistent (and thus gives a CDS). In particular, this positively answers the question posed by Christ et al.


Naoto Ohsaka, The University of Tokyo

Portfolio Optimization for Influence Spread (WWW'17)

Motivated by viral marketing, stochastic diffusion processes that model influence spread on a network have been studied intensively. The primary interest in such models has been to find a seed set of a fixed size that maximizes the expected size of the cascade from it. Practically, however, it is undesirable to risk ending up with a small cascade, even if the expected cascade size is large. To address this issue, we adopt conditional value at risk (CVaR) as a risk measure, and propose an algorithm that computes a portfolio over seed sets with a provable guarantee on its CVaR. Using real-world social networks, we demonstrate that the portfolio computed by our algorithm has a significantly better CVaR than seed sets computed by other baseline methods.
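For intuition, conditional value at risk over simulated cascade sizes can be estimated as in the sketch below; the Poisson cascade-size distribution is purely illustrative.

    import numpy as np

    def cvar(samples, alpha=0.1):
        """Empirical conditional value at risk: the mean of the worst
        alpha-fraction of outcomes (here, the smallest cascade sizes)."""
        samples = np.sort(np.asarray(samples, dtype=float))
        tail = samples[:max(1, int(np.ceil(alpha * len(samples))))]
        return tail.mean()

    cascade_sizes = np.random.default_rng(0).poisson(100, size=10_000)  # illustrative
    print(cvar(cascade_sizes, alpha=0.05))  # mean of the 5% smallest cascades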


Wataru Inariba, The University of Tokyo
Random-Radius Ball Method for Estimating Closeness Centrality (AAAI'17)
In the analysis of real-world complex networks, identifying important vertices is one of the most fundamental operations. A variety of centrality measures have been proposed and extensively studied in various research areas. Many distance-based centrality measures have issues with disconnected networks, which are resolved by the recently emerged harmonic centrality. This paper focuses on a family of centrality measures including the harmonic centrality and its variants, and addresses their computational difficulty on very large graphs by presenting a new estimation algorithm named the random-radius ball (RRB) method. The RRB method is easy to implement, and a theoretical analysis, which includes the time complexity and error bounds, is also provided. The effectiveness of the RRB method over existing algorithms is demonstrated through experiments on real-world networks.
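As background, the exact quantity that the RRB method estimates can be computed per vertex by breadth-first search, as in the sketch below (the RRB estimator itself is not reproduced here).

    from collections import deque

    def harmonic_centrality(adj, v):
        """Exact harmonic centrality sum_{u != v} 1/d(v, u), computed by BFS.
        Unreachable vertices contribute 0, which is why harmonic centrality
        handles disconnected networks gracefully."""
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return sum(1.0 / d for u, d in dist.items() if u != v)

    adj = {0: [1], 1: [0, 2], 2: [1], 3: []}  # vertex 3 is disconnected
    print(harmonic_centrality(adj, 0))  # 1/1 + 1/2 = 1.5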


Satoshi Hara, National Institute of Informatics
Enumerate Lasso Solutions for Feature Selection (AAAI'17)
We propose an algorithm for enumerating solutions to the Lasso regression problem. In ordinary Lasso regression, a single global optimum is obtained and the resulting features are interpreted as task-relevant. However, this can overlook possibly relevant features that the Lasso did not select. With the proposed method, we can enumerate many possible feature sets for human inspection, thereby recording all the important features. We prove that, by enumerating solutions, we can recover the true feature set exactly under less restrictive conditions than the ordinary Lasso requires. Finally, on gene-expression and text data, we demonstrate that the proposed method can enumerate a wide variety of meaningful feature sets that would be overlooked by the single global optimum.
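A simple exclusion-based heuristic in the spirit of the talk is sketched below: repeatedly fit the Lasso and forbid the features it selected. This illustrates why enumeration surfaces alternative feature sets, but it is not necessarily the authors' exact enumeration procedure.

    import numpy as np
    from sklearn.linear_model import Lasso

    def enumerate_lasso_supports(X, y, alpha=0.1, n_solutions=3):
        """Exclusion-based enumeration heuristic: fit the Lasso, record the
        selected support, forbid those features, and repeat."""
        available = np.arange(X.shape[1])
        supports = []
        for _ in range(n_solutions):
            if available.size == 0:
                break
            model = Lasso(alpha=alpha).fit(X[:, available], y)
            support = available[np.abs(model.coef_) > 1e-8]
            if support.size == 0:
                break  # nothing selected; stop enumerating
            supports.append(support.tolist())
            available = np.setdiff1d(available, support)
        return supports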


Tasuku Soma, The University of Tokyo
Regret Ratio Minimization in Multi-objective Submodular Function Maximization (AAAI'17)
Submodular function maximization has numerous applications in machine learning and artificial intelligence. Many real applications require multiple submodular objective functions to be maximized, and it is not known in advance which of the objective functions a user regards as important. In such cases, it is desirable to have a small family of representative solutions that would satisfy any user's preference. A traditional approach for solving such a problem is to enumerate the Pareto optimal solutions. However, owing to the massive number of Pareto optimal solutions (possibly exponentially many), it is difficult for a user to select a solution. In this paper, we propose practical methods with theoretical guarantees for finding a small family of representative solutions, based on the notion of regret ratio. This is joint work with Yuichi Yoshida.


Shinji Ito, NEC Data Science Research Laboratories
Large-Scale Price Optimization via Network Flow (NIPS'16)
This paper deals with price optimization: finding the pricing strategy that maximizes revenue or profit on the basis of demand forecasting models. Although recent advances in regression technologies have made it possible to reveal the price-demand relationships of a large number of products, most existing price optimization methods, such as mixed-integer programming formulations, cannot handle tens or hundreds of products because of their high computational costs. To cope with this problem, this paper proposes a novel approach based on network flow algorithms. We reveal a connection between supermodularity of the revenue and cross elasticity of demand. On the basis of this connection, we propose an efficient algorithm that employs network flow algorithms. The proposed algorithm can handle hundreds or thousands of products, and it returns an exact optimal solution under an assumption regarding the cross elasticity of demand. Even in cases where the assumption does not hold, the proposed algorithm efficiently finds approximate solutions as good as those of other state-of-the-art methods, as our empirical results show.


Akihiro Yabe, NEC Data Science Research Laboratories
Robust Quadratic Programming for Price Optimization (IJCAI'17)
The goal of price optimization is to maximize total revenue by adjusting the prices of products, on the basis of predicted sales numbers that are functions of pricing strategies. Recent advances in demand modeling using machine learning raise a new challenge in price optimization: how to manage statistical errors in estimation. In this paper, we show that the uncertainty in recently proposed prescriptive price optimization frameworks can be represented by a matrix normal distribution. For this particular form of uncertainty, we propose novel robust quadratic programming algorithms for conservative lower-bound maximization, and we offer an asymptotic probabilistic guarantee of the conservativeness of our formulation. Our experiments on both artificial and actual price data show that our robust price optimization allows users to determine the best risk-return trade-offs and to explore safe, profitable pricing strategies.


Masakazu Ishihata, Hokkaido University
Statistical Emerging Pattern Mining with Multiple Testing Correction (KDD'17)
Emerging patterns are patterns whose support significantly differs between two databases. We study the problem of listing emerging patterns with a multiple-testing guarantee. Recently, Terada et al. proposed the Limitless Arity Multiple-testing Procedure (LAMP), which controls the family-wise error rate (FWER) in statistical association mining. LAMP reduces the number of "untestable" hypotheses without compromising statistical power. Still, FWER control is restrictive, and as a result, its statistical power is inherently unsatisfying when the number of patterns is large.
On the other hand, the false discovery rate (FDR) is less restrictive than FWER, so controlling the FDR yields a larger number of significant patterns. We propose two emerging pattern mining methods: the first controls FWER, and the second controls FDR. The effectiveness of the methods is verified in computer simulations with real-world datasets.
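For reference, the classic Benjamini-Hochberg step-up procedure for FDR control is sketched below; the talk's contribution is making FWER/FDR control work in the combinatorial pattern-mining setting, which this generic procedure does not address.

    import numpy as np

    def benjamini_hochberg(p_values, q=0.05):
        """BH step-up procedure: reject the hypotheses with the k smallest
        p-values, where k is the largest index with p_(k) <= q * k / m."""
        p = np.asarray(p_values, dtype=float)
        m = len(p)
        order = np.argsort(p)
        below = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
        rejected = np.zeros(m, dtype=bool)
        if below.size:
            rejected[order[:below.max() + 1]] = True
        return rejected

    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6], q=0.05))
    # -> [ True  True False False False]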


Naoki Marumo, NTT Communication Science Laboratories
SVD-Based Screening for the Graphical Lasso (IJCAI'17)
The graphical lasso is the most popular approach to estimating the inverse covariance matrix of high-dimensional data. However, its high computational cost makes it infeasible for large datasets. This paper proposes Sting, a fast approach to the graphical lasso. To reduce the computational cost, it efficiently identifies, before entering the iterations, the blocks of the estimated matrix that contain nonzero elements, by exploiting the singular value decomposition of the data matrix. In addition, it selectively updates only those elements of the estimated matrix expected to take nonzero values.
Theoretically, it is guaranteed to converge to the same result as the original graphical lasso algorithm. Experiments show that our approach is faster than existing approaches.
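For context, a classical screening rule for the graphical lasso (due to Witten et al. and Mazumder and Hastie) identifies the block structure from the connected components of the thresholded empirical covariance, as sketched below; Sting's SVD-based screening is a different test in the same spirit.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def glasso_blocks(S, rho):
        """Block screening: with penalty rho, the graphical-lasso estimate
        decomposes along the connected components of the graph whose edges
        are the pairs (i, j) with |S_ij| > rho."""
        mask = np.abs(S) > rho
        np.fill_diagonal(mask, False)
        n_comp, labels = connected_components(csr_matrix(mask), directed=False)
        return [np.nonzero(labels == c)[0] for c in range(n_comp)]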


August 4 (Fri)

Tomoharu Iwata, NTT Communication Science Laboratories

Multi-view Anomaly Detection via Robust Probabilistic Latent Variable Models (NIPS'16)

We propose probabilistic latent variable models for multi-view anomaly detection, which is the task of finding instances that have inconsistent views given multi-view data. With the proposed model, all views of a non-anomalous instance are assumed to be generated from a single latent vector. On the other hand, an anomalous instance is assumed to have multiple latent vectors, and its different views are generated from different latent vectors. By inferring the number of latent vectors used for each instance with Dirichlet process priors, we obtain multi-view anomaly scores. The proposed model can be seen as a robust extension of probabilistic canonical correlation analysis for noisy multi-view data. We present Bayesian inference procedures for the proposed model based on a stochastic EM algorithm. The effectiveness of the proposed model is demonstrated in terms of performance when detecting multi-view anomalies.


Atsutoshi Kumagai, NTT Secure Platform Laboratories

Learning Latest Classifiers without Additional Labeled Data (IJCAI'17)

In various applications such as spam mail classification, the performance of classifiers deteriorates over time. Although retraining classifiers with labeled data helps to maintain the performance, continuously preparing labeled data is quite expensive. In this talk, we propose a method to learn classifiers by using newly obtained unlabeled data, which are easy to prepare, as well as labeled data collected beforehand. A major reason for the performance deterioration is the emergence of new features that do not appear in the training phase; another is the change of the distribution between the training and test phases. The proposed method learns up-to-date classifiers that overcome both problems. With the proposed method, the conditional distribution of the new features given the existing features is learned from the unlabeled data. In addition, the proposed method estimates the density ratio between the training and test distributions using the labeled and unlabeled data. We approximate the test-phase classification error of a classifier that exploits new features as well as existing features by incorporating both the conditional distribution of the new features and the density ratio. By minimizing the approximated error while integrating out the new feature values, we obtain a classifier that exploits new features and fits the test phase. The effectiveness of the proposed method is demonstrated with experiments using synthetic and real-world data sets.
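One ingredient mentioned above, density ratio estimation between training and test distributions, can be approximated with the standard probabilistic-classifier trick sketched below; this is a generic technique, not the paper's specific estimator.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def density_ratio(X_train, X_test):
        """Estimate w(x) = p_test(x) / p_train(x) on the training points by
        training a classifier to separate test (label 1) from train (label 0):
        w(x) ~ (n_train / n_test) * P(test | x) / P(train | x)."""
        X = np.vstack([X_train, X_test])
        y = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        p = clf.predict_proba(X_train)[:, 1]
        return (len(X_train) / len(X_test)) * p / (1 - p)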


Sho Yokoi, Tohoku University

Learning Co-Substructures by Kernel Dependence Maximization (IJCAI'17)

For modeling paired items in a dataset, we propose a new machine learning task of extracting a strongly associated substructure pair (co-substructure) from each input item pair. We call this task dependent co-substructure extraction (DCSE), and formalize it as a dependence maximization problem. We propose adopting the Hilbert-Schmidt independence criterion (HSIC) as an objective function to cope with the data sparsity problem, and the Metropolis-Hastings (MH) algorithm to boost search efficiency. We report the results of empirical evaluations, in which the proposed method is applied to acquiring and predicting narrative event pairs, an active task in the field of natural language processing.
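For concreteness, the (biased) empirical HSIC used as the objective can be computed as below, assuming RBF kernels and 2-D data arrays; the kernel choice and bandwidth `gamma` are illustrative.

    import numpy as np

    def rbf_gram(X, gamma=1.0):
        """RBF Gram matrix for a 2-D data array X of shape (n, d)."""
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)

    def hsic(X, Y, gamma=1.0):
        """Biased empirical HSIC: trace(K H L H) / (n - 1)^2, where
        H = I - (1/n) 11^T centers the Gram matrices.  Larger values
        indicate stronger dependence between X and Y."""
        n = len(X)
        H = np.eye(n) - np.ones((n, n)) / n
        K, L = rbf_gram(X, gamma), rbf_gram(Y, gamma)
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2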


Yasushi Kawase, Tokyo Institute of Technology

Near-feasible stable matchings with budget constraints (IJCAI'17)

This paper deals with two-sided matching with budget constraints, where one side (firm or hospital) can make monetary transfers (offer wages) to the other (worker or doctor). In a standard model, while multiple doctors can be matched to a single hospital, a hospital has a maximum quota: the number of doctors assigned to a hospital cannot exceed a certain limit. In our model, a hospital instead has a fixed budget: the total amount of wages allocated by each hospital to doctors is constrained. With budget constraints, stable matchings may fail to exist, and checking for their existence is computationally hard. To deal with the nonexistence of stable matchings, we extend the "matching with contracts" model of Hatfield and Milgrom so that it handles near-feasible matchings that exceed each hospital's budget by a certain amount. We then propose two novel mechanisms that efficiently return such a near-feasible matching that is stable with respect to the actual amount of wages allocated by each hospital.


Mahito Sugiyama, National Institute of Informatics

Tensor Balancing on Statistical Manifold (ICML'17)

We solve tensor balancing, rescaling an Nth-order nonnegative tensor by multiplying N tensors of order N - 1 so that every fiber sums to one. This generalizes a fundamental process of matrix balancing used to compare matrices in a wide range of applications from biology to economics. We present an efficient balancing algorithm with quadratic convergence using Newton's method and show in numerical experiments that the proposed algorithm is several orders of magnitude faster than existing ones. To theoretically prove the correctness of the algorithm, we model tensors as probability distributions in a statistical manifold and realize tensor balancing as projection onto a submanifold. The key to our algorithm is that the gradient of the manifold, used as a Jacobian matrix in Newton's method, can be analytically obtained using the Möbius inversion formula, an essential tool of combinatorial mathematics. Our model is not limited to tensor balancing but has wide applicability, as it includes various statistical and machine learning models such as weighted DAGs and Boltzmann machines.
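For the matrix special case mentioned above, the classical Sinkhorn-Knopp iteration performs balancing as sketched below; unlike the talk's Newton method, its convergence is only linear.

    import numpy as np

    def sinkhorn_balance(A, n_iter=1000, tol=1e-9):
        """Sinkhorn-Knopp iteration: alternately normalize rows and columns
        of a nonnegative matrix until both sum to one."""
        A = np.asarray(A, dtype=float)
        for _ in range(n_iter):
            A = A / A.sum(axis=1, keepdims=True)  # rows
            A = A / A.sum(axis=0, keepdims=True)  # columns
            if np.allclose(A.sum(axis=1), 1.0, atol=tol):
                break
        return A

    print(sinkhorn_balance(np.array([[1.0, 2.0], [3.0, 4.0]])))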


Hanna Sumita, National Institute of Informatics
Online Optimization of Video-Ad Allocation (IJCAI'17)

In this talk, we study video advertising in the context of internet advertising. Video advertising is a rapidly growing industry, but its computational aspects have not yet been investigated. A difference between video advertising and traditional display advertising is that the former requires more time to be viewed. In contrast to a traditional display advertisement, a video advertisement has no influence over a user unless the user watches it for a certain amount of time. Previous studies have considered neither the lengths of video advertisements nor the time users spend watching them. Motivated by this observation, we formulate a new online optimization problem for optimizing the allocation of video advertisements, and we develop a nearly (1-1/e)-competitive algorithm for finding an envy-free allocation of video advertisements.


Masato Abe, National Institute of Informatics
Optimizing mating encounters by sexually dimorphic movements (Journal of the Royal Society Interface)
Big data on the movements of animals, including humans, have been accumulated, and empirical movement patterns have been revealed. At the same time, to understand these movement patterns, it is crucial to theoretically clarify which movement patterns benefit individuals.
In this presentation, I will talk about the optimization of mutual searches, which correspond to a situation in which males and females search for each other. This problem is not limited to animal individuals; it also applies to the molecular motion of enzymes and coenzymes and to search algorithms for searching robots. We found that, under certain conditions, pairs of individuals with high and low diffusivity are optimal for finding each other.


Makoto Yamada, RIKEN
Convex Factorization Machine for Toxicogenomics Prediction (KDD'17)
We introduce the convex factorization machine (CFM), a convex variant of the widely used factorization machines (FMs). Specifically, we employ a linear+quadratic model and regularize the linear term with the L2 regularizer and the quadratic term with the trace-norm regularizer. We then formulate the CFM optimization as a semidefinite programming problem and propose an efficient optimization procedure based on Hazan's algorithm. A key advantage of CFM over existing FMs is that it can find a globally optimal solution, while FMs may get stuck in a poor locally optimal solution since their objective function is non-convex. In addition, the proposed algorithm is simple yet effective and can be implemented easily. Finally, CFM is a general factorization method and can also be used for other factorization problems, including multi-view matrix factorization and tensor completion, in various domains including toxicogenomics and bioinformatics. On synthetic data and the commonly used MovieLens datasets, we first show that the proposed CFM achieves results competitive with FMs. We then show, in a toxicogenomics prediction task, that CFM predicts the toxic outcomes of a collection of drugs better than a state-of-the-art tensor factorization method.
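For background, an FM prediction with the standard O(nk) computational trick is sketched below; CFM replaces the explicit low-rank factor V with a trace-norm-regularized interaction matrix, which is not shown here.

    import numpy as np

    def fm_predict(x, w0, w, V):
        """Factorization machine prediction
            y(x) = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
        with the pairwise term computed in O(n k) via
            0.5 * sum_f ((x @ V)_f^2 - (x^2 @ V^2)_f)."""
        interaction = 0.5 * (((x @ V) ** 2).sum() - ((x ** 2) @ (V ** 2)).sum())
        return w0 + w @ x + interaction

    rng = np.random.default_rng(0)
    x, w, V = rng.normal(size=8), rng.normal(size=8), rng.normal(size=(8, 3))
    print(fm_predict(x, 0.5, w, V))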


Masaaki Imaizumi, The Institute of Statistical Mathematics
Tensor Decomposition with Smoothness (ICML'17)
We analyze data structured as a multi-dimensional array, referred to as tensor data. Real data tensors are usually high dimensional, but their intrinsic information is preserved in a low-dimensional space, which motivates the use of tensor decompositions such as the Tucker decomposition. Often, real data tensors are not only low dimensional but also smooth, meaning that adjacent elements are similar or change continuously, as typically appears in spatial or temporal data. To incorporate the smoothness property, we propose the smoothed Tucker decomposition (STD). STD leverages smoothness by representing factors as sums of a few basis functions, which reduces the number of parameters. The objective function is formulated as a convex problem, and an algorithm based on the alternating direction method of multipliers is derived to solve it. We theoretically show that, under the smoothness assumption, STD achieves a better error bound. The theoretical result and the performance of STD are numerically verified.


Daisuke Hatano, National Institute of Informatics

Computing Least Cores of Supermodular Cooperative Games (AAAI'17)

One of the goals of cooperative game theory is to compute a value division among the players from which they have no incentive to deviate. This concept is formalized as the notion of the core. To obtain a value division that motivates players to cooperate to a greater extent, or that is more robust under noise, the notions of the strong least core and the weak least core have been considered. In this paper, we characterize the strong and weak least cores of supermodular cooperative games using the theory of minimizing crossing submodular functions. We then apply our characterizations to two representative supermodular cooperative games, namely, the induced subgraph game generalized to hypergraphs and the airport game. For these games, we derive explicit forms of the strong and weak least core values, and we provide polynomial-time algorithms that compute value divisions in the strong and weak least cores.


Yuto Yamaguchi, Indeed

When Does Label Propagation Fail? A View from a Network Generative Model (IJCAI'17)

What kinds of data does Label Propagation (LP) work best on? Can we justify the solution of LP from a theoretical standpoint? LP is a semi-supervised learning algorithm that is widely used to predict unobserved node labels on a network. Despite its importance, its theoretical properties remain mostly unexplored. In this paper, we answer the above questions by interpreting LP from a statistical viewpoint. As our main result, we identify the network generative model behind the discretized version of LP (DLP), and we show that under specific conditions the solution of DLP is equal to the maximum a posteriori estimate of that generative model. Our main result reveals the critical limitations of LP.
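For readers unfamiliar with LP, a basic clamped-label iteration is sketched below; the talk analyzes a discretized version (DLP) of this kind of update, whose details differ.

    import numpy as np

    def label_propagation(W, Y, labeled, n_iter=100):
        """Basic LP: repeatedly average neighbors' label scores and clamp the
        observed labels.  W: (n, n) nonnegative weight matrix with no
        zero-degree rows; Y: (n, k) one-hot rows for labeled nodes, zeros
        elsewhere; labeled: indices of the labeled nodes."""
        P = W / W.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
        F = Y.astype(float).copy()
        for _ in range(n_iter):
            F = P @ F
            F[labeled] = Y[labeled]  # clamp known labels
        return F.argmax(axis=1)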


Takanori Maehara, RIKEN

Exact Computation of Influence Spread by Binary Decision Diagrams (WWW'17)

Evaluating influence spread in social networks is a fundamental procedure to estimate the word-of-mouth effect in viral marketing.
There is an enormous body of work on this topic; however, under the standard stochastic cascade models, the exact computation of influence spread is known to be #P-hard. Thus, existing studies have used Monte-Carlo simulation-based approximations to avoid exact computation.
We propose the first algorithm to compute influence spread exactly under the independent cascade model. The algorithm first constructs binary decision diagrams (BDDs) for all possible realizations of influence spread, then computes influence spread by dynamic programming on the constructed BDDs. To construct the BDDs efficiently, we design a new frontier-based search-type procedure.
The constructed BDDs can also be used to solve other influence-spread related problems, such as random sampling without rejection, conditional influence spread evaluation, dynamic probability update, and gradient computation for probability optimization problems.
We conducted computational experiments to evaluate the proposed algorithm. The algorithm successfully computed the influence spread on real-world networks with a hundred edges in reasonable time, which is infeasible for the naive algorithm. We also conducted an experiment to evaluate the accuracy of the Monte-Carlo simulation-based approximation by comparing it with the exact influence spread obtained by the proposed algorithm.
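The Monte-Carlo baseline whose accuracy the talk evaluates can be sketched as follows, assuming a uniform edge probability p under the independent cascade model.

    import random

    def ic_spread_mc(edges, p, seeds, n_sims=10_000, seed=0):
        """Monte-Carlo estimate of influence spread under the independent
        cascade model with uniform edge probability p: sample a live-edge
        graph, then count the vertices reachable from the seed set."""
        rng = random.Random(seed)
        total = 0
        for _ in range(n_sims):
            live = [(u, v) for (u, v) in edges if rng.random() < p]
            active, frontier = set(seeds), set(seeds)
            while frontier:
                frontier = {v for (u, v) in live if u in frontier and v not in active}
                active |= frontier
            total += len(active)
        return total / n_sims

    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    print(ic_spread_mc(edges, p=0.5, seeds=[0]))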


Yasuhiro Fujiwara, NTT Software Innovation Center
Scaling Locally Linear Embedding (SIGMOD'17)
Locally Linear Embedding (LLE) is a popular approach to dimensionality reduction, as it can effectively represent nonlinear structures of high-dimensional data. Although LLE is used in many applications, its computational cost is significantly high. In this paper, we propose Ripple, an efficient algorithm for LLE. Experiments show that Ripple is significantly faster than the original LLE while guaranteeing the same dimensionality-reduction results.
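Ripple itself is not shown here, but the computation it accelerates can be reproduced with scikit-learn's stock LLE implementation:

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    # Baseline LLE on random data; Ripple targets exactly this computation.
    X = np.random.default_rng(0).normal(size=(500, 20))
    embedding = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
    print(embedding.shape)  # (500, 2)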


Tomohiro Sonobe, National Institute of Informatics
Coarsening Massive Influence Networks for Scalable Diffusion Analysis (SIGMOD'17)
Fueled by the increasing popularity of online social networks, social influence analysis has attracted a great deal of research attention in the past decade. The diffusion process is often modeled using influence graphs, and there has been a line of research that involves algorithmic problems in influence graphs. However, the vast size of today’s real-world networks raises a serious issue with regard to computational efficiency.
In this paper, we propose a new algorithm for reducing influence graphs. Given an input influence graph, the proposed algorithm produces a vertex-weighted influence graph that is compact and approximates the diffusion properties of the input graph. The central strategy of influence graph reduction is coarsening, which has the potential to greatly reduce the number of edges by merging a vertex set into a single weighted vertex. We present general frameworks using our compact graphs that accelerate existing algorithms for influence maximization and influence estimation problems, which are motivated by practical applications such as viral marketing. Using these frameworks, we can quickly obtain solutions that have accuracy guarantees under a reasonable assumption. Experiments with real-world networks demonstrate that the proposed algorithm can scale to billion-edge graphs and reduce the graph size to as little as 4% of the original.


Suguru Tamaki, Kyoto University
Beating Brute Force for Systems of Polynomial Equations over Finite Fields (SODA'17)
Solving systems of multivariate polynomial equations over finite fields is a fundamental problem in mathematics, science, and engineering. Let q be the order of the underlying field and n be the number of variables. The problem can then be solved by brute-force search over the q^n possible solutions.
We present the first algorithm that solves the problem in time exponentially faster than q^n in the worst case. Our algorithm can also count the number of solutions. The running time of the algorithm is roughly of the form q^{n(1-1/O(d))}, where d is the maximum degree of the polynomials.
We also consider a generalization of the problem, where each equation is defined by a certain type of arithmetic circuit, and give an algorithm for the generalized problem.


Shuichi Hirahara, The University of Tokyo
On the average-case complexity of MCSP and its variants (CCC'17)
The minimum circuit size problem (MCSP) has been studied for a long time, but many basic questions about MCSP remain open, including its NP-completeness. In this talk, based on average-case complexity, we provide new evidence about the hardness of MCSP. Specifically, we show that popular average-case conjectures, such as Random 3-SAT and the Planted Clique conjecture, are reducible to MKTP, a variant of MCSP. This reduction gives the first piece of evidence that MKTP is not in coNP. In addition, we show that, assuming the existence of one-way functions, there is a (weak form of) reduction from the worst-case complexity to the average-case complexity of MCSP.


Copyright © Kawarabayashi Large Graph Project