Is it possible to train a neural network to solve NP-complete problems?
Problem
I'm sorry if the question is not relevant; I have tried to search for articles about it but couldn't find satisfying answers.
I'm starting to learn about machine learning, neural networks, etc., and I was wondering whether it would work to build a neural network that takes a graph as input and outputs the answer to an NP-complete decision problem (e.g., whether the graph is Hamiltonian, or whether it has an independent set of size at least k).
I haven't heard of any NP-complete problems being solved this way, so I guess it does not work. Are there theoretical results stating that a neural network cannot learn an NP-complete language, or something along those lines?
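Nothing stops you from trying the experiment empirically. As an illustration of what the question proposes (not a claim that it generalizes), here is a minimal sketch in NumPy: label small random graphs as Hamiltonian or not by brute force, then fit a one-hidden-layer network on the flattened adjacency matrices. The graph size, network shape, and hyperparameters are all arbitrary choices for illustration.

```python
# Sketch of the experiment in the question: train a tiny network to
# classify small graphs as Hamiltonian. Labels come from brute force,
# which is only feasible because n is tiny.
import numpy as np
from itertools import permutations

def has_hamiltonian_cycle(adj):
    """Brute-force check: try every vertex ordering that starts at 0."""
    n = len(adj)
    return any(
        all(adj[cyc[i]][cyc[(i + 1) % n]] for i in range(n))
        for cyc in ((0,) + p for p in permutations(range(1, n)))
    )

def random_graph(n, p, rng):
    """Symmetric 0/1 adjacency matrix with edge probability p."""
    adj = (rng.random((n, n)) < p).astype(float)
    adj = np.triu(adj, 1)
    return adj + adj.T

rng = np.random.default_rng(0)
n = 5
X = np.array([random_graph(n, 0.5, rng).ravel() for _ in range(400)])
y = np.array([has_hamiltonian_cycle(x.reshape(n, n)) for x in X], dtype=float)

# One-hidden-layer network trained with plain gradient descent
# on the logistic loss.
W1 = rng.normal(0, 0.1, (n * n, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);          b2 = 0.0
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p_hat = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    g = (p_hat - y) / len(y)                  # d(loss)/d(logit)
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

train_acc = ((p_hat > 0.5) == y).mean()
print(f"training accuracy: {train_acc:.2f}")
```

Such a network can fit the training set for tiny n, but that says nothing about generalization to larger graphs, which is where the complexity-theoretic obstacles discussed in the answer below become relevant.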
Solution
To answer your question, I would like to point you to the field of computational learning theory (CLT), which applies complexity-theoretic approaches to analyse machine learning.
An important concept in CLT is probably approximately correct (PAC) learning: in simple terms, a problem is PAC learnable if there exists an efficient algorithm which learns it from a polynomial number of samples drawn from the underlying distribution of the problem, with a (polynomially small) error $\epsilon$ and a (polynomially small) failure probability $\delta$.
Unfortunately, there is a big disconnect between results in CLT and results in applied machine learning, so you are unlikely to find a result proving or disproving the learnability of NP-complete problems by deep neural networks; it is still an active area of research.
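To make the informal definition above precise, here is a standard formulation of PAC learnability (following Valiant; the symbols $m$, $n$, and $\mathrm{size}(c)$ denote the sample count, input length, and representation size of the target concept):

```latex
% An algorithm A PAC-learns a concept class C over a domain X if,
% for every target c in C, every distribution D over X, and every
% epsilon, delta in (0,1), given
%   m >= poly(1/epsilon, 1/delta, n, size(c))
% i.i.d. labelled samples (x_i, c(x_i)) with x_i ~ D, the hypothesis
% h_S output by A satisfies:
\[
\Pr_{S \sim D^m}\Big[\; \Pr_{x \sim D}\big[\, h_S(x) \neq c(x) \,\big] \le \epsilon \;\Big] \ge 1 - \delta .
\]
```

"Efficient" additionally requires that $A$ runs in time polynomial in the same parameters.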
Here are some resources on computational learning theory:
- Wikipedia: https://en.wikipedia.org/wiki/Computational_learning_theory
- The original paper on CLT by Leslie Valiant: http://web.mit.edu/6.435/www/Valiant84.pdf
- Blum and Rivest's proof that training a particular (3-node) neural network is NP-complete: https://people.csail.mit.edu/rivest/pubs/BR93.pdf
- Very good set of notes on CLT: https://www.cs.ox.ac.uk/people/varun.kanade/teaching/CLT-HT2018/lectures/lecture01.pdf (Section 3 defines and gradually introduces the concept of PAC learning)
- Section 5.1 of these notes shows how 3-COLOURABILITY can be phrased in the PAC learning framework
- Also in Section 5, the notes show that "3-TERM-DNF is not efficiently PAC learnable (take II) unless RP = NP"
Context
StackExchange Computer Science Q#128190, answer score: 3