PyBrain -- the Python Machine Learning Library
===============================================

INSTALLATION
------------
Quick answer: make sure you have SciPy installed, then

    python setup.py install

Longer answer: (if the above was any trouble) we keep more detailed installation instructions (including those for the dependencies) up-to-date in a wiki at: http://wiki.github.com/pybrain/pybrain/installation

DOCUMENTATION
-------------
Please read docs/documentation.pdf or browse docs/html/*, featuring: quickstart, tutorials, API, etc. If you have matplotlib, the scripts in examples/* may be instructive as well.
PyBrain - Another Python Machine Learning Library.
Overview
Comments
-
python3.5.2
opened by hhuhhu 4Does pybrain support Python 3.5.2? A simple 'import pybrain' aborts, as shown below. I installed it with 'pip install pybrain'.

D:\Anaconda3.5.2\python.exe F:/gitProjects/vnpy_future/pre_code/cnn/rnn.py
Traceback (most recent call last):
  File "F:/gitProjects/vnpy_future/pre_code/cnn/rnn.py", line 7, in <module>
    import pybrain
  File "D:\Anaconda3.5.2\lib\site-packages\pybrain\__init__.py", line 1, in <module>
    from structure.__init__ import *
ImportError: No module named 'structure'
-
Port most of the code to Python3 compatible.
opened by wernight 4The code should still work on Python 2.
Import statements and print are the main changes.
Not everything may have been ported. The main thing left intact is range(): in Python 2 it returns a list, while in Python 3 it returns a lazy iterable. This should speed things up and in most cases work without further changes. See http://www.diveinto.org/python3/porting-code-to-python-3-with-2to3.html#xrange
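The range() behaviour mentioned above is easy to check directly: in Python 3, range() returns a lazy range object rather than a list, behaving like Python 2's xrange:

```python
r = range(5)

# In Python 3, range() is a lazy sequence object, not a list
# (matching Python 2's xrange rather than Python 2's range).
print(type(r).__name__)   # range
print(list(r))            # [0, 1, 2, 3, 4]
```

Code that genuinely needs a list must wrap it explicitly, e.g. list(range(n)).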
-
PyPi package update
opened by wernight 3There seem to have been many changes since 2009 (over 4 years ago). The version number on GitHub is almost the same, yet it's probably worth making another release.
PyPI allows installing simply, for a single user or system-wide, among other things. Not that git clone isn't fine in many cases.
-
IndexError after recurrent network copy
opened by wernight 3Steps:
>>> from pybrain.tools.shortcuts import buildNetwork
>>> net = buildNetwork(2, 4, 1, recurrent=True)
>>> net.activate((1, 1))
array([ 0.02202066])
>>> net.copy()
>>> net.activate((1, 1))
IndexError: index out of bounds

This seems to happen only when recurrent=True.
-
KeyError in sortModules
opened by ghost 3I have an issue with the sortModules method throwing a KeyError.
Following the tutorial example, I created a script with the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import scipy
import numpy as np

print "\nPython version: %s" % sys.version
print "Numpy version: %s" % np.version.version
print "Scipy version: %s" % scipy.version.version

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection

# Create network
nn = FeedForwardNetwork()

# Set network parameters
INPUT_NDS = 2
HIDDEN_NDS = 3
OUTPUT_NDS = 1

# Create Feed Forward Network layers
inLayer = LinearLayer(INPUT_NDS)
hiddenLayer = SigmoidLayer(HIDDEN_NDS)
outLayer = LinearLayer(OUTPUT_NDS)

# Fully connect all layers
in_to_hidden = FullConnection(inLayer, hiddenLayer)
hidden_to_out = FullConnection(hiddenLayer, outLayer)

# Add the connected layers to the network
nn.addConnection(in_to_hidden)
nn.addConnection(hidden_to_out)

# Sort modules to prepare the NN for use
nn.sortModules()

Which gives me:

Python version: 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3]
Numpy version: 1.3.0
Scipy version: 0.7.0
Traceback (most recent call last):
  File "/tmp/py7317Q6c", line 46, in <module>
    nn.sortModules()
  File "/usr/local/lib/python2.6/dist-packages/PyBrain-0.3-py2.6.egg/pybrain/structure/networks/network.py", line 224, in sortModules
    self._topologicalSort()
  File "/usr/local/lib/python2.6/dist-packages/PyBrain-0.3-py2.6.egg/pybrain/structure/networks/network.py", line 188, in _topologicalSort
    graph[c.inmod].append(c.outmod)
KeyError: <LinearLayer 'LinearLayer-3'>

I have the latest version of pybrain installed, so this seems strange, especially since everything works when I use the shortcut:

from pybrain.tools.shortcuts import buildNetwork
-
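For what it's worth, the traceback above points at pybrain's internal topological sort iterating over connections whose endpoint modules were never registered: the reported script calls addConnection() but never addInputModule()/addModule()/addOutputModule(), so the layers are missing from the adjacency dict. A standalone sketch of that failure mode (module names are illustrative strings, not PyBrain objects):

```python
# Adjacency lists are keyed by registered modules only; the reported
# script registered none, so the first connection lookup fails.
registered_modules = []                     # add*Module() was never called
connections = [("inLayer", "hiddenLayer"),  # what addConnection() recorded
               ("hiddenLayer", "outLayer")]

graph = {m: [] for m in registered_modules}
try:
    for inmod, outmod in connections:
        graph[inmod].append(outmod)         # the same line that raises in network.py
except KeyError as missing:
    print("KeyError:", missing)
```

Registering the three layers with the network (addInputModule, addModule, addOutputModule) before adding connections avoids the KeyError.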
serialization using pickle freezes network causes strange caching behaviour
opened by bgbg 3This is a duplicate of my Stackoverflow.com question.
I fail to properly serialize/deserialize PyBrain networks using either pickle or cPickle.
See the following example:
from pybrain.datasets import SupervisedDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
import cPickle as pickle
import numpy as np

# generate some data
np.random.seed(93939393)
data = SupervisedDataSet(2, 1)
for x in xrange(10):
    y = x * 3
    z = x + y + 0.2 * np.random.randn()
    data.addSample((x, y), (z,))

# build a network and train it
net1 = buildNetwork(data.indim, 2, data.outdim)
trainer1 = BackpropTrainer(net1, dataset=data, verbose=True)
for i in xrange(4):
    trainer1.trainEpochs(1)
    print '\tvalue after %d epochs: %.2f' % (i, net1.activate((1, 4))[0])

This is the output of the above code:

Total error: 201.501998476
	value after 0 epochs: 2.79
Total error: 152.487616382
	value after 1 epochs: 5.44
Total error: 120.48092561
	value after 2 epochs: 7.56
Total error: 97.9884043452
	value after 3 epochs: 8.41

As you can see, the network's total error decreases as training progresses. You can also see that the predicted value approaches the expected value of 12.
Now we will do a similar exercise, but will include serialization/deserialization:
print 'creating net2'
net2 = buildNetwork(data.indim, 2, data.outdim)
trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
trainer2.trainEpochs(1)
print '\tvalue after %d epochs: %.2f' % (1, net2.activate((1, 4))[0])

# So far, so good. Let's test pickle.
pickle.dump(net2, open('testNetwork.dump', 'w'))
net2 = pickle.load(open('testNetwork.dump'))
trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
print 'loaded net2 using pickle, continue training'
for i in xrange(1, 4):
    trainer2.trainEpochs(1)
    print '\tvalue after %d epochs: %.2f' % (i, net2.activate((1, 4))[0])

This is the output of this block:

creating net2
Total error: 176.339378639
	value after 1 epochs: 5.45
loaded net2 using pickle, continue training
Total error: 123.392181859
	value after 1 epochs: 5.45
Total error: 94.2867637623
	value after 2 epochs: 5.45
Total error: 78.076711114
	value after 3 epochs: 5.45

As you can see, training still seems to affect the network (the reported total error keeps decreasing), yet the network's output value freezes at the value from the first training iteration.
Is there any caching mechanism that I need to be aware of that causes this erroneous behaviour? Are there better ways to serialize/deserialize pybrain networks?
Relevant version numbers:
- Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)]
- Numpy 1.5.1
- cPickle 1.71
- pybrain 0.3
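Whatever the root cause inside PyBrain (its documentation also describes an XML serializer, NetworkWriter/NetworkReader in pybrain.tools.customxml, as an alternative to pickle), the general Python defence against exactly this symptom is to drop transient caches in __getstate__ so a deserialized object recomputes them. A minimal, library-independent sketch (CachedModel is a hypothetical toy class, not PyBrain code):

```python
import pickle

class CachedModel(object):
    """Toy model that caches activations, like a network's output buffers."""
    def __init__(self, weight):
        self.weight = weight
        self._cache = {}                  # transient state: should not persist

    def activate(self, x):
        if x not in self._cache:          # stale entries here would "freeze" outputs
            self._cache[x] = self.weight * x
        return self._cache[x]

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_cache"] = {}              # reset the cache on serialization
        return state

m = CachedModel(weight=3)
m.activate(2)                             # populates the cache with 6
m2 = pickle.loads(pickle.dumps(m))
m2.weight = 5                             # simulate further training
print(m2.activate(2))                     # 10 -- fresh cache, no frozen value
```

If the cache were pickled along with the weights, m2.activate(2) would keep returning the stale pre-serialization value, which is exactly the frozen-output pattern reported above.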
-
Hierarchy change: take Black-box optimization out of RL
opened by schaul 3Although it technically fits there, it is a bit confusing. I think the split should follow the ontogenetic/phylogenetic distinction: on one side optimization, evolution, PSO, etc. (coevolution methods should fit here, but how about multi-objective optimization?), and on the other side policy gradients and other RL algorithms.
0.3 Discussion In progress -
splitWithProportion returns same type instead of SupervisedDataSet
opened by borakrc 2When we call splitWithProportion on a ClassificationDataSet object, the return type is (SupervisedDataSet, SupervisedDataSet) instead of (ClassificationDataSet, ClassificationDataSet). This modification fixes the issue; it could be further improved by calling the constructor using kwargs. I didn't modify the sub-classes, to avoid repeating lines 106-112. I made this modification because splitting a sub-class of SupervisedDataSet should yield a 2-tuple of that sub-class, not a 2-tuple of SupervisedDataSet.
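The fix described boils down to constructing the split halves from the caller's own class rather than hardcoding SupervisedDataSet. Schematically (Dataset and split are illustrative names, not PyBrain's actual code):

```python
class Dataset(object):
    def __init__(self, samples):
        self.samples = samples

    def split(self, proportion):
        cut = int(len(self.samples) * proportion)
        # Using type(self) instead of a hardcoded Dataset preserves subclasses.
        return type(self)(self.samples[:cut]), type(self)(self.samples[cut:])

class ClassificationDataset(Dataset):
    pass

left, right = ClassificationDataset([1, 2, 3, 4]).split(0.25)
print(type(left).__name__, left.samples)   # ClassificationDataset [1]
```

Had split() returned Dataset(...) directly, both halves would silently lose the subclass's extra methods, which is the reported symptom.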
-
ImportanceDataSet with BackpropTrainer results in IndexError
opened by kkleidal 2I have a dataset which I am clustering using a Gaussian mixture model, and then I want to train a neural network for each of the clusters. I want to use all the points in my dataset, weighted by the probability that they belong to the cluster for which the net is being trained.
Originally, I was not weighting the training data and it worked fine:
'''
Create and train a neural net on the training data, given the actual labels
'''
def create_neural_net(training, labels, weights=None, T=10, silent=False):
    input_units = len(training[0])
    output_units = len(labels[0])
    n = len(training)
    net = FeedForwardNetwork()
    layer_in = SoftmaxLayer(input_units)
    layer_hidden = SigmoidLayer(1000)
    layer_hidden2 = SigmoidLayer(50)
    layer_out = LinearLayer(output_units)
    net.addInputModule(layer_in)
    net.addModule(layer_hidden)
    net.addModule(layer_hidden2)
    net.addOutputModule(layer_out)
    net.addConnection(FullConnection(layer_in, layer_hidden))
    net.addConnection(FullConnection(layer_hidden, layer_hidden2))
    net.addConnection(FullConnection(layer_hidden2, layer_out))
    net.sortModules()
    training_data = SupervisedDataSet(input_units, output_units)
    for i in xrange(n):
        # print len(training[i])  # prints 148
        # print len(labels[i])    # prints 13
        training_data.appendLinked(training[i], labels[i])
    trainer = BackpropTrainer(net, training_data)
    for i in xrange(T):
        if not silent:
            print "Training %d" % (i + 1)
        error = trainer.train()
        if not silent:
            print net.activate(training[0]), labels[0]
        if not silent:
            print "Training iteration %d. Error: %f." % (i + 1, error)
    return net

But now when I try to weight the data points:

'''
Create and train a neural net on the training data, given the actual labels
'''
def create_neural_net(training, labels, weights=None, T=10, silent=False):
    input_units = len(training[0])
    output_units = len(labels[0])
    n = len(training)
    net = FeedForwardNetwork()
    layer_in = SoftmaxLayer(input_units)
    layer_hidden = SigmoidLayer(1000)
    layer_hidden2 = SigmoidLayer(50)
    layer_out = LinearLayer(output_units)
    net.addInputModule(layer_in)
    net.addModule(layer_hidden)
    net.addModule(layer_hidden2)
    net.addOutputModule(layer_out)
    net.addConnection(FullConnection(layer_in, layer_hidden))
    net.addConnection(FullConnection(layer_hidden, layer_hidden2))
    net.addConnection(FullConnection(layer_hidden2, layer_out))
    net.sortModules()
    training_data = ImportanceDataSet(input_units, output_units)
    for i in xrange(n):
        # print len(training[i])  # prints 148
        # print len(labels[i])    # prints 13
        training_data.addSample(training[i], labels[i],
                                importance=(weights[i] if weights is not None else None))
    trainer = BackpropTrainer(net, training_data)
    for i in xrange(T):
        if not silent:
            print "Training %d" % (i + 1)
        error = trainer.train()
        if not silent:
            print net.activate(training[0]), labels[0]
        if not silent:
            print "Training iteration %d. Error: %f." % (i + 1, error)
    return net

I get the following error:

Traceback (most recent call last):
  File "clustering_experiment.py", line 281, in <module>
    total_model = get_model(training, training_labels, num_clusters=NUM_CLUSTERS, T=NUM_ITERS_NEURAL_NET)
  File "clustering_experiment.py", line 177, in get_model
    neural_nets.append(neural_net_plugin.create_neural_net(tra.tolist(), val.tolist(), T=T, silent=True))
  File "/home/neural_net_plugin.py", line 43, in create_neural_net
    error = trainer.train()
  File "/usr/local/lib/python2.7/dist-packages/PyBrain-0.3.1-py2.7.egg/pybrain/supervised/trainers/backprop.py", line 61, in train
    e, p = self._calcDerivs(seq)
  File "/usr/local/lib/python2.7/dist-packages/PyBrain-0.3.1-py2.7.egg/pybrain/supervised/trainers/backprop.py", line 92, in _calcDerivs
    outerr = target - self.module.outputbuffer[offset]
IndexError: index 162 is out of bounds for axis 0 with size 1
-
Fixes to Python3.x
opened by herodrigues 2Changes
All the changes I've made are Python 3 idioms that remain compatible with Python 2 (at least back to Python 2.7).
- Capturing the currently raised exception (it doesn't work in Python 2.5 and earlier)
- map function returns an iterator
- Tuple parameters removed
- import exceptions removed, since exceptions are now built in
- from itertools import izip removed; plain zip is used instead
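The listed changes are the standard 2-to-3 items; under Python 3 they look like this (wrap map/zip in list() wherever a concrete list is actually required):

```python
# map() and zip() return lazy iterators in Python 3,
# matching Python 2's itertools.imap/izip.
pairs = zip([1, 2], ["a", "b"])
print(list(pairs))                # [(1, 'a'), (2, 'b')]

squares = map(lambda x: x * x, [1, 2, 3])
print(list(squares))              # [1, 4, 9]

# Exceptions are captured with "as", which works on Python 2.6+ and 3.x.
try:
    raise ValueError("bad input")
except ValueError as err:
    print(err)                    # bad input
```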
TODO
- I didn't change the files that use the weave library. In fact, I don't know whether weave is still supported in the latest scipy versions; I couldn't find any recent references to it, just "old" news saying it was not supported yet, such as this and this. Maybe it's time to consider using Cython instead.
- RL-Glue imports are also unchanged because its current Python codec has no Py3 support yet. However, I changed the Python codec source for RL-Glue to run on both Py2 and Py3 (in fact, I only changed minor things such as the print function and exception statements). By the way, if you want to try it, I've uploaded it to my GitHub. Another thing to point out: no one is maintaining the RL-Glue code anymore.

I didn't write any tests; I just ran the examples in the PyBrain docs, and everything worked fine.
-
Add Randlov bicycle RL example.
opened by chrisdembia 2I have written part of the RL bicycle problem introduced by Randlov and Alstrom as an example in PyBrain. Hopefully you all would like to include it in PyBrain!
Here's their paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.3038&rep=rep1&type=pdf
I include some plotting, so you can view the learning.
Please let me know what improvements I should make.
-
cannot import name 'random' from 'scipy'
opened by noeldum 0I am using scipy 1.9.1 and I get the traceback below when using the buildNetwork function.
Traceback (most recent call last):
  File "/home/nono/Desktop/tmp/neural/./main.py", line 3, in <module>
    from pybrain.tools.shortcuts import buildNetwork
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/__init__.py", line 1, in <module>
    from pybrain.structure.__init__ import *
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/__init__.py", line 2, in <module>
    from pybrain.structure.modules.__init__ import *
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/modules/__init__.py", line 3, in <module>
    from pybrain.structure.modules.gaussianlayer import GaussianLayer
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/modules/gaussianlayer.py", line 3, in <module>
    from scipy import random
ImportError: cannot import name 'random' from 'scipy' (/usr/local/lib/python3.10/dist-packages/scipy-1.9.1-py3.10-linux-x86_64.egg/scipy/__init__.py)

This looks like an old reference to something that has since changed in scipy and was never updated in pybrain. Is pybrain still maintained? The last release is from 2015.
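For what it's worth, scipy.random was apparently a deprecated alias of numpy.random that recent scipy releases removed (the import fails on the reporter's scipy 1.9.1), so locally patching gaussianlayer.py to import from numpy is the usual workaround, assuming a current numpy:

```python
# Drop-in replacement for the removed `from scipy import random`:
from numpy import random

# gaussianlayer.py only needs basic sampling, which numpy.random provides.
noise = random.normal(0.0, 1.0, size=3)
print(noise.shape)        # (3,)
```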
-
library with this error
opened by Ickwarw 1This error appears when I use the pybrain library.
This is my code:

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer, BiasUnit
from pybrain.structure import FullConnection

rneural = FeedForwardNetwork()

CE = LinearLayer(4)
CO = SigmoidLayer(6)
CS = SigmoidLayer(1)
b1 = BiasUnit()
b2 = BiasUnit()

rneural.addModule(CE)
rneural.addModule(CO)
rneural.addModule(CS)
rneural.addModule(b1)
rneural.addModule(b2)

EO = FullConnection(CE, CO)
OS = FullConnection(CO, CS)
bO = FullConnection(b1, CO)
bS = FullConnection(b2, CS)

rneural.sortModule()
print(rneural)
when I run:
python3 rneural.py
Traceback (most recent call last):
  File "/home/warwick/Desktop/scriptsinpython/ai/rneural.py", line 1, in <module>
    from pybrain.structure import FeedForwardNetwork
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/__init__.py", line 1, in <module>
    from pybrain.structure.__init__ import *
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/__init__.py", line 2, in <module>
    from pybrain.structure.modules.__init__ import *
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/modules/__init__.py", line 2, in <module>
    from pybrain.structure.modules.gate import GateLayer, DoubleGateLayer, MultiplicationLayer, SwitchLayer
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/modules/gate.py", line 10, in <module>
    from pybrain.tools.functions import sigmoid, sigmoidPrime
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/tools/functions.py", line 4, in <module>
    from scipy.linalg import inv, det, svd, logm, expm2
ImportError: cannot import name 'expm2' from 'scipy.linalg' (/home/warwick/environments/my_env/lib/python3.10/site-packages/scipy/linalg/__init__.py)
I've tried several solutions; the only one I haven't tried is downgrading Python 3.10, and I don't think that's the right fix. Does anyone know how to solve this?
thanks
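The failing import, not Python 3.10, is the real problem: scipy.linalg.expm2 was deprecated and eventually removed from scipy, while expm remains the supported matrix exponential. Patching the import in pybrain/tools/functions.py accordingly is the usual workaround; a quick sanity check, assuming a current scipy:

```python
import numpy as np
from scipy.linalg import expm   # expm2 is gone; expm is the supported routine

A = np.zeros((2, 2))
print(expm(A))                   # exp(0) is the identity matrix
```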
-
docs: Fix a few typos
opened by timgates42 0There are small typos in:
- pybrain/rl/environments/flexcube/viewer.py
- pybrain/rl/environments/ode/tasks/ccrl.py
- pybrain/rl/environments/ode/tasks/johnnie.py
- pybrain/rl/environments/shipsteer/viewer.py
- pybrain/structure/modules/lstm.py
- pybrain/tests/runtests.py
- pybrain/tools/rlgluebridge.py
Fixes:
- Should read suggested rather than suggestet.
- Should read specific rather than spezific.
- Should read height rather than hight.
- Should read whether rather than wether.
- Should read method rather than methode.
Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md
-
Pybrain: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany' error
opened by ghost 0I'm working on speech recognition using a Raspberry Pi. While running the model-building code using pybrain features, I got the error: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany'. If anyone has any pointers to get me back on the right path, that would be very much appreciated.

def createRGBdataSet(inputSet, numOfSamples, numOfPoints):
    alldata = ClassificationDataSet(numOfPoints, 1, nb_classes=3)
    # Iterate through all 3 groups and add the samples with the appropriate class label
    for i in range(0, 3*numOfSamples):
        input = inputSet[i]
        if (i < numOfSamples):
            alldata.addSample(input, [0])
        elif (i >= numOfSamples and i < numOfSamples*2):
            alldata.addSample(input, [1])
        else:
            alldata.addSample(input, [2])
    return alldata

# Split the dataset into 75% training and 25% test data.
def splitData(alldata):
    tstdata, trndata = alldata.splitWithProportion( 0.25 )
    trndata._convertToOneOfMany()
    tstdata._convertToOneOfMany()
    return trndata, tstdata
-
I am having a problem with my code, please help!
opened by ghost 0I'm working on speech recognition using a Raspberry Pi. While running the model-building code using pybrain features, I got the error: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany'. If anyone has any pointers to get me back on the right path, that would be very much appreciated.

def createRGBdataSet(inputSet, numOfSamples, numOfPoints):
    alldata = ClassificationDataSet(numOfPoints, 1, nb_classes=3)
    # Iterate through all 3 groups and add the samples with the appropriate class label
    for i in range(0, 3*numOfSamples):
        input = inputSet[i]
        if (i < numOfSamples):
            alldata.addSample(input, [0])
        elif (i >= numOfSamples and i < numOfSamples*2):
            alldata.addSample(input, [1])
        else:
            alldata.addSample(input, [2])
    return alldata

# Split the dataset into 75% training and 25% test data.
def splitData(alldata):
    tstdata, trndata = alldata.splitWithProportion( 0.25 )
    trndata._convertToOneOfMany()
    tstdata._convertToOneOfMany()
    return trndata, tstdata
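The missing attribute is a symptom of the splitWithProportion downcast reported in an earlier issue: the split halves come back as plain SupervisedDataSet objects, which lack _convertToOneOfMany. One workaround is to one-hot encode the targets yourself; a minimal numpy sketch of what that conversion does (convert_to_one_of_many is a hypothetical helper, not PyBrain API):

```python
import numpy as np

def convert_to_one_of_many(labels, nb_classes):
    """One-hot encode integer class labels, mirroring what
    _convertToOneOfMany does to a ClassificationDataSet's targets."""
    out = np.zeros((len(labels), nb_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(convert_to_one_of_many([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```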
Releases(0.3.3)
-
0.3.3(Jan 9, 2015)
Free like Freedom