Continuing to define the network's neurons
net.inputs{i}.range
This property defines the range of each element of the ith network input.
It can be set to any Ri x 2 matrix, where Ri is the number of elements in the input (net.inputs{i}.size), and each element in column 1 is less than the element next to it in column 2.
Each jth row defines the minimum and maximum values of the jth input element, in that order:
net.inputs{i}.range(j,:)
Uses. Some initialization functions use input ranges to find appropriate initial values for input weight matrices.
Side Effects. Whenever the number of rows in this property is altered, the input size, processedSize, and processedRange change to remain consistent. The sizes of any weights coming from this input and the dimensions of the weight matrices also change.
>> net.inputs{1}.range=[0 1;0 1]
net =
Neural Network object:
architecture:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 0 (read-only)
subobject structures:
inputs: {1x1 cell} of inputs
layers: {2x1 cell} of layers
outputs: {1x2 cell} containing 1 output
biases: {2x1 cell} containing 2 biases
inputWeights: {2x1 cell} containing 1 input weight
layerWeights: {2x2 cell} containing 1 layer weight
functions:
adaptFcn: (none)
divideFcn: (none)
gradientFcn: (none)
initFcn: (none)
performFcn: (none)
plotFcns: {}
trainFcn: (none)
parameters:
adaptParam: (none)
divideParam: (none)
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: (none)
weight and bias values:
IW: {2x1 cell} containing 1 input weight matrix
LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors
other:
name: ''
userdata: (user information)
>>
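A quick check of the side effects described above (my addition, not part of the original transcript), assuming the range command just shown has been run:

% The two-row range matrix forces the input to have two elements.
net.inputs{1}.size              % should now report 2
net.inputs{1}.processedRange    % mirrors the range unless processing functions alter it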
======
net.layers{i}.size
This property defines the number of neurons in the ith layer. It can be set to 0 or a positive integer.
Side Effects. Whenever this property is altered, the sizes of any input weights going to the layer (net.inputWeights{i,:}.size), any layer weights going to the layer (net.layerWeights{i,:}.size) or coming from the layer (net.layerWeights{:,i}.size), and the layer's bias (net.biases{i}.size), change.
The dimensions of the corresponding weight matrices (net.IW{i,:}, net.LW{i,:}, net.LW{:,i}), and biases (net.b{i}) also change.
Changing this property also changes the size of the layer's output (net.outputs{i}.size) and target (net.targets{i}.size) if they exist.
Finally, when this property is altered, the dimensions of the layer's neurons (net.layers{i}.dimensions) are set to the same value. (This results in a one-dimensional arrangement of neurons. If another arrangement is required, set the dimensions property directly instead of using size.)
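To illustrate that last point (an example of mine, not part of this walkthrough; the 2-by-3 grid is made up), the dimensions property can be set directly when a multi-dimensional neuron arrangement is needed:

% Arrange a layer's neurons in a 2-by-3 grid instead of a single row.
% Setting dimensions directly also updates the layer's size to their product.
net.layers{1}.dimensions = [2 3];
net.layers{1}.size               % should now report 6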
=======
>> net.layers{1}.size=2
net =
Neural Network object:
architecture:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 0 (read-only)
subobject structures:
inputs: {1x1 cell} of inputs
layers: {2x1 cell} of layers
outputs: {1x2 cell} containing 1 output
biases: {2x1 cell} containing 2 biases
inputWeights: {2x1 cell} containing 1 input weight
layerWeights: {2x2 cell} containing 1 layer weight
functions:
adaptFcn: (none)
divideFcn: (none)
gradientFcn: (none)
initFcn: (none)
performFcn: (none)
plotFcns: {}
trainFcn: (none)
parameters:
adaptParam: (none)
divideParam: (none)
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: (none)
weight and bias values:
IW: {2x1 cell} containing 1 input weight matrix
LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors
other:
name: ''
userdata: (user information)
>>
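Another quick verification (my addition), following the side effects listed for layer size and assuming the commands above have been run:

% Layer 1 now has 2 neurons and the input has 2 elements,
% so the input weight matrix and bias vector should have matching shapes.
size(net.IW{1,1})   % expected: 2 2
size(net.b{1})      % expected: 2 1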
=====
net.layers{i}.initFcn
This property defines which of the layer initialization functions are used to initialize the ith layer, if the network initialization function (net.initFcn) is initlay. If the network initialization is set to initlay, then the function indicated by this property is used to initialize the layer's weights and biases.
For a list of functions, type
help nninit
=====
>> net.layers{1}.initFcn='initnw'
net =
Neural Network object:
architecture:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 0 (read-only)
subobject structures:
inputs: {1x1 cell} of inputs
layers: {2x1 cell} of layers
outputs: {1x2 cell} containing 1 output
biases: {2x1 cell} containing 2 biases
inputWeights: {2x1 cell} containing 1 input weight
layerWeights: {2x2 cell} containing 1 layer weight
functions:
adaptFcn: (none)
divideFcn: (none)
gradientFcn: (none)
initFcn: (none)
performFcn: (none)
plotFcns: {}
trainFcn: (none)
parameters:
adaptParam: (none)
divideParam: (none)
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: (none)
weight and bias values:
IW: {2x1 cell} containing 1 input weight matrix
LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors
other:
name: ''
userdata: (user information)
>>
>> net.layers{2}.size=1
>> net.layers{2}.initFcn='initnw'
>> net.layers{2}.transferFcn='hardlim'
net =
Neural Network object:
architecture:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 0 (read-only)
subobject structures:
inputs: {1x1 cell} of inputs
layers: {2x1 cell} of layers
outputs: {1x2 cell} containing 1 output
biases: {2x1 cell} containing 2 biases
inputWeights: {2x1 cell} containing 1 input weight
layerWeights: {2x2 cell} containing 1 layer weight
functions:
adaptFcn: (none)
divideFcn: (none)
gradientFcn: (none)
initFcn: (none)
performFcn: (none)
plotFcns: {}
trainFcn: (none)
parameters:
adaptParam: (none)
divideParam: (none)
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: (none)
weight and bias values:
IW: {2x1 cell} containing 1 input weight matrix
LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors
other:
name: ''
userdata: (user information)
>>
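As the initFcn description above notes, the layer initialization functions only take effect when the network-level initFcn is initlay. A minimal sketch (my addition; the original series configures this elsewhere):

% Use layer-by-layer initialization, then initialize the network.
net.initFcn = 'initlay';
net = init(net);     % runs initnw for each layer as configured above
net.IW{1,1}          % inspect the freshly initialized input weights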
=
net.layers{i}.transferFcn
This property defines which of the transfer functions is used to calculate the ith layer's output, given the layer's net input, during simulation and training.
For a list of functions, type help nntransfer.
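For reference (an illustration of mine, not from the post), hardlim, which the second layer above uses, simply thresholds the net input at zero:

% hardlim returns 1 where the net input is >= 0 and 0 otherwise.
hardlim([-0.5 0 0.3])   % expected: 0 1 1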
=====
net.adaptFcn
This property defines the function to be used when the network adapts. It can be set to the name of any network adapt function. The network adapt function is used to perform adaption whenever adapt is called.
[net,Y,E,Pf,Af] = adapt(NET,P,T,Pi,Ai)
For a list of functions, type help nntrain.
Side Effects. Whenever this property is altered, the network's adaption parameters (net.adaptParam) are set to contain the parameters and default values of the new function.
===
>> net.adaptFcn='trains'
net =
Neural Network object:
architecture:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 0 (read-only)
subobject structures:
inputs: {1x1 cell} of inputs
layers: {2x1 cell} of layers
outputs: {1x2 cell} containing 1 output
biases: {2x1 cell} containing 2 biases
inputWeights: {2x1 cell} containing 1 input weight
layerWeights: {2x2 cell} containing 1 layer weight
functions:
adaptFcn: 'trains'
divideFcn: (none)
gradientFcn: (none)
initFcn: (none)
performFcn: (none)
plotFcns: {}
trainFcn: (none)
parameters:
adaptParam: .passes
divideParam: (none)
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: (none)
weight and bias values:
IW: {2x1 cell} containing 1 input weight matrix
LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors
other:
name: ''
userdata: (user information)
>>
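A hypothetical adapt call on the XOR patterns this series targets (P, T and the pass count are illustrative, and the excerpt does not show the weight/bias learning functions that trains relies on being set):

% XOR training set: two-element inputs, binary targets.
P = [0 0 1 1; 0 1 0 1];
T = [0 1 1 0];
net.adaptParam.passes = 10;     % 'trains' exposes a .passes parameter (see above)
[net,Y,E] = adapt(net,P,T);     % Pi, Ai are optional for a static network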
==========
net.performFcn
This property defines the function used to measure the network's performance. You can set it to the name of any of the performance functions. The performance function is used to calculate network performance during training whenever train is called.
[net,tr] = train(NET,P,T,Pi,Ai)
For a list of functions, type
help nnperformance
Side Effects. Whenever this property is altered, the network's performance parameters (net.performParam) are set to contain the parameters and default values of the new function.
==========
>> net.performFcn='mse'
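A sketch of what 'mse' measures (illustrative; P and T are the hypothetical XOR set above, and the calling form varies between toolbox versions):

% Mean squared error between targets and simulated outputs.
Y = sim(net,P);
perf = mse(T - Y)    % the quantity train/adapt report as performance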
======
>> net.trainFcn='trainlm'
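A hypothetical closing step, not shown in this excerpt: with trainFcn set, the network can be trained and simulated on the XOR set. Since trainlm is gradient-based, the hardlim layer chosen above is usually swapped for a differentiable transfer function (for example logsig) before train is called.

% Illustrative end-to-end call with the hypothetical XOR patterns P and T.
net.trainParam.epochs = 100;    % trainlm fills trainParam with fields such as epochs
[net,tr] = train(net,P,T);
Y = sim(net,P)                  % compare against T = [0 1 1 0]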