module Mlp: sig .. end
Non-object-oriented multi-layer perceptrons (MLPs).
type t
The type of MLPs.
type transfer =
  | Sigmoid
  | Linear
  | Zero
The type of transfer functions.
val weight_init : float Pervasives.ref
Initial weights are taken from the range ±weight_init.
val make_net : int -> int -> int list -> transfer list -> t
Create a net of the given size with the given layout of transfer functions.
val make_classifier : int -> int -> int list -> t
Create a net that can be used as a binary classifier with outputs of 1 or 0. All transfer functions are Sigmoid.
val make_approximator : int -> int -> int list -> t
Create a net that can be used as a function approximator. All transfer functions except the last are Sigmoid; the last one is Linear.
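As a sketch of how the constructors fit together (assuming, from the signatures, that the two int arguments are the input and output sizes, the int list gives the hidden-layer sizes, and the transfer list supplies one transfer function per weight layer):

```ocaml
(* A 2-input, 1-output net with one hidden layer of 3 units,
   so two weight layers and hence two transfer functions. *)
let net = Mlp.make_net 2 1 [3] [Mlp.Sigmoid; Mlp.Sigmoid]

(* The convenience constructors need only the sizes. *)
let classifier   = Mlp.make_classifier 2 1 [3]
let approximator = Mlp.make_approximator 1 1 [5]
```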
val copy : t -> t
Copy a given MLP.
val get_in_size : t -> int
Get the size of the input space.
val get_out_size : t -> int
Get the size of the output space.
val get_depth : t -> int
Get the number of layers.
val conc : t -> t -> t
Concatenate two MLPs into a new one, leaving the originals intact.
val conc_in_place : t -> t -> t
Concatenate two MLPs in place. Note: the new net shares state with the old ones, so changes to any of them (e.g. through training) are visible in the others.
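The difference between the two variants is the sharing behaviour (a sketch; the exact combination semantics of concatenation are not specified by this interface):

```ocaml
let a = Mlp.make_classifier 2 1 [3]
let b = Mlp.make_classifier 2 1 [3]

(* conc copies: a and b remain independent of the result. *)
let joined = Mlp.conc a b

(* conc_in_place shares state: training [shared] also
   changes a and b, and vice versa. *)
let shared = Mlp.conc_in_place a b
```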
val evaluate : t -> float array -> float array
Evaluate an MLP on the given input.
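For instance (a sketch, using a classifier built with make_classifier):

```ocaml
let net = Mlp.make_classifier 2 1 [3]

(* The input array must have get_in_size elements;
   the result has get_out_size elements. *)
let output = Mlp.evaluate net [| 0.3; 0.7 |]
```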
val train : float -> ?decay:float -> unit -> t -> float array * float array -> t
Train an MLP and return a new one. Because the net must be copied, this may be slow for large nets.
val train_in_place : float -> ?decay:float -> unit -> t -> float array * float array -> unit
Train an MLP in place. This mutates the MLP you pass in, but is faster.
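A sketch of both training variants, assuming the leading float is a learning rate (the interface does not name it); the unit argument terminates the optional ?decay parameter so it can be omitted:

```ocaml
let net = Mlp.make_approximator 1 1 [5]

(* A training sample is an (input, target) pair. *)
let sample = ([| 0.5 |], [| 0.25 |])

(* Functional training: returns an updated copy. *)
let net' = Mlp.train 0.1 ~decay:0.99 () net sample

(* In-place training: mutates net, faster for large nets. *)
let () = Mlp.train_in_place 0.1 () net sample
```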
val squared_error : t -> float array * float array -> float
A simple and widely used squared-error function on MLPs.
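squared_error takes the same (input, target) pair as the training functions, which makes it convenient for monitoring progress (a sketch; with a sensible learning rate the error would typically decrease over repeated steps):

```ocaml
let net = Mlp.make_approximator 1 1 [5]
let sample = ([| 0.5 |], [| 0.25 |])

let e_before = Mlp.squared_error net sample
let () = Mlp.train_in_place 0.1 () net sample
let e_after = Mlp.squared_error net sample
```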