
    2018-11-02

    An approximate solution is sought in the form of the output of an artificial neural network of the given architecture, whose weights are determined by minimizing the error functional; in our case, F(x, y, α) = –α(y – cos x). The test points are chosen at random, uniformly distributed over the examined intervals of variation of the variable x and the parameter α; their choice is repeated after every few (3–5) iterations of the optimization algorithm. We shall call such a new random choice of test points at some step test point regeneration. The quality of the obtained solution is assessed against the exact analytical solution of Eq. (2) with the given initial condition.
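    As an illustration, the uniform random choice of test points over the intervals of x and α could be sketched as follows (a minimal sketch; the function names, intervals, and point count are illustrative assumptions, not taken from the paper):

```python
import math
import random

def rhs(x, y, alpha):
    """Right-hand side of the equation: F(x, y, alpha) = -alpha * (y - cos x)."""
    return -alpha * (y - math.cos(x))

def sample_test_points(m, x_range, alpha_range):
    """Draw m test points (x, alpha) uniformly over the examined intervals."""
    return [(random.uniform(*x_range), random.uniform(*alpha_range))
            for _ in range(m)]

# The choice is repeated (test point regeneration) after every
# 3-5 iterations of the optimization algorithm.
points = sample_test_points(20, (0.0, 1.0), (1.0, 50.0))
```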
    In the present work, we have examined two types of models corresponding to different basis functions, with a varying number of neurons in the network. The first case involved choosing universal sigmoids as basis functions, and the second, asymmetric Gaussians known to satisfy the initial condition. The error functional was optimized by an algorithm combining RProp and the cloud method [2]; the test points were randomly regenerated every three steps, and the cloud consisted of three particles.
    Additionally, we studied model construction algorithms that use complementary data on the sought-for solution, and estimated the effect of such refinement for the different types of basis functions and different numbers of neurons in the network. The solutions found by the explicit Euler method for the values of the parameter α equal to 5 and 50 were treated as such data. For α = 5, the equation is no longer stiff, and the Euler method gives a sufficiently accurate solution (see Fig. 1, b); the ‘bad’ solution for α = 50 allows examining how the model reacts to inaccurate data. New information is introduced into the model by adding a complementary summand to the minimized functional, where f() is the pointwise Euler solution; the weight δ1 may be varied so that the accuracy of the acquired data or any special conditions are taken into account. The data for α = 50 was used in the first modification of the model, while the data for α = 5 was additionally taken into account in the second modification.
    Some results of the simulation experiments on the error functional for the two types of neural networks are listed in Table 1. We examined networks with a varying number of neurons, with the number of iterations equal to 200. Evidently, with no complementary data, the option with the basis functions satisfying the initial condition, i.e. the Gaussians, proved preferable.
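    For reference, the pointwise Euler data entering the complementary summand could be produced by a solver like the one below (a hedged sketch: the interval, step count, and initial value y(0) = 0 are assumptions made for illustration):

```python
import math

def euler_solve(alpha, x_end=1.0, n_steps=100, y0=0.0):
    """Explicit Euler for y' = -alpha * (y - cos x); returns the grid and values."""
    h = x_end / n_steps
    xs, ys = [0.0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        ys.append(y + h * (-alpha * (y - math.cos(x))))
        xs.append(x + h)
    return xs, ys

# Explicit Euler is stable here only while h < 2 / alpha, so for alpha = 5
# a modest step is adequate, while for alpha = 50 a coarse grid produces
# the kind of 'bad' data used to probe the model's reaction to inaccuracy.
xs, ys = euler_solve(5.0)
```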
    Introducing the complementary data increases the accuracy only when universal basis functions, i.e. sigmoids, are used; conversely, for the network with the functions adjusted to the initial condition, the error generally increases. The effect of using the data is particularly pronounced for networks with a large number of neurons (n = 50). The large error in the column δ1 = 0 (i.e., with no complementary data) for the perceptron network is explained by the functional being characteristically sensitive to a steep rise in the solution; in this case, increasing the number of neurons does not change the situation. With no or little data, a large number of neurons merely slows down the training and impairs the result. Importantly, the results improve with a sufficient increase in the number of iterations.
    Modification of a neural network model using special test point regeneration

    Let us continue to refine the model with the complementary data using a new test point regeneration procedure, namely, selecting the test points at each iteration by a specific rule. Let us introduce a parameter taking the values 0, 0.3, 0.5, 0.7, 1.0 (and, generally speaking, any value between 0 and 1) and reflecting the proportion of points kept fixed from one iteration to another. A value of 0 means complete regeneration, i.e., all points are randomly selected anew (evenly distributed over the examined interval) before each iteration; a value of 1 means that the points are fixed at the first iteration and do not change. For intermediate values of the parameter, the following rule is used: the corresponding share of the total m test points with the highest values of the error functional is kept fixed, while the rest are regenerated at random. In all cases, at the first iteration the points are selected randomly and evenly distributed over the examined interval.
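    The partial-regeneration rule can be sketched as follows (an illustrative sketch: `keep_fraction` plays the role of the proportion parameter, and `errors[i]` stands for the error-functional contribution of point i; all names are assumptions):

```python
import random

def regenerate(points, errors, keep_fraction, x_range):
    """Fix the keep_fraction * m points with the largest errors; redraw the rest uniformly."""
    m = len(points)
    n_keep = int(keep_fraction * m)
    # Sort point indices by descending error and fix the worst n_keep points.
    order = sorted(range(m), key=lambda i: errors[i], reverse=True)
    kept = [points[i] for i in order[:n_keep]]
    fresh = [random.uniform(*x_range) for _ in range(m - n_keep)]
    return kept + fresh

# keep_fraction = 0 -> full regeneration; keep_fraction = 1 -> points stay fixed.
```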