Social cognitive optimization

Social cognitive optimization (SCO) is a population-based metaheuristic optimization algorithm developed in 2002.[1] The algorithm is based on social cognitive theory; its search behavior (ergodicity) arises from the interplay of individual learning by a set of agents, each with its own memory, and their social learning from the knowledge points stored in a social sharing library. It has been used for solving continuous optimization,[2][3] integer programming,[4] and combinatorial optimization problems. It has been incorporated into the NLPSolver extension of Calc in Apache OpenOffice.

Algorithm

Let f(x) be a global optimization problem, where x is a state in the problem space S. In SCO, each state is called a knowledge point, and the function f is the goodness function.

In SCO, there is a population of N_c cognitive agents searching in parallel, together with a social sharing library. Each agent holds a private memory containing one knowledge point, and the social sharing library contains a set of N_L knowledge points. The algorithm runs for T iterative learning cycles. The system runs as a Markov chain process: its behavior in the t-th cycle depends only on its status in the (t − 1)-th cycle. The process flow is as follows (a minimal code sketch is given after the list):

  • [1. Initialization]: Initialize the private knowledge point x_i in the memory of each agent i, and all knowledge points in the social sharing library X, normally at random in the problem space S.
  • [2. Learning cycle]: At each cycle t (t = 1, …, T):
    • [2.1. Observational learning] For each agent i (i = 1, …, N_c):
      • [2.1.1. Model selection]: Find a high-quality model point x_M in X(t), normally realized by tournament selection, which returns the best knowledge point among τ_B randomly selected points.
      • [2.1.2. Quality evaluation]: Compare the private knowledge point x_i(t) and the model point x_M, and return the one with higher quality as the base point x_Base and the other as the reference point x_Ref.
      • [2.1.3. Learning]: Combine x_Base and x_Ref to generate a new knowledge point x_i(t+1). Normally x_i(t+1) should lie around x_Base, at a distance from x_Base related to the distance between x_Ref and x_Base, and a boundary-handling mechanism should be incorporated here to ensure that x_i(t+1) ∈ S.
      • [2.1.4. Knowledge sharing]: Share a knowledge point, normally x_i(t+1), with the social sharing library X.
      • [2.1.5. Individual update]: Update the private knowledge of agent i, normally by replacing x_i(t) with x_i(t+1); Monte Carlo-style acceptance rules may also be considered.
    • [2.2. Library maintenance]: The social sharing library uses all knowledge points submitted by the agents to update X(t) into X(t+1). A simple way is one-by-one tournament selection: for each knowledge point submitted by an agent, replace the worst of τ_W points randomly selected from X(t).
  • [3. Termination]: Return the best knowledge point found by the agents.
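The following is a minimal Python sketch of the loop above, assuming a box-constrained, real-valued problem space and a "lower is better" goodness function. The sampling rule used in the learning step (2.1.3), the boundary handling by clamping, and all default parameter values are illustrative choices, not prescribed by the original papers.

```python
import random

def sco(goodness, lower, upper, n_agents=10, n_lib=50, n_cycles=1000,
        tau_b=2, tau_w=2, seed=None):
    """Minimize `goodness` over the box defined by the bound lists `lower`/`upper`."""
    rng = random.Random(seed)
    dim = len(lower)

    def random_point():
        return [rng.uniform(lower[d], upper[d]) for d in range(dim)]

    def clip(x):  # simple boundary handling: clamp the point back into the box
        return [min(max(x[d], lower[d]), upper[d]) for d in range(dim)]

    # 1. Initialization: private points of the agents and the sharing library
    agents = [random_point() for _ in range(n_agents)]
    library = [random_point() for _ in range(n_lib)]
    best = min(agents + library, key=goodness)

    for _ in range(n_cycles):                       # 2. learning cycles
        submitted = []
        for i in range(n_agents):                   # 2.1 observational learning
            # 2.1.1 model selection: tournament over tau_b library points
            model = min(rng.sample(library, tau_b), key=goodness)
            # 2.1.2 quality evaluation: better point is the base, the other the reference
            if goodness(agents[i]) <= goodness(model):
                base, ref = agents[i], model
            else:
                base, ref = model, agents[i]
            # 2.1.3 learning: sample a new point around base, scaled by its distance to ref
            new = clip([base[d] + rng.uniform(-1.0, 1.0) * (base[d] - ref[d])
                        for d in range(dim)])
            submitted.append(new)                   # 2.1.4 knowledge sharing
            agents[i] = new                         # 2.1.5 individual update
            if goodness(new) < goodness(best):
                best = new
        # 2.2 library maintenance: each submitted point replaces the worst of
        # tau_w randomly chosen library points
        for point in submitted:
            idxs = rng.sample(range(n_lib), tau_w)
            worst = max(idxs, key=lambda j: goodness(library[j]))
            library[worst] = point
    return best                                     # 3. termination

# Example usage: minimizing the sphere function in 5 dimensions (illustrative only).
def sphere(x):
    return sum(v * v for v in x)

result = sco(sphere, lower=[-5.0] * 5, upper=[5.0] * 5, seed=1)
print(result, sphere(result))
```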

SCO has three main parameters: the number of agents N_c, the size of the social sharing library N_L, and the number of learning cycles T. Including the initialization process, the total number of knowledge points generated is N_L + N_c(T + 1), since initialization creates N_L + N_c points and each of the T cycles creates N_c new points; this total depends only weakly on N_L when T is large.
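As a purely illustrative example (the parameter values here are arbitrary): with N_c = 10 agents, a library of N_L = 100 points, and T = 500 cycles, a run generates 100 + 10 × 501 = 5110 knowledge points, which is roughly N_c × T evaluations of the goodness function.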

Compared to traditional swarm algorithms, e.g. particle swarm optimization, SCO can achieve high-quality solutions even when N_c is small, including the case N_c = 1. Nevertheless, small values of N_c and N_L may lead to premature convergence. Some variants[5] have been proposed to guarantee global convergence. SCO can also be combined with other optimizers into hybrid methods; for example, SCO was hybridized with differential evolution to obtain better results than either individual algorithm on a common set of benchmark problems.[6]

References

  1. ^ Xie, Xiao-Feng; Zhang, Wen-Jun; Yang, Zhi-Lian (2002). Social cognitive optimization for nonlinear programming problems. International Conference on Machine Learning and Cybernetics (ICMLC), Beijing, China: 779-783.
  2. ^ Xie, Xiao-Feng; Zhang, Wen-Jun (2004). Solving engineering design problems by social cognitive optimization. Genetic and Evolutionary Computation Conference (GECCO), Seattle, WA, USA: 261-262.
  3. ^ Xu, Gang-Gang; Han, Luo-Cheng; Yu, Ming-Long; Zhang, Ai-Lan (2011). Reactive power optimization based on improved social cognitive optimization algorithm. International Conference on Mechatronic Science, Electric Engineering and Computer (MEC), Jilin, China: 97-100.
  4. ^ Fan, Caixia (2010). Solving integer programming based on maximum entropy social cognitive optimization algorithm. International Conference on Information Technology and Scientific Management (ICITSM), Tianjin, China: 795-798.
  5. ^ Sun, Jia-ze; Wang, Shu-yan; Chen, Hao (2014). A guaranteed global convergence social cognitive optimizer. Mathematical Problems in Engineering: Art. No. 534162.
  6. ^ Xie, Xiao-Feng; Liu, J.; Wang, Zun-Jing (2014). "A cooperative group optimization system". Soft Computing. 18 (3): 469–495. arXiv:1808.01342. doi:10.1007/s00500-013-1069-8. S2CID 5393223.