Showing 6 results for Dynamic Programming
Mohammad Modarres, Ehsan Bolandifar, Volume 1, Issue 1 (5-2008)
Abstract
We extend the concept of dynamic pricing by integrating it with an "overselling with opportunistic cancellation" option within the framework of a dynamic policy. Under this strategy, two prices are offered to customers in each time period to sell a stock of a perishable product (or capacity). Customers are categorized as high-paying and low-paying. The seller deliberately oversells its capacity when high-paying customers show up, even if the capacity is already fully booked by low-paying customers. In that case, the sale to some low-paying customers is canceled, although appropriate compensation must be provided. A dynamic programming approach is applied to formulate and solve this problem. We develop two models, for continuous and periodic pricing, depending on the frequency of price changes. The advantage of this system over the dynamic pricing model is investigated through numerical examples. We also study some structural properties of the optimal policies.
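A minimal sketch of the kind of backward-induction recursion such a model suggests is given below. It captures only one simplified ingredient, capacity control with overselling at fixed class prices, rather than the paper's full two-price dynamic-pricing policy; all parameter names and values are illustrative assumptions.

from functools import lru_cache

# Illustrative parameters (assumptions, not the authors' data)
CAPACITY = 5               # units of perishable capacity
T = 10                     # number of selling periods
P_HIGH, P_LOW = 0.3, 0.5   # per-period arrival probabilities of each class
R_HIGH, R_LOW = 100, 60    # revenue from each class
PENALTY = 70               # compensation for cancelling a low-paying sale

@lru_cache(maxsize=None)
def value(t, n_low, n_high):
    """Expected future revenue with t periods left and
    n_low / n_high units already sold to each class."""
    if t == 0:
        return 0.0
    sold = n_low + n_high
    cont = value(t - 1, n_low, n_high)            # no arrival / reject
    # Low-paying arrival: accept only if spare capacity remains
    v_low = cont
    if sold < CAPACITY:
        v_low = max(cont, R_LOW + value(t - 1, n_low + 1, n_high))
    # High-paying arrival: accept with spare capacity, or oversell by
    # cancelling a low-paying sale and paying the compensation
    v_high = cont
    if sold < CAPACITY:
        v_high = max(cont, R_HIGH + value(t - 1, n_low, n_high + 1))
    elif n_low > 0:
        v_high = max(cont, R_HIGH - PENALTY + value(t - 1, n_low - 1, n_high + 1))
    return (P_HIGH * v_high + P_LOW * v_low
            + (1 - P_HIGH - P_LOW) * cont)

print(value(T, 0, 0))   # expected revenue of the optimal policy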
Mohammadi Limaei, Lohmander, Obersteiner, Volume 2, Issue 1 (4-2010)
Abstract
The optimal harvesting policy is calculated as a function of the entering stock, the price state, the harvesting cost, and the rate of interest in the capital market. In order to determine the optimal harvest schedule, the growth function and the stumpage price process are estimated for Swedish mixed-species forests. The stumpage price is assumed to follow a stochastic Markov process. A stochastic dynamic programming technique and traditional deterministic methods are used to obtain the optimal decisions. The expected present value of all future profits is maximized. The results of adaptive optimization are compared with those obtained by the traditional deterministic approach. The results show a significant increase in the expected economic value obtained via optimal adaptive decisions.
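The following is a small sketch of a stochastic dynamic program of this general shape, with the standing stock and a Markov stumpage price as the state; the growth function, price grid, transition matrix, and cost figures are illustrative assumptions, not the estimated Swedish data.

import numpy as np

# Illustrative data (assumptions, not the estimated forest model)
stocks = np.arange(0, 101, 10)          # standing stock levels (m3/ha)
prices = np.array([20.0, 30.0, 40.0])   # stumpage price states
P = np.array([[0.6, 0.3, 0.1],          # Markov price transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
cost = 5.0                              # harvesting cost per m3
beta = 1.0 / 1.03                       # discount factor (3% interest)

def growth(s):
    """Simple growth function: stand grows by 8 m3/ha per period."""
    return min(s + 8, stocks[-1])

def sdp(n_periods=50):
    V = np.zeros((len(stocks), len(prices)))
    for _ in range(n_periods):              # backward induction
        V_new = np.zeros_like(V)
        for i, s in enumerate(stocks):
            for j, p in enumerate(prices):
                best = -np.inf
                for h in stocks[stocks <= s]:       # feasible harvest levels
                    s_next = growth(s - h)
                    k = np.argmin(np.abs(stocks - s_next))
                    ev = P[j] @ V[k]                # expected future value
                    best = max(best, (p - cost) * h + beta * ev)
                V_new[i, j] = best
        V = V_new
    return V

print(sdp()[5, 1])    # expected present value at mid stock, mid price state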
Lalwani, Kumar, Spedicato, Gupta, Volume 3, Issue 1 (4-2012)
Abstract
We present an application of ABS algorithms to multiple sequence alignment (MSA). The Markov decision process (MDP) based model leads to a linear programming problem (LPP) whose solution is linked to a suggested alignment. The important features of our work include the ability to align multiple sequences simultaneously and the absence of a limit on sequence length. Our goal is to avoid the excessive computing time required by dynamic programming based algorithms for the alignment of a large number of sequences. To demonstrate the integration of the ABS approach with complex mathematical frameworks, we apply the ABS implicit LX algorithm to solve the LPP constructed with the assistance of the MDP. Applying an MDP to MSA is a pragmatic approach and leaves scope for future work. Programming is done in the MATLAB environment.
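As an illustration of the general route from an MDP to a linear program, the sketch below builds the standard value-function LP of a small MDP and solves it with a general-purpose solver. It is a stand-in only: the MSA-specific construction and the ABS implicit LX algorithm are not reproduced, and it is written in Python rather than MATLAB.

import numpy as np
from scipy.optimize import linprog

# LP formulation of a small MDP:
#   minimize  sum_s V(s)
#   subject to V(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) V(s')  for all s, a
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, :]
r = rng.uniform(0, 1, size=(n_states, n_actions))                 # rewards

A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        # Rearranged as  -V(s) + gamma * P(s,a) . V  <=  -r(s,a)
        A_ub.append(-np.eye(n_states)[s] + gamma * P[s, a])
        b_ub.append(-r[s, a])

res = linprog(c=np.ones(n_states), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(None, None)] * n_states, method="highs")
print("optimal state values:", res.x)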
Samimi, Aghaie, Shahriari, Volume 3, Issue 2 (9-2012)
Abstract
We deal with the relationship termination problem in the context of individual-level customer relationship management (CRM) and use a Markov decision process to determine the most appropriate occasion for terminating the relationship with a seemingly unprofitable customer. As a particular case, the beta-geometric/beta-binomial model is considered as the basis for describing customer behavior, and it is explained how to compute customer lifetime value when one needs to take account of the firm's choice of whether to continue or terminate the relationship with unprofitable customers. Through numerical examples generated by simulation, it is shown how a stochastic dynamic programming approach can be adopted to obtain a more precise estimate of customer lifetime value as a key criterion for resource allocation in CRM.
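A minimal sketch of the continue-or-terminate decision underlying such a customer lifetime value computation is given below; the activity states, margins, retention cost, and transition matrix are illustrative assumptions, and the beta-geometric/beta-binomial behaviour model itself is not reproduced.

import numpy as np

# Illustrative customer "activity level" states: low, medium, high
margins = np.array([-5.0, 2.0, 15.0])    # expected per-period margin by state
retention_cost = 3.0                     # cost of maintaining the relationship
P = np.array([[0.7, 0.2, 0.1],           # activity-state transition matrix
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])
gamma = 0.95                             # discount factor

# Value iteration: terminating the relationship yields 0 forever
V = np.zeros(3)
for _ in range(1000):
    V = np.maximum(0.0, margins - retention_cost + gamma * P @ V)

continue_val = margins - retention_cost + gamma * P @ V
policy = np.where(continue_val > 0, "continue", "terminate")
clv = np.maximum(0.0, continue_val)
print(dict(zip(["low", "medium", "high"], zip(np.round(clv, 2), policy))))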
Dr Yahia Zare Mehrjerdi, Mitra Moubed, Volume 6, Issue 1 (3-2015)
Abstract
This paper proposes a robust model for optimizing collaborative reverse supply chains. The primary idea is to develop a collaborative framework that can achieve the best solutions in an uncertain environment. First, we model the exact problem as a mixed integer nonlinear program. To account for uncertainty, robust optimization is employed, which searches for an optimal solution while keeping nearly all possible deviations in mind. To allow the decision maker to vary the protection level, we use the "budget of uncertainty" approach. To solve this NP-hard problem, we suggest a hybrid heuristic algorithm combining dynamic programming, ant colony optimization, and tabu search. To confirm the performance of the algorithm, two validation tests are performed: first by comparing with previously solved problems, and then by solving a sample problem with more than 900 combinations of parameters and comparing the results with the nominal case. Finally, the results of the different combinations and prices of robustness are compared, and some directions for future research are suggested.
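The sketch below illustrates only the budget-of-uncertainty idea used to set the protection level: when a heuristic evaluates a candidate solution, at most Gamma cost coefficients are allowed to move to their worst case. The data are invented, and the integer-budget evaluation is a simplification of the usual formulation, which also admits fractional budgets.

import numpy as np

def robust_cost(x, c_nominal, c_deviation, gamma):
    """Worst-case cost of decision vector x when at most `gamma`
    coefficients take their maximal deviation (budget-of-uncertainty style)."""
    base = c_nominal @ x
    # The adversary spends its budget on the largest possible increases
    impacts = np.sort(c_deviation * x)[::-1]
    return base + impacts[:gamma].sum()

x = np.array([1, 0, 1, 1])                   # a candidate solution
c_nominal = np.array([10.0, 7.0, 4.0, 6.0])  # nominal cost coefficients
c_deviation = np.array([3.0, 2.0, 5.0, 1.0]) # maximal deviations
for gamma in range(0, 5):                    # raising the protection level
    print(gamma, robust_cost(x, c_nominal, c_deviation, gamma))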
Dr. S Mohammadi Limaei, Dr. Peter Lohmander, Volume 8, Issue 1 (4-2017)
Abstract
We present a stochastic dynamic programming approach with Markov chains for optimal control of the forest sector. The forest is managed via continuous cover forestry, and the complete system is sustainable. Forest industry production, logistic solutions, and harvest levels are optimized based on the sequentially revealed states of the markets. Adaptive full-system optimization is necessary for consistent results. The stochastic dynamic programming problem of the complete forest industry sector is solved. The raw material stock levels and the product prices are state variables. In each state and at each stage, a quadratic programming profit maximization problem is solved as a subproblem within the stochastic dynamic programming algorithm.
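A minimal sketch of this structure is given below: a stochastic dynamic program over (stock level, price state) whose stage subproblem maximizes a concave quadratic profit, here solved by enumeration over a small grid rather than a QP solver. All numbers, the quadratic profit form, and the transition matrix are illustrative assumptions.

import numpy as np

stocks = np.arange(0, 11)                 # raw material stock levels
prices = np.array([50.0, 70.0])           # product price states
P = np.array([[0.7, 0.3],                 # Markov chain for the price
              [0.4, 0.6]])
inflow, beta = 3, 0.97                    # stock inflow per stage, discount

def qp_subproblem(stock, price, future):
    """Maximize price*q - 0.5*a*q**2 + beta*E[V(next)] over feasible q,
    here by enumerating the small feasible production grid."""
    a = 4.0                               # quadratic production-cost slope
    best = -np.inf
    for q in range(stock + 1):            # produce q units from stock
        nxt = min(stock - q + inflow, stocks[-1])
        best = max(best, price * q - 0.5 * a * q**2 + beta * future[nxt])
    return best

V = np.zeros((len(stocks), len(prices)))
for _ in range(40):                       # backward induction over stages
    V_new = np.zeros_like(V)
    for i, s in enumerate(stocks):
        for j, p in enumerate(prices):
            future = V @ P[j]             # expected value over next price states
            V_new[i, j] = qp_subproblem(s, p, future)
    V = V_new
print(V[5])                               # values at the middle stock level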