Hello,
I'd like to raise an entry-level (?) question/topic in operations research. I have no formal training in this field and would appreciate any input from people with relevant expertise/experience. Perhaps this is not the right spot to post it, but given that statistical data and probabilistic modelling are involved, I thought it might not be too far off...
Imagine the following scenario:
I have a store that supplies goods in a specific (technical) field. The store caters for both retail and wholesale. It serves a variety of different customers, with no formal insight into their demand patterns. For all we know, demand is completely random. But a comprehensive database of all transactions (buy, sell, goods in, goods out, etc.) exists, going back several years.
Every week, I need to decide what goods to order and in what amounts. I'd like to use the entirety of the data available to me (on a rolling basis) to optimise the store's performance. I'd like to minimise the probability that any given (specific) customer demand cannot be fulfilled from the store's local stock (minimise backorders), whilst keeping inventory costs minimal. Obviously there's a trade-off, so part of the question is how to optimise... Minimise the product "disappointment probability" × inventory cost?
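To make the trade-off concrete, here is the kind of calculation I have in mind, written as a rough Python-style sketch. All the numbers are made up, and the holding cost h and shortage penalty p are placeholders I'd still have to choose; the same logic is easy enough to reproduce in a spreadsheet:

    # Rough sketch: choose a weekly stock level S that minimises
    # expected holding cost + expected shortage penalty, using the
    # empirical distribution of past weekly demand.
    weekly_demand = [3, 0, 5, 2, 7, 1, 4, 0, 6, 2]  # made-up history

    h = 0.12   # cost of holding one unit for one week (interest, say)
    p = 5.00   # "disappointment" penalty per unit short

    def expected_cost(S):
        holding  = sum(max(S - d, 0) for d in weekly_demand) / len(weekly_demand)
        shortage = sum(max(d - S, 0) for d in weekly_demand) / len(weekly_demand)
        return h * holding + p * shortage

    best_S = min(range(max(weekly_demand) + 1), key=expected_cost)
    print(best_S, expected_cost(best_S))

If I've understood my online searching correctly, this is essentially the "newsvendor" problem, whose optimum is the demand quantile p / (p + h), but brute force over candidate stock levels works too. I may well be mangling the terminology, though.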
For simplicity, let's assume I have only one item in my store. If I knew how to optimise its stock level (how many units to order every week), I could in theory do the same for hundreds of different items.
Let's also assume that all I have is general-purpose spreadsheet software and I don't wish to invest in dedicated software (there is an ERP system in place, but let's ignore it for discussion's sake).
My database includes comprehensive information about my item:
- Every sale, including the date, the customer (account #), the unit sell price, how many units were supplied from store stock, and how many went on backorder (the sketch after this list shows how I'd roll these up into a weekly demand series).
- Every purchase (by specific supplier), including the date ordered, the date of goods-in, the quantity ordered, and the unit price.
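As a first step, I imagine collapsing the raw sales rows into that weekly demand series, something along these lines (the file name and column names are hypothetical; backorders are counted as demand so that unmet demand isn't lost):

    # Roll the raw sales rows up into total units demanded per week.
    import csv
    from collections import defaultdict
    from datetime import date

    weekly = defaultdict(int)
    with open("sales.csv") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["date"])
            units = int(row["qty_from_stock"]) + int(row["qty_backordered"])
            year, week, _ = d.isocalendar()
            weekly[(year, week)] += units

    # Note: weeks with no sales at all won't appear above and would
    # need to be filled in as zero-demand weeks.
    demand_series = [weekly[k] for k in sorted(weekly)]

That demand_series would then feed a cost calculation like the sketch further up.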
Let's assume storage space is infinite. Obviously there's overhead, but let's assume it's fixed, stable and low. My main inventory cost is the purchase cost tied up in stock. Since space is unlimited and there are no special storage requirements (such as unit volume, weight, or hazard / special handling), let's assume the only cost of holding the inventory is interest on the purchase. The item I have is non-perishable and has infinite shelf life (a simplifying assumption).
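To put a number on that: if money costs, say, 6% a year, then a unit bought for 100 ties up roughly 100 × 0.06 / 52 ≈ 0.12 per week while it sits on the shelf. (Made-up figures, just to show the kind of per-unit holding cost h I had in mind above.)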
I guess my question is: Where do I start in thinking about this / trying to come up with a method or algorithm to deal with it? Can you recommend a paper, a book, a method?
Thank you,
Ronen.
PS
I researched the topic online a little and looked in Google Scholar. I found several references, but then I realised I have no way of telling which ones are "good" / worth investing time in studying. And then I remembered Elsmar!