Item Information

Full metadata record
DC Field                   Value                                                          Language
dc.contributor.author      Funing, Li                                                     -
dc.contributor.author      Sebastian, Lang                                                -
dc.contributor.author      Bingyuan, Hong                                                 -
dc.date.accessioned        2023-05-16T03:51:22Z                                           -
dc.date.available          2023-05-16T03:51:22Z                                           -
dc.date.issued             2023                                                           -
dc.identifier.uri          https://link.springer.com/article/10.1007/s10845-023-02094-4   -
dc.identifier.uri          https://dlib.phenikaa-uni.edu.vn/handle/PNK/8455               -
dc.description             CC BY                                                          vi
dc.description.abstract    As an essential scheduling problem with several practical applications, the parallel machine scheduling problem (PMSP) with family setup constraints is difficult to solve and proven to be NP-hard. To this end, we present a deep reinforcement learning (DRL) approach to solve a PMSP considering family setups, aiming at minimizing the total tardiness. The PMSP is first modeled as a Markov decision process, where we design a novel variable-length representation of states and actions, so that the DRL agent can calculate a comprehensive priority for each job at each decision time point and then select the next job directly according to these priorities. Meanwhile, the variable-length state matrix and action vector enable the trained agent to solve instances of any scale. To handle the variable-length sequence and simultaneously ensure the calculated priority is a global priority among all jobs, we employ a rec…   vi
dc.language.iso            en                                                             vi
dc.publisher               Springer                                                       vi
dc.subject                 NP-hard                                                        vi
dc.subject                 PMSP                                                           vi
dc.title                   A two-stage RNN-based deep reinforcement learning approach for solving the parallel machine scheduling problem with due dates and family setups   vi
dc.type                    Book                                                           vi
Appears in Collections: OER - Kinh tế và Quản lý
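The abstract above describes a dispatching scheme in which, at each decision time point, the agent scores every waiting job with a priority and selects the highest-scoring one, paying a setup time when the job's family differs from the machine's previous family. A minimal sketch of that loop, with a simple earliest-due-date rule standing in as a hypothetical placeholder for the paper's RNN-computed priorities (the `Job` fields, `priority` function, and `setup_time` parameter are illustrative assumptions, not the paper's implementation):

```python
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    family: int       # family/setup group of the job
    proc_time: float  # processing time on any (identical) machine
    due_date: float

def priority(job: Job, now: float) -> float:
    # Hypothetical stand-in for the learned global priority:
    # plain earliest-due-date scoring (higher score = more urgent).
    return -(job.due_date - now)

def schedule(jobs: list[Job], n_machines: int, setup_time: float) -> float:
    """Greedy priority dispatching on identical parallel machines.

    Whenever a machine becomes free, score all waiting jobs and start
    the best one, inserting a family setup if the machine last ran a
    different family. Returns the total tardiness of the schedule.
    """
    waiting = list(jobs)
    # One heap entry per machine: (time it becomes free, id, last family).
    machines = [(0.0, m, None) for m in range(n_machines)]
    heapq.heapify(machines)
    total_tardiness = 0.0
    while waiting:
        free_at, m, last_family = heapq.heappop(machines)
        job = max(waiting, key=lambda j: priority(j, free_at))
        waiting.remove(job)
        start = free_at + (setup_time if job.family != last_family else 0.0)
        finish = start + job.proc_time
        total_tardiness += max(0.0, finish - job.due_date)
        heapq.heappush(machines, (finish, m, job.family))
    return total_tardiness
```

The variable-length aspect the abstract emphasizes shows up here as the `waiting` list: the set of candidate jobs shrinks over time and can have any size, which is why the paper scores jobs individually instead of using a fixed-size action space.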

Files in This Item: