College of Science and Engineering
The College of Science and Engineering spans the broad spectrum of science, technology, engineering and mathematics to solve real-world problems through applied solutions.
Science and Engineering researchers seek to discover new understandings in fields as diverse as groundwater hydrology, forensic science and medical devices, while our teaching offers training in areas of biological sciences, chemical and physical sciences, computer science, information technology, engineering, mathematics and the environment.
It’s where curiosity meets ingenuity and takes us from atomic structures to the scale of entire oceans, forests and beyond.
Working across many areas of expertise, Science and Engineering tackles our most pressing challenges, from climate change and food security to environmental problems and strong economies, in pursuit of better health and living standards in Australia and around the world.
Browsing College of Science and Engineering by Issue Date
Item: Use of Dynamic Programming for Reliability Engineers (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1973). Kulshrestha, D K; Gupta, M C.
This paper aims to obtain the optimum cost allocation to a number of components connected in series (no redundancy) with a view to maximizing the system reliability subject to a given total cost of the system. The reliability of each component is a function of its cost. The technique of dynamic programming has been employed to achieve the results. (An illustrative sketch of this kind of allocation appears after this group of records.)

Item: Magnetoacoustic Oscillations of a Plasma Containing Two Species of Ions (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1978). Jessup, B L; McCarthy, A L.
Numerical calculations of linear magnetoacoustic resonant phenomena in a plasma containing two species of ions have been made for a cylindrical plasma with a model which includes the effects of collisional damping and radial non-uniformities in temperature and number density. At sufficiently high temperatures two frequencies are predicted at which magnetoacoustic resonances for the first radial mode will occur. These are expected from considerations of the effects of the ion-ion hybrid resonance.

Item: Cyclotron Analysis of Australian Atmospheric Contamination before and after the 1974 French Nuclear Tests in the Pacific (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1979). Chaudhri, M Anwar; Lee, M M; Rouse, J L; Spicer, B M.
Atmospheric particulates collected around East Coast Australian cities and Port Moresby, just before and after the French atomic test series of 1974 in the Pacific, have been analysed by proton activation using the Melbourne University Cyclotron. A number of elements, namely S, Ca, Ti, Cr, Fe, Ni, Cu, Zn, Se and Hg, ranging in concentration from 0.001 µg/m³ up to 3.27 µg/m³, have been detected. The changes observed in the concentrations of these elements in the two sets of samples, taken just before and just after the atomic tests, are attributed to synoptic rather than nuclear fall-out effects.

Item: Yields of Cyclotron Produced Medical Isotopes: A Comparison of Theoretical Potential and Experimental Results (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1979). Chaudhri, M Anwar.
Experimentally obtained yields of most of the medical radioisotopes, produced with cyclotrons through different nuclear reactions at various bombarding energies in laboratories around the world, are presented. These yields are compared with those calculated using experimentally measured cross sections (where available) at similar bombarding conditions. Where experimental cross sections are unavailable, empirically constructed excitation functions have been used. The information provided in this paper would be a valuable aid in selecting the most suitable nuclear reaction and bombarding conditions for producing a particular radioisotope, and in assessing various losses of the isotope during chemical processing of the irradiated target.
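The first record above ("Use of Dynamic Programming for Reliability Engineers") describes a classic budget-allocation problem: components in series, each component's reliability a function of the money spent on it, and a fixed total budget. The following Python sketch illustrates that kind of dynamic program in a minimal way; it is not the paper's own formulation, and the reliability functions, budget and cost step are made-up assumptions.

import math

def allocate_budget(reliability_fns, budget, step):
    # reliability_fns: one function per series component, mapping cost -> reliability in (0, 1].
    # Returns (best system reliability, per-component cost allocation), with costs in units of `step`.
    units = budget // step
    # best[b]: best (reliability, allocation) achievable with budget b*step over the components seen so far.
    best = {b: (1.0, []) for b in range(units + 1)}
    for r in reliability_fns:
        new_best = {}
        for b in range(units + 1):
            options = []
            for spend in range(b + 1):
                prev_rel, prev_alloc = best[b - spend]
                options.append((prev_rel * r(spend * step), prev_alloc + [spend * step]))
            new_best[b] = max(options, key=lambda t: t[0])
        best = new_best
    return best[units]

# Made-up component reliability functions with diminishing returns in cost.
components = [lambda c, k=k: 1.0 - math.exp(-(c + 5.0) / k) for k in (40.0, 60.0, 80.0)]
reliability, allocation = allocate_budget(components, budget=100, step=5)
print("system reliability:", round(reliability, 4))
print("cost allocation:", allocation)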
Item: The 7Li(p,n)7Be Reaction as a Source of Fast Neutrons for Smaller Compact Cyclotrons (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1979). Chaudhri, M Anwar; Templer, J; Rouse, J L.
The usefulness of the 7Li(p,n)7Be reaction as a fast neutron source for applications such as neutron therapy, using smaller compact cyclotrons (proton energies of up to 15-18 MeV), has been investigated by measuring thin and thick target neutron spectra, absolute cross sections and angular distributions of various neutron groups produced in this reaction at 10.45 MeV. Our results indicate that the forward direction is still the preferred one for obtaining the most suitable fast neutron beam for biomedical application, and that a moderately thick, rather than infinitely thick, target would provide a higher mean energy. Moreover, it has also been shown that the 7Li(p,n)7Be reaction is better suited for producing neutron beams for therapy than proton- and deuteron-induced reactions on Be at corresponding energies, and that a therapeutically useful neutron beam can be produced even with smaller compact cyclotrons.

Item: An algorithm for solving S-games and differential S-games (Institute of Electrical and Electronic Engineers, 1982). Filar, Jerzy A; Raghavan, Thirukkannamangai Eachambadi S.
We present an algorithm for solving S-games. Our algorithm can be used to compute approximately the value of the game as well as ε-optimal strategies of the two players. For games with a similar structure to S-games which do not necessarily possess a value, the algorithm can sometimes be used as a heuristic procedure for determining the existence of a minimax solution. Further, it is shown that a certain simple class of differential games (we call them "differential S-games") can be viewed as static games and solved by the above procedure.

Item: Gain/variability tradeoffs in undiscounted Markov Decision Processes (Institute of Electrical and Electronic Engineers, 1985). Filar, Jerzy A; Lee, Huey-Miin.
We consider a finite state/action Markov Decision Process over the infinite time horizon, and with the limiting average reward criterion. However, we are interested not only in maximizing the above reward criterion but also in minimizing "the variability" of the stream of rewards. The latter notion is formalized in two alternative ways: one in terms of measuring absolute deviations from the "optimal" reward, and the other in terms of a "long-run variance" of a policy. In both cases we formulate a bi-objective optimization problem and show that efficient (i.e., "nondominated") deterministic stationary policies exist and can be computed by finite algorithms. In addition, in the former case we give an algorithm for computing a finite set of "critical efficient policies" which in a sense constitutes one complete set of reasonable responses by a decision-maker sensitive to the variability of rewards. However, the analysis of this case is intended primarily as a "sensitivity analysis" technique rather than a complete theoretical treatment of the gain/variability tradeoffs.
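For the preceding record, one standard way to write the two competing objectives for a policy \pi is shown below; this is a common textbook formalization and may differ in detail from the definitions used in the paper.

\[
\phi(\pi) = \liminf_{T\to\infty} \frac{1}{T}\, \mathbb{E}_\pi\!\left[\sum_{t=1}^{T} r(X_t, A_t)\right],
\qquad
V(\pi) = \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}_\pi\!\left[\sum_{t=1}^{T} \bigl(r(X_t, A_t) - \phi(\pi)\bigr)^{2}\right],
\]

where r(X_t, A_t) is the single-stage reward under state X_t and action A_t. The bi-objective problem is to maximize the gain \phi while minimizing the variability V; a policy is efficient (nondominated) if no other policy is at least as good in both objectives and strictly better in one.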
Item: Player aggregation in the traveling inspector model (Institute of Electrical and Electronic Engineers, 1985). Filar, Jerzy A.
We consider a model of dynamic inspection/surveillance of a number of facilities in different geographical locations. The inspector in this process travels from one facility to another and performs an inspection at each facility he visits. His aim is to devise an inspection/travel schedule which minimizes the losses to society (or to his employer) resulting both from undetected violations of the regulations and from the costs of the policing operation. This model is formulated as a non-cooperative, single-controller, stochastic game. The existence of stationary Nash equilibria is established as a consequence of aggregating all the inspectees into a single "aggregated inspectee". It is shown that such player aggregation causes no loss of generality under very mild assumptions. A notion of an "optimal Nash equilibrium" for the inspector is introduced and proven to be well-defined in this context. The issue of the inspector's power to "enforce" such an equilibrium is also discussed.

Item: The completely mixed stochastic game (American Mathematical Society, 1985). Filar, Jerzy A.
We consider a zero-sum stochastic game with finitely many states and actions. Further we assume that the transition probabilities depend on the actions of only one player (player II, in our case), and that the game is completely mixed. That is, every optimal stationary strategy for either player assigns a positive probability to every action in every state. For these games, properties analogous to those derived by Kaplansky for completely mixed matrix games are established in this paper. These properties lead to the counter-intuitive conclusion that the controller need not know the law of motion in order to play optimally, but his opponent does not have this luxury.

Item: Algorithms for Robust Pole Assignment in Singular Systems (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1986). Kautsky, Jaroslav; Nichols, N K.
The solution of the pole assignment problem by feedback in singular systems is parameterized and conditions are given which guarantee the regularity and maximal degree of the closed loop pencil. A robustness measure is defined, and numerical procedures are described for selecting the free parameters in the feedback to give optimal robustness.

Item: The embedding of the traveling salesman problem in a Markov Decision Process (Institute of Electrical and Electronic Engineers, 1987). Filar, Jerzy A; Krass, Dmitry.
In this paper we derive a new LP-relaxation of the Traveling Salesman Problem (TSP, for short). This formulation comes from first embedding the TSP in a Markov Decision Process (MDP, for short), and from perturbing this MDP appropriately.

Item: Percentile objective criteria in limiting average Markov Control Problems (Institute of Electrical and Electronic Engineers, 1989). Filar, Jerzy A; Krass, Dmitry; Ross, Keith W.
Infinite horizon Markov Control Problems, or Markov Decision Processes (MDPs, for short), have been extensively studied since the 1950s. One of the most commonly considered versions is the so-called "limiting average reward" model. In this model the controller aims to maximize the expected value of the limit-average ("long-run average") of an infinite stream of single-stage rewards or outputs. There are now a number of good algorithms for computing optimal deterministic policies in limiting average MDPs. In this paper we adopt the point of view that there are many natural situations where the controller is interested in finding a policy that will achieve a sufficiently high long-run average reward (a target level) with a sufficiently high probability (a percentile).
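The percentile criterion described in the preceding record can be stated as follows; the notation here is an illustrative assumption rather than a quotation from the paper. Given a target level \alpha and a required probability (percentile) \xi, find a policy \pi such that

\[
P_\pi\!\left(\liminf_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} r(X_t, A_t) \;\ge\; \alpha\right) \;\ge\; \xi ,
\]

where r(X_t, A_t) is the single-stage reward; that is, the long-run average reward should reach the target with at least the specified probability.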
Item: A history-based scheme for accelerating Prolog interpretation (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1989). Malhotra, Vishv Mohan; Van To, Tang.
An algorithm for improving the performance of a Prolog interpreter is introduced. The algorithm, unlike the intelligent backtracking schemes which improve performance by avoiding redundant redos, avoids redundant calls. The algorithm identifies the redundant calls by maintaining a history of the program execution. The algorithm can be used in conjunction with an intelligent backtracking scheme for a further speed-up of the programs.

Item: Algorithms for singularly perturbed limiting average Markov Control Problems (Institute of Electrical and Electronic Engineers, 1990). Abbad, Mohammed; Filar, Jerzy A; Bielecki, Tomasz R.
The authors consider a singularly perturbed Markov decision process (MDP) with the limiting average cost criterion. It is assumed that the underlying process is composed of n separate irreducible processes, and that the small perturbation is such that it 'unites' these processes into a single irreducible process. This structure corresponds to Markov chains admitting strong and weak interactions. The authors introduce the formulation and some results given by Bielecki and Filar (1989) for the underlying control problem for the singularly perturbed MDP, the limit Markov control problem (limit MCP). It is demonstrated that the limit MCP can be solved by a suitably constructed linear program. An algorithm for solving the limit MCP based on the policy improvement method is constructed. (A generic linear-programming sketch for limiting average MDPs appears after this group of records.)

Item: Aggregation-disaggregation algorithm for ε²-singularly perturbed limiting average Markov control problems (Institute of Electrical and Electronic Engineers, 1991). Abbad, Mohammed; Filar, Jerzy A.
In this paper we consider a singular perturbation of order 2 for a Markov decision process with the limiting average reward criterion. We define a singular perturbation of order 2 in the following sense: we assume that the underlying process is composed of n separate irreducible processes, and that a small ε-perturbation is such that it "unites" these processes into m separate irreducible processes. Then another small ε²-perturbation is such that it "unites" these latter processes into a single irreducible process. The present paper is organized as follows: in Section 2, we formulate the singular perturbation of order 2. In Section 3, we give explicitly the limit Markov Control Problem (limit MCP), which is entirely different from the original unperturbed MDP and which forms an appropriate asymptotic approximation to a whole family of perturbed problems. Thus only the single limit MCP needs to be solved.
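The two preceding records reduce a perturbed problem to a limit Markov control problem with the limiting average criterion, which (as the 1990 abstract notes) can be solved by a suitably constructed linear program. As a generic point of reference only, the Python sketch below sets up the standard state-action-frequency linear program for an ordinary unichain limiting average MDP using scipy.optimize.linprog; it is not the papers' specific construction, and the transition and reward data are made up.

import numpy as np
from scipy.optimize import linprog

# Made-up unichain MDP: P[s, a, s'] are transition probabilities, R[s, a] are rewards.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

# Variables x[s, a] >= 0 are long-run state-action frequencies.
# Maximize sum_{s,a} R[s,a] * x[s,a]
# subject to, for every state j: sum_a x[j,a] - sum_{s,a} P[s,a,j] * x[s,a] = 0,
# and the normalization sum_{s,a} x[s,a] = 1.
c = -R.reshape(-1)  # linprog minimizes, so negate the rewards

balance = np.zeros((n_states, n_states * n_actions))
for j in range(n_states):
    for s in range(n_states):
        for a in range(n_actions):
            balance[j, s * n_actions + a] = (1.0 if s == j else 0.0) - P[s, a, j]

# The balance rows sum to the zero vector, so one of them is redundant; drop the last.
A_eq = np.vstack([balance[:-1], np.ones((1, n_states * n_actions))])
b_eq = np.append(np.zeros(n_states - 1), 1.0)

result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
x = result.x.reshape(n_states, n_actions)
print("optimal long-run average reward:", -result.fun)
# Action choice is arbitrary in states with zero long-run frequency.
print("a deterministic optimal policy:", x.argmax(axis=1))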
Item: Perturbation theory for semi-Markov control problems (Institute of Electrical and Electronic Engineers, 1991). Abbad, Mohammed; Filar, Jerzy A.
In earlier work, the authors considered the perturbation of systems undergoing Markov processes in which the times between two consecutive decision time points were equidistant. They now consider perturbations of processes for which the times between transitions are random variables. These are called semi-Markov processes.

Item: A weighted Markov decision process (INFORMS, 1992). Krass, Dmitry; Filar, Jerzy A; Sinha, Sagnik S.
The two most commonly considered reward criteria for Markov decision processes are the discounted reward and the long-term average reward. The first tends to "neglect" the future, concentrating on the short-term rewards, while the second tends to do the opposite. We consider a new reward criterion consisting of the weighted combination of these two criteria, thereby allowing the decision maker to place more or less emphasis on the short-term versus the long-term rewards by varying their weights. The mathematical implications of the new criterion include: the deterministic stationary policies can be outperformed by the randomized stationary policies, which in turn can be outperformed by the nonstationary policies; an optimal policy might not exist. We present an iterative algorithm for computing an ε-optimal nonstationary policy with a very simple structure. (One way of writing such a weighted criterion is sketched after this group of records.)

Item: Perturbation and stability theory for Markov control problems (Institute of Electrical and Electronic Engineers, 1992). Abbad, Mohammed; Filar, Jerzy A.
A unified approach to the asymptotic analysis of a Markov decision process disturbed by an ε-additive perturbation is proposed. Irrespective of whether the perturbation is regular or singular, the underlying control problem that needs to be understood is the limit Markov control problem. The properties of this problem are the subject of this study.

Item: Architecture design of a fully asynchronous VLSI chip for DSP custom applications (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1992). Fan, Xingcha; Bergmann, Neil.
A fully asynchronous, distributed VLSI architecture is introduced for dedicated real-time digital signal processing applications. The architecture is based on a data-driven computing model to allow maximum exploitation of fine-grained concurrency. An asynchronous, self-timed signalling protocol is used in the architecture to naturally match data-driven computing and circumvent the clock skew problem. After a brief description of the architecture, key issues such as the interconnection network, data identification, and operand matching are discussed. Finally, disadvantages of the architecture and future work are outlined.
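For the record "A weighted Markov decision process" above, one natural way to write a weighted combination of the discounted and long-run average criteria is shown below; the weighting and normalization are illustrative assumptions and may differ from the paper's exact definition. With weight \lambda \in [0,1] and discount factor \beta \in (0,1),

\[
\phi_{\lambda}(\pi) \;=\; \lambda\,(1-\beta)\,\mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \beta^{t}\, r(X_t, A_t)\right]
\;+\; (1-\lambda)\,\liminf_{T\to\infty} \frac{1}{T}\,\mathbb{E}_\pi\!\left[\sum_{t=1}^{T} r(X_t, A_t)\right].
\]

The factor (1-\beta) puts the discounted term on the same per-stage scale as the average term, and varying \lambda shifts emphasis between short-term and long-term rewards.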
Item: MultiView-Merlin: An experiment in tool integration (Institute of Electrical and Electronics Engineers Computer Society (IEEE Publishing), 1993). Marlin, Chris; Peuschel, Burkhard; McCarthy, Michael; Harvey, Jennifer G.
The experiment described in this paper involved the integration of a process-centred software development environment (Merlin) and a multi-view integrated software development environment (MultiView). These two tools were developed separately from each other, with no expectation that they would ever be integrated into a single integrated software engineering environment. This paper first briefly presents the separate environments and then describes the technique used to integrate them. This technique centres on the development of an adaptor process to mediate between the environments. It was first necessary to identify the point at which to connect the two environments, and then to design and implement an appropriate process to pass commands between them. This work has resulted in enhancements to both of the individual tools and has created a combined environment which exploits the advantages of both of the original environments.