Computational Sciences Ph.D.
The Computational Sciences Ph.D. program is a multidisciplinary, collaborative, and innovative initiative that promotes research in science and technology.
The program curriculum is designed around the intellectual skills demanded by the rapidly changing character of research in the field and its applications in the natural sciences.
The Computational Sciences Ph.D. program is an academic, research-oriented graduate program that emphasizes multidisciplinary training in innovative research on the computational components and systems of computer science and their applications in the natural science disciplines. The program is intended for science and engineering students whose doctoral studies require extensive use of large-scale computation, computational methods, or algorithms for advanced computer architectures. A firm grounding in the theory and practice of the scientific method is essential.
This program starts in early fall, early spring, and early summer. Classes are taught in an executive format: one Saturday a month on campus, with weekly classes online.
The Ph.D. Computational Sciences Program will produce graduates who:
- Perform independent, competitive scientific research;
- Utilize the scientific method;
- Realize computational solutions to real-world problems;
- Make contributions to the discipline through disseminated results;
- Adhere to the ethical and moral obligations in all professional activities; and,
- Promote quality of life through local and global computing systems.
Doctorate Program Admissions Requirements
Applicants for the Ph.D. in Computational Sciences must have a master’s degree in a science or engineering discipline, with a strong background in applied mathematics, statistics, numerical analysis, simulation and modeling, and programming languages. A faculty review committee may select the candidate for an interview.
Majid Shaalan, Ph.D., Professor of Computer Science and Director of the Computer Science Graduate Program
This program requires a total of 36 semester hours: 9 semester hours of doctoral Breadth courses, 6 semester hours of doctoral Depth courses, 3 semester hours of Research Symposium, 6 semester hours of Doctoral Research Seminar, and 12 semester hours of Doctoral Dissertation. The semester hour value of each course appears in parentheses ( ).
This course aims to change the way students learn and think about the design, organization, and hardware of a computing system architecture so that it meets the goals and functional requirements of future technological developments, and to demystify computer architecture through an emphasis on cost-performance-energy trade-offs and good engineering design. This helps the student build a rigorous quantitative foundation in the long-established scientific and engineering disciplines. Special emphasis is put on demonstrating these concepts through a “Putting It All Together” approach at the end of the set of required modules. Modules cover the pipeline organizations and memory hierarchies of the ARM Cortex-A8 processor, the Intel Core i7 processor, the NVIDIA GTX-280 and GTX-480 GPUs, and a Google warehouse-scale computer, applying the cost-performance-energy principles to this material; the memory hierarchy is a critical resource for the remaining modules.
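The kind of cost-performance-energy trade-off the course emphasizes can be illustrated numerically. The sketch below (all figures are hypothetical, not drawn from the course) applies Amdahl’s law and the energy-delay product to compare a baseline design against a faster, higher-power one:

```python
# Illustrative cost-performance-energy trade-off (all numbers hypothetical).
# Amdahl's law gives the overall speedup from accelerating only part of a
# workload; the energy-delay product (EDP) folds power into the comparison.

def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of runtime is sped up by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

def energy_delay_product(power_watts, time_seconds):
    """Energy-delay product (lower is better): E * T = P * T^2."""
    return power_watts * time_seconds ** 2

# Design A: baseline. Design B: 4x faster on 80% of the workload,
# but draws 1.5x the power.
t_a, p_a = 10.0, 100.0
speedup = amdahl_speedup(f=0.8, s=4.0)
t_b, p_b = t_a / speedup, p_a * 1.5

print(f"speedup = {speedup:.2f}x")  # 2.50x overall, despite the 4x component
print(f"EDP A = {energy_delay_product(p_a, t_a):.0f}")
print(f"EDP B = {energy_delay_product(p_b, t_b):.0f}")
```

Note how the serial 20% caps the overall speedup at 2.5x, yet design B still wins on energy-delay because its runtime enters the product squared.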
This course discusses and advocates a structured approach to parallel programming. This approach is based on a core set of common and composable patterns of parallel computation and data management, with an emphasis on determinism and scalability. By using these patterns and also paying attention to a small number of factors in algorithm design (such as data locality), programs built using this approach have the potential to perform and scale well on a variety of different parallel computer architectures. Special emphasis is put on both collective “data-parallel” patterns and structured “task-parallel” patterns such as pipelining and superscalar task graphs. The structured pattern-based approach, like data-parallel models, addresses issues of both data access and parallel task distribution in a common framework. Optimization of data access is important both for many-core processors with shared memory systems and for accelerators with their own memories not directly attached to the host processor. Extensive use of pertinent and practical examples from scientific computing will be made throughout. The programming languages used will be Python, Fortran, or C++. Both the shared and distributed paradigms of parallel computing will be covered via the OpenMP and MPI libraries.
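As one illustration of the pattern-based approach, the map pattern from the “data-parallel” family can be sketched in Python (the course itself targets OpenMP and MPI; the thread pool here merely stands in for those runtimes, and the SAXPY kernel is an assumed example, not course material):

```python
# A minimal sketch of the "map" data-parallel pattern: apply a pure kernel
# to independent elements. Determinism is preserved because no element's
# result depends on any other's.
from concurrent.futures import ThreadPoolExecutor

def saxpy(args):
    """One element of y = a*x + y, a classic data-parallel kernel."""
    a, x_i, y_i = args
    return a * x_i + y_i

def parallel_map(fn, items, workers=4):
    """The map pattern; the scheduling backend could just as well be
    OpenMP threads or MPI ranks without changing the program's structure."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
result = parallel_map(saxpy, [(a, xi, yi) for xi, yi in zip(x, y)])
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

Because the pattern, not the backend, carries the parallel structure, the same program shape maps onto shared-memory (OpenMP-style) or distributed (MPI-style) execution.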
Real-world problems entail a hierarchy of systems that interact in complex ways, so such problems do not lend themselves to easy solutions with computational methods like classical parametric machine learning. The complexity arises from three main causes: high dimensionality, unknown function properties, and computationally expensive analysis and simulation. These challenges, combined with the presence of high-volume, high-velocity streaming data, severely aggravate the difficulty and become the bottleneck for any computational solution. This course helps the student explore advanced modeling and optimization methods that can help solve such problems. Deep Learning (DL) allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. DL has the ability to discover intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change the internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech, and audio, whereas recurrent nets have shed light on sequential data such as text and speech. Special emphasis is put on how to build applications using this approach that have the potential to perform and scale well on the variety of parallel computing systems studied previously. Extensive use of parallel programming models such as CUDA, C, Python, OpenMP, and possibly Fortran will be made to conduct weekly projects.
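The backpropagation idea described above can be sketched at its smallest scale: a single sigmoid unit trained by gradient descent in pure Python (the dataset, learning rate, and epoch count are illustrative assumptions, not course material):

```python
# A minimal sketch of backpropagation for one sigmoid neuron: the forward
# pass computes a prediction, and the backward pass uses the chain rule to
# tell the machine how to change its internal parameters (w, b).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny illustrative dataset: learn "y is 1 when x is positive".
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b, lr = 0.0, 0.0, 0.5

for _ in range(1000):
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass
        grad = p - y             # dLoss/dz for cross-entropy loss
        w -= lr * grad * x       # backward pass: chain rule through w*x + b
        b -= lr * grad           # parameter update

predictions = [round(sigmoid(w * x + b)) for x, _ in data]
print(predictions)  # [0, 0, 1, 1]
```

In a multi-layer network the same chain-rule step is repeated layer by layer, which is what frameworks parallelize on the GPU systems the course revisits.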
This course leads the student to explore in-depth research on a particular high-dimensional problem under the supervision of a research scientist in one of the computational sciences subdomains. The expected course outcome is the foundational part of a published research paper to be presented (later, after being augmented with other research work) at a research symposium. Special emphasis is put on how to build programs using this approach that have the potential to perform and scale well on the variety of parallel computing systems studied previously. Extensive use of parallel programming models such as CUDA, C, Python, OpenMP, and possibly Fortran will be made to conduct weekly projects.
This is the second of the depth-level research exploration courses. Its goal is to continue the realization efforts begun in the course work of CISC 727. The expected outcome is a published research paper on a computational solution, using deep learning, for the real-world problem selected in the prerequisite course. The paper is to be presented later, after being augmented with other research work, at a research symposium. This paper should be a step toward choosing the research topic for the doctoral dissertation.
The course has two parts: one allows the student to make progress on their research in a structured way and helps fulfill program requirements; the other presents professionalization information crucial to success in the field. The course is organized largely around work on the research paper, with the goal of making it conference-presentable and journal-publishable.
This course is the first of the two Doctoral Research Seminar courses. It provides the student with the theoretical background and practical application of various research methods that can be used in the computational sciences. The course provides a look at the research process and literature review, and studies correlational and experimental research methods and design. Students will analyze several existing research studies and will design and conduct their own. The principal work in this course is the research and writing of a substantial paper in a field related to each student’s Ph.D. dissertation. The student is expected to have a research topic and primary source base identified, with the topic approved by the dissertation adviser.
This course is the second of the two Doctoral Research Seminar courses. It provides a deeper look at the research process, implementation methodology, and research findings. The student will analyze several existing research studies and will design and conduct their own. The course emphasizes advanced research goals and mastery of the relevant subfield; the research topic must be approved by the dissertation adviser.
This is an individual study course for the doctoral student that culminates in the Ph.D. thesis. Content is determined by the student and the student’s Doctoral Committee. The Computational Sciences thesis is an implementation of serious experimental research that involves the formulation of a deductive model making novel and unforeseen predictions, which are then tested objectively and confirmed under conditions unfavorable to the hypothesis. In addition to a well-written thesis, the student is required to deliver a computational solution in a specific domain. In support of their findings, the student is required to introduce a software package that meets rigorous software-quality requirements. The thesis must show that the writer can produce an extended piece of work in clear English that respects the standards of form and structure. May be repeated for credit.
Create an account and start your free online application to Harrisburg University today.